Florian Tramèr, ETH Zürich
From 12.00 until 13.30
At CNB/F/110 (Lunch) + CAB G 52 (Seminar), ETH Zurich
Abstract:
The ability of deep neural networks to memorize parts of their training data poses a privacy concern for models trained on user data.
In this talk, I'll describe recent work on using membership inference attacks to quantify this leakage in the worst case and to audit empirical defenses.
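For context, a minimal sketch of the simplest form of membership inference, a loss-threshold attack: the adversary guesses that an example was in the training set if the model's loss on it is unusually low. The losses and threshold below are purely illustrative and not taken from the work presented in the talk.

```python
import numpy as np

def loss_threshold_attack(member_losses: np.ndarray,
                          nonmember_losses: np.ndarray,
                          threshold: float) -> float:
    """Predict 'member' iff an example's loss is below the threshold and
    return the attack's balanced accuracy over members and non-members."""
    true_positive_rate = np.mean(member_losses < threshold)
    true_negative_rate = np.mean(nonmember_losses >= threshold)
    return 0.5 * (true_positive_rate + true_negative_rate)

# Toy illustration: memorized training examples tend to have lower loss.
rng = np.random.default_rng(0)
member_losses = rng.normal(loc=0.5, scale=0.3, size=1000).clip(min=0.0)
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=1000).clip(min=0.0)
print(f"Attack accuracy: {loss_threshold_attack(member_losses, nonmember_losses, 0.75):.2f}")
```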
Join us in CNB/F/110 (Lunch) + CAB G 52 (Seminar).