Measuring privacy leakage in neural networks

Thu 17 Nov 2022

Florian Tramèr, ETH Zürich

From 12.00 until 13.30

At CNB/F/110 (Lunch) + CAB G 52 (Seminar), ETH Zurich

Abstract:

Deep neural networks can memorize parts of their training data, which is a privacy concern for models trained on user data.
In this talk, I'll describe recent work that uses membership inference attacks to quantify this leakage in the worst case, and to audit empirical defenses.
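
The attacks discussed in the talk build on membership inference: deciding whether a given example was part of a model's training set. As a rough illustration only, and not the speaker's actual method, here is a minimal loss-thresholding attack in the style of Yeom et al. (2018); the per-example losses and the threshold below are synthetic and hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses from some trained model: training
# members tend to have lower loss than unseen non-members.
member_losses = rng.gamma(shape=1.0, scale=0.2, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)

def infer_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess 'member' (True) when the model's loss is below the threshold."""
    return losses < threshold

threshold = 0.5  # hypothetical; in practice calibrated, e.g. via shadow models
tpr = infer_membership(member_losses, threshold).mean()     # true-positive rate
fpr = infer_membership(nonmember_losses, threshold).mean()  # false-positive rate

# Worst-case leakage is judged at low false-positive rates: a non-trivial
# TPR at small FPR means the model confidently reveals that some specific
# training examples were memorized.
print(f"TPR = {tpr:.3f}, FPR = {fpr:.3f}")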

Join us in CNB/F/110 (Lunch) + CAB G 52 (Seminar).
