N. Asokan
From 11:00 until 12:30
At CAB H 52 (Seminar) + CNB/F/110 (Lunch), ETH Zurich
Abstract:
The success of deep learning in many application domains has been nothing short of dramatic. This has brought the spotlight onto security and privacy concerns with machine learning (ML). One such concern is the threat of model theft. I will discuss our work on exploring this threat, especially in the form of "model extraction attacks" -- when a model is made available to customers via an inference interface, a malicious customer can issue repeated queries to this interface and use the information gained to construct a surrogate model. I will also discuss possible countermeasures, focusing on deterrence mechanisms that allow for model ownership resolution (MOR). I will touch on the conflicts that arise when defenses against multiple different threats need to be applied simultaneously to a given ML model.
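To make the attack setting concrete, the following minimal sketch illustrates the idea of a model extraction attack described in the abstract: an attacker who can only call a black-box inference interface collects its predictions on chosen queries and trains a surrogate on them. All names here (e.g. victim_predict) are hypothetical, and the specific models and query strategy are illustrative assumptions, not the methods presented in the talk.

```python
# Minimal model extraction sketch (hypothetical setup, not the talk's method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Victim: a model the attacker can only query, never inspect directly.
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)

def victim_predict(x):
    """Black-box inference interface exposed to customers."""
    return victim.predict(x)

# Attacker: issue repeated queries and train a surrogate on the replies.
X_query = rng.normal(size=(2000, 20))   # attacker-chosen query points
y_query = victim_predict(X_query)       # labels obtained from the interface
surrogate = LogisticRegression().fit(X_query, y_query)

# Measure how closely the surrogate mimics the victim on fresh inputs.
X_test = rng.normal(size=(500, 20))
agreement = (surrogate.predict(X_test) == victim_predict(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.2%}")
```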
Join us in CAB H 52 (Seminar) + CNB/F/110 (Lunch).