Developments in Adversarial Machine Learning

Thu, 19 Sep 2019

Florian Tramèr, Stanford University

From 12.00 until 13.30

At CNB/F/110 (Lunch) + CAB/F/100.9 (Seminar), ETH Zurich

Universitätstrasse 6, 8092 Zurich

Abstract:

The past five years have seen thousands of academic papers devoted to the study of adversarial examples in machine learning. Yet, despite countless proposed defenses, robustness remains elusive even for the simplest toy threat models. I’ll discuss recent work that shows how defenses degrade when extended to multiple perturbation types, and how models can also be trained to be “too robust” on simple datasets. On the positive side, I’ll describe new ideas for protecting neural networks against special types of localized and universal perturbations that enable adversarial examples to be applied in the physical world.
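For readers unfamiliar with the adversarial examples mentioned in the abstract: the classic construction is the fast gradient sign method (FGSM), which nudges each input coordinate by a small budget eps in the direction that increases the model's loss. The sketch below is purely illustrative (a toy logistic-regression model with made-up weights, not anything from the talk):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a toy logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One FGSM step: move each coordinate of x by eps in the sign of the
    loss gradient. For logistic regression with cross-entropy loss, the
    gradient of the loss w.r.t. x is (p - y) * w."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for wi, xi in zip(w, x)]

# Illustrative weights and input (hypothetical values)
w, b = [1.0, -2.0, 0.5], 0.1
x, y = [0.2, 0.1, -0.3], 1.0  # true label is class 1

x_adv = fgsm(w, b, x, y, eps=0.25)
# The perturbed input lowers the model's confidence in the true class:
print(predict(w, b, x), predict(w, b, x_adv))
```

Even this tiny example shows the core phenomenon the talk addresses: a perturbation bounded coordinate-wise by eps is enough to degrade the model's prediction, and defending against all such perturbation types at once is what remains hard.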
