Sebastian Meiser, University College London
From 12.00 until 13.30
At CNB/F/110 (Lunch) + CNB/F/100.9 (Seminar), ETH Zurich
Universitätstrasse 6, 8092 Zurich
Abstract:
We have a clear understanding of how to handle information security in cryptographic settings where adversaries (provably) have only a negligible chance of success: the definitions are solid and all is well.
However, in many cases relevant to privacy, we cannot achieve such strong notions without paying outrageous costs: utility approaching zero, vast communication overhead, severe restrictions on the mechanisms, or even hard impossibility results. Weaker privacy notions have therefore emerged, and they inherently struggle with defining and confining a non-negligible adversarial advantage. The prominent privacy metrics used today (most of which are called X-differential privacy for various X) leverage a form of deniability to derive, discuss, and guarantee (more or less) meaningful bounds on what an adversary can and cannot do, even when the adversarial advantage is non-negligible.
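For concreteness, the most widely used instantiation of such a bound is (ε, δ)-differential privacy: a randomized mechanism M satisfies it if, for all neighbouring inputs D and D' (differing in a single record) and all sets S of possible outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
```

Small ε and δ mean that an adversary observing the output can barely tell D from D', which is exactly the kind of deniability guarantee the abstract refers to, even though the adversarial advantage is non-negligible.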
In this talk, we examine differential privacy notions and discuss a variety of cases where definitions based on deniability are useful, if not necessary, including the classical example of statistical queries on databases, cryptographic secrecy under imperfect randomness, anonymous communication, and side-channel leakage of systems. Moreover, we analyze the deterioration of privacy guarantees under continual observation and finally ask the (open) question of how we should quantify privacy.
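A simple illustration of this deterioration is the standard sequential composition result: if each of k released outputs is individually ε-differentially private, their combination is in general only guaranteed to be kε-differentially private, so the privacy loss grows linearly with the number of observations,

```latex
M_1, \dots, M_k \text{ each } \varepsilon\text{-DP}
\;\Longrightarrow\;
(M_1, \dots, M_k) \text{ is } k\varepsilon\text{-DP}.
```

Under continual observation the number of releases keeps growing, which is why the guarantees deteriorate and why tighter accounting of the privacy budget becomes a central question.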