Human-Centred Privacy in Machine Learning

Abstract: Privacy-preserving machine learning has the potential to balance individuals’ privacy rights and companies’ economic goals. However, such technologies must inspire trust and communicate how they can match the expectations of data subjects. In this talk, I present the breadth of privacy vectors for machine learning and the implications of my work on user perspectives of the…

Verifiable Fully Homomorphic Encryption

Abstract: Fully Homomorphic Encryption (FHE) is seeing increasing real-world deployment to protect data in use by allowing computation over encrypted data. However, the same malleability that enables homomorphic computations also raises integrity issues, which have so far been mostly overlooked for practical deployments. While FHE’s lack of integrity has obvious implications for correctness, it also has…
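To make the "computation over encrypted data" idea concrete, here is a minimal sketch using textbook Paillier encryption with toy parameters. Paillier is only additively homomorphic (not fully homomorphic) and the parameters below are deliberately insecure, so this is an illustration of the general principle, not the verifiable-FHE constructions discussed in the talk.

```python
# Illustrative sketch only: textbook Paillier with tiny, insecure parameters.
# It shows the core idea of computing on ciphertexts, and why that same
# malleability leaves integrity unprotected.
import math
import random

p, q = 293, 433                # toy primes; real deployments use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda(n)
mu = pow(lam, -1, n)           # decryption constant (valid for g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def hom_add(c1: int, c2: int) -> int:
    # Multiplying ciphertexts adds the underlying plaintexts.
    return (c1 * c2) % n2

c = hom_add(encrypt(20), encrypt(22))
assert decrypt(c) == 42                  # useful: computation over encrypted data
forged = hom_add(c, encrypt(1000))       # but anyone can shift the encrypted result...
assert decrypt(forged) == 1042           # ...so correctness needs a separate integrity check
```

The last two lines show the integrity gap the abstract refers to: the very operation that makes outsourced computation possible also lets a third party alter the encrypted result undetected.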

Privacy in Machine Learning

Abstract: The quantification of privacy risks associated with algorithms is a core issue in data privacy, which holds immense significance for privacy experts, practitioners, and regulators. I will introduce a systematic approach to assessing the privacy risks of machine learning algorithms. I will highlight our efforts towards establishing standardized privacy auditing procedures and Privacy Meter…
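As a deliberately simplified illustration of quantifying privacy risk, the sketch below runs a loss-threshold membership inference test against an overfit classifier. This threshold test is a standard baseline from the literature, not the standardized auditing procedures or tooling presented in the talk.

```python
# Toy privacy audit: can an attacker tell training members from non-members
# by looking at the model's per-example loss?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def per_example_loss(model, X, y):
    # Cross-entropy of the predicted probability assigned to the true label.
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

# Members (training points) vs. non-members (held-out points).
losses = np.concatenate([per_example_loss(model, X_train, y_train),
                         per_example_loss(model, X_test, y_test)])
is_member = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_test))])

# AUC of "low loss => member": 0.5 means no measurable leakage, higher means more risk.
print(f"membership-inference AUC: {roc_auc_score(is_member, -losses):.3f}")
```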

Confidential Computing for Next-Gen Data Centers

Abstract: Modern data centers have grown beyond CPU nodes to provide domain-specific accelerators such as GPUs and FPGAs to their customers. Customers are concerned about protecting their data and are willing to accept some performance degradation in exchange for trusted execution environments (TEEs) such as Intel SGX or AMD SEV. However, they face a trade-off between using accelerators…

Proving Information Flow Security for Concurrent Programs

Abstract: (Program) verification is the process of proving that a program satisfies some properties by using mathematical techniques and formal reasoning, rather than relying on testing the program with inputs. Program verification is typically used to prove functional correctness properties (e.g., proving that a sorting algorithm does not crash and correctly sorts inputs), but it…
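For readers unfamiliar with the terminology, the sketch below states the functional-correctness property of sorting (the example the abstract uses) as an executable predicate. A verifier would prove this postcondition for every possible input; the loop at the end merely checks it on random samples, which is exactly the testing that the abstract contrasts with verification.

```python
# Functional-correctness specification of sorting, written as a checkable predicate.
import random
from collections import Counter

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def is_permutation(xs, ys):
    # Same multiset of elements.
    return Counter(xs) == Counter(ys)

def sorting_postcondition(inp, out):
    # The property a verified sort must satisfy for ALL inputs `inp`:
    # the output is ordered and contains exactly the input's elements.
    return is_sorted(out) and is_permutation(inp, out)

# Testing, not verification: spot-check the property on random inputs.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert sorting_postcondition(xs, sorted(xs))
```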

It’s TEEtime: A New Architecture that Brings Sovereignty to Smartphones

Abstract: Modern smartphones are complex systems in which control over phone resources is exercised by phone manufacturers, operators, OS vendors, and users. These stakeholders have diverse and often competing interests. Barring some exceptions, users, including developers, entrust their security and privacy to OS vendors (Android and iOS) and need to accept the constraints they impose.

Designing a Provenance Analysis for SGX Enclaves

Abstract: SGX enclaves are trusted user-space memory regions that ensure isolation from the host, which is considered malicious. However, enclaves may suffer from vulnerabilities that allow adversaries to compromise their trustworthiness. Consequently, the SGX isolation may hinder defenders from recognizing an intrusion. Ideally, to identify compromised enclaves, the owner should have privileged access to the…

Poisoning Web-Scale Training Datasets is Practical

Abstract: Deep learning models are often trained on distributed, web-scale datasets crawled from the internet. We introduce two new dataset poisoning attacks that intentionally introduce malicious examples to degrade a model’s performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. We will discuss how the attacks work; why (we think) these…
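The attacks themselves are not reproduced here, but the toy sketch below shows the basic mechanism the abstract refers to: injecting mislabeled examples into a training set and measuring the drop in test accuracy. It is a generic label-flipping demonstration under assumed toy data, not the web-scale attacks from the talk.

```python
# Toy illustration of data poisoning: flipping labels on a fraction of
# training examples degrades the trained model's test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

def accuracy_with_poison(poison_fraction: float) -> float:
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip labels of poisoned examples
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned {frac:4.0%} of training labels -> test accuracy {accuracy_with_poison(frac):.3f}")
```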