Proving Information Flow Security for Concurrent Programs

Abstract: (Program) verification is the process of proving that a program satisfies certain properties using mathematical techniques and formal reasoning, rather than by testing the program on inputs. Program verification is typically used to prove functional correctness properties (e.g., proving that a sorting algorithm does not crash and correctly sorts its inputs), but it can also be used to prove security properties, such as information flow security for concurrent programs.
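
As background for the property in the title: information flow security is commonly formalized as noninterference. A minimal sketch of the definition, in our notation rather than necessarily the speaker's:

\[
\forall s_1, s_2, s_1', s_2'.\;\; s_1 =_L s_2 \,\wedge\, \langle P, s_1 \rangle \Downarrow s_1' \,\wedge\, \langle P, s_2 \rangle \Downarrow s_2' \;\Longrightarrow\; s_1' =_L s_2'
\]

Here \(s =_L s'\) means the two states agree on all public (low) variables, so the program \(P\) must not let secret (high) inputs influence public outputs. For concurrent programs the quantification additionally ranges over scheduler interleavings, which is a large part of what makes such proofs hard.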

It’s TEEtime: A New Architecture that Brings Sovereignty to Smartphones

Abstract: Modern smartphones are complex systems in which control over phone resources is exercised by phone manufacturers, operators, OS vendors, and users. These stakeholders have diverse and often competing interests. Barring some exceptions, users, including developers, entrust their security and privacy to OS vendors (Android and iOS) and need to accept the constraints they impose.

Designing a Provenance Analysis for SGX Enclaves

Abstract: SGX enclaves are trusted user-space memory regions that ensure isolation from the host, which is considered malicious. However, enclaves may suffer from vulnerabilities that allow adversaries to compromise their trustworthiness. Consequently, the SGX isolation may hinder defenders from recognizing an intrusion. Ideally, to identify compromised enclaves, the owner should have privileged access to the enclave's internal state, which is exactly the access that SGX isolation is designed to deny.

Poisoning Web-Scale Training Datasets is Practical

Abstract: Deep learning models are often trained on distributed, web-scale datasets crawled from the internet. We introduce two new dataset poisoning attacks that inject malicious examples to degrade a model's performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. We will discuss how the attacks work; why (we think) these attacks have not yet been exploited in the wild; and possible defenses.
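
The abstract stops short of describing the attack mechanics. As a deliberately generic sketch of the threat model (not the two attacks from this work), flipping the labels of even a small fraction of a training set already degrades the resulting model; all names below are our own illustration:

import random

def poison_labels(dataset, num_classes, fraction=0.01, seed=0):
    # Hypothetical illustration of dataset poisoning: corrupt the labels
    # of a small random fraction of (example, label) pairs so that a
    # model trained on the result performs worse.
    rng = random.Random(seed)
    poisoned = list(dataset)
    for i in rng.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        x, y = poisoned[i]
        # Replace the true label with a different, random class.
        poisoned[i] = (x, rng.choice([c for c in range(num_classes) if c != y]))
    return poisoned

A web-scale attacker does not edit the dataset in place, of course; as we read the abstract, the point of the work is that the crawled sources themselves can be manipulated, which this sketch abstracts away.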

Security and Robustness of Collaborative Learning Systems

Abstract: In recent years, secure collaborative machine learning paradigms have emerged as a viable option for sensitive applications. By eliminating the need to centralize data, these paradigms protect data sovereignty and reduce the risks associated with large-scale data collection. However, they also expose the learning process to active attackers, amplifying robustness issues. In this talk, I'll discuss the security and robustness of these systems, covering recent attacks and possible defenses.
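
As one concrete instance of the robustness issue above (our illustration, not necessarily the talk's): in federated averaging the server averages client updates, so a single malicious client can dominate the aggregate, whereas a coordinate-wise median, a standard Byzantine-robust aggregator, is far less sensitive:

import numpy as np

def aggregate_mean(updates):
    # Plain federated averaging: every client moves the result.
    return np.mean(updates, axis=0)

def aggregate_median(updates):
    # Coordinate-wise median: robust to a minority of malicious clients.
    return np.median(updates, axis=0)

# Nine honest clients send small updates; one attacker sends a huge one.
honest = [np.random.normal(0.0, 0.1, size=4) for _ in range(9)]
malicious = [np.full(4, 1e6)]
updates = np.stack(honest + malicious)

print(aggregate_mean(updates))    # dominated by the attacker (about 1e5 per coordinate)
print(aggregate_median(updates))  # stays near the honest updates

Robust aggregators trade some convergence speed for this resilience, and adaptive attackers can still evade them, which is part of why the area remains active.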

Exploiting RDMA Mistakes in NVMe-oF Storage Applications

Abstract: This work presents a security analysis of the InfiniBand architecture, a prevalent RDMA standard, and of NVMe-over-Fabrics (NVMe-oF), a prominent protocol for industrial disaggregated storage that employs RDMA to achieve low-latency, high-bandwidth access to remote solid-state devices. Our work, NeVerMore, uncovers new vulnerabilities in RDMA protocols that unveil several attack vectors on RDMA-enabled applications.

Measuring Privacy Leakage in Neural Networks

Abstract: Deep neural networks' ability to memorize parts of their training data is a privacy concern for models trained on user data. In this talk, I'll describe recent work on using membership inference attacks to quantify this leakage in the worst case, and to audit empirical defenses.
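
A minimal sketch of the simplest membership inference attack, a loss threshold (our assumption; the talk may well cover stronger attacks): training examples tend to incur lower loss than unseen examples, so thresholding per-example loss already separates members from non-members, and the gap between true and false positive rates quantifies the leakage.

import numpy as np

def loss_threshold_mia(losses, threshold):
    # Predict "member of the training set" when the target model's loss
    # on the example falls below the threshold.
    return losses < threshold

# Hypothetical per-example losses for members vs. non-members.
member_losses = np.random.exponential(scale=0.1, size=1000)
nonmember_losses = np.random.exponential(scale=1.0, size=1000)

tpr = loss_threshold_mia(member_losses, threshold=0.3).mean()
fpr = loss_threshold_mia(nonmember_losses, threshold=0.3).mean()
print(f"TPR={tpr:.2f} FPR={fpr:.2f}")  # a large gap means measurable leakage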