Poisoning Web-Scale Training Datasets is Practical

Abstract: Deep learning models are often trained on distributed, web-scale datasets crawled from the internet. We introduce two new dataset poisoning attacks that intentionally introduce malicious examples to degrade a model’s performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. We will discuss how the attacks work; why (we think) these

Security and Robustness of Collaborative Learning Systems

Abstract: In recent years, secure collaborative machine learning paradigms have emerged as a viable option for sensitive applications. By eliminating the need to centralize data, these paradigms protect data sovereignty and reduce risks associated with large-scale data collection. However, they also expose the learning process to active attackers, amplifying robustness issues. In this talk, I’ll

Exploiting RDMA Mistakes in NVMe-oF Storage Applications

Abstract: This work presents a security analysis of the InfiniBand architecture, a prevalent RDMA standard, and NVMe-over-Fabrics (NVMe-oF), a prominent protocol for industrial disaggregated storage that employs RDMA protocols to achieve low-latency and high-bandwidth access to remote solid-state devices. Our work, NeVerMore, discovers new vulnerabilities in RDMA protocols that unveil several attack vectors on RDMA-enabled applications

Measuring privacy leakage in neural networks

Abstract: Deep neural networks’ ability to memorize parts of their training data is a privacy concern for models trained on user data. In this talk, I’ll describe recent work on using membership inference attacks to quantify this leakage in the worst case, and to audit empirical defenses. Join us in CNB/F/110 (Lunch) + CAB G 52 (Seminar).
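To illustrate the idea behind membership inference, here is a minimal sketch of a standard loss-threshold attack (not necessarily the specific method covered in the talk): because models tend to achieve lower loss on memorized training examples, guessing "member" whenever an example's loss falls below a threshold already leaks membership. The loss values below are synthetic stand-ins, not real model outputs.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership: examples whose loss is below the
    threshold are guessed to be training-set members."""
    return losses < threshold

# Hypothetical per-example losses: members tend to have lower
# loss because the model partially memorized them.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.1, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

preds_members = loss_threshold_attack(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_attack(nonmember_losses, threshold=0.5)

tpr = preds_members.mean()      # true-positive rate on members
fpr = preds_nonmembers.mean()   # false-positive rate on non-members
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```

A gap between the attack's true-positive and false-positive rates quantifies leakage; auditing a defense amounts to checking how small that gap can be made, especially in the worst case.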

Zero Trust in Zero Trust?

Abstract: We review the basic notions of trust, trust minimization, zero trust, and trust establishment. We show that zero trust is impossible in any enterprise network and has meaning only as an unreachable limit of trust establishment. Hence, trust establishment — not the zero trust “buzzword” — can be a foundation of network security. We also review the key

A Flash(bot) In The Pan: Measuring MEV in Private Pools

Abstract: The rise of Ethereum has led to a flourishing decentralized marketplace that has, unfortunately, fallen victim to frontrunning and Maximal Extractable Value (MEV) activities, where savvy participants game transaction orderings within a block for profit. One popular solution to address such behavior is Flashbots, a private pool with infrastructure and design goals aimed at eliminating the negative externalities associated with MEV.