Security and Robustness of Collaborative Learning Systems

Abstract: In recent years, secure collaborative machine learning paradigms have emerged as a viable option for sensitive applications. By eliminating the need to centralize data, these paradigms protect data sovereignty and reduce risks associated with large-scale data collection. However, they also expose the learning process to active attackers, amplifying robustness issues. In this talk, I’ll
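To make the robustness concern concrete, here is a minimal sketch (not the speaker's method) of why a single active attacker matters in federated averaging: a plain mean of client updates can be dragged arbitrarily far by one malicious update, whereas a coordinate-wise median, one robust aggregation rule among many, stays close to the honest updates. All values below (client count, update sizes, the attacker's payload) are hypothetical.

    import numpy as np

    def aggregate_mean(updates):
        """Plain federated averaging: coordinate-wise mean of client updates."""
        return np.mean(updates, axis=0)

    def aggregate_median(updates):
        """Coordinate-wise median: one robust aggregation rule among many."""
        return np.median(updates, axis=0)

    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=5) for _ in range(9)]  # benign updates near zero
    malicious = [np.full(5, 100.0)]                            # one attacker sends a huge update
    updates = np.stack(honest + malicious)

    print("mean  :", aggregate_mean(updates))    # pulled far away from the honest consensus
    print("median:", aggregate_median(updates))  # stays close to the honest updates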

Exploiting RDMA Mistakes in NVMe-oF Storage Applications

Abstract: This work presents a security analysis of the InfiniBand architecture, a prevalent RDMA standard, and NVMe-over-Fabrics (NVMe-oF), a prominent protocol for industrial disaggregated storage that employs RDMA protocols to achieve low-latency and high-bandwidth access to remote solid-state devices. Our work, NeVerMore, discovers new vulnerabilities in RDMA protocols that unveil several attack vectors on RDMA-enabled applications.

Measuring privacy leakage in neural networks

Abstract: Deep neural networks’ ability to memorize parts of their training data is a privacy concern for models trained on user data. In this talk, I’ll describe recent work on using membership inference attacks to quantify this leakage in the worst case, and to audit empirical defenses. Join us in CNB/F/110 (Lunch) + CAB G 52 (Seminar).
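As a rough, hypothetical illustration of the kind of measurement the talk is about, a simple loss-threshold membership inference attack predicts "member" whenever a model's per-example loss falls below a threshold; the attack's advantage over random guessing (true-positive rate minus false-positive rate) then serves as a leakage estimate. The synthetic loss distributions and threshold sweep below are assumptions for illustration, not the speaker's attack.

    import numpy as np

    def membership_advantage(member_losses, nonmember_losses, threshold):
        """Loss-threshold attack: predict 'member' if loss < threshold.
        Advantage = true-positive rate - false-positive rate (0 = no measurable leakage)."""
        tpr = np.mean(member_losses < threshold)
        fpr = np.mean(nonmember_losses < threshold)
        return tpr - fpr

    rng = np.random.default_rng(0)
    # Synthetic per-example losses: memorizing models tend to score lower loss on training data.
    member_losses = rng.gamma(shape=2.0, scale=0.2, size=10_000)     # training set
    nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=10_000)  # held-out set

    # Sweep thresholds and report the largest advantage as a crude leakage estimate.
    thresholds = np.linspace(0.0, 3.0, 300)
    adv = max(membership_advantage(member_losses, nonmember_losses, t) for t in thresholds)
    print(f"estimated membership advantage: {adv:.2f}")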

Zero Trust in Zero Trust?

Abstract: We review the basic notions of trust, trust minimization, zero trust, and trust establishment. We show that zero trust is impossible in any enterprise network and has meaning only as an unreachable limit of trust establishment. Hence, trust establishment, rather than the zero trust “buzzword”, can be a foundation of network security. We also review the key

A Flash(bot) In The Pan: Measuring MEV in Private Pools

Abstract: The rise of Ethereum has led to a flourishing decentralized marketplace that has, unfortunately, fallen victim to frontrunning and Maximal Extractable Value (MEV) activities, where savvy participants game transaction orderings within a block for profit. One popular solution to address such behavior is Flashbots, a private pool with infrastructure and design goals aimed at eliminating the negative externalities associated with MEV.
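For readers unfamiliar with how transaction ordering is “gamed”, here is a toy sketch of a sandwich, one common MEV strategy, on a constant-product AMM: an attacker who can place transactions around a victim's swap buys first (pushing the price up), lets the victim trade at the worse price, and then sells back. The pool reserves, trade sizes, and zero-fee model below are all hypothetical.

    # Toy constant-product AMM (x * y = k), no fees, to illustrate a sandwich attack.
    class Pool:
        def __init__(self, eth, tok):
            self.eth, self.tok = eth, tok

        def buy_tok(self, eth_in):
            """Swap ETH for tokens; returns tokens received."""
            k = self.eth * self.tok
            self.eth += eth_in
            out = self.tok - k / self.eth
            self.tok -= out
            return out

        def sell_tok(self, tok_in):
            """Swap tokens for ETH; returns ETH received."""
            k = self.eth * self.tok
            self.tok += tok_in
            out = self.eth - k / self.tok
            self.eth -= out
            return out

    pool = Pool(eth=1_000.0, tok=1_000_000.0)

    attacker_tokens = pool.buy_tok(50.0)           # 1) attacker frontruns: buys first, pushing the price up
    victim_tokens = pool.buy_tok(100.0)            # 2) victim's swap executes at the worse price
    attacker_eth = pool.sell_tok(attacker_tokens)  # 3) attacker backruns: sells into the inflated price

    print(f"attacker profit: {attacker_eth - 50.0:.2f} ETH")
    print(f"victim received: {victim_tokens:.0f} tokens (fewer than without the sandwich)")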

Code-Level Protocol Verification

Abstract: Recent implementation bugs, such as Heartbleed or those in the Matrix chat application, demonstrate that formally verifying security properties for protocol models is an important first step, but not enough to also guarantee those properties for the implementations. We present a bottom-up verification approach to prove trace-based security properties directly at the level of existing implementations.