Confidential Computing in Practice 2023

Abstract: Since the introduction of Intel SGX nearly a decade ago, the Confidential Computing industry has been growing significantly. In this talk, we’ll look at how secure architectures are being deployed and used today: what underlying primitives are available from hardware vendors, what infrastructure providers are making available, and how to write software using all
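
The primitive common to SGX, SEV-SNP, and related architectures is remote attestation: the hardware signs a measurement of the loaded code so a remote party can decide whether to release secrets to it. The following is a schematic Python model of that flow, not a real vendor API; the Quote layout, helper names, and the HMAC standing in for the vendor-rooted signing chain are all illustrative assumptions.

    # Schematic model of TEE remote attestation -- NOT a real vendor API.
    # A hardware quote is modelled as (measurement, nonce, signature); the
    # HMAC key stands in for the vendor-rooted attestation signing chain.
    import hashlib
    import hmac
    import secrets
    from dataclasses import dataclass

    VENDOR_KEY = b"stand-in for the vendor attestation key chain"

    @dataclass
    class Quote:
        measurement: bytes   # hash of the code loaded into the enclave/VM
        nonce: bytes         # verifier-chosen freshness value
        signature: bytes     # hardware signature over (measurement, nonce)

    def enclave_attest(code: bytes, nonce: bytes) -> Quote:
        """What the TEE hardware does: measure the code, sign the result."""
        measurement = hashlib.sha256(code).digest()
        sig = hmac.new(VENDOR_KEY, measurement + nonce, hashlib.sha256).digest()
        return Quote(measurement, nonce, sig)

    def verifier_check(quote: Quote, expected_code: bytes, nonce: bytes) -> bool:
        """What the relying party does before releasing secrets."""
        expected_sig = hmac.new(VENDOR_KEY, quote.measurement + nonce,
                                hashlib.sha256).digest()
        return (hmac.compare_digest(quote.signature, expected_sig)
                and quote.nonce == nonce
                and quote.measurement == hashlib.sha256(expected_code).digest())

    code = b"my confidential workload"
    nonce = secrets.token_bytes(16)
    quote = enclave_attest(code, nonce)
    assert verifier_check(quote, code, nonce)             # matches: release secrets
    assert not verifier_check(quote, b"tampered", nonce)  # mismatch: refuse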

CVE-2022-23491, or Why PO boxes can’t be root certificate authorities anymore

Abstract: Mozilla curates a set of root certificate authorities to validate hostnames for TLS in the Firefox browser. Many other software projects, such as Tor Browser and ca-certificates, simply follow Mozilla’s list; other entities, such as Apple and Microsoft, make their own inclusion decisions, informed by Mozilla’s decisions and the associated public discussion. In March 2023, Mozilla
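
CVE-2022-23491 itself tracked the removal of TrustCor’s roots from certifi, the Python package that repackages the Mozilla-derived bundle. A minimal sketch of how a client validates TLS against that curated list (assumes the third-party certifi package is installed; the target host is arbitrary):

    # TLS hostname validation against a curated root bundle.
    # Assumes the third-party `certifi` package (the bundle that
    # CVE-2022-23491 tracked) is installed: pip install certifi
    import socket
    import ssl
    import certifi

    # Build a context that trusts only the curated Mozilla-derived bundle.
    ctx = ssl.create_default_context(cafile=certifi.where())

    with socket.create_connection(("www.mozilla.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.mozilla.org") as tls:
            cert = tls.getpeercert()
            print("issuer:", dict(x[0] for x in cert["issuer"]))

    # When a CA is pulled from the bundle (as TrustCor was), handshakes to
    # sites chaining to it start failing with ssl.SSLCertVerificationError.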

Human-Centred Privacy in Machine Learning

Abstract: Privacy-preserving machine learning has the potential to balance individuals’ privacy rights and companies’ economic goals. However, such technologies must inspire trust and communicate how they meet the expectations of data subjects. In this talk, I present the breadth of privacy vectors for machine learning and the implications of my work on user perspectives of the
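
One standard building block in this space is differential privacy. The sketch below shows the Laplace mechanism on a counting query; the data, threshold, and epsilon are illustrative choices, not drawn from the talk.

    # Laplace mechanism: a standard differential-privacy building block.
    # The query, sensitivity, and epsilon below are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def dp_count(values, threshold, epsilon):
        """Release a count with epsilon-differential privacy.

        A counting query changes by at most 1 when one person's record
        is added or removed, so its L1 sensitivity is 1.
        """
        true_count = sum(v > threshold for v in values)
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    ages = [23, 35, 47, 52, 61, 29, 44]
    print(dp_count(ages, threshold=40, epsilon=0.5))  # noisy count of people over 40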

Verifiable Fully Homomorphic Encryption

Abstract: Fully Homomorphic Encryption (FHE) is seeing increasing real-world deployment to protect data in use by allowing computation over encrypted data. However, the same malleability that enables homomorphic computations also raises integrity issues, which have so far been mostly overlooked for practical deployments. While FHE’s lack of integrity has obvious implications for correctness, it also has
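
The malleability problem is easy to see with any homomorphic scheme. The toy example below uses Paillier, which is only additively (not fully) homomorphic and uses insecurely small parameters, but it is enough to show the point: anyone holding ciphertexts can alter the encrypted result, with no key and no trace.

    # Toy Paillier encryption (additively homomorphic; parameters far too
    # small to be secure). Illustrates malleability: ciphertexts can be
    # combined and re-randomized by anyone, without the secret key.
    import math
    import random

    p, q = 293, 433                  # toy primes
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def enc(m):
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:
                break
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = enc(20), enc(22)

    # Homomorphic addition: the point of computing over encrypted data...
    assert dec((c1 * c2) % n2) == 42

    # ...but also the integrity problem: a third party can add 100 to the
    # encrypted value without the secret key, and decryption cannot tell.
    tampered = (c1 * enc(100)) % n2
    assert dec(tampered) == 120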

Privacy in Machine Learning

Abstract: The quantification of privacy risks associated with algorithms is a core issue in data privacy, which holds immense significance for privacy experts, practitioners, and regulators. I will introduce a systematic approach to assessing the privacy risks of machine learning algorithms. I will highlight our efforts towards establishing standardized privacy auditing procedures and Privacy Meter
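
The basic audit that tools like Privacy Meter build on is membership inference: testing whether a model behaves differently on its training data than on unseen data. A minimal loss-threshold version, assuming numpy and scikit-learn; the synthetic data, model, and threshold choice are illustrative.

    # Loss-threshold membership inference: the basic privacy audit.
    # Assumes numpy and scikit-learn; data and threshold are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic binary-classification data, split into members (the
    # training set) and non-members (held out).
    X = rng.normal(size=(400, 20))
    y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
    X_mem, y_mem = X[:200], y[:200]
    X_non, y_non = X[200:], y[200:]

    model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

    def per_example_loss(model, X, y):
        """Cross-entropy loss of each example under the trained model."""
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, None))

    loss_mem = per_example_loss(model, X_mem, y_mem)
    loss_non = per_example_loss(model, X_non, y_non)

    # Attack: guess "member" when the loss is below a threshold. Accuracy
    # above 50% means the model leaks membership information.
    tau = np.median(np.concatenate([loss_mem, loss_non]))
    acc = 0.5 * ((loss_mem < tau).mean() + (loss_non >= tau).mean())
    print(f"membership inference accuracy: {acc:.2f}")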

Confidential Computing for Next-Gen Data Centers

Abstract: Modern data centers have grown beyond CPU nodes to provide domain-specific accelerators such as GPUs and FPGAs to their customers. Customers are concerned about protecting their data and are willing to accept some performance degradation in exchange for trusted execution environments (TEEs) like Intel SGX or AMD SEV. However, they face a trade-off between using accelerators

Proving Information Flow Security for Concurrent Programs

Abstract: (Program) verification is the process of proving that a program satisfies some properties by using mathematical techniques and formal reasoning, rather than relying on testing the program with inputs. Program verification is typically used to prove functional correctness properties (e.g., proving that a sorting algorithm does not crash and correctly sorts inputs), but it
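
The canonical information-flow property is noninterference: public ("low") outputs must not depend on secret ("high") inputs. The small illustration below shows an explicit flow, an implicit flow through a branch condition, and a secure variant; the PIN example and variable names are mine, and concurrency only adds further channels (e.g., through scheduling) that such proofs must cover.

    # Information-flow security in miniature: noninterference says public
    # ("low") outputs must not depend on secret ("high") inputs.
    # The PIN example and variable names are illustrative.

    def explicit_flow(secret_pin: int) -> int:
        low = secret_pin            # direct assignment: obvious leak
        return low

    def implicit_flow(secret_pin: int) -> int:
        # No direct assignment, yet the public result still reveals one
        # bit of the secret through the branch condition.
        low = 0
        if secret_pin % 2 == 1:
            low = 1
        return low

    def secure(secret_pin: int) -> int:
        # Public output is independent of the secret: fine under
        # noninterference.
        return 0

    # Two runs differing only in the secret must give identical public
    # outputs; implicit_flow fails that test even without copying.
    assert implicit_flow(1234) != implicit_flow(1235)   # leaks
    assert secure(1234) == secure(1235)                 # doesn't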

It’s TEEtime: A New Architecture that Brings Sovereignty to Smartphones

Abstract: Modern smartphones are complex systems in which control over phone resources is exercised by phone manufacturers, operators, OS vendors, and users. These stakeholders have diverse and often competing interests. Barring some exceptions, users, including developers, entrust their security and privacy to OS vendors (Android and iOS) and need to accept the constraints they impose.

Designing a Provenance Analysis for SGX Enclaves

Abstract: SGX enclaves are trusted user-space memory regions that ensure isolation from the host, which is considered malicious. However, enclaves may suffer from vulnerabilities that allow adversaries to compromise their trustworthiness. Consequently, SGX isolation may hinder defenders from recognizing an intrusion. Ideally, to identify compromised enclaves, the owner should have privileged access to the