ACAI: Protecting Accelerator Execution with Arm Confidential Computing Architecture

Abstract: Trusted execution environments in several existing and upcoming CPUs demonstrate the success of confidential computing, with the caveat that tenants cannot securely use accelerators such as GPUs and FPGAs. In this paper, we reconsider the Arm Confidential Computing Architecture (CCA) design, an upcoming TEE feature in Armv9-A, to address this gap. We observe that

Chain-key Bitcoin

Abstract: Chain-key Bitcoin (ckBTC) is a Bitcoin “layer 2” built on top of the Internet Computer blockchain (IC). It enables both individuals and IC smart contracts to transact deposited bitcoin cheaply and with much higher throughput than the Bitcoin network. Crucially, ckBTC has no additional trust assumptions other than those

Confidential Computing in Practice 2023

Abstract: Since the introduction of Intel SGX nearly a decade ago, the Confidential Computing industry has been growing significantly. In this talk, we’ll look at how secure architectures are being deployed and used today: what underlying primitives are available from hardware vendors, what infrastructure providers are making available, and how to write software using all

CVE-2022-23491, or Why PO boxes can’t be root certificate authorities anymore

Abstract: Mozilla curates a set of root certificate authorities to validate hostnames for TLS in the Firefox browser. Many other software projects, such as Tor Browser and ca-certificates, simply follow Mozilla’s list; other entities, such as Apple and Microsoft, make their own inclusion decisions while taking Mozilla’s decisions and the associated public discussion into account. In March 2023, Mozilla

Human-Centred Privacy in Machine Learning

Abstract: Privacy-preserving machine learning has the potential to balance individuals’ privacy rights and companies’ economic goals. However, such technologies must inspire trust and communicate how they meet the expectations of data subjects. In this talk, I present the breadth of privacy vectors for machine learning and the implications of my work on user perspectives of the