How to Crack a Train

Abstract: You’ve probably already heard the story: we got contracted to analyze a bunch of trains that were breaking down after being serviced by third-party workshops. We reverse engineered them and found code that simulated failures when it detected servicing attempts. We presented our findings at 37C3 and an update of the story at 38C3.

Trust and Authentication in Satellite Systems: Past, Present, and Future

Abstract: The security landscape of satellite systems has undergone a significant transformation in recent decades. With thousands of satellites launched annually, space-based infrastructure has become increasingly critical for communication, observation, and scientific measurement. However, this growth has also been accompanied by a lower barrier to entry for attacks, driven by the widespread availability of attack tooling.

Secure runtime auditing in remote embedded/IoT devices

Abstract: Embedded and IoT devices are becoming increasingly widespread, often supporting safety-critical operations. However, due to cost and energy constraints, these devices typically lack the advanced security features of more powerful systems, making them vulnerable to software-based attacks. To address this, Control Flow Attestation (CFA) has been proposed as a cost-effective method to detect control-flow attacks.
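
The core idea behind CFA can be illustrated in a few lines: the prover accumulates a measurement over the control-flow transfers a program actually takes, authenticates it, and a remote verifier compares the result against known-good paths. The sketch below is illustrative only, not a scheme from the talk; the trace format, key handling, and names (measure_path, attest, verify) are assumptions.

```python
# Minimal sketch of the core idea behind Control Flow Attestation (CFA):
# the prover hashes the control-flow transfers a program takes, then
# authenticates the result so a remote verifier can compare it against the
# measurement of a known-good path. Names and the trace format are
# illustrative, not taken from any specific CFA scheme.
import hashlib
import hmac

SHARED_KEY = b"device-specific attestation key"  # provisioned out of band

def measure_path(transfers):
    """Hash a sequence of (source, destination) control-flow transfers."""
    h = hashlib.sha256()
    for src, dst in transfers:
        h.update(src.to_bytes(8, "little"))
        h.update(dst.to_bytes(8, "little"))
    return h.digest()

def attest(transfers, nonce):
    """Prover: bind the path measurement to the verifier's fresh nonce."""
    measurement = measure_path(transfers)
    tag = hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()
    return measurement, tag

def verify(measurement, tag, nonce, expected_paths):
    """Verifier: check authenticity, then compare against known-good paths."""
    expected = hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and measurement in expected_paths

# Example: a benign run vs. a hijacked run that diverts to 0xdeadbeef.
nonce = b"fresh-verifier-nonce"
benign = [(0x1000, 0x1040), (0x1040, 0x1100)]
hijacked = [(0x1000, 0x1040), (0x1040, 0xdeadbeef)]
known_good = {measure_path(benign)}

m, t = attest(hijacked, nonce)
print(verify(m, t, nonce, known_good))  # False: control flow deviated
```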

How to Authenticate Keys for Secure Messaging

Abstract: Modern messaging applications such as iMessage, Signal, and WhatsApp encrypt their users’ messages using cryptography that provides strong security guarantees. All these security guarantees are void, however, when inauthentic keys are used. For years, the only option to authenticate your peers’ keys was to compare safety numbers in person, which was rarely done in practice.
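
The principle behind safety numbers is that both peers can derive the same short fingerprint from the pair of identity public keys and compare it out of band; if the strings match, no attacker substituted a key in transit. The sketch below is a simplified illustration; the exact encoding is an assumption and differs from what Signal or WhatsApp actually compute.

```python
# Illustrative sketch of key-fingerprint comparison ("safety numbers"):
# both peers derive a short, order-independent code from the two identity
# public keys. The encoding below is simplified; real apps use their own
# iterated-hash constructions, not this exact scheme.
import hashlib

def fingerprint(own_key: bytes, peer_key: bytes) -> str:
    """Derive an order-independent, human-comparable code from two keys."""
    first, second = sorted([own_key, peer_key])
    digest = hashlib.sha256(first + second).digest()
    # Render the first 30 digits as six 5-digit groups for easy reading.
    num = int.from_bytes(digest, "big")
    digits = f"{num:078d}"[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice_key = bytes.fromhex("aa" * 32)  # placeholder identity public keys
bob_key = bytes.fromhex("bb" * 32)

# Both sides compute the same string regardless of argument order:
print(fingerprint(alice_key, bob_key))
print(fingerprint(bob_key, alice_key))  # identical output
```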

Ethical, responsible, and safe requirements for AI

Abstract: In a world increasingly shaped by AI technologies, addressing their technical, ethical, and fairness challenges becomes imperative. This talk will address the importance of identifying ethical, responsible, and safe requirements for large language models, featuring published and ongoing research at the Precog Lab (https://precog.iiit.ac.in/). We’ll discuss representation steering to make interpretable changes to language model behavior.
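
As a rough illustration of what representation steering means in practice: a fixed direction vector is added to a hidden activation at inference time to shift the model's behavior in an interpretable way. The sketch below uses a toy PyTorch model and a random steering vector as placeholders; it is not the lab's method.

```python
# Minimal sketch of representation steering: add a fixed direction to a
# hidden-layer activation at inference time. The toy model and the random
# steering vector are placeholders for a real language model and a
# contrastively derived direction.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 16

# Stand-in for one block of a network's residual stream.
model = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 4))

# In practice a steering vector might be the difference of mean activations
# between two contrastive prompt sets; random here for brevity.
steer = torch.randn(hidden)

def add_steering(module, inputs, output, alpha=2.0):
    """Forward hook: nudge the intermediate representation along `steer`."""
    return output + alpha * steer

x = torch.randn(1, 8)
print("before:", model(x))

handle = model[1].register_forward_hook(add_steering)  # hook the activation
print("after: ", model(x))
handle.remove()
```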