Team Atlanta placed 1st in the DARPA AI Cyber Challenge (AIxCC), earning a $4M grand prize in the final round. In this talk, I will introduce the DARPA AIxCC competition and share the technical approaches that led to our victory:
specifically, how we augmented large language models (LLMs) with traditional software analysis techniques to automatically discover and repair security vulnerabilities in real-world, large-scale open-source projects.
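To make the idea of augmenting LLMs with traditional software analysis more concrete, the sketch below shows one generic pattern: feed a fuzzer-produced sanitizer report and the relevant source to an LLM, ask it to draft a patch, and validate the candidate patch by re-running the crashing input. This is an illustrative assumption, not Team Atlanta's actual pipeline; the repository layout and the helpers `query_llm`, `build_patch_prompt`, and `validate_patch` are hypothetical.

```python
"""Illustrative sketch (not the actual AIxCC system): pair a traditional
analysis signal -- an AddressSanitizer report triggered by fuzzing -- with an
LLM prompt that proposes a patch, then check the patch against the crash."""

import subprocess
from pathlib import Path


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call via whatever chat-completion client you use."""
    raise NotImplementedError("wire up your preferred LLM client here")


def build_patch_prompt(crash_report: str, source_snippet: str) -> str:
    # Ground the model in concrete analysis output instead of asking it to
    # find bugs unaided -- the hybrid idea mentioned in the abstract.
    return (
        "The following AddressSanitizer report was triggered by a fuzzer:\n"
        f"{crash_report}\n\n"
        "Relevant source code:\n"
        f"{source_snippet}\n\n"
        "Propose a minimal unified diff that fixes the root cause."
    )


def validate_patch(repo: Path, diff_text: str, crashing_input: Path) -> bool:
    """Apply the candidate diff, rebuild, and re-run the crashing input."""
    (repo / "candidate.diff").write_text(diff_text)
    if subprocess.run(["git", "apply", "candidate.diff"], cwd=repo).returncode != 0:
        return False
    if subprocess.run(["make"], cwd=repo).returncode != 0:
        return False
    # Accept the patch only if the previously crashing input no longer crashes.
    result = subprocess.run([str(repo / "target_binary"), str(crashing_input)])
    return result.returncode == 0


if __name__ == "__main__":
    repo = Path("vulnerable-project")      # hypothetical project checkout
    crash = Path("crash-inputs/poc-001")   # crashing input found by the fuzzer
    report = Path("asan-report.txt").read_text()
    snippet = Path("src/parser.c").read_text()[:4000]
    diff = query_llm(build_patch_prompt(report, snippet))
    print("patch validated" if validate_patch(repo, diff, crash) else "patch rejected")
```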
As artificial intelligence evolves from traditional machine learning to foundation models and agentic AI, society stands at a widening frontier of both opportunity and risk. This talk will examine how accelerating capabilities, emerging autonomy, and deepening societal integration have transformed AI safety and security from isolated technical issues into systemic and socio-economic priorities. It will discuss the expanding AI attack surface across data, models, and deployment pipelines, highlighting the risk of generative AI being misused by attackers to commit cyber offences. The talk will also cover defensive approaches to these AI risks, such as testing and evaluation, red-teaming, interpretability, and monitoring, which form the backbone of trusted AI operations. Looking ahead, it will discuss the risks arising from the rise of agentic AI, that is, autonomous systems capable of goal-directed behaviour and self-adaptation, and the safety and security challenges this shift poses.