CAI Talk - Guaranteed adversarially robust training of neural networks

Guaranteed adversarially robust training of neural networks

Abstract: Deep neural networks have achieved remarkable success across domains, yet their vulnerability to adversarial attacks remains a critical challenge. Even small, carefully crafted perturbations to input data can cause models to misclassify with high confidence, raising concerns about safety, reliability, and trust in AI systems.

This talk by Dr. Tanmoy and Dr. Raman Arora from Johns Hopkins Whiting School of Engineering explores the theory and practice of guaranteed adversarially robust training. The speakers will discuss how robust optimization frameworks and novel training algorithms can provide provable guarantees against adversarial perturbations, ensuring that models maintain performance even under worst-case scenarios.
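The abstract does not specify the speakers' algorithm, but the flavor of a provable guarantee can be illustrated with a minimal sketch: for a linear classifier under an L-infinity threat model, the worst-case margin over all perturbations of radius eps has a closed form, so training on that worst-case margin yields a certificate. All data, the radius `eps = 0.3`, and the learning rate here are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch (not the speakers' method): certifiably robust
# training of a linear classifier under L-infinity perturbations.
# For f(x) = w.x + b, the worst-case margin over ||delta||_inf <= eps
# has the closed form  y*(w.x + b) - eps*||w||_1,  so minimizing the
# loss of that margin gives a provable (not merely empirical) guarantee.

rng = np.random.default_rng(0)

# Toy two-cluster data in 2D, labels in {-1, +1} (hypothetical).
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.concatenate([np.ones(200), -np.ones(200)])

eps = 0.3   # assumed perturbation budget
lr = 0.1
w = np.zeros(2)
b = 0.0

for _ in range(500):
    # Worst-case margin under any ||delta||_inf <= eps (closed form).
    margin = y * (X @ w + b) - eps * np.abs(w).sum()
    # Gradient of the logistic loss evaluated at the worst-case margin.
    s = 1.0 / (1.0 + np.exp(margin))          # = sigmoid(-margin)
    grad_w = -(s * y) @ X / len(y) + eps * np.sign(w) * s.mean()
    grad_b = -(s * y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# Certified robust accuracy: points whose worst-case margin is still
# positive are guaranteed correct under ANY attack in the eps-ball.
certified = np.mean(y * (X @ w + b) - eps * np.abs(w).sum() > 0)
print(f"certified robust accuracy: {certified:.2f}")
```

For deep networks the inner maximization no longer has a closed form, which is exactly where the robust optimization frameworks discussed in the talk come in.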

Key themes include:

  • The mathematical foundations of adversarial robustness and its implications for machine learning.

  • Practical approaches to designing training pipelines that enforce robustness without sacrificing accuracy.

  • Challenges in scaling robust training methods to large datasets and complex architectures.

  • Applications in security-sensitive domains such as healthcare, finance, and autonomous systems.

By bridging theoretical insights with real-world implementations, the presentation will highlight how adversarially robust training can pave the way toward trustworthy AI. The discussion will emphasize the importance of provable guarantees, not just empirical defenses, in building systems that are resilient, reliable, and ready for deployment in critical environments.
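To make the contrast with empirical defenses concrete, the standard empirical approach runs an attack such as projected gradient descent (PGD) in the inner loop and hopes the attack found the true worst case. The following hypothetical sketch (a linear model with made-up weights, so the PGD result can be checked against the exact worst case) shows that inner maximization:

```python
import numpy as np

# Hypothetical sketch of the inner maximization behind empirical
# adversarial training: projected gradient descent (PGD) searching
# for a worst-case perturbation inside an L-infinity ball of radius eps.

def pgd_attack(w, b, x, y, eps, step, iters):
    """Maximize the logistic loss of a linear model over ||delta||_inf <= eps."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        margin = y * (w @ (x + delta) + b)
        # d(loss)/d(delta) for the logistic loss is -sigmoid(-margin)*y*w.
        grad = -1.0 / (1.0 + np.exp(margin)) * y * w
        delta = delta + step * np.sign(grad)   # signed gradient ascent step
        delta = np.clip(delta, -eps, eps)      # project back into the ball
    return delta

# Made-up model and input for illustration.
w, b = np.array([1.0, -2.0]), 0.5
x, y = np.array([0.8, -0.4]), 1.0
eps = 0.25

delta = pgd_attack(w, b, x, y, eps, step=0.1, iters=20)
clean_margin = y * (w @ x + b)
adv_margin = y * (w @ (x + delta) + b)
print(clean_margin, adv_margin)
```

On a linear model PGD recovers the exact worst case (the margin drops by eps times the L1 norm of w), but on a deep network it only lower-bounds the damage, which is why the talk's emphasis on provable guarantees matters.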