Aditi Raghunathan
Assistant Professor, Carnegie Mellon University

June 6, 2:00pm
Location: Santa Clara I

Panelist: Adversarial Machine Learning: Lessons Learned, Challenges, and Opportunities

As artificial intelligence (AI) continues to advance across a diverse range of applications, including computer vision, speech recognition, healthcare, and cybersecurity, adversarial machine learning (AdvML) is no longer just a research topic; it has become a growing concern in both the defense and commercial communities. Many real-world ML applications have not taken adversarial attacks into account during system design, leaving the underlying models extremely fragile in adversarial settings. Recent research has investigated the vulnerability of ML algorithms and a variety of defense mechanisms. The questions surrounding this space are more pressing than ever before: Can we make AI/ML more secure? How can we make a system robust to novel or potentially adversarial inputs? Can we use AdvML to help solve some of our industrial ML challenges? How can ML systems detect and adapt to changes in the environment over time? How can we improve the maintainability and interpretability of deployed models? These questions are essential to consider when designing systems for high-stakes applications. In this panel, we invite the IEEE community to join our experts in AdvML to discuss the lessons learned, challenges, and opportunities in building more reliable and practical ML models by leveraging ML security and adversarial machine learning.

Aditi Raghunathan is an Assistant Professor at Carnegie Mellon University. She is interested in building robust ML systems with guarantees for trustworthy real-world deployment. Previously, she was a postdoctoral researcher at Berkeley AI Research, and she received her PhD from Stanford University in 2021. Her research has been recognized by the Schmidt AI2050 Early Career Fellowship, the Arthur Samuel Best Thesis Award at Stanford, a Google PhD Fellowship in Machine Learning, and an Open Philanthropy AI Fellowship.