Alexey Kurakin
Staff Research Engineer, Brain Privacy & Security, Google Research

Societal Implications of AI

June 5, 10:15am
Location: Santa Clara I

Differential Privacy and Synthetic Data

Differential privacy (DP) is a mathematical framework that provides provable privacy guarantees for data analysis and statistical applications. It has become the gold standard for privacy protection in machine learning. Nevertheless, applying differential privacy in practice can be challenging due to its limitations. First, adding differential privacy to ML training is computationally expensive. Additionally, DP is usually associated with a degradation in model quality, known as loss of utility. In this talk we will discuss these challenges and possible ways to overcome them in practical applications. In particular, we will discuss how combining public and private data can help, and how synthetic data can be utilized in privacy-preserving applications.
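As a rough illustration of the core DP idea (a sketch for context, not material from the talk), the classic Laplace mechanism releases a statistic with noise calibrated to the query's sensitivity and the privacy budget epsilon; the function name and example values below are hypothetical:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), so a smaller
    epsilon (stronger privacy) means more noise and lower utility.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count. Counting queries have sensitivity 1,
# since adding or removing one person changes the count by at most 1.
rng = np.random.default_rng(0)
true_count = 1000
private_count = laplace_mechanism(true_count, sensitivity=1.0,
                                  epsilon=0.5, rng=rng)
```

The privacy/utility trade-off the abstract mentions is visible directly in the `scale = sensitivity / epsilon` line: tightening the privacy guarantee inflates the noise added to every released statistic.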

June 6, 2:00pm
Location: Santa Clara I

Panel Moderator: Adversarial Machine Learning: Lessons Learned, Challenges, and Opportunities

As artificial intelligence (AI) continues to advance in serving a diverse range of applications, including computer vision, speech recognition, healthcare, and cybersecurity, adversarial machine learning (AdvML) is no longer just a research topic: it has become a growing concern in the defense and commercial communities. Many real-world ML applications do not take adversarial attacks into account during system design, leaving their models extremely fragile in adversarial settings. Recent research has investigated the vulnerability of ML algorithms and various defense mechanisms. The questions surrounding this space are more pressing than ever: Can we make AI/ML more secure? How can we make a system robust to novel or potentially adversarial inputs? Can we use AdvML to help solve some of our industrial ML challenges? How can ML systems detect and adapt to changes in the environment over time? How can we improve the maintainability and interpretability of deployed models? These questions are essential to consider when designing systems for high-stakes applications. In this panel, we invite the IEEE community to join our experts in AdvML to discuss the lessons learned, challenges, and opportunities in building more reliable and practical ML models by leveraging ML security and adversarial machine learning.
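To make the fragility the panel discusses concrete, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression classifier: a small, gradient-aligned perturbation flips a confidently correct prediction. This is an illustrative example with made-up weights, not material from the panel itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    Moves x by eps in the sign of the loss gradient w.r.t. the input,
    which maximally increases the cross-entropy loss per unit of
    L-infinity perturbation.
    """
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model and a point it confidently classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])           # w.x + b = 1.5 > 0 -> class 1
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=1.0)
# The perturbed input is now classified as class 0.
```

Here `np.dot(w, x_adv) + b` is negative, so the label flips even though `x_adv` differs from `x` by at most 1.0 in each coordinate.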

Alexey Kurakin is a Staff Research Engineer on the Google Brain Privacy & Security team. He holds a Ph.D. in computer science from the Moscow Institute of Physics and Technology. His current work focuses on both research and applications in adversarial machine learning and differential privacy; in particular, he has multiple publications on the practical aspects of differentially private machine learning.