Panels

Evolving Landscape of Responsible AI

June 5, 2:15pm
Location: Magnolia

Moderated by Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel

Panelists:
Vishnu S. Pendyala, San Jose State University, Chair of the IEEE Computer Society Silicon Valley Chapter, and IEEE Computer Society Distinguished Contributor
Ned Hayes, Chief Executive Officer, SnowShoe.io

According to a Grand View Research report on the artificial intelligence market, the global AI market was valued at USD 93.5 billion in 2021 and is projected to expand at a compound annual growth rate (CAGR) of 38.1% from 2022 to 2030. With recent strides in generative AI for creating new content, ChatGPT has taken the world by storm. Yet there are daily reports of AI harm. Over 90 percent of businesses using AI say trustworthy and explainable AI is critical to their business, and more than half cite significant barriers to getting there, including a lack of skills, inflexible governance tools, and biased data. Responsible AI is an evolving landscape that requires a comprehensive approach spanning people, processes, systems, data, and algorithms. In this panel discussion, we explore this ever-changing and complex landscape from the perspectives of principles, tools and frameworks, legislation, and standards.

Adversarial Machine Learning: Lessons Learned, Challenges & Opportunities

June 6, 2:00pm
Location: Santa Clara I

Moderated by Alexey Kurakin, Staff Research Engineer – Brain Privacy and Security at Google Research, and Catherine Huang, Senior Staff Software Engineer at Google Counter Abuse Technology

Panelists:
Dipankar Dasgupta, William Hill Professor of Computer Science, Director, Center for Information Assurance (CfIA), The University of Memphis
David Wagner, Professor, Computer Science Division, University of California, Berkeley
Aditi Raghunathan, Assistant Professor, Carnegie Mellon University

As artificial intelligence (AI) continues to advance across a diverse range of applications, including computer vision, speech recognition, healthcare, and cybersecurity, adversarial machine learning (AdvML) is no longer just a research topic; it has become a growing concern in the defense and commercial communities. Many real-world ML applications did not take adversarial attacks into account during system design, leaving their models extremely fragile in adversarial settings. Recent research has investigated the vulnerability of ML algorithms and various defense mechanisms. The questions surrounding this space are more pressing than ever: Can we make AI/ML more secure? How can we make a system robust to novel or potentially adversarial inputs? Can we use AdvML to help solve some of our industrial ML challenges? How can ML systems detect and adapt to changes in the environment over time? How can we improve the maintainability and interpretability of deployed models? These questions are essential to consider when designing systems for high-stakes applications. In this panel, we invite the IEEE community to join our experts in AdvML to discuss the lessons learned, challenges, and opportunities in building more reliable and practical ML models by leveraging ML security and adversarial machine learning.
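For attendees new to the area, the fragility described above is easy to demonstrate. The following minimal sketch applies the fast gradient sign method (FGSM) of Goodfellow et al., one of the simplest attacks in the literature, to a toy PyTorch classifier; the model, input dimensions, and epsilon value are illustrative assumptions, not material from the panel.

import torch
import torch.nn as nn

# Toy classifier standing in for any deployed model (illustrative only).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    # FGSM: perturb the input in the direction that most increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), label).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(1, 4)      # a clean input
label = torch.tensor([0])  # its true class
x_adv = fgsm_attack(x, label)
# A perturbation this small is often imperceptible, yet the prediction can flip.
print(model(x).argmax().item(), model(x_adv).argmax().item())

That a model's output can change under such a small, targeted perturbation is precisely the fragility the panelists will discuss.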

Power Grid Operations and Planning Under Uncertainty: How Can AI Help Address Existing Challenges?

June 6, 11:00am
Location: Santa Clara II

Moderated by Sara Eftekharnejad, Assistant Professor, Department of Electrical Engineering & Computer Science at Syracuse University, and Chilukuri Mohan, Professor of Electrical Engineering & Computer Science at Syracuse University

Panelists:
Nancy Min, Chief Executive Officer of ecoLong
Amarsagar Reddy Ramapuram Matavalam, Assistant Professor of Electrical Engineering at Arizona State University
Bo Yang, Hitachi America

Power grid uncertainties have increased dramatically in recent years, and the rapid integration of intermittent energy resources such as wind and solar is expected to increase them further. If not correctly modeled and quantified, these uncertainties challenge everyday grid operation and planning. In recent years, there has been significant research and development in data-driven modeling of distributed renewable energy resources. Powered by recent advancements in artificial intelligence and machine learning, these data-driven models are better able to capture the dynamic nature of intermittent energy resources. In addition to generation uncertainties, extreme weather patterns have led to an increase in failure uncertainties. Hence, modeling and predicting failures in near real time is more critical than ever for preventing widespread blackouts. However, traditional statistical techniques fail to predict future events under these interdependent uncertainties. As a result, there has been significant interest in recent years in developing more accurate data-driven failure models that are also efficient and fast enough for near real-time decision-making. This panel will discuss various AI-centric research and development efforts to address the existing challenges of power grid uncertainties, including generation forecasting and modeling, cascading failure prediction, and power grid operations and planning under uncertainty. The panelists will discuss how AI could complement traditional power grid analysis techniques to address existing problems and where AI techniques remain limited. The panelists will also discuss their outreach and industry collaboration efforts.

Explainable AI: Current Challenges and Future Perspectives

June 6, 3:00pm
Location: Santa Clara I

Moderated by Jon Garibaldi, Professor of Computer Science, University of Nottingham, UK

Panelists:
Keeley Crockett, Professor in Computational Intelligence, Manchester Metropolitan University, UK
Alexander Gegov, Reader in Computational Intelligence, University of Portsmouth, UK
Uzay Kaymak, Professor of Information Systems, Eindhoven University of Technology, The Netherlands

This panel will discuss a wide range of aspects of Explainable AI, which may include informativeness, trustworthiness, fairness, transparency, causality, transferability, reliability, accessibility, privacy, safety, verifiability, and accountability. Topics may also cover local and global explanation scope, model-specific and model-agnostic methods, and constructive, what-if, counterfactual, and example-based explanations. Other potential topics include recent developments related to real-world bias in AI: how this bias is reflected in data bias, how data bias is encoded in algorithmic bias, how Explainable AI can uncover it, and how Explainable AI can close the loop by mitigating real-world bias. The panel will also explore current challenges and future perspectives in Explainable AI, including the formalisation and evaluation of explanations, their adoption in industry, their potential for improving human-machine collaboration, and their ability to facilitate collective intelligence, responsibility, security, and causality in AI.

Artificial Intelligence for Autonomous Driving

June 6, 11:00am
Location: Magnolia

Organized by: Shivam Gautam, Apoorv Singh, Nemanja Djuric, Rowan McAllister, and Shubhankar Agarwal

Moderated by Shivam Gautam, Tech Lead Manager in Perception Model Development at Latitude AI

Panelists:
Apoorv Singh, Senior ML Research Engineer and Tech Lead at Motional
Fang-Chieh Chou, Software Engineer at DoorDash Labs
Aleksandr Petiushko, Technical Lead Manager, Machine Learning Research at Nuro
Sachithra Hemachandra, Staff Tech Lead Manager, Cruise

Autonomous driving is one of the fastest-growing industries leveraging artificial intelligence. It draws on a suite of modalities, including cameras, lidar, radar, microphones, ultrasonic sensors, and city-traffic data, to make autonomous cars a boring, everyday reality. This panel will bring together experts in AI for autonomous driving to discuss the frontiers of perception: the task of distilling sensor data into representations the autonomy stack can understand. The panel comprises a diverse group with many years of experience building robots and complex perception systems for autonomous passenger vehicles and delivery robots. The discussion will address the challenges of developing scalable, safe, and ethical perception systems for the future. Topics will include, but are not limited to, long-tail problems in autonomous driving, data mining, perception architectures, ML infrastructure, and future technologies. The panelists will offer their viewpoints not only from a performance perspective but also through the lens of experienced practitioners balancing reliability with practical computing constraints. This is an excellent opportunity for attendees to gain a deeper understanding of the latest advancements in AI for autonomous driving and the pivotal role it will play in reshaping our transportation landscape.

Everything, Everywhere All at Once: AI Disruption, Ethics, and Innovation

June 5, 4:45pm
Location: Santa Clara I

Moderated by James Scrivner, CEO and Co-Founder of Scrivner Solutions, Inc.

Panelists:
Steve Chenoweth, Associate Professor, Rose-Hulman Institute of Technology
Olga Scrivner, Assistant Professor, Rose-Hulman Institute of Technology
Jordan Thayer, AI Practice Lead, SEP

To disrupt something is to disturb the normal flow of an activity or process. While we generally abhor disruption in our personal lives, in the business world it is viewed as a positive change, forcing us to abandon the status quo for a new, better way of doing things. Few technologies have been as disruptive, to both our private lives and the business world, as Artificial Intelligence (AI). AI has been cast as both “friend” and “foe” to human coexistence. With the omnipresence of AI-powered applications and their increasing accessibility and efficiency, however, more people are relying on AI decisions in their daily tasks. These “black box” solutions are often believed to be mathematically pure, and thus unbiased; this belief carries ethical and societal implications and can even reinforce socio-cognitive fallacies. In reality, bias creeps into these systems through their inputs, their design, how they are used, and how users perceive their output. Can the average developer or annotator, ensconced in their cube and going about their quotidian niche work in an organization, imagine all the perspectives needed to predict an ethical conflict resulting from that work, such as a user asking ChatGPT to “Build a marketplace on the dark web”? What are the moral obligations of an engineer building an automated system? How can we better equip students, practitioners, and business leaders to understand and discuss not only the business impacts of the technologies they build or use but also their societal impacts? How do we ensure that new technologies help us mitigate biases and differences in ability rather than exacerbate them? Our diverse panel of industry and academic experts will address these questions by telling stories from their personal experiences and discussing them with each other and the audience. The panel will present the following topics for discussion:

Disruption of Ethics Norms in Software Engineering
Steve Chenoweth, Associate Professor, Rose-Hulman Institute of Technology

Biases and Stereotypes Amplified Through AI
Olga Scrivner, Assistant Professor, Rose-Hulman Institute of Technology

Consequences of the AI Gap for Small/Medium Businesses
Jordan Thayer, AI Practice Lead, SEP 

Trust and Fear in the Face of Innovative AI Technology
James Scrivner, CEO, Scrivner Solutions, Inc.