Panels

Toyota’s Manufacturing Agentic Journey

Team Toyota has strategically leveraged Generative AI to drive innovation and operational excellence across key domains, including manufacturing, battery plants, paint design, and enterprise knowledge management through solutions like Knowledge Bot (Kura) and Global Chatbots. By integrating Large Language Models (LLMs), Toyota has developed a robust methodology to evaluate, score, and quantify AI outputs against Toyota’s quality benchmarks, ensuring high standards and reliability in production environments.

This effort highlights critical lessons in collaborative requirement discovery and outcome-defined capability selection, enabling efficient resource utilization and fostering strong enterprise leadership engagement. These practices ensure AI initiatives align with business objectives, accelerating adoption and delivering measurable value across the organization.

A cornerstone of this success is the application of the agentic framework for deploying Generative AI in production, enabling autonomous AI agents to execute complex tasks and optimize processes seamlessly. This approach has transformed workflows, enhanced decision-making, and driven significant operational efficiencies. Toyota’s strategic deployment of Generative AI demonstrates a commitment to innovation, showcasing how advanced technologies can be effectively utilized to maintain quality, scale operations, and strengthen leadership in an evolving digital landscape.
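The abstract describes evaluating, scoring, and quantifying AI outputs against quality benchmarks only at a high level. As a rough, hypothetical illustration of what rubric-based scoring of LLM outputs can look like (a generic sketch, not Toyota’s actual methodology; the judge_llm callable, the rubric criteria, and the threshold are all assumptions), consider:

    # Hypothetical sketch of rubric-based LLM output scoring with a "judge" model.
    # Generic illustration only; names, prompts, and thresholds are assumptions.
    import json

    RUBRIC = {
        "factual_accuracy": "Does the answer match the reference documentation?",
        "completeness": "Does the answer address every part of the question?",
        "safety": "Is the answer free of unsupported or unsafe instructions?",
    }

    def score_output(judge_llm, question: str, answer: str) -> dict:
        """Ask a judge model to rate one answer on each rubric criterion (1-5)."""
        prompt = (
            "Rate the answer on each criterion from 1 (poor) to 5 (excellent).\n"
            f"Question: {question}\nAnswer: {answer}\n"
            f"Criteria: {json.dumps(RUBRIC)}\n"
            'Reply with JSON only, e.g. {"factual_accuracy": 4, ...}'
        )
        return json.loads(judge_llm(prompt))  # judge_llm: any callable str -> str

    def passes_quality_bar(scores: dict, threshold: float = 4.0) -> bool:
        """Average the rubric scores and compare against a deployment threshold."""
        return sum(scores.values()) / len(scores) >= threshold

In production, scores like these would typically be calibrated against human review before being used to gate outputs.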
Target Audience:
Generative AI Developers, Executives & Researchers.
Speakers:
Stephen Ellis

Technical Generative AI Product Manager, Toyota Motor North America. Stephen has 10 years of experience in research strategy and emerging technology applications for companies ranging from startups to Fortune 50 enterprises. Former Director of the North Texas Blockchain Alliance, leading blockchain and cryptocurrency competency development among executives and software developers. Former CTO of Plymouth Artificial Intelligence, advising companies on leveraging AI for new business models. Currently enabling Generative AI solutions across Toyota’s enterprise, driving transformation in new mobility solutions and operational efficiency.
Ravi Chandu Ummadisetti

Generative AI Architect/Product Lead. Strategic AI leader with 10+ years of experience driving enterprise AI/ML transformations across Automotive, Banking, Healthcare, and Telecommunications. Expert in Generative AI applications, including Retrieval-Augmented Generation (RAG), model fine-tuning, and secure AI platforms. Specialized in optimizing manufacturing, legal operations, and enterprise AI applications aligned with business goals. A strong collaborator with global technical and executive teams, delivering scalable AI solutions that drive innovation and efficiency.

Ethical and Responsible Autonomous AI for Human-AI Interaction

Topic: AI Ethics and Impact on Humans
Panelists:

Professor Keeley Crockett

Manchester Metropolitan University, UK

Leads the Machine Intelligence theme in the Centre for Advanced Computational Science. She has over 27 years of experience in Ethical and Responsible AI, computational intelligence, psychological profiling, fuzzy systems, and dialogue systems.

She is a member of the IEEE Computational Intelligence Society ADCOM (2023-25) and chairs the IEEE Technical Committee SHIELD (2023-24).

Professor Ricardo Baeza-Yates

Northeastern University, USA

Research Professor at the Institute for Experiential AI. Former VP of Research at Yahoo Labs (2006-2016).

Co-author of the best-seller Modern Information Retrieval. Fellow of ACM (2009) and IEEE (2011).

Expert in AI bias, data science, and algorithms, actively involved in global AI ethics initiatives.

Tayo Obafemi-Ajayi

Missouri State University

Dr. Tayo Obafemi-Ajayi is the Guy Mace Associate Professor of Electrical Engineering at Missouri State University (MSU) in the Engineering Program, a joint program with Missouri S&T, Rolla. She is the director of the Computational Learning Systems lab as well as the site coordinator of the Missouri Louis Stokes Alliance for Minority Participation (MoLSAMP) program at MSU. Her research focuses on developing explainable and ethical Machine Learning/Artificial Intelligence algorithms for broad utility in biomedical applications. She is interested in the analysis of multimodal data, aiming to enhance the understanding of specific disease mechanisms. She serves as Technical Representative on the Administrative Committee of the IEEE Engineering in Medicine and Biology Society (EMBS). She is also the Chair of the IEEE Computational Intelligence Society (CIS) Technical Committee on Ethical, Legal, Social, Environmental and Human Dimensions of AI/CI (SHIELD). She is the 2024 recipient of the MSU College of Natural and Applied Sciences Atwood Excellence in Research and Teaching award.

Moderator:

Professor Jim Torresen

University of Oslo, Norway

Leads the Robotics and Intelligent Systems research group. Research areas include AI ethics, machine learning, and robotics.

Has published over 300 peer-reviewed papers, delivered 48 invited talks/keynotes, and organized international AI conferences.

Member of the Norwegian Academy of Technological Sciences (NTVA) and IEEE RAS Technical Committee on Robot Ethics.

Abstract:

AI is increasingly impacting various domains, raising ethical concerns regarding privacy, fairness, transparency, safety, and security.
This panel will explore the key ethical challenges and discuss countermeasures at three levels:

  • Design phase: Developing fair and responsible AI systems.
  • User decision-making: When and where AI systems should be deployed.
  • Operational phase: Ethical reasoning in autonomous AI systems.

Panelists will share insights from their own research, highlighting both technical solutions and the human perspective.
Ethical AI should be seen not only as a challenge but as an opportunity for developing more sustainable and socially responsible AI applications.

Target Audience:

As AI transitions from research to real-world applications, ethical and legal issues become increasingly important.
This panel will provide valuable insights into the main ethical implications currently being discussed and explore how they open up new research directions in AI.

No prior knowledge of AI ethical, legal, and social issues (ELSI) is required, making the session accessible to all IEEE CAI 2025 attendees.

Perspectives on Current Global AI Governance Trends and the Way Ahead for 2025

Topic: AI Policy, Regulation, and Governance

This panel highlights global developments in AI governance over the past year in the United States, Europe, and the Asia-Pacific.
The discussion will explore relevant issues and key questions for AI governance in the coming year.

Panelists:

Mia Hoffmann

Research Fellow, AI Governance, Georgetown’s Center for Security and Emerging Technology

Specializing in international AI regulation and AI risk mitigation. She previously worked at the European Commission and researched AI adoption and its workforce implications.

Holds a B.Sc. in International Economics from the University of Tuebingen and an M.Sc. in Economics from Lund University.

Mina Narayanan

Research Analyst, AI Governance, Georgetown’s Center for Security and Emerging Technology

Focuses on U.S. AI governance, including AI standards, risk management, and evaluations. Formerly worked at the U.S. Department of State and the National Institute of Nursing Research.

Holds a B.Eng. in Software Engineering with a Minor in Political Science from Auburn University and an M.Sc. in Public Policy from Carnegie Mellon University.

Cole McFaul

Research Analyst, AI Governance, Georgetown’s Center for Security and Emerging Technology

Researches AI developments in the Asia-Pacific and China’s science and technology ecosystem. Previously worked at CSIS and Stanford University’s Shorenstein Asia-Pacific Research Center.

Holds a B.A. in Political Science and an M.A. in East Asian Studies from Stanford University.

Moderator:

Owen J. Daniels

Associate Director of Analysis and Andrew W. Marshall Fellow, Georgetown’s Center for Security and Emerging Technology

Works on military and AI governance issues. Previously researched AI ethics, autonomous weapons norms, and strategy at IDA, the Atlantic Council, and Aviation Week Magazine.

Holds a degree in International Relations with minors in Arabic and Near Eastern Studies from Princeton University and is pursuing an MPP at Georgetown.

Abstract:

Since 2023, major AI advancements have prompted global governance responses:

  • The U.S. issued new executive orders and policy frameworks to ensure AI security and trustworthiness.
  • The European Union finalized its AI Act and is drafting a code of practice for general-purpose AI.
  • China launched its Global AI Governance Initiative, shaping international AI regulatory strategies.

This panel will evaluate the progress of these initiatives, highlighting key synergies, gaps, and areas of tension.
The discussion will focus on how the U.S., EU, and China manage AI risks and benefits, and what these governance trends signal for AI in 2025.

Target Audience:

This panel is aimed at researchers and practitioners tracking AI governance trends, as well as technology developers interested in understanding
governance impacts across different regions. The discussion will offer a comprehensive overview of AI policy developments over the past year.

The Great Infrastructure Debate: Choosing the Right AI Backbone Amidst Chaos

Moderator:

Nilesh Shah

VP of Business Development, ZeroPoint Technologies

Regular contributor to standards bodies such as SNIA, OCP, JEDEC, RISC-V, and CXL Consortium. Frequent speaker at AI and memory technology conferences. Formerly led strategic planning for Intel’s Data Center SSD products.

Panelists:

Vik Malyala

MD & President, EMEA, Supermicro

He also serves as Senior Vice President of Technology and AI at Supermicro, overseeing multiple technology initiatives and partnerships.

Craig Gieringer

Senior Vice President (Alliances, Channels, and Revenue)

Expert in high-performance computing and AI with leadership roles in three IPO-bound companies and one $700M acquisition. Former executive at META Group, BlueArc, Infinidat, Hitachi Vantara, and EMC.

Zhitao Li

Director of Engineering, AI Infrastructure, Uber

Leads Uber’s AI infrastructure team, overseeing model training, inference, accelerators, and GenAI systems. Formerly on Google’s TensorFlow Extended (TFX) team; earlier, he worked on Uber’s containerization.

Emily

Chief Technology Officer, Marveri

CTO of Marveri, a legal tech startup specializing in transactional due diligence. Former PhD candidate at MIT, researching multi-modal vision-language modeling.

Verity

Research Master’s student, Carnegie Mellon University

Verity is a Research Master’s student in the Philosophy Department at Carnegie Mellon University, focusing on causality. Prior to her graduate studies, she worked at AWS RDS on the data plane for open-source database engines, with core responsibilities in operating systems, lifecycle management, and system management agents. She also has experience at Goldman Sachs, where she worked in the front office on internal financial platform infrastructure, specializing in databases, observability, and migrating financial quoting systems. At Canva, she served as a backend software engineer.
Beyond her technical experience, Verity has taken on various public-facing roles. She was the stage host for CMU’s 2024 Spring Festival Gala and Pittsburgh Chinese Culture Festival, a panelist at Goldman Sachs’ Global Female Engineering All Hands and multiple project demos, and participated in Canva’s female engineering chats. She also previously served as a student ambassador for the Australian National University, where she hosted several global webinars for prospective students.

Anil

Founder & CTO of rapt.ai

Anil is Founder & CTO of rapt.ai, building an agentic AI infrastructure optimization platform. Prior to this, Anil was Technical Director at Data Domain, which was acquired by EMC. Anil’s career spans over 20 years across compute, storage, system architecture, file systems, schedulers, and virtualization. He has authored over 15 patents in areas including data pipelines, storage, flash, and scheduler algorithms.

Abstract:

With the rapid emergence of new LLMs and diverse cloud infrastructure options beyond traditional CPUs and GPUs,
selecting the right AI infrastructure is a crucial and contentious decision. This panel will explore strategic AI deployment
across industries including healthcare, transportation, manufacturing, and business intelligence.

Key Discussion Points:
  1. Infrastructure Battles: Evaluating cloud, on-premises, and hybrid models with insights from AWS, Google, Microsoft, CoreWeave, and Vultr.
  2. Data Sovereignty: Addressing storage, security, and compliance across platforms.
  3. Accelerator Wars: Comparing GPUs, TPUs, and emerging solutions like Groq, Furiosa AI, and Cerebras.
  4. Scalability vs. Sustainability: Balancing TCO, resource optimization, and sustainable AI growth.
  5. Model Mayhem: Choosing and fine-tuning AI models, emphasizing multimodal LLMs and integration with DevOps/SecOps.
  6. Build or Buy: The pros and cons of in-house AI development versus AI as a service.
  7. Monetizing AI: Exploring ROI strategies for enterprises and hyperscale AI operations.
  8. True Costs Unveiled: Analyzing total AI deployment costs, including infrastructure, personnel, and market readiness.
Objective:

This panel will engage in a dynamic discussion about AI infrastructure trends and challenges. Attendees will gain insights
into aligning AI initiatives with the most suitable infrastructure strategies, optimizing for both current capabilities and
future demands.

Building Trust for Human-AI Partnerships in Security

Cybersecurity offers a rich case study in human-AI collaboration with lessons that apply across domains. Through real-world examples of AI integration in threat intelligence, detection, and response, this panel explores how teams build effective partnerships with AI systems. We examine practical approaches to reducing cognitive load, improving decision-making, and creating sustainable workflows that benefit technical specialists and broader stakeholders.

Security teams face increasingly complex challenges that demand effective collaboration between humans and AI systems. This panel examines practical approaches to building successful human-AI partnerships through real-world examples from threat intelligence, security operations, and incident response.

Our diverse panel brings together academic research, operational experience, and product development insights to address key challenges:

  • Designing interfaces that adapt to different expertise levels without compromising effectiveness

  • Implementing explainable AI that builds confidence in automated decisions

  • Creating workflows that reduce analyst fatigue while maintaining human judgment

  • Developing clear metrics for evaluating AI system trustworthiness

  • Setting realistic expectations for AI capabilities across different security domains

  • Building sustainable practices that prevent burnout and enhance team resilience

Drawing from direct experience integrating AI across security functions, panelists will share specific strategies for:

  • Evaluating when and how to trust AI-generated insights

  • Establishing effective feedback loops between humans and AI systems

  • Measuring impact on team performance and analyst wellbeing

  • Scaling AI benefits across different security roles and expertise levels

Attendees will learn specific methods for evaluating AI security tools, clear criteria for assessing AI system reliability, and proven techniques for integrating AI assistance without overwhelming analysts.

Panelists:

Dr. Margaret Cunningham

Technical Director, Security & AI Strategy at Darktrace

Dr. Cunningham advises on AI security strategy, innovation, data security, and risk governance. Formerly a Principal Product Manager at Forcepoint and a Senior Staff Behavioral Engineer at Robinhood. She holds a Ph.D. in Applied Experimental Psychology and has multiple patents in human-centric risk modeling.

Dr. Divya Ramjee

Assistant Professor, Rochester Institute of Technology

Dr. Ramjee leads RIT’s Technology & Policy Lab, analyzing security, AI policy, and privacy challenges. She is also an adjunct fellow at CSIS in Washington, DC, and has held senior roles in the U.S. government across various agencies.

Dr. Matthew Canham

Executive Director, Cognitive Security Institute; Affiliated Faculty, George Mason University

Former FBI Supervisory Special Agent with 21 years of research in cognitive security. He has advised NASA, DARPA, and NATO, and his expertise includes synthetic media social engineering and online influence campaigns.

Chris Puderbaugh

Co-Founder, CTO, CISO at Pellonium

Cybersecurity and AI expert leading AI-driven threat detection and risk mitigation. His work focuses on AI/ML applications, cloud security, and developing AI-powered cybersecurity solutions.

Moderator:

Heidi Trost

Owner and Principal, Voice+Code LLC

Author of Human-Centered Security and host of the Human-Centered Security podcast. UX researcher focused on designing secure and user-friendly cybersecurity solutions.

AI, Cybercrime, and Society: Closing the Gap Between Threats and Defenses

In this panel, we examine the changing landscape of AI-enabled cybercrime, exploring both the opportunities and the challenges that AI introduces in enabling and defending against cybercrime.

Panelists:

Gil Baram, PhD

Senior Lecturer, Cyber Strategy and Policy, Bar-Ilan University

Dr. Gil Baram is a senior lecturer in the Political Studies Department at Bar-Ilan University. She is a non-resident research scholar at the Center for Long-Term Cybersecurity and the Berkeley Risk and Security Lab (joint appointment) at the University of California, Berkeley. She is also a senior adjunct research fellow at the Centre of Excellence for National Security at Nanyang Technological University, Singapore.

Her research interests encompass various aspects of cyber conflict, including the impact of technology on national security, AI-enabled cybercrime, cyber threats to space systems, and more.

LinkedIn: https://www.linkedin.com/in/dr-gil-baram-cyber/

Scott Hellman

FBI Cyber Supervisory Special Agent

Supervisory Special Agent Scott Hellman has spent nearly 17 years investigating criminal and national security cybercrime with the FBI. Currently, Scott leads a team of cybercrime investigators in the San Francisco Bay Area, where they seek to build community through outreach and disrupt cybercriminals and the services they depend on. He holds a J.D. and a Bachelor’s in chemistry.

LinkedIn: https://www.linkedin.com/in/scott-h-42baba310/

Vrushali Channapattan

Director of Engineering, Okta

Vrushali is the Director of Engineering at Okta, leading the Data and AI org. In the past two decades, she has led key efforts in democratizing petabyte-scale data and influenced the design of major big data technologies, including serving on the Project Management Committee for open-source Apache Hadoop. Prior to Okta, she spent over nine years at Twitter, contributing to its growth from startup to public company. She holds a Master of Science in Computer Systems Engineering from Northeastern University in Boston.

LinkedIn: https://www.linkedin.com/in/vrushalic/

Nathan Wiebe

Chief Information Security Officer, Contra Costa County, California

Nathan Wiebe is an experienced information security and technology executive and an advocate for ethical AI implementation in the public sector, championing transparent and responsible AI adoption. Nathan holds graduate degrees in business and cybersecurity from the University of Southern California and the University of California, Berkeley.

LinkedIn: https://www.linkedin.com/in/nwiebe/

T.C. Niedzialkowski

Head of Security & IT, Opendoor

TC Niedzialkowski is an experienced cybersecurity leader helping scale startups and thwart cyber threats. In his current role, he leads cybersecurity at Thumbtack, an online home services marketplace. Previously, TC led cybersecurity at Nextdoor, a neighborhood-focused social media platform with 40 million weekly active users. Earlier, TC worked in the federal space, leading software security and incident response teams at the United States Federal Reserve.

LinkedIn: https://www.linkedin.com/in/tc-niedzialkowski/

Moderator:

Leah Pamela Walker

Director, Berkeley Risk and Security Lab

Leah Walker is the Lab Director for the Berkeley Risk and Security Lab. She oversees the Lab’s interdisciplinary research portfolio, which includes nuclear arms control, nuclear weapons policy, defense analyses, emerging defense technologies, the governance of emerging technologies, industrial policy, and strategic competition. Leah also conducts research on the governance of military and commercial artificial intelligence, Russian and Chinese nuclear posture and modernization, nuclear and radiological security, and maritime security and strategy.

Moderator experience: Leah Walker is an experienced moderator, leading top professional and academic panels at venues including UC Berkeley and the RSA Conference. She has been moderating discussions for many years and has expertise in creating engaging and thought-provoking conversations.

Abstract:

This panel will explore the evolving landscape of AI-enabled cybercrime, highlighting its dual role as both a tool for cybercriminals and a resource for cybersecurity professionals. Bringing together perspectives from industry, academia, and local government, the discussion will provide a multifaceted analysis of how AI is reshaping the digital threat landscape. Panelists will examine how Generative AI is lowering barriers to cybercrime, enabling adversaries to automate attacks, create deepfakes, and deploy adaptive phishing campaigns with unprecedented ease. At the same time, they will explore AI’s potential to enhance defense mechanisms, including malware detection, threat analysis, and automated response systems.

The panel will feature a diverse group of experts offering varied perspectives on AI-enabled cybercrime. An industry expert will present case studies of real-world AI-driven attacks, illuminating current tactics targeting businesses and emerging threat vectors. An academic researcher will provide a theoretical framework for evaluating AI’s role in cybersecurity, offering data-driven insights on the discrepancy between perceived and actual AI-enabled threats. Federal and local government representatives will address public sector challenges, exploring policy implications, privacy concerns, and the broader societal and economic impacts of AI-powered cybercrime. This multidisciplinary approach will offer attendees a comprehensive understanding of the complex landscape of AI in cybercrime, from practical incidents to theoretical analysis and policy considerations.

By the end of the session, attendees will gain a balanced understanding of AI’s role in both enabling and combating cybercrime. The panel will equip participants with actionable strategies to defend against evolving threats while preparing for future developments in AI technologies.

Target Audience:

This session is designed for a diverse audience, including cybersecurity professionals, tech innovators, law enforcement, and policymakers. By offering a comprehensive overview of both the threats and solutions at the intersection of AI and cybercrime, the panel will equip attendees with practical insights and strategies to navigate this rapidly changing field.

Bridging AI Governance: Ensuring Security, Safety, and Innovation

Abstract

As AI rapidly advances, the gap between technical expertise and policy decision-making presents a critical challenge for effective governance. While AI developers understand the technical mechanisms, vulnerabilities, and mitigation strategies of AI systems, policymakers often focus on reactive regulatory measures, struggling to address broader systemic risks proactively. This misalignment creates governance blind spots, leaving crucial AI-related challenges—such as privacy violations, security vulnerabilities, and regulatory inconsistencies—unaddressed.

This panel will explore what effective AI governance should look like to ensure safety, security, and innovation in the rapidly evolving AI landscape. Experts from both technical and policy domains will discuss strategies to enhance collaboration, frameworks for integrating technical insights into regulatory decision-making, and best practices for addressing global AI governance challenges. Attendees will gain actionable recommendations to bridge the AI policy-technology divide, fostering a governance model that is both forward-thinking and adaptable to emerging risks.

Moderator

Yuyin (Josephine) Liu

Director of Global Policy, GC NEXUS LLC

Chief Commissioner of Public Policy Committee, Asia Pacific Artificial Intelligence Association

Yuyin (Josephine) Liu is a distinguished leader in AI safety, security, and governance, actively shaping global policies on emerging technologies. With extensive experience advising governments, international organizations, and industry leaders, she plays a pivotal role in developing AI regulatory frameworks that balance security, safety, and innovation.

Josephine is the Director of Global Policy at GC Nexus LLC, where she leads initiatives on AI governance, regulatory strategy, and global catastrophic risk assessment. She bridges technology and policy to ensure that emerging technologies align with security, economic resilience, and ethical considerations. Her work focuses on mitigating systemic risks and fostering responsible AI-driven economic resilience and international cooperation. Beyond her professional role, Josephine serves as the Chief Commissioner of the Public Policy Committee at the Asia-Pacific Artificial Intelligence Association (AAIA) and is an expert member of the United Nations (UN), the Center for AI and Digital Policy (CAIDP), the Council of Europe, and the Association for Computing Machinery (ACM).

With a deep background in policy development, Josephine has worked across the public sector in Asia, Europe, and the U.S., equipping her with a comprehensive understanding of how clear policies and geopolitical stability are essential in mitigating global risks posed by rapid technological advancements. Her expertise spans international strategy, policy formulation, and public-private collaboration, focusing on addressing the geopolitical and economic challenges of AI and shaping policies that drive sustainable technological innovation and economic growth.

Panelists

Amreen Taneja

Standards Lead, Digital Public Goods Alliance

As Standards Lead at the Digital Public Goods Alliance, Amreen spearheads the management, development, and promotion of the Digital Public Goods Standard, with a strong focus on ethical AI and responsible technology. She brings over 9 years of experience in innovation, digital transformation, AI governance, and technical standardization. Amreen plays a pivotal role in shaping policies for responsible AI systems, ensuring alignment with global privacy regulations, and advancing AI solutions as Digital Public Goods (DPGs).

Amreen holds an LL.M. from UC Berkeley School of Law, with dual specializations in Technology Law and International Law, equipping her to navigate complex legal, ethical, and policy landscapes in digital innovation and global development. In addition, Amreen chairs the Standard Expert Group on Privacy, developing frameworks and recommendations to ensure open-source solutions meet the DPG Standard.

Dr. Bashir Mohammed

Senior AI Architect, Intel

Dr. Bashir Mohammed is a Senior Staff AI Architect at Intel’s Network and Edge Group, spearheading cutting-edge innovations in AI at the edge. His work focuses on developing and deploying large language models (LLMs), large vision models (LVMs), and multi-agent workflows, driving transformative solutions across various industries.

Dr. Bashir holds a Ph.D. in Computer Science from the University of Bradford and an M.Sc. in Control Systems from the University of Sheffield, UK. He has a distinguished research background, having previously worked at Lawrence Berkeley National Laboratory, where he specialized in developing AI algorithms for intelligent networks at the edge, automatic control systems, quantum communication networks, and data provenance in high-performance computing and distributed systems.

At Berkeley Lab, he played a pivotal role in the Deep Learning and AI for High-Performance Networks (DAPHNE) project, optimizing the U.S. Department of Energy’s distributed network infrastructure. His work significantly improved high-speed big data transfers, reduced network downtime, and alleviated congestion for critical exascale scientific workflows. He also contributed to the Quantum Application Network Testbed for Novel Entanglement Technology (QUANT-NET), a groundbreaking project to build the first-ever physical quantum distributed network testbed based on entanglement, advancing the frontiers of quantum networking.

Beyond his technical contributions, Dr. Bashir is an active science communicator and policy advocate. He represented Berkeley Lab in Washington, D.C., sharing his research insights with U.S. legislators and Capitol Hill audiences. As an AI Science Policy Fellow with the Society for Industrial and Applied Mathematics (SIAM), he champions policies that support the scientific community, particularly in AI, applied mathematics, computer science, and quantum computing.

Katharina Koerner

Senior Principal Consultant, Trace3 

Katharina Koerner is a multifaceted professional, bringing together a rich blend of skills encompassing senior management, legal acumen, and technical proficiency. Based in Silicon Valley since 2020, she has focused her career on tech policy, privacy, security, AI regulation, and the operationalization of trustworthy AI. Katharina is actively engaged with the Tech Diplomacy Network in Silicon Valley, founded to promote collaboration and dialogue between diplomats, civil society, and the tech industry.

In her current role as Senior Principal Consultant – AI Governance at Trace3 (Oct 2024 – Present), Katharina specializes in operationalizing Trustworthy AI, bridging the gap between principles and practical application. Her work focuses on fairness, robustness, transparency, and security in AI systems. She helps organizations cultivate a culture of ethical AI through tailored policies and processes, alongside the technical capabilities needed for responsible implementation. Katharina is driving the mission to foster a responsible AI ecosystem that prioritizes accountability and builds trust with stakeholders.

Katharina holds a PhD in EU Law, a JD in Law, and various certifications in information security, privacy, privacy engineering, and ML. Her career includes serving as the CEO of an international education group for 5 years and 10 years in the Austrian public service. She has also served as Principal Researcher – Technology at the International Association for Privacy Professionals (IAPP), focusing on privacy engineering, technology regulation, and AI research.

Previously, Katharina was the Corporate Development Manager at a seed-stage startup dedicated to AI strategy and governance, where she spearheaded strategic initiatives to drive innovation and foster growth. Before that, she was the AI Governance Lead at Western Governors University (WGU), where she developed and implemented governance frameworks for AI/ML systems, ensuring AI tools and internal ML systems aligned with company policies and best practices.

Shawn Haag

Executive Director, AI-CLIMATE Institute, University of Minnesota

Shawn Haag is a recognized leader in AI strategy, research program management, and workforce development. He currently serves as the Executive Director of AI-CLIMATE, a USDA-NIFA National AI Research Institute at the University of Minnesota, where he leads initiatives at the intersection of artificial intelligence, climate-smart agriculture, and forestry. His work drives innovation to address some of the most pressing challenges of our time.

For this IEEE panel, Shawn is participating in an independent capacity as a consultant facilitating collaboration between industry and academia. With over a decade of experience, he has successfully scaled research programs, built strategic partnerships, and advanced AI applications to deliver real-world impact. His expertise spans AI adoption, program management, and developing frameworks that translate technical research into actionable solutions. Shawn’s leadership has been pivotal in fostering AI-driven workforce initiatives and equipping future generations with the skills needed to navigate emerging technologies.

Dr. Tu Ouyang

AI Security Researcher, Case Western Reserve University

Dr. Ouyang received his Ph.D. degree in Computer and Data Sciences from Case Western Reserve University, USA. He has a decade of extensive research and development experience in the following areas: fraud and cyber risk-related analytics, systems, and strategies; data and algorithm governance; and building large-scale intelligent and trusted systems.

He is currently a senior staff (director-level) engineer on the Intelligence team at Geico Tech, architecting enterprise AI software and services. He is also a visiting researcher at Case Western Reserve University, where he collaborates on multiple research initiatives on the effectiveness, efficiency, and trustworthiness of AI systems and co-mentors several PhD/MS/BS students; these collaborations have led to several research publications at top-tier security and AI conferences in recent years. His recent work and interests aim to enable meaningful services through effective, efficient, and trustworthy automation and intelligence.

Pandora’s Box or Philosopher’s Stone? – The Delicate Balance for the Human Race

Abstract: 

2024 brought a mix of both positive and negative stories about AI. In the US, a lawsuit was filed against Character.AI alleging that the company’s chatbot was responsible for the suicide of 14-year-old Sewell Setzer, encouraging him to take his own life. In healthcare, AI brought benefits to society, from advanced scanning of patient records to detect cancer to spotting early signs of autism in children without extensive assessments. In August 2024, the EU AI Act entered into force, introducing safeguards and acting as a disrupter in a fast-evolving technological space.

It’s natural for innovative and technical professionals to call for policy measures that provide certain protections that lead to responsible and trustworthy use of AI, but is there a line where overly cautious regulation could bottle up the potential of AI to enhance the output of artists, engineers, and the productivity of humankind?

This panel invites thought-provoking discussions on the delicate balance between AI governance, innovation and the human species. We’ll explore how to harness AI’s potential responsibly, ensuring that its deployment enhances human endeavours while maintaining rigorous safety and trust standards. Join us for an engaging conversation!

Moderator:

Brad Kloza

Program Director, IEEE Future Directions

Brad Kloza spent two decades as a science/technology journalist and video producer before joining IEEE staff, first as manager of IEEE.tv. Today he is Program Director in IEEE’s Future Directions department, launching and growing new strategic initiatives focused on emerging technologies. He holds degrees from Hamilton College and Columbia University, and in his spare time enjoys refurbishing, maintaining, and playing pinball machines from the 1980s and 1990s.

Panelists:

Chris Miyachi

Senior Software Development Manager at Microsoft, US

Chris is a Senior Software Engineering Manager with experience leading projects from conception to launch, maintaining them until stable and mature, and weighing design trade-offs against business requirements. She has an electrical engineering education and background, with knowledge ranging from embedded systems to full-stack development, including web services, cloud computing, and software development methodologies. She brings leadership, management, mentoring, and organizational experience from the workplace and from professional activities that include writing, running conferences, and assembling local affiliate groups. Her current passion is AI and machine learning.

Dr. Dimitris Visvikis

FIEEE, FIPEM, LaTIM, INSERM UMR 1101   

Team Leader, ACTION (Therapy action using multimodality imaging in oncology); President, CSS7, INSERM; Editor-in-Chief, IEEE Transactions on Radiation and Plasma Medical Sciences; Director of Research, National Institute of Health and Medical Research (INSERM), France

Dimitris Visvikis is a director of research with the National Institute of Health and Medical Research (INSERM) in France and co-director of the Medical Image Processing Lab in Brest (LaTIM, UMR1101). His current research interests focus on improvement in PET/CT image quantitation for specific oncology applications, such as response to therapy and radiotherapy treatment planning, through the development of methodologies for detection and correction of respiratory motion, 4D PET image reconstruction, tumor radiomics multiparametric and multimodality modelling, as well as the development of computer assisted interventional radiotherapy and Monte Carlo based dosimetry applications. He is a member of numerous professional societies such as IPEM (Fellow, Past Vice-President International), IEEE (Senior Member, Past NPSS NMISC chair), AAPM, and EANM (Physics committee chair). He has won numerous awards, including the 2020 SNMMI EJ Hoffman award for outstanding contributions in quantitative PET imaging and the IEEE NPSS Shea Distinguished Member Award in 2019. He is a member of numerous editorial boards and the first Editor-in-Chief of the IEEE Transactions on Radiation and Plasma Medical Sciences.

Prof. Keeley Crockett

Manchester Metropolitan University, IEEE Computational Intelligence Society, Past Chair IEEE Technical Committee SHIELD (2022-2024)

Keeley Crockett SMIEEE SFHEA is a Professor in Computational Intelligence at Manchester Metropolitan University. She has over 27 years’ experience of research and development in ethical and responsible AI (for both SMEs and as an advocate for citizen voice), computational intelligence algorithms and applications, including adaptive psychological profiling, fuzzy systems, semantic similarity, and dialogue systems. Keeley has led work on place-based practical Artificial Intelligence, facilitating a parliamentary inquiry with Policy Connect and the All-Party Parliamentary Group on Data Analytics (APGDA), leading to the inquiry report “Our Place Our Data: Involving Local People in Data and AI-Based Recovery”. She is one of the five EPSRC Public Engagement Champions and currently the Principal Investigator on the EPSRC project “PEAs in Pods: Co-production of community based public engagement for data and AI research”. Keeley was one of the founders of the People Panel for AI, funded originally by The Alan Turing Institute and in 2024 by Manchester City Council. She is currently working on several Innovate UK Knowledge Transfer Partnerships with businesses such as COUCH and the Greater Manchester Combined Authority. She is a steering committee member for the UK Government Inquiry on Skills in the Age of AI. Keeley was appointed to UKRI’s AI & Robotics Strategic Advisory Team (SAT) on 1st April 2024. She is a member of the IEEE Computational Intelligence Society ADCOM (2023-25), Founder and past Chair (2022-2024) of the IEEE Technical Committee SHIELD (Ethical, Legal, Social, Environmental and Human Dimensions of AI/CI), and chairs the IEEE AI Coalition Responsible AI subcommittee. She is passionate about people.

Dr Catherine Huang

Senior Staff Software Engineer, Google

Dr. Catherine Huang is a Senior Staff Software Engineer and Area Technical Lead in Adversarial Machine Learning at Google, with focus on making AI safe, secure and trustworthy. She currently serves as Vice-Chair of the IEEE Neural Networks Technical Committee and previously chaired the IEEE Cognitive and Developmental Systems Technical Committee. Dr. Huang holds a Ph.D. in Biomedical Engineering, and her prior roles include Principal Engineer at McAfee CTO Office and Research Scientist at Intel Labs.

Sampathkumar Veeraraghavan

President, Brahmam Innovations; 2023 Global President, IEEE Eta Kappa Nu; 2021–2022 Global Chair, IEEE HAC; Founder, IEEE SIGHT Week and SIGHT Day

Sampathkumar Veeraraghavan is a globally renowned technologist best known for his technological innovations and social impact in addressing global humanitarian and sustainable development challenges. He is a seasoned technology and business leader in the Artificial Intelligence (AI) industry with nearly two decades of R&D and leadership experience. Throughout his career, he has led business-critical strategic R&D programs and successfully delivered cutting-edge world-class technologies and solutions in the areas of Generative Artificial Intelligence, Conversational AI, Natural Language Understanding, Cloud computing, Data privacy, Enterprise systems, Infrastructure technologies, Assistive and Sustainable technologies. He is the founder and president of “The Brahmam Innovations,” a pioneering R&D organization that delivers next-generation social innovations to address global challenges. Sampath served as the 2023 IEEE Eta Kappa Nu (IEEE-HKN) Global President, 2023-2024 IEEE HTB Partnership Chair, 2021-2022 HAC Global Chair, and 2019-2020 IEEE SIGHT Chair. He is credited with launching several innovative global programs in the IEEE humanitarian engineering space like IEEE Humanitarian Technologies Board, SIGHT week, SIGHT Day, and Global HAC Summit. He has also served as an expert in the Global School Connectivity Initiative (GIGA), co-chaired by UNESCO, UNICEF, and ITU. He has delivered 500+ keynote/Tech talks in global forums and conferences.

AI for Reliability and Reliability of AI

Abstract:

As artificial intelligence and machine learning (AI/ML) increasingly integrate into mission-critical systems—such as NextG wireless networks, aerospace, healthcare, smart lighting/grid/homes/cities/villages—leveraging AI/ML to improve reliability while ensuring AI/ML itself is reliable has become a major challenge.

The objectives of this panel are to explore the intersection between AI/ML applications and AI/ML reliability, addressing both the opportunities and challenges of applying AI/ML in reliability-critical systems while ensuring AI/ML architectures meet reliability standards.

Moderator:

Dr. Ruolin Zhou

Associate Professor, University of Massachusetts Dartmouth

Expert in software-defined radio (SDR) and AI/ML for wireless communications, focusing on spectrum sensing, intelligent radio design, and wireless security.

Research funded by NSF, ONR, AFRL, ARL, and Lockheed Martin. Recipient of the 2024 IEEE Region 1 Outstanding Teaching Award and multiple best demo/team awards at IEEE conferences.

Serves as IEEE Reliability Society VP for Technical Activities, IEEE Future Networks Technical Community steering committee member, and IEEE Future Networks Entrepreneurs Mentorship program co-chair.

Panelists:

Dr. Preeti Chauhan

Technical Program Manager, Google

Leads AI/ML hardware quality and reliability initiatives within Google’s data centers. Formerly led Intel’s Foveros 3D packaging and server microprocessor reliability programs.

Senior IEEE member, Co-Editor of IEEE Computer magazine, and 2025 IEEE Reliability Society VP for Meetings and Conferences.

Dr. Zhaojun Steven Li

Professor, Western New England University

Expert in AI/ML, data analytics, applied statistics, and reliability engineering. Holds ASQ Certified Reliability Engineer (CRE) and Six Sigma Black Belt certifications.

Past President of IEEE Reliability Society (2022-2024), editorial board member for IEEE Transactions on Reliability.

Dr. Jason Rupe

Dr. Jason Rupe received his Ph.D. in modeling large-scale systems and networks for performance and reliability. He has held titles including senior technical staff and director at US WEST, Qwest, Polar Star Consulting, and Tenica. He was the last Managing Editor for the IEEE Transactions on Reliability, Denver Section Chair, and co-chair of the IEEE Blockchain Initiative. He is currently the President of the IEEE Reliability Society. At CableLabs, he is a Distinguished Technologist working on Proactive Network Maintenance, Network and Service Reliability, DOCSIS® Tools and Readiness, Optical Operations and Maintenance, and reliability advancement for the industry. He was the RS Engineer of the Year for 2021 and CableLabs Inventor of the Year for 2020.

Rick Kuhn

Rick Kuhn is a computer scientist in the Computer Security Division at the National Institute of Standards and Technology (NIST) and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He co-developed the role-based access control (RBAC) model that is the dominant form of access control today. His current research focuses on combinatorial methods for assured autonomy (csrc.nist.gov/acts) and hardware security/functional verification. He has authored three books and more than 200 conference or journal publications on cybersecurity, software failure, and software verification and testing. He received an MS in computer science from the University of Maryland, College Park, and an MBA from William & Mary. Before joining NIST, he worked as a software developer with NCR Corporation and the Johns Hopkins University Applied Physics Laboratory.

Dr. Angelos Stavrou

Professor, Virginia Tech Innovation Campus

Founder of Kryptowire Inc., now Quokka Inc., a VC-funded mobile security startup. Expert in large-scale system security, intrusion detection, and NextG Cyber Security.

Served as a NIST guest researcher and is a senior member of IEEE, ACM, and USENIX. He is an active member of NIST’s Mobile Security Team and has authored over 140 peer-reviewed papers.

Has been awarded research funding from NSF, DARPA, IARPA, DHS, AFOSR, ARO, and ONR. Past associate editor for IEEE Transactions on Computers, IEEE Security & Privacy, and IEEE Internet Computing.

Received multiple awards, including the IEEE Reliability Society Engineer of the Year Award, the DHS Cyber Security Division’s “Significant Government Impact Award,” and the “Bang for the Buck Award.”

Key Discussion Points:
  • The IEEE Reliability Society (RS) Roadmap for AI-driven reliability.
  • AI-driven models in device/system prognostic health management and fault detection.
  • Explainable AI paving the way to explainable reliability.
  • Uncertainty quantification and management within AI for reliability, resilience, and security.
  • Emerging reliability standards and best practices for AI/ML deployment.
Target Audience:

This panel is intended for researchers, engineers, and policymakers working on reliable AI development and its applications in reliability-critical systems.

Becoming a More Responsible Human Through Responsible AI Teaming

Abstract:

AI has been around for a long time, while humans have existed much longer. The rapid advancement of technology and its applications has brought AI into mainstream discourse, shaping both the present and future. This convergence of human intelligence and artificial intelligence raises critical questions about their coexistence.

The increasing autonomy of AI in decision-making—such as self-driving cars, home care robotics, and drone operations—introduces challenges related to risk, responsibility, and trust in public safety. Stringent AI system requirements are not just a technical challenge but a design imperative. Building a human-AI partnership that adapts dynamically and interacts intelligently is crucial to achieving responsible AI teaming.

This panel brings together technology experts from diverse backgrounds spanning industry, academia, and government. Each panelist will provide insights into the evolving relationship between humans and AI, exploring how responsible AI integration can make humans more responsible while fostering trust and ethical considerations.

Moderator:

Kathy Grise

Senior Program Director, IEEE

Supports new and emerging IEEE initiatives, including cloud computing, big data, AI/ML, digital realities, digital twins, and public safety.

Serves as the Technical Program Chair for IEEE COMPSAC 2025 Symposium – Data Sciences, Analytics, & Technologies (DSAT).

Recipient of the 2024 IEEE Eric Herz Outstanding Staff Award, the highest recognition for IEEE staff members.

Previously held leadership positions at IBM, focusing on semiconductor R&D, process design kit enablement, and IT infrastructure implementation.

IEEE Senior Member and graduate of Washington and Jefferson College.

Panelists:

Dr. Ming Hou

Principal Scientist, Department of National Defence, Canada

Leads research on AI and autonomy for defense applications, providing science-based policy recommendations to senior decision-makers in the Canadian Department of National Defence (DND) and the Canadian Armed Forces.

Author of the influential book “Intelligent Adaptive Systems: An Interaction-Centered Design Perspective”, shaping international defense capabilities and AI-enabled technologies.

Contributor to the United Nations White Paper on Human-Machine Interfaces in Autonomous Weapon Systems.

Recipient of the 2020 DND Science and Technology Excellence Award and the 2021 President’s Achievement Award from the Professional Institute of the Public Service of Canada.

IEEE Fellow, Distinguished Lecturer, General Chair of the 2024 IEEE International Conference on Human-Machine Systems.

Christine Miyachi

Senior Software Development Manager, Microsoft + Nuance Communications

Leads a team of full-stack engineers developing AI-driven solutions in cloud computing and CI/CD on Azure.

Holds multiple patents and is an IEEE Senior Member. Past Chair of IEEE Future Directions Committee.

Holds degrees from MIT in Technology and Policy, Electrical Engineering, and System Design and Management.

Dr. May Dongmei Wang

Professor, Georgia Institute of Technology & Emory University

Leads research in biomedical big data and AI-driven Intelligent Reality (IR) for predictive, personalized, and precision health (pHealth).

Serves as Director of the Biomedical Big Data Initiative and Board Member of the American Board of AI in Medicine.

Fellow of AIMBE, IAMBE, IEEE, and Kavli Fellow. Holds 290+ publications with over 15,000 citations.

Research funded by NIH, NSF, CDC, VA, Georgia Research Alliance, and Amazon.

Key Takeaways:
  • Use cases and limitations of responsibly integrating AI into daily life, enhancing autonomy and creativity.
  • Optimal strategies for addressing trust, safety, and security concerns in AI-human teaming.
  • Workflows and best practices that ensure compliance with stringent AI system requirements.
Target Audience:

This panel is intended for researchers, engineers, policymakers, and industry professionals interested in responsible AI-human collaboration, AI ethics, and trust-building in AI applications.

Panel Description (AI Infrastructure):

Title & Topic:

Sustainable AI Hardware: Innovations for Minimizing Energy Usage and Environmental Impact

Panelists:

Frank Schirrmeister, Executive Director of Strategic Programs & System Solutions, Synopsys

Frank Schirrmeister leads strategic activities across system software and hardware-assisted development for industries like automotive, data centers, and 5G/6G communications, as well as for horizontals like Artificial Intelligence/Machine Learning. Prior to Synopsys, Frank held various senior leadership positions at Arteris, Cadence Design Systems, Imperas, Chipvision, and SICAN Microelectronics, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives, and customer engagement. He holds an MSEE from the Technical University of Berlin and actively participates in cross-industry initiatives as Chair of the Design Automation Conference’s Engineering Tracks.

https://www.linkedin.com/in/frankschirrmeister/

Martin Snelgrove, Co-Founder/CEO, Hepzibah AI:

Martin did his PhD in 1982 at Toronto, on linear and adaptive filters, with a fun side project of building an “at-memory” SIMD vector-cruncher attaching an array of the biggest DRAMs of the day — 1k*1b — to 1-bit industrial controllers. Two very different technologies, but over the years he helped the LMS math of adaptive filtering gradually infiltrate SERDES and data conversion; and then it hit AI, labelled as backprop. He founded Untether AI with some friends to bring at-memory to the AI inference party, and now that it’s proven to be the energy-efficiency winner he’s building Hepzibah AI to take it to the next level: ubiquity, scalability, on-line fine-tune, dynamic scheduling — and even better PPA.

https://www.linkedin.com/in/martin-snelgrove-02794620/


Renxin Xia, VP Hardware, Untether AI:

Renxin is a technology executive with 20+ years of engineering and management experience, encompassing AI accelerators, CPUs, FPGAs, and SoCs. Renxin has a proven, successful track record of building organizations and delivering products at both startups and large corporations.

Renxin is currently VP of Hardware Engineering at Untether AI, where he is a member of the executive staff and responsible for all hardware products and development. Renxin was Chief of Staff to CEO Lip-Bu Tan at Cadence Design Systems prior to Untether. Before Cadence, he was the first VP of Engineering at SiFive, a leader in RISC-V solutions. Renxin spent most of his career at Altera, where he led design and verification for multiple generations of FPGAs, culminating in overall responsibility for Stratix 10, the first flagship family using an Intel process. Renxin started his career at ESS Technology and was a co-founder and acting VP of Engineering at Centrality Communications, which was acquired by SiRF.

Renxin holds a BSEE and MSEE from Stanford University and an MBA from UC Berkeley Haas School of Business. 

https://www.linkedin.com/in/renxinxia/

Dr. Zhibin (David) Xiao, President, CASPA
Dr. Zhibin Xiao brings extensive experience in hardware-software co-design, focusing on the development of CPUs, DSPs, and specialized accelerators for applications in AI, databases, and video codecs. Dr. Xiao is a serial entrepreneur. He is the founder and CEO of a stealth-mode AI infrastructure startup dedicated to creating efficient AI software and hardware systems. Dr. Xiao’s entrepreneurial journey includes co-founding and serving as the Chief Architect of Moffett AI, where he led the development and commercialization of a pioneering deep-sparse AI Inference Chip and System. This system has been instrumental in advancing computer vision and large language models.
His earlier contributions include being part of the founding team at Alibaba’s cloud chip unit, developing Alibaba’s first AI chip, and working as a Principal Engineer at Oracle on the Software in-Silicon Team, where he developed several generations of in-memory database accelerators (DAX) for the SPARC CPU.
Dr. Xiao has published more than 17 papers and holds more than 15 US patents. His academic journey took him from Zhejiang University, where he obtained his BS and MS degrees, to the University of California, Davis, where he earned a PhD in Computer Engineering.
In addition to his technical achievements, Dr. Xiao is active in the professional community as a member of the board of directors, President, and Chair of the Chinese American Semiconductor Professional Association (CASPA), one of Silicon Valley’s premier non-profit professional organizations, with a 34-year history of excellence and community engagement.

https://www.linkedin.com/in/zhibin-david-xiao-0766651a/

Moderator:

 
Garry Chan, Head of AI Initiatives, ventureLAB

Garry Chan is a technology entrepreneur, investor, and advisor. He is passionate about advising, building, and scaling technology-driven businesses. He is the Chief AI Advisor/Head of AI Initiatives at ventureLAB, a leading Canadian deep tech incubator that helps founders scale globally competitive ventures. He is also the Chief Technology Officer at AI Partnerships Corp, whose mission is to make AI more accessible and affordable by building an affiliate AI network and investing in enterprise AI technology.

Garry educates, mentors, and advises tech companies directly and through universities and incubators. He designs and teaches postgraduate courses at Seneca Polytechnic and Learning Tree International.

He graduated from the University of Calgary (BSc), the Rotman School of Management at the University of Toronto (Executive MBA), and Carnegie Mellon University (MSc).

https://www.linkedin.com/in/garrychan1/

Abstract:

As AI continues to scale across industries, so does its environmental footprint. From large-scale model training to the growing demands of real-time inference, the energy cost of AI is becoming one of the field’s most pressing challenges.

This panel brings together experienced leaders from across the AI hardware landscape to share real-world perspectives on how we can design and deploy AI systems more responsibly. We’ll look at the cutting-edge work being done today, from energy-efficient chip architectures and system-level design to practical hardware-software co-optimization, and at what’s still needed to make AI infrastructure truly sustainable.

Key topics this panel will explore:

  • Architectural innovations in AI hardware that enable greater energy efficiency without sacrificing performance
  • System-level design strategies for scaling sustainable AI infrastructure, from the edge to data centers
  • Collaborative approaches across industry and academia to foster sustainability

Target Audience:
This panel is designed for hardware engineers, AI researchers, system architects, and industry leaders focused on energy-efficient and sustainable AI systems. It is also relevant for policymakers and environmental advocates seeking actionable insights into minimizing the energy consumption and environmental impact of AI infrastructure.
Attendees will gain insights into innovations, strategies, and collaborations for building sustainable and scalable AI hardware ecosystems.

GenAI-Empowered AI-Native RAN

 

Panel Moderator

Dr. Syed Zaidi, School of Electronic and Electrical Engineering, University of Leeds, UK
 

Abstract

The panel will discuss the interplay between accelerated compute, AI, and radio access networks. We aim to understand the new opportunities unlocked by this convergence. We will also discuss the evolving agentic architecture and the safety aspects of such implementations.
 

Speakers:

Mischa Dohler is now VP of Emerging Technologies at Ericsson Inc. in Silicon Valley, working on cutting-edge topics in 5G/6G, AR, and Generative AI. He serves on the Spectrum Advisory Board of Ofcom and on the AI/ML Technical Advisory Committee of the FCC.

He is a Fellow of the IEEE, the Royal Academy of Engineering, the Royal Society of Arts (RSA), the Institution of Engineering and Technology (IET), and the Asia-Pacific Artificial Intelligence Association (AAIA), and a Distinguished Member of Harvard Square Leaders Excellence. He is a serial entrepreneur with 5 companies; a composer and pianist with 5 albums on Spotify/iTunes; and fluent in several languages. He has received ample coverage from national and international press and media, and is featured on Amazon Prime.

He is a frequent keynote, panel and tutorial speaker, and has received numerous awards. He has pioneered several research fields, contributed to numerous wireless broadband, IoT/M2M and cyber security standards, holds a dozen patents, organized and chaired numerous conferences, was the Editor-in-Chief of two journals, has more than 300 highly-cited publications, and authored several books. He is a Top-1% Cited Innovator across all science fields globally.

He was Professor in Wireless Communications at King’s College London and Director of the Centre for Telecommunications Research from 2013 to 2021, driving cross-disciplinary research and innovation in technology, sciences, and the arts. He is the co-founder and former CTO of the IoT-pioneering company Worldsensing, co-founder and former CTO of the AI-driven satellite company SiriusInsight.AI, and co-founder of the sustainability company Movingbeans. He also worked as a Senior Researcher at Orange/France Telecom from 2005 to 2008.


Richard Tong is a prominent advocate for standardization and collaboration between academia and industry in the field of artificial intelligence. He currently serves as Chair of the IEEE Artificial Intelligence Standards Committee and Chair of the IEEE 3394 LLM Agent Interface Standard Working Group. He also acts as a liaison from the IEEE Computer Society to ISO/IEC JTC1 SC42 and is the former Chair of the IEEE Learning Technology Standards Committee.

Richard is the Co-chair of several major conferences and initiatives, including the 2025 IEEE Enterprise GenAI Summit and the Education Vertical Track of the 2024 IEEE Conference on Artificial Intelligence in Singapore. He is also Co-chair of the Industry, Innovation, and Practitioner Track at the 2024 AIED Conference in Brazil.

As the Co-founder of NEOLAF, an agent company for education, and the former Chief Architect of Squirrel AI Learning, Richard has deep expertise in AI for education. He has led AI+Education R&D efforts in collaboration with Stanford MediaX and Carnegie Mellon University, where he spearheaded research on adaptive personalized education at scale. His projects have also involved institutions like UC Berkeley, University of Florida, Columbia University, and the University of Memphis.

Richard’s research interests include neuro-symbolic cognitive architecture, human-in-the-loop AI, trustworthy AI, self-improving agents, and multimodal reasoning. He is the lead researcher behind the NEOLAF agent framework and the creator of the OLAF adaptive instructional system stack.


Professor Jie Xu is a leading expert in Distributed Computing Systems with over 35 years of experience. He leads the Research Peak of Excellence at the University of Leeds, directs the EPSRC-funded White Rose Grid e-Science Centre involving Leeds, Sheffield, and York, and heads the Distributed Systems and Services (DSS) Theme at Leeds. Formerly Professor at the University of Durham, he joined Leeds in 2003. A Turing Fellow and UKCRC executive member, he has advised governments and industry globally, including Singapore IDA, Lenovo, and InnovateUK. He has held editorial roles with IEEE and ACM journals and serves on numerous prestigious IEEE steering committees. With over 300 publications and more than £25M in research funding, he has received major awards including the BCS/AT&T Brendan Murphy Prize. He also co-founded two spin-outs in AI and digital twin technologies and collaborates extensively with industry leaders such as Rolls-Royce, BAE Systems, JLR, Google, and Alibaba.


Syed A. R. Zaidi (Moderator) is an Associate Professor at the University of Leeds working in the broad area of communication and sensing for robotics and autonomous systems. He co-leads the UK Department for Science, Innovation and Technology (DSIT) and UKRI funded Communications Hub for Empowering Distributed Cloud Computing Applications and Research (CHEDDAR), which has received £16 million in research funding. He also leads the Emergent Compute Pillar within the CHEDDAR work programme, as well as a DSIT- and AISI-funded initiative on agentic AI for cloud-native telecommunications.

Earlier, from 2013 to 2015, he was associated with the SPCOM research group, working on a US ARL-funded project in network science. From 2011 to 2013, he was a research associate at the International University of Rabat. He was also a visiting research scientist at the Qatar Innovations and Mobility Centre from October to December 2013, working on the QNRF-funded project QSON. He completed his doctoral degree at the School of Electronic and Electrical Engineering, University of Leeds, where he was awarded the G. W. and F. W. Carter Prize for best thesis and best research paper.

He has published 90+ papers in leading IEEE conferences and journals. From 2014 to 2015, he was an editor of IEEE Communications Letters and the lead guest editor for the IET Signal Processing Journal’s Special Issue on Signal Processing for Large-Scale 5G Wireless Networks. He is also an editor of the IET books on Access, Fronthaul, and Backhaul. He is currently an Associate Technical Editor for IEEE Communications Magazine and the Industrial Chair for ICC 2026. He has been awarded COST IC0902, Royal Academy of Engineering, EPSRC, Horizon EU, and DAAD grants (circa £5.5 million) to promote his research outputs. His current research interests include GenAI for cloud-native telecommunications, as well as the modelling, analysis, and design of large-scale connected intelligent systems.