Panels
- Toyota’s Manufacturing Agentic Journey
- Algorithmic Bias and Discrimination in AI: Developing Regulatory Frameworks to Mitigate Harm in Africa
- Ethical and Responsible Autonomous AI for Human-AI Interaction
- Perspectives on Current Global AI Governance Trends and the Way Ahead for 2025
- Great Infrastructure Debate: Choosing the Right AI Backbone Amidst Chaos
- Building Trust for Human-AI Partnerships in Security
- AI, Cybercrime, and Society: Closing the Gap Between Threats and Defenses
Toyota’s Manufacturing Agentic Journey
Toyota has strategically leveraged Generative AI to drive innovation and operational excellence across key domains, including manufacturing, battery plants, paint design, and enterprise knowledge management, through solutions such as the Knowledge Bot (Kura) and Global Chatbots. By integrating Large Language Models (LLMs), Toyota has developed a robust methodology to evaluate, score, and quantify AI outputs against its quality benchmarks, ensuring high standards and reliability in production environments.
This effort highlights critical lessons in collaborative requirement discovery and outcome-defined capability selection, enabling efficient resource utilization and fostering strong enterprise leadership engagement. These practices ensure AI initiatives align with business objectives, accelerating adoption and delivering measurable value across the organization.
A cornerstone of this success is the application of the agentic framework for deploying Generative AI in production, enabling autonomous AI agents to execute complex tasks and optimize processes seamlessly. This approach has transformed workflows, enhanced decision-making, and driven significant operational efficiencies.
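For readers unfamiliar with the pattern, the sketch below shows the core of an agentic loop: a model repeatedly picks a tool, observes the result, and stops once it can answer. This is a generic illustration of the pattern named above, not Toyota’s implementation; the `llm_decide` callable, the `lookup_spec` tool, and the torque example are invented.

```python
"""Bare-bones agentic loop: decide, call a tool, observe, repeat.

Generic sketch of the pattern, not Toyota's implementation; `llm_decide`,
the `lookup_spec` tool, and the torque example are invented.
"""
import json
from typing import Callable

def lookup_spec(part: str) -> str:
    # Stand-in for a real data source (manuals, MES, knowledge base).
    return f"(stub) torque spec for {part}: 35 Nm"

TOOLS: dict[str, Callable[[str], str]] = {"lookup_spec": lookup_spec}

def run_agent(task: str, llm_decide: Callable[[str], str], max_steps: int = 5) -> str:
    """Loop until the model emits a final answer or the step budget runs out."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        # llm_decide returns JSON: {"tool": ..., "arg": ...} or {"answer": ...}.
        decision = json.loads(llm_decide(transcript))
        if "answer" in decision:
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["arg"])
        transcript += f"\n{decision['tool']}({decision['arg']}) -> {observation}"
    return "step budget exhausted"

if __name__ == "__main__":
    # Scripted stand-in for a real LLM so the sketch runs offline.
    script = iter([
        '{"tool": "lookup_spec", "arg": "bolt A7"}',
        '{"answer": "Torque bolt A7 to 35 Nm."}',
    ])
    print(run_agent("What torque for bolt A7?", lambda _context: next(script)))
```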
Toyota’s strategic deployment of Generative AI demonstrates a commitment to innovation, showcasing how advanced technologies can be effectively utilized to maintain quality, scale operations, and strengthen leadership in an evolving digital landscape.
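As a concrete illustration of what evaluating and scoring LLM outputs against quality benchmarks can look like, the sketch below shows a minimal rubric-based, LLM-as-judge scoring gate. It is a hypothetical sketch, not Toyota’s published methodology: the rubric criteria, the 0.85 threshold, and the `call_judge_model` callable are all invented for the example.

```python
"""Minimal rubric-based scoring gate for LLM outputs.

Hypothetical illustration only: the criteria, the 0.85 threshold, and
`call_judge_model` are invented, not Toyota's published methodology.
"""
from dataclasses import dataclass
from statistics import mean
from typing import Callable

@dataclass
class Criterion:
    name: str
    prompt: str  # instruction telling the judge model what to score

def score_output(
    output: str,
    criteria: list[Criterion],
    call_judge_model: Callable[[str], float],  # returns a score in [0, 1]
) -> dict[str, float]:
    """Score one generated answer against each rubric criterion."""
    return {
        c.name: call_judge_model(f"{c.prompt}\n\nCandidate answer:\n{output}")
        for c in criteria
    }

def passes_benchmark(scores: dict[str, float], threshold: float = 0.85) -> bool:
    """Release gate: the mean rubric score must clear the quality bar."""
    return mean(scores.values()) >= threshold

if __name__ == "__main__":
    # Stub judge so the sketch runs without a model; a real deployment would
    # call an LLM and parse its numeric verdict instead.
    fake_judge = lambda prompt: 0.9
    rubric = [
        Criterion("factual_accuracy", "Rate factual accuracy from 0 to 1."),
        Criterion("procedure_compliance", "Rate adherence to the documented procedure from 0 to 1."),
    ]
    scores = score_output("Torque the bolt to 35 Nm.", rubric, fake_judge)
    print(scores, passes_benchmark(scores))
```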
Target Audience: Generative AI developers, executives, and researchers.
Stephen Ellis
Technical Generative AI Product Manager, Toyota Motors North America
10 years of experience in research strategy and emerging technology applications for companies ranging from startups to Fortune 50 enterprises.
Former Director of the North Texas Blockchain Alliance, leading blockchain and cryptocurrency competency development among executives and software developers.
Former CTO of Plymouth Artificial Intelligence, advising companies on leveraging AI for new business models.
Currently enabling Generative AI solutions across Toyota’s enterprise, driving transformation in new mobility solutions and operational efficiency.
Ravi Chandu Ummadisetti
Generative AI Architect/Product Lead
Strategic AI leader with 10+ years of experience driving enterprise AI/ML transformations across Automotive, Banking, Healthcare, and Telecommunications.
Expert in Generative AI applications, including Retrieval-Augmented Generation (RAG), model fine-tuning, and secure AI platforms.
Specialized in optimizing manufacturing, legal operations, and enterprise AI applications aligned with business goals.
Strong collaborator with global technical and executive teams, delivering scalable AI solutions that drive innovation and efficiency.
Algorithmic Bias and Discrimination in AI: Developing Regulatory Frameworks to Mitigate Harm in Africa
This research investigates the manifestations of algorithmic bias and discrimination in AI systems deployed in African contexts.
It explores how biases embedded in data, algorithms, and the socio-cultural contexts of AI development can lead to discriminatory outcomes. The study aims to:
- Identify and categorize biases in AI systems deployed in Africa.
- Analyze the impact of these biases on individuals and communities.
- Explore methods for identifying and mitigating algorithmic bias.
The work addresses algorithmic bias, data bias, and societal bias. Key findings include:
- AI systems deployed in Africa are susceptible to racial and gender biases.
- Biases can perpetuate existing inequalities and harm communities.
- Mitigating bias requires fair data collection, unbiased algorithms, and continuous monitoring (see the sketch after this list).
- A collaborative effort among researchers, developers, policymakers, and communities is necessary.
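To make the identification step concrete, the sketch below computes one common group-fairness measure, the demographic parity gap, on toy data. The metric choice, the data, and the 0.1 tolerance are illustrative assumptions, not recommendations from the research.

```python
"""Toy group-fairness audit: demographic parity gap.

Illustrative only; the data and the 0.1 tolerance are invented, and real
audits would use more than one fairness metric.
"""
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 0, 1, 0]                  # e.g. loan approvals
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]   # demographic group labels
    gap, rates = demographic_parity_gap(preds, grps)
    print(rates, gap)
    if gap > 0.1:  # hypothetical tolerance; real thresholds are policy choices
        print("Flag for review: approval rates differ across groups.")
```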
Adv. Dennis Ramphomane
Practising advocate, specializing in AI and Intellectual Property Law.
Advocate of the High Court of South Africa. Faculty member of the South African Internet Governance Forum (ZAIGF).
Member of the South African Artificial Intelligence Association.
Ms. Thoko Miya
CEO of Girlhype Women Who Code.
Chairperson of Internet Society South Africa.
UN South African Youth Ambassador.
Member of the South African Internet Governance Board.
Dr. Tabani Moyo
Regional Director, Media Institute of Southern Africa (MISA).
Serves on the international committee drafting an AI Charter on Media, chaired by Nobel Peace Prize laureate Maria Ressa.
Advisor to governments and multilateral bodies on freedom of expression and media freedom.
Dr. Teddy Nalubega
Head of AI Division at Knowledge Consulting Limited, Uganda.
Prof Sizwe Snail
Adjunct Professor, Nelson Mandela University, Gqeberha, and Visiting Professor, FGV, Rio de Janeiro.
Director, Snail Attorneys @ Law Inc.
Attorney of the High Court of South Africa.
Ethical and Responsible Autonomous AI for Human-AI Interaction

Professor Keeley Crockett
Manchester Metropolitan University, UK. Leads the Machine Intelligence theme in the Centre for Advanced Computational Science. She has over 27 years of experience in Ethical and Responsible AI, computational intelligence, psychological profiling, fuzzy systems, and dialogue systems. She is a member of the IEEE Computational Intelligence Society ADCOM (2023-25) and chairs the IEEE Technical Committee SHIELD (2023-24).
Professor Ricardo Baeza-Yates
Northeastern University, USA. Research Professor at the Institute for Experiential AI. Former VP of Research at Yahoo Labs (2006-2016). Co-author of the best-seller Modern Information Retrieval. Fellow of ACM (2009) and IEEE (2011). Expert in AI bias, data science, and algorithms, actively involved in global AI ethics initiatives.
Professor Yew-Soon Ong
Nanyang Technological University, Singapore. President Chair Professor in Computer Science at NTU and Chief AI Scientist of Singapore’s Agency for Science, Technology and Research (A*STAR). Founding Editor-in-Chief of IEEE Transactions on Emerging Topics in Computational Intelligence. His research focuses on AI and computational intelligence. Recipient of multiple IEEE outstanding paper awards and listed among the World’s Most Influential Scientific Minds.
Professor Jim Torresen
University of Oslo, Norway. Leads the Robotics and Intelligent Systems research group. Research areas include AI ethics, machine learning, and robotics. Has published over 300 peer-reviewed papers, delivered 48 invited talks/keynotes, and organized international AI conferences. Member of the Norwegian Academy of Technological Sciences (NTVA) and the IEEE RAS Technical Committee on Robot Ethics.
The panel examines ethical responsibility across three phases:
- Design phase: Developing fair and responsible AI systems.
- User decision-making: When and where AI systems should be deployed.
- Operational phase: Ethical reasoning in autonomous AI systems.
Perspectives on Current Global AI Governance Trends and the Way Ahead for 2025
This panel highlights global developments in AI governance over the past year in the United States, Europe, and the Asia-Pacific.
The discussion will explore relevant issues and key questions for AI governance in the coming year.
Mia Hoffmann
Research Fellow, AI Governance, Georgetown’s Center for Security and Emerging Technology
Specializing in international AI regulation and AI risk mitigation. She previously worked at the European Commission and researched AI adoption and its workforce implications.
Holds a B.Sc. in International Economics from the University of Tuebingen and an M.Sc. in Economics from Lund University.
Mina Narayanan
Research Analyst, AI Governance, Georgetown’s Center for Security and Emerging Technology
Focuses on U.S. AI governance, including AI standards, risk management, and evaluations. Formerly worked at the U.S. Department of State and the National Institute of Nursing Research.
Holds a B.Eng. in Software Engineering with a Minor in Political Science from Auburn University and an M.Sc. in Public Policy from Carnegie Mellon University.
Cole McFaul
Research Analyst, AI Governance, Georgetown’s Center for Security and Emerging Technology
Researches AI developments in the Asia-Pacific and China’s science and technology ecosystem. Previously worked at CSIS and Stanford University’s Shorenstein Asia-Pacific Research Center.
Holds a B.A. in Political Science and an M.A. in East Asian Studies from Stanford University.
Owen J. Daniels
Associate Director of Analysis and Andrew W. Marshall Fellow, Georgetown’s Center for Security and Emerging Technology
Works on military and AI governance issues. Previously researched AI ethics, autonomous weapons norms, and strategy at IDA, the Atlantic Council, and Aviation Week Magazine.
Holds a degree in International Relations with minors in Arabic and Near Eastern Studies from Princeton University and is pursuing an MPP at Georgetown.
Since 2023, major AI advancements have prompted global governance responses:
- The U.S. issued new executive orders and policy frameworks to ensure AI security and trustworthiness.
- The European Union finalized its AI Act and is drafting a code of practice for general-purpose AI.
- China launched its Global AI Governance Initiative, shaping international AI regulatory strategies.
This panel will evaluate the progress of these initiatives, highlighting key synergies, gaps, and areas of tension.
The discussion will focus on how the U.S., EU, and China manage AI risks and benefits, and what these governance trends signal for AI in 2025.
This panel is aimed at researchers and practitioners tracking AI governance trends, as well as technology developers interested in understanding governance impacts across different regions. The discussion will offer a comprehensive overview of AI policy developments over the past year.
Great Infrastructure Debate: Choosing the Right AI Backbone Amidst Chaos
Nilesh Shah
VP of Business Development, ZeroPoint Technologies
Regular contributor to standards bodies such as SNIA, OCP, JEDEC, RISC-V, and CXL Consortium. Frequent speaker at AI and memory technology conferences. Formerly led strategic planning for Intel’s Data Center SSD products.
Vik Malyala
MD & President, EMEA, Supermicro
Senior Vice President of Technology and AI at Supermicro, overseeing multiple technology initiatives and partnerships.
Craig Gieringer
Senior Vice President (Alliances, Channels, and Revenue)
Expert in high-performance computing and AI with leadership roles in three IPO-bound companies and one $700M acquisition. Former executive at META Group, BlueArc, Infinidat, Hitachi Vantara, and EMC.
Zhitao Li
Director of Engineering, AI Infrastructure, Uber
Leads Uber’s AI infrastructure team, overseeing model training, inference, accelerators, and GenAI systems. Previously worked on Google’s TensorFlow Extended (TFX) team and on Uber’s containerization efforts.
Emily
Chief Technology Officer, Marveri
CTO of Marveri, a legal tech startup specializing in transactional due diligence. Former PhD candidate at MIT, researching multi-modal vision-language modeling.
With the rapid emergence of new LLMs and diverse cloud infrastructure options beyond traditional CPUs and GPUs, selecting the right AI infrastructure is a crucial and contentious decision. This panel will explore strategic AI deployment across industries including healthcare, transportation, manufacturing, and business intelligence.
- Infrastructure Battles: Evaluating cloud, on-premises, and hybrid models with insights from AWS, Google, Microsoft, CoreWeave, and Vultr.
- Data Sovereignty: Addressing storage, security, and compliance across platforms.
- Accelerator Wars: Comparing GPUs, TPUs, and emerging solutions like Groq, Furiosa AI, and Cerebras.
- Scalability vs. Sustainability: Balancing TCO, resource optimization, and sustainable AI growth.
- Model Mayhem: Choosing and fine-tuning AI models, emphasizing multimodal LLMs and integration with DevOps/SecOps.
- Build or Buy: The pros and cons of in-house AI development versus AI as a service.
- Monetizing AI: Exploring ROI strategies for enterprises and hyperscale AI operations.
- True Costs Unveiled: Analyzing total AI deployment costs, including infrastructure, personnel, and market readiness (a back-of-envelope cost sketch follows this list).
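As a back-of-envelope illustration of the true-cost point above, the sketch below compares renting cloud GPUs with buying hardware over three years. Every figure is a hypothetical placeholder, not a vendor quote; the takeaway is that utilization, power, and personnel shape the comparison as much as sticker price.

```python
"""Back-of-envelope TCO comparison: cloud GPU rental vs. on-prem purchase.

All figures are hypothetical placeholders, not quotes from any vendor;
the panel's point is that the full cost includes more than hardware.
"""
def cloud_tco(gpu_hourly_rate, gpus, hours_per_year, years):
    return gpu_hourly_rate * gpus * hours_per_year * years

def onprem_tco(hardware_cost, annual_power_cooling, annual_staff, years):
    return hardware_cost + (annual_power_cooling + annual_staff) * years

if __name__ == "__main__":
    years = 3
    cloud = cloud_tco(gpu_hourly_rate=2.50, gpus=8, hours_per_year=8760, years=years)
    onprem = onprem_tco(hardware_cost=300_000, annual_power_cooling=25_000,
                        annual_staff=60_000, years=years)
    print(f"3-year cloud:   ${cloud:,.0f}")
    print(f"3-year on-prem: ${onprem:,.0f}")
    # Break-even depends heavily on utilization: idle cloud GPUs can be
    # released, while purchased hardware costs the same at 10% or 100% load.
```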
This panel will engage in a dynamic discussion about AI infrastructure trends and challenges. Attendees will gain insights into aligning AI initiatives with the most suitable infrastructure strategies, optimizing for both current capabilities and future demands.
Building Trust for Human-AI Partnerships in Security
Cybersecurity offers a rich case study in human-AI collaboration with lessons that apply across domains. Through real-world examples of AI integration in threat intelligence, detection, and response, this panel explores how teams build effective partnerships with AI systems. We examine practical approaches to reducing cognitive load, improving decision-making, and creating sustainable workflows that benefit technical specialists and broader stakeholders.
Security teams face increasingly complex challenges that demand effective collaboration between humans and AI systems. This panel examines practical approaches to building successful human-AI partnerships through real-world examples from threat intelligence, security operations, and incident response.
Our diverse panel brings together academic research, operational experience, and product development insights to address key challenges:
- Designing interfaces that adapt to different expertise levels without compromising effectiveness
- Implementing explainable AI that builds confidence in automated decisions
- Creating workflows that reduce analyst fatigue while maintaining human judgment
- Developing clear metrics for evaluating AI system trustworthiness
- Setting realistic expectations for AI capabilities across different security domains
- Building sustainable practices that prevent burnout and enhance team resilience
Drawing from direct experience integrating AI across security functions, panelists will share specific strategies for:
- Evaluating when and how to trust AI-generated insights
- Establishing effective feedback loops between humans and AI systems (see the sketch below)
- Measuring impact on team performance and analyst wellbeing
- Scaling AI benefits across different security roles and expertise levels
Attendees will learn specific methods for evaluating AI security tools, clear criteria for assessing AI system reliability, and proven techniques for integrating AI assistance without overwhelming analysts.
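As one concrete illustration of the feedback-loop strategy listed above, the sketch below shows a toy alert-triage loop in which analyst verdicts adjust an auto-escalation confidence threshold. It is a hypothetical design, not any panelist’s product; the threshold values and the adjustment rule are invented.

```python
"""Minimal human-in-the-loop alert-triage feedback sketch.

Hypothetical illustration of the feedback-loop idea, not any panelist's
product: analyst verdicts nudge the auto-escalation threshold.
"""
from dataclasses import dataclass, field

@dataclass
class TriageLoop:
    threshold: float = 0.9      # model confidence needed to auto-escalate
    step: float = 0.01          # how strongly one verdict moves the threshold
    history: list = field(default_factory=list)

    def route(self, confidence: float) -> str:
        """AI routing decision: escalate automatically or queue for a human."""
        return "auto_escalate" if confidence >= self.threshold else "human_review"

    def feedback(self, confidence: float, analyst_says_real: bool) -> None:
        """Analyst verdicts tighten or relax the threshold over time."""
        self.history.append((confidence, analyst_says_real))
        if analyst_says_real and confidence < self.threshold:
            self.threshold = max(0.5, self.threshold - self.step)   # AI was too cautious
        elif not analyst_says_real and confidence >= self.threshold:
            self.threshold = min(0.99, self.threshold + self.step)  # false alarm escaped

if __name__ == "__main__":
    loop = TriageLoop()
    print(loop.route(0.93))            # auto_escalate
    loop.feedback(0.93, analyst_says_real=False)
    print(round(loop.threshold, 2))    # threshold rises after the false alarm
```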
Dr. Margaret Cunningham
Technical Director, Security & AI Strategy at Darktrace
Dr. Cunningham advises on AI security strategy, innovation, data security, and risk governance. Formerly a Principal Product Manager at Forcepoint and a Senior Staff Behavioral Engineer at Robinhood. She holds a Ph.D. in Applied Experimental Psychology and has multiple patents in human-centric risk modeling.
Dr. Dustin Sachs
Chief Technologist at CyberRisk Collaborative, Adjunct Professor at Lone Star College
Dr. Sachs specializes in Cyber-Risk Behavioral Psychology, AI security, and compliance. With a Doctorate in Cybersecurity, he leads cybersecurity programs, fosters awareness, and bridges academic research with practical implementation.
Dr. Divya Ramjee
Assistant Professor, Rochester Institute of Technology
Dr. Ramjee leads RIT’s Technology & Policy Lab, analyzing security, AI policy, and privacy challenges. She is also an adjunct fellow at CSIS in Washington, DC, and has held senior roles in the U.S. government across various agencies.
Dr. Matthew Canham
Executive Director, Cognitive Security Institute; Affiliated Faculty, George Mason University
Former FBI Supervisory Special Agent with 21 years of research in cognitive security. He has advised NASA, DARPA, and NATO, and his expertise includes synthetic media social engineering and online influence campaigns.
Chris Puderbaugh
Co-Founder, CTO, CISO at Pellonium
Cybersecurity and AI expert leading AI-driven threat detection and risk mitigation. His work focuses on AI/ML applications, cloud security, and developing AI-powered cybersecurity solutions.
Heidi Trost
Owner and Principal, Voice+Code LLC
Author of Human-Centered Security and host of the Human-Centered Security podcast. UX researcher focused on designing secure and user-friendly cybersecurity solutions.
AI, Cybercrime, and Society: Closing the Gap Between Threats and Defenses
In this panel, we examine the changing landscape of AI-enabled cybercrime, exploring both the opportunities and the challenges that AI introduces in enabling cybercrime and in defending against it.
Gil Baram, PhD
Title: Senior Lecturer, Cyber Strategy and Policy, Bar-Ilan University
Bio: Dr. Gil Baram is a senior lecturer in the Political Studies Department at Bar-Ilan University. She is a non-resident research scholar at the Center for Long-Term Cybersecurity and the Berkeley Risk and Security Lab (a joint appointment) at the University of California, Berkeley, and a senior adjunct research fellow at the Centre of Excellence for National Security at Nanyang Technological University, Singapore. Her research interests span cyber conflict, including the impact of technology on national security, AI-enabled cybercrime, and cyber threats to space systems.
LinkedIn: https://www.linkedin.com/in/dr-gil-baram-cyber/
Scott Hellman
Title: FBI Cyber Supervisory Special Agent
Bio: Supervisory Special Agent Scott Hellman has spent nearly 17 years investigating criminal and national security cybercrime with the FBI. He currently leads a team of cybercrime investigators in the San Francisco Bay Area, where they build community through outreach and disrupt cybercriminals and the services they depend on. He holds a J.D. and a Bachelor’s degree in chemistry.
LinkedIn: https://www.linkedin.com/in/scott-h-42baba310/
Vrushali Channapattan
Title: Director of Engineering, Okta
Bio: Vrushali is the Director of Engineering at Okta, leading the Data and AI organization. Over the past two decades, she has led key efforts in democratizing petabyte-scale data and influenced the design of major big data technologies, including serving on the Project Management Committee for open-source Apache Hadoop. Prior to Okta, she spent over nine years at Twitter, contributing to its growth from startup to public company. She holds a Master of Science in Computer Systems Engineering from Northeastern University in Boston.
LinkedIn: https://www.linkedin.com/in/vrushalic/
Nathan Wiebe
Title: Chief Information Security Officer, Contra Costa County, California
Bio: Nathan Wiebe is an experienced information security and technology executive and an advocate for ethical AI implementation in the public sector, championing transparent and responsible AI adoption. He holds graduate degrees in business and cybersecurity from the University of Southern California and the University of California, Berkeley.
LinkedIn: https://www.linkedin.com/in/nwiebe/
T.C. Niedzialkowski
Title: Head of Security, Thumbtack
Bio: TC Niedzialkowski is an experienced cybersecurity leader who helps startups scale while thwarting cyber threats. He currently leads cybersecurity at Thumbtack, an online home services marketplace. Previously, he led cybersecurity at Nextdoor, a neighborhood-focused social media platform with 40 million weekly active users, and earlier led software security and incident response teams at the U.S. Federal Reserve.
LinkedIn: https://www.linkedin.com/in/tc-niedzialkowski/
Leah Pamela Walker
Title: Director, Berkeley Risk and Security Lab
Bio: Leah Walker is the Lab Director of the Berkeley Risk and Security Lab, overseeing its interdisciplinary research portfolio, which includes nuclear arms control, nuclear weapons policy, defense analyses, emerging defense technologies, the governance of emerging technologies, industrial policy, and strategic competition. She also conducts research on the governance of military and commercial artificial intelligence, Russian and Chinese nuclear posture and modernization, nuclear and radiological security, and maritime security and strategy.
Moderator experience: Leah Walker is an experienced moderator who has led professional and academic panels at UC Berkeley, the RSA Conference, and other prestigious venues, and has years of experience creating engaging, thought-provoking conversations.
Abstract
This panel will explore the evolving landscape of AI-enabled cybercrime, highlighting its dual role as both a tool for cybercriminals and a resource for cybersecurity professionals. Bringing together perspectives from industry, academia, and local government, the discussion will provide a multifaceted analysis of how AI is reshaping the digital threat landscape. Panelists will examine how Generative AI is lowering barriers to cybercrime, enabling adversaries to automate attacks, create deepfakes, and deploy adaptive phishing campaigns with unprecedented ease. At the same time, they will explore AI’s potential to enhance defense mechanisms, including malware detection, threat analysis, and automated response systems.
The panel will feature a diverse group of experts offering varied perspectives on AI-enabled cybercrime. An industry expert will present case studies of real-world AI-driven attacks, illuminating current tactics targeting businesses and emerging threat vectors. An academic researcher will provide a theoretical framework for evaluating AI’s role in cybersecurity, offering data-driven insights on the discrepancy between perceived and actual AI-enabled threats. Federal and local government representatives will address public sector challenges, exploring policy implications, privacy concerns, and the broader societal and economic impacts of AI-powered cybercrime. This multidisciplinary approach will offer attendees a comprehensive understanding of the complex landscape of AI in cybercrime, from practical incidents to theoretical analysis and policy considerations.
By the end of the session, attendees will gain a balanced understanding of AI’s role in both enabling and combating cybercrime. The panel will equip participants with actionable strategies to defend against evolving threats while preparing for future developments in AI technologies.
Target Audience
This session is designed for a diverse audience, including cybersecurity professionals, tech innovators, law enforcement, and policymakers. By offering a comprehensive overview of both the threats and solutions at the intersection of AI and cybercrime, the panel will equip attendees with practical insights and strategies to navigate this rapidly changing field.