Workshops Overview

1. Workshop AI for Sustainable Energy – CAI 2025

Energy remains the cornerstone of societal prosperity, with its production, delivery, and management critical to human advancement. As we progress into 2025, the energy field continues to transform rapidly, requiring us to broaden our approach to encompass sophisticated control, optimization, and decarbonization across increasingly complex energy systems. The urgent energy demands of growing economies persist, while our planet’s finite resources and the escalating climate crisis necessitate a sharper focus on sustainability, efficiency, and resilience. Navigating the multifaceted socio-economic dimensions of the energy sector also makes trust in AI a crucial concern. Trust is necessary to ensure that AI systems are reliable, secure, and capable of making informed decisions, and transparency and accountability are key components of that trust: users and stakeholders need confidence that the algorithms and models used in energy systems make fair and unbiased decisions.

2. BEComLLM Workshop at IEEE CAI 2025

The BEComLLM workshop seeks to uncover the possibilities of integrating large language models (LLMs) with evolutionary computation (EC) to push the boundaries of AI, machine learning, and optimization. Our goal is to foster collaboration among researchers and practitioners to:

  • Leverage LLMs in EC for creating adaptive, scalable algorithms.
  • Advance the fields of LLMs and EC through hybridization.
  • Facilitate knowledge-sharing on the synergies and functions of these technologies, broadening their potential impact.
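
As a concrete illustration of the first goal, the sketch below shows an evolutionary loop in which an LLM could serve as the variation operator. The function llm_propose_variant is a hypothetical placeholder for whatever LLM API a practitioner would call; here it falls back to a simple random perturbation so the sketch stays self-contained and runnable.

    import random

    def llm_propose_variant(candidate):
        # Hypothetical placeholder: in practice this would prompt an LLM with the
        # candidate (and, e.g., its fitness) and parse a proposed variant from the
        # response. A random perturbation stands in so the sketch runs on its own.
        return [gene + random.gauss(0, 0.1) for gene in candidate]

    def fitness(candidate):
        # Toy objective: maximize closeness to the origin.
        return -sum(x * x for x in candidate)

    def evolve(pop_size=20, dims=5, generations=50):
        population = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]                      # selection: keep the better half
            children = [llm_propose_variant(p) for p in parents]   # LLM-assisted variation
            population = parents + children
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("best candidate:", [round(x, 3) for x in best], "fitness:", round(fitness(best), 4))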

3. Workshop on Neural Architecture Search

    Neural Architecture Search (NAS) is a powerful machine learning technique that automates the design of neural network architectures, rather than requiring the network’s structure, such as its layers, number of neurons, and activation functions, to be defined by hand. NAS can discover innovative architectures that surpass manually designed ones, often resulting in models that are more accurate, efficient, or simpler. It involves searching through a predefined space of candidate architectures to find the most effective one for a specific task, such as image classification or natural language processing. NAS aims to optimize aspects like accuracy, efficiency, and model size, often using techniques such as:

  • Reinforcement Learning: Agents explore different architectures and learn to improve designs based on performance feedback.

  • Evolutionary Algorithms: Inspired by natural selection, these algorithms evolve architectures over successive generations.

  • Gradient-Based Methods: Utilize differentiable architecture representations to optimize structures using gradient descent.

    The goal of NAS is to reduce the time and expertise required to design high-performing neural networks, making the process more efficient and accessible.
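
To make the search loop concrete, the following minimal sketch performs random search over a tiny, hypothetical architecture space. The evaluate function is a placeholder for the expensive step of training and validating each candidate network; a real NAS system would replace it with actual training (or a learned surrogate) and typically use the reinforcement-learning, evolutionary, or gradient-based strategies listed above rather than plain random sampling.

    import random

    # Hypothetical, tiny search space; real spaces are far larger and richer.
    SEARCH_SPACE = {
        "num_layers": [2, 4, 8],
        "units": [64, 128, 256],
        "activation": ["relu", "tanh", "gelu"],
    }

    def evaluate(arch):
        # Placeholder proxy score. In practice this would train `arch` and return
        # validation accuracy, possibly penalized by parameter count or latency.
        return random.random() - 0.001 * arch["num_layers"] * (arch["units"] / 256)

    def random_search(trials=20):
        best_arch, best_score = None, float("-inf")
        for _ in range(trials):
            arch = {name: random.choice(options) for name, options in SEARCH_SPACE.items()}
            score = evaluate(arch)
            if score > best_score:
                best_arch, best_score = arch, score
        return best_arch, best_score

    if __name__ == "__main__":
        arch, score = random_search()
        print("selected architecture:", arch, "proxy score:", round(score, 3))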

4. Workshop on Unmasking (Truly) Deepfakes: Not only Video Deepfakes

Deep fake detection: A (deep) Fake Problem?
Speaker: Tal Hassner (Co-founder and CTO, WEIR AI)
 
Abstract: Generative AI is transforming online media by enabling the creation of highly convincing synthetic content. The growing prevalence of AI-generated content introduces new risks to privacy and misinformation, while traditional detection methods are rapidly becoming insufficient or obsolete. In this talk, I will present two approaches that look beyond the conventional aim of detecting fakes. The first is model attribution, where I will present a “model parsing” technique that reverse-engineers generative models from their image outputs, making it possible to trace AI-generated media back to its source and expose coordinated misinformation campaigns. The second is media provenance, where I will present work on proactive methods that embed protective watermarks into images, enabling the detection of unauthorized manipulations while preserving owners’ privacy and intellectual property rights. While the challenges of generative AI will persist, these and similar methods offer promising frameworks for addressing evolving threats, helping preserve the trustworthiness of digital media in an increasingly AI-driven world.
Speaker Bio: Tal Hassner is a Co-founder and CTO of WEIR AI, an ex-Meta AI senior applied research lead, and an ex-AWS principal scientist. He is also affiliated with The Open University of Israel, Department of Mathematics and Computer Science, where he was an Associate Professor until 2018. From 2015 to 2018, he was a senior computer scientist at the Information Sciences Institute (ISI) and a Visiting Associate Professor at the Institute for Robotics and Intelligent Systems, Viterbi School of Engineering, both at USC, CA, USA, working on the IARPA Janus face recognition project. His work is mostly related to Computer Vision and Machine Learning. Much of his work relates to digital face processing, including face recognition, facial attribute prediction, face alignment, and 3D reconstruction of face shapes. He has also worked on problems related to text image processing (OCR), human action recognition in videos, dense correspondence estimation, feature representation and matching, and more. He is an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE-TPAMI) and IEEE Transactions on Biometrics, Behavior, and Identity Science (T-BIOM).
Modular Minds: Rethinking Deepfake Detection Through Model Disentanglement
Speaker: Tamar Glaser (Harman International, USA)
 
Abstract: As AI-generated content becomes increasingly indistinguishable from human-created media, traditional detection approaches face significant challenges. This talk explores the evolution of deepfake detection from artifact hunting to a more fundamental rethinking of the problem. The core thesis proposes a paradigm shift: rather than focusing on task-specific detection endpoints, we should prioritize the underlying representations and model disentanglement. By extracting and isolating information in projected latent spaces, we can develop more generalizable, robust, and transferable solutions that transcend specific detection tasks. This approach represents not only a technical advancement but a conceptual shift in how we understand and address the broader implications of synthetic media. The talk will conclude with reflections on future research directions and the broader impact of representation-focused approaches across multiple domains.
Speaker Bio: Tamar Glaser received her Master of Science degree in Electrical Engineering from the Technion-Israel Institute of Technology in 2011. Following her academic pursuits, she gained extensive industry experience at Elbit Systems (2010-2018) and Alibaba Damo Academy (2018-2021). She subsequently joined Meta (2022-2024), where she contributed to the advancement of computer vision and machine learning. Currently, at Harman International, she focuses on audio and multimodal representations. Glaser specializes in computer vision and content representation, leveraging her profound understanding of classical computer vision to address real-world challenges. Her expertise extends to responsible AI, with a particular focus on mitigating the challenges posed by generative AI, such as generated content detection, fairness and biases in generated content, and unlearning in generative models. Her research bridges the gap between classical and deep learning computer vision, leading to innovative solutions for complex industry problems. Her contributions to the field are evidenced by publications in top-tier venues such as ICCV, ECCV, ICPR, and Nature Machine Intelligence, as well as her active involvement in organizing workshops at ECCV and CVPR.

5. Workshop on Swarm Intelligence and Evolutionary Computation

Swarm Intelligence and Evolutionary Computation are two critical areas within Artificial Intelligence, both focused on solving optimization problems. Swarm Intelligence draws inspiration from the collective behaviors observed in nature, such as ant colonies, bird flocks, and fish schools. This approach has led to algorithms like Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), which solve problems through simple interactions among individual agents. Evolutionary Computation, on the other hand, mimics the principles of natural selection and genetics, using populations of candidate solutions that evolve through mechanisms such as selection, crossover, and mutation. Popular algorithms in this category include Genetic Algorithms (GA) and Evolution Strategies (ES). Both methodologies are widely applied across fields like optimization, machine learning, and robotics, where they explore complex solution spaces by leveraging nature-inspired processes.
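
As one small illustration, the sketch below implements a bare-bones Particle Swarm Optimization loop on a toy objective. The inertia and acceleration coefficients are common textbook defaults rather than values prescribed by the workshop, and the sphere function simply stands in for whatever problem is being optimized.

    import random

    def sphere(x):
        # Toy objective to minimize: sum of squares.
        return sum(v * v for v in x)

    def pso(dims=3, swarm_size=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(swarm_size)]
        vel = [[0.0] * dims for _ in range(swarm_size)]
        pbest = [p[:] for p in pos]                  # each particle's best-known position
        gbest = min(pbest, key=sphere)               # swarm-wide best-known position
        for _ in range(iters):
            for i in range(swarm_size):
                for d in range(dims):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])    # cognitive pull
                                 + c2 * r2 * (gbest[d] - pos[i][d]))      # social pull
                    pos[i][d] += vel[i][d]
                if sphere(pos[i]) < sphere(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=sphere)
        return gbest, sphere(gbest)

    if __name__ == "__main__":
        best, value = pso()
        print("best position:", [round(v, 3) for v in best], "value:", round(value, 6))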

6. Third International Workshop on Adaptive Cyber Defense

The International Workshop on Adaptive Cyber Defense was organized to share research that explores unique applications of Artificial Intelligence (AI) and Machine Learning (ML) as foundational capabilities for the pursuit of adaptive cyber defense. The cyber domain cannot currently be reliably and effectively defended without extensive reliance on human experts. Skilled cyber defenders are in short supply and often cannot respond fast enough to cyber threats.

7. IEEE Conference on AI Workshop: AI for Biology and Biomedicine

With vast improvements in computational resources, from hardware and software as well as conceptual perspectives, it has become possible to advance and accelerate fundamental biological research, as well as biomedical research. AI and advanced computational methods have become a fundamental pillar of pertinent research, rendering the scientific method more efficient and facilitating collaboration across disciplines. In particular, experimental and medical/clinical researchers can work effectively with computational experts, since computational models have increasingly gained in detail, accuracy, and realism. Along those lines, biological systems such as the brain, the immune system, or specific organs can be captured based on experimental data from different spatial and temporal scales. Moreover, state-of-the-art AI and bioinformatics models can be employed on large-scale datasets, which is further facilitated by the increasing availability of public databases and by practical opportunities for collaboration across labs.

8. 2025 AI-PRS Workshop

    The AI-Driven Innovations for Power System Resilience and Security (AI-PRS) workshop, held in conjunction with the 2025 IEEE International Conference on Artificial Intelligence (CAI), aims to provide a forum for group discussions and presentations on Artificial Intelligence (AI) research, practice, education, and applications within the context of power grid security and resilience. 

    With the progressive rise of AI-driven decision-making within critical infrastructures such as the power grid, this workshop will enable scientists, engineers, students, and educators to present contemporary ideas and research results that highlight the impact of AI-enabled algorithms on resilient and secure power grid operations. The emphasis of the workshop is on fostering multi-disciplinary research, allowing the power systems and AI communities to exchange ideas. This will be achieved by having invited speakers from different sectors of the power systems and AI communities, and by soliciting research papers from researchers and practitioners in power systems, security analysis, and AI/ML.

9. Workshop on Human Alignment in AI Decision-Making Systems:
An Inter-disciplinary Approach towards Trustworthy AI

    The goal of the proposed workshop is to present engaging and high-impact challenges to the community by formulating and discussing key fundamental questions from the perspectives of computer science, artificial intelligence, psychology, and the broader social sciences. Our workshop will address several important questions that have been raised in the research community, for example: Is it possible to align AI to human values? If so, which values and/or attributes should we be aligning to? How does alignment work across and between individuals and levels of an organization? How can we align AI to humans in ways that are more succinct and more reliable than currently found in the AI literature? How might human alignment increase humans’ likelihood to trust and/or delegate responsibility to AI? Why or why not? If so, how do we get there? What are the ethical, legal, and societal implications of human alignment?

10. Workshop on Fairness-Aware Federated Optimization and Learning

Optimization and machine learning problems are pervasive in economic, scientific, and engineering applications. While significant advancements have been made in both fields, traditional approaches often assume that all resources and data for a task are centralized on a single device. Unfortunately, this assumption is violated in many applications as personal data is increasingly stored on, and computation increasingly performed by, edge devices. In recent years, federated learning (FL) has become a popular machine learning paradigm that can leverage distributed data without leaking sensitive information. Similarly, federated optimization techniques are being developed to solve complex optimization problems using distributed data and computational resources. Both approaches aim to leverage collective intelligence while preserving individual privacy. Furthermore, jointly addressing optimization and learning tasks among multiple edge devices with distributed data raises concerns about data security, privacy protection, and fairness. In both federated learning and data-driven optimization, outcomes can be affected by data or algorithmic biases, potentially generating unfair results. When the outcomes of these federated processes correlate with real-world rewards (e.g., financial gains or resource allocation), participants may be hesitant to collaborate if they perceive a risk of receiving disproportionately smaller benefits compared to others. As a result, it is crucial to develop new privacy-preserving and fairness-aware optimization and learning paradigms to leverage the power of distributed computing and storage.
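
To make the fairness concern concrete, the following minimal sketch aggregates toy client updates with an optional fairness-aware reweighting step. Upweighting higher-loss clients via an exponent q, loosely in the spirit of q-fair federated learning, is just one illustrative choice, not a method prescribed by the workshop; the local models are toy parameter vectors rather than real networks.

    def local_update(params, data):
        # Placeholder for local training: one gradient step on a least-squares toy loss.
        lr = 0.1
        grads = [2 * (p - d) for p, d in zip(params, data)]
        new_params = [p - lr * g for p, g in zip(params, grads)]
        loss = sum((p - d) ** 2 for p, d in zip(new_params, data))
        return new_params, loss

    def aggregate(updates, losses, q=0.0):
        # q = 0 recovers plain averaging; q > 0 gives higher-loss clients more weight.
        weights = [(loss + 1e-8) ** q for loss in losses]
        total = sum(weights)
        dims = len(updates[0])
        return [sum(w * u[d] for w, u in zip(weights, updates)) / total for d in range(dims)]

    if __name__ == "__main__":
        clients = [[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]]   # toy per-client data
        global_params = [0.0, 0.0]
        for _ in range(20):
            results = [local_update(global_params, data) for data in clients]
            updates, losses = zip(*results)
            global_params = aggregate(updates, losses, q=1.0)
        print("global parameters after training:", [round(p, 3) for p in global_params])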

11. Stable Training Paradigms for Neural Networks: Reducing Instability, Increasing Capacity

    Neural networks have revolutionized artificial intelligence, excelling in a multitude of application scenarios. However, as we advance toward increasingly large foundation models—such as expansive vision transformers or massive language models—the challenges of ensuring stable training become more pronounced. Issues like loss spikes, vanishing or exploding gradients, and difficulties achieving smooth convergence can significantly prolong training cycles, ultimately undermining overall performance and reliability. Establishing stable training paradigms is therefore essential to support the growing complexity and importance of next-generation neural architectures. This workshop aims to bring together researchers and practitioners to explore strategies that enhance the stability of neural network training. We will focus on areas including data quality, advanced optimization methods, architectural innovations, and the early detection and mitigation of training instabilities. By fostering the exchange of ideas, best practices, and cutting-edge techniques, we strive to cultivate more robust and dependable models. While we particularly welcome contributions related to large foundation models, we invite insights from all domains of neural network research where stability plays a critical role.
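
As a small illustration of the kind of instability mitigations in scope, the sketch below combines two common measures in a toy PyTorch training loop: clipping the global gradient norm and skipping optimizer steps when the loss spikes relative to its running average. The model, data, and thresholds are placeholders chosen only for the example, not recommendations from the workshop.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 32), nn.GELU(), nn.Linear(32, 1))
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    running_loss = None
    SPIKE_FACTOR = 5.0      # skip the step if the loss exceeds 5x its running average
    MAX_GRAD_NORM = 1.0     # clip the global gradient norm to this value

    for step in range(200):
        x = torch.randn(64, 16)            # toy batch
        y = x.sum(dim=1, keepdim=True)     # toy regression target
        loss = loss_fn(model(x), y)

        # Detect loss spikes against an exponential moving average of the loss.
        if running_loss is not None and loss.item() > SPIKE_FACTOR * running_loss:
            optimizer.zero_grad()
            continue                       # skip this unstable step
        running_loss = loss.item() if running_loss is None else 0.9 * running_loss + 0.1 * loss.item()

        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), MAX_GRAD_NORM)
        optimizer.step()

    print("final running loss:", round(running_loss, 4))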