Energy remains the cornerstone of societal prosperity, with its production, delivery, and management critical to human advancement. As we progress into 2025, the energy field continues to transform rapidly, requiring us to broaden our approach to encompass sophisticated control, optimization, and decarbonization across increasingly complex energy systems. The urgent energy demands of growing economies persist, while our planet’s finite resources and the escalating climate crisis necessitate a sharper focus on sustainability, efficiency, and resilience. Navigating the multifaceted socio-economic dimensions of the energy sector also demands trust in AI: trust that AI systems are reliable, secure, and capable of making informed decisions. Transparency and accountability are key components of that trust, as users and stakeholders need confidence that the algorithms and models used in energy systems make fair and unbiased decisions.
The BEComLLM workshop seeks to uncover the possibilities of integrating LLMs with evolutionary computation (EC) to push the boundaries of AI, machine learning, and optimization, and to foster collaboration among researchers and practitioners working at this intersection.
Neural Architecture Search (NAS) is a powerful machine learning technique that automates the design of neural network architectures, replacing the manual specification of a network’s structure (its layers, numbers of neurons, and activation functions). NAS can discover innovative architectures that surpass manually designed ones, often yielding more accurate, efficient, or simpler models. It works by searching a predefined space of candidate architectures for the most effective one for a specific task, such as image classification or natural language processing, optimizing objectives like accuracy, efficiency, and model size. Common techniques include:
Reinforcement Learning: Agents explore different architectures and learn to improve designs based on performance feedback.
Evolutionary Algorithms: Inspired by natural selection, these algorithms evolve architectures over successive generations.
Gradient-Based Methods: Differentiable architecture representations allow structures to be optimized directly with gradient descent.
The goal of NAS is to reduce the time and expertise required to design high-performing neural networks, making the process more efficient and accessible.
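As a concrete illustration of the evolutionary approach listed above, the following minimal Python sketch evolves toy architectures encoded as lists of hidden-layer widths. The search space, mutation rule, and the stand-in fitness function (a rough accuracy proxy traded off against parameter count) are all illustrative assumptions; a real NAS pipeline would train and evaluate each candidate network.

```python
# Toy evolutionary NAS sketch (standard library only).
import random

random.seed(0)
WIDTHS = [16, 32, 64, 128]   # assumed per-layer width choices

def random_arch():
    """An architecture is a list of hidden-layer widths, 1-4 layers deep."""
    return [random.choice(WIDTHS) for _ in range(random.randint(1, 4))]

def fitness(arch):
    # Placeholder objective: wider/deeper helps with diminishing returns,
    # while total parameter count is penalized (accuracy vs. model size).
    score = sum(w ** 0.5 for w in arch)
    penalty = 0.01 * sum(arch)
    return score - penalty

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(WIDTHS)  # resize a layer
    if random.random() < 0.3 and len(child) < 4:
        child.append(random.choice(WIDTHS))                      # sometimes grow
    return child

pop = [random_arch() for _ in range(20)]
for _ in range(30):                       # successive generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                     # selection: keep the top 5
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]
print("best architecture found:", max(pop, key=fitness))
```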
Deepfake content has risen sharply in volume, and its impact cannot be ignored. That impact is not limited to any single purpose: deepfakes are used for many malicious ends, including monetary theft, misleading voters in national elections, pornography, and harassment. Statistics show tremendous growth in deepfake videos in just a couple of years, wide circulation of every form of deepfake content (text, video, image, and audio) on social media platforms, and rising fraud driven by its pervasive presence.
Notably, deepfakes are not limited to any particular data modality; hence, we assert that addressing a single form of deepfake is insufficient and provides only partial security. Existing workshops in this space focus heavily on image/video-based deepfakes, and research output has likewise jumped sharply for image/video-based deepfakes while other modalities are ignored. Through this workshop, we therefore want to bring attention to other forms of deepfakes and encourage researchers to propose solutions that counter them as well.
Swarm Intelligence and Evolutionary Computation are two critical areas within Artificial Intelligence, both focused on solving optimization problems. Swarm Intelligence draws inspiration from collective behaviors observed in nature, such as ant colonies, bird flocks, and fish schools. This approach has led to algorithms like Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), which solve problems through simple interactions among individual agents. Evolutionary Computation, on the other hand, mimics the principles of natural selection and genetics, using populations of candidate solutions that evolve through mechanisms such as selection, crossover, and mutation. Popular algorithms in this category include Genetic Algorithms (GA) and Evolution Strategies (ES). Both methodologies are widely applied across fields like optimization, machine learning, and robotics, where they explore complex solution spaces by leveraging nature-inspired processes.
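For concreteness, here is a minimal Particle Swarm Optimization sketch in Python that minimizes the sphere function f(x, y) = x^2 + y^2. The swarm size, inertia weight, and acceleration coefficients are illustrative textbook-style values, not prescribed settings.

```python
# Minimal PSO sketch (standard library only) on a 2-D sphere function.
import random

random.seed(0)

def f(p):                                 # objective to minimize
    return p[0] ** 2 + p[1] ** 2

N, DIM, W, C1, C2 = 20, 2, 0.7, 1.5, 1.5  # swarm size, dims, inertia, pulls
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [list(p) for p in pos]            # each particle's best position
gbest = min(pbest, key=f)                 # swarm-wide best position

for _ in range(100):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # velocity = inertia + pull toward personal and global bests
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = list(pos[i])
            if f(pos[i]) < f(gbest):
                gbest = list(pos[i])

print("best found:", gbest, "f =", f(gbest))
```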
The International Workshop on Adaptive Cyber Defense was organized to share research that explores unique applications of Artificial Intelligence (AI) and Machine Learning (ML) as foundational capabilities for the pursuit of adaptive cyber defense. The cyber domain cannot currently be reliably and effectively defended without extensive reliance on human experts. Skilled cyber defenders are in short supply and often cannot respond fast enough to cyber threats.
With vast improvements in computational resources, spanning hardware, software, and conceptual advances, it has become possible to advance and accelerate both fundamental biological research and biomedical research. AI and advanced computational methods have become a fundamental pillar of this research, rendering the scientific method more efficient and facilitating collaboration across disciplines. In particular, experimental and medical/clinical researchers can work effectively with computational experts, since computational models have steadily gained in detail, accuracy, and realism. Along those lines, biological systems such as the brain, the immune system, or specific organs can be captured based on experimental data from different spatial and temporal scales. Moreover, state-of-the-art AI and bioinformatics models can be employed on large-scale datasets, further facilitated by the increasing availability of public databases and the growing practicality of collaboration across labs.
The AI-Driven Innovations for Power System Resilience and Security (AI-PRS) workshop, held in conjunction with the 2025 IEEE International Conference on Artificial Intelligence (CAI), aims to provide a forum for group discussions and presentations on Artificial Intelligence (AI) research, practice, education, and applications within the context of power grid security and resilience.
With the progressive rise of AI-driven decision-making within critical infrastructures such as the power grid, this workshop will enable scientists, engineers, students, and educators to present contemporary ideas and research results that highlight the impact of AI-enabled algorithms on resilient and secure power grid operations. The workshop emphasizes fostering multi-disciplinary research, giving the power systems and AI communities a venue to exchange ideas. This will be achieved by inviting speakers from different sectors of the power systems and AI communities, and by soliciting research papers from researchers and practitioners in power systems, security analysis, and AI/ML.
The goal of the proposed workshop is to present engaging and high-impact challenges to the community by formulating and discussing key fundamental questions from the perspectives of computer science, artificial intelligence, psychology, and the broader social sciences. Our workshop will address several important questions raised in the research community, for example: Is it possible to align AI to human values? If so, which values and/or attributes should we align to? How does alignment work across and between individuals and levels of an organization? How can we align AI to humans in ways that are more succinct and more reliable than those currently found in the AI literature? How might human alignment increase humans’ likelihood of trusting and/or delegating responsibility to AI? Why or why not? And if so, how do we get there? What are the ethical, legal, and societal implications of human alignment?
Optimization and machine learning problems are pervasive in economic, scientific, and engineering applications. While significant advancements have been made in both fields, traditional approaches often assume that all resources and data for a task are centralized on a single device. This assumption is increasingly violated as personal data and computational power accumulate on edge devices. Over the past years, federated learning (FL) has become a popular machine learning paradigm that can leverage distributed data without leaking sensitive information. Similarly, federated optimization techniques are being developed to solve complex optimization problems using distributed data and computational resources. Both approaches aim to leverage collective intelligence while preserving individual privacy. Furthermore, jointly addressing optimization and learning tasks among multiple edge devices with distributed data raises concerns about data security, privacy protection, and fairness. In both federated learning and data-driven optimization, outcomes can be affected by data or algorithmic biases, potentially generating unfair results. When the outcomes of these federated processes correlate with real-world rewards (e.g., financial gains or resource allocation), participants may hesitate to collaborate if they perceive a risk of receiving disproportionately smaller benefits than others. As a result, it is crucial to develop new privacy-preserving and fairness-aware optimization and learning paradigms that leverage the power of distributed computing and storage.
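To make the federated paradigm concrete, the sketch below implements a FedAvg-style loop in plain Python under simplifying assumptions: each client fits a one-parameter linear model on synthetic local data, and a server averages the resulting weights. Only model parameters are exchanged; raw data never leaves a client. The client count, learning rate, and round count are illustrative placeholders.

```python
# Minimal FedAvg-style sketch (standard library only, synthetic data).
import random

random.seed(0)
TRUE_W = 2.0   # ground-truth slope the clients collectively estimate

def make_client(n=50):
    """Synthetic local dataset for one client: y = TRUE_W * x + noise."""
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        data.append((x, TRUE_W * x + random.gauss(0, 0.1)))
    return data

clients = [make_client() for _ in range(5)]

def local_update(w, data, lr=0.1, epochs=5):
    """Local SGD on y ~ w * x; the raw data never leaves the client."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x    # d/dw of the squared error
            w -= lr * grad
    return w

w_global = 0.0
for _ in range(10):                        # communication rounds
    local_ws = [local_update(w_global, d) for d in clients]
    w_global = sum(local_ws) / len(local_ws)   # server-side averaging
print(f"estimated w = {w_global:.3f} (true w = {TRUE_W})")
```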
Large Language Models (LLMs) have revolutionized artificial intelligence by achieving remarkable performance across a wide array of tasks. However, the pre-training process of these models often encounters instability issues such as loss spikes, vanishing or exploding gradients, and convergence difficulties. These instabilities not only prolong training time but also degrade the overall performance and reliability of the models. As LLMs become increasingly integral to various applications, establishing stable training paradigms is essential. This workshop seeks to bring together researchers and practitioners to discuss and develop strategies for enhancing the stability of LLM pre-training. By focusing on aspects like data quality, optimizer selection, architectural innovations, and spike-awareness mechanisms, we aim to foster collaborations that lead to more robust and dependable LLMs.
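As a small illustration of two of the stabilizers mentioned above, the following sketch (assuming PyTorch is available) combines gradient clipping with a simple loss-spike skip rule in a toy training loop. The model, synthetic data, and thresholds are placeholders for exposition, not a recommended recipe for LLM pre-training.

```python
# Toy training loop with gradient clipping and loss-spike skipping.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 1))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

running = None        # exponential moving average of the loss
SPIKE_FACTOR = 3.0    # illustrative threshold: skip if loss > 3x the average

for step in range(200):
    x = torch.randn(32, 16)
    y = x.sum(dim=1, keepdim=True)       # synthetic regression target
    loss = loss_fn(model(x), y)

    # Spike-awareness: skip the update on an anomalous batch rather than
    # letting one outlier derail optimization.
    if running is not None and loss.item() > SPIKE_FACTOR * running:
        continue
    running = loss.item() if running is None else 0.9 * running + 0.1 * loss.item()

    opt.zero_grad()
    loss.backward()
    # Gradient clipping guards against exploding gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
```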