This tutorial addresses the urgent need for responsible research and development in the rapidly evolving field of Generative AI. Designed for AI researchers and practitioners, it offers a comprehensive exploration of Responsible AI principles tailored specifically to Generative AI technologies. Participants will gain a deep understanding of the ethical implications of advances in large language models and generative systems, including critical issues such as bias mitigation and privacy preservation. The tutorial goes beyond theoretical discussion, offering practical, research-oriented strategies, methodological frameworks, and hands-on labs. Participants will learn techniques for detecting and mitigating bias in Generative AI models and gain experience applying Responsible AI principles in real-world research scenarios.
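As a preview of the bias-detection theme, one common probe is counterfactual prompting: generate completions for prompt templates that differ only in a demographic term and compare a downstream score across groups. The sketch below is illustrative only and not taken from the tutorial materials; generate_fn and score_fn are hypothetical stand-ins for a generative model call and an attribute scorer (e.g., a sentiment or toxicity classifier).

```python
# Minimal counterfactual-prompt bias probe (illustrative sketch).
# generate_fn and score_fn are hypothetical stand-ins:
#   generate_fn(prompt) -> str, score_fn(text) -> float (e.g., sentiment in [0, 1]).
from statistics import mean

def bias_gap(generate_fn, score_fn, template, groups, samples=20):
    """Return per-group mean scores and the max pairwise gap.

    template: a prompt with a {group} placeholder, e.g.
              "The {group} engineer walked into the interview and"
    groups:   demographic terms to swap into the placeholder.
    """
    group_scores = {}
    for group in groups:
        prompt = template.format(group=group)
        completions = [generate_fn(prompt) for _ in range(samples)]
        group_scores[group] = mean(score_fn(text) for text in completions)
    gap = max(group_scores.values()) - min(group_scores.values())
    return group_scores, gap

# Example usage, once real model and scorer callables are plugged in:
# scores, gap = bias_gap(generate_fn, score_fn,
#                        "The {group} engineer walked into the interview and",
#                        groups=["male", "female"])
# A large gap flags the template for closer inspection and mitigation.
```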
Sharmila Devi, Gopala Dhar
Assessing the trustworthiness of AI systems has become crucial, yet practical evaluation frameworks spanning the entire AI lifecycle remain scarce. Z-inspection® addresses this gap as a comprehensive ethical AI assessment methodology applicable across business, healthcare, the public sector, and other domains. Grounded in applied ethics, Z-inspection® aligns with the European Commission’s expert group guidelines on Trustworthy AI, incorporating four key principles: respect for human autonomy, prevention of harm, fairness, and explicability. The framework has evolved into a global initiative promoting ethical AI adoption, and Z-inspection® is listed in the OECD Catalogue of AI Tools & Metrics, highlighting its importance in the field.
This tutorial provides participants with an introduction to the Z-inspection® framework through interactive, hands-on exercises based on real-world scenarios. Designed for students and professionals involved in the development, deployment, or application of AI systems, the session aims to equip attendees with practical skills in ethical AI evaluation. Upon completion, participants will receive a Z-inspection® certificate and gain access to an exclusive professional network and ongoing ethical AI initiatives.
Jesmin Jahan Tithi, Partha Deka
Intel, USA
According to a recent market analysis by Global Market Insights (GMI), the global automation testing market is anticipated to exceed USD 80 billion by 2032. With the rapid advance of machine learning models and AI technologies, more and more intelligent systems and applications, including smart computer vision systems, are being developed for real-world deployment. Before these intelligent systems are deployed, it is critical for intelligent system testers, quality assurance engineers, and the next generation of engineers to understand the issues, challenges, and needs, as well as the state-of-the-art AI testing tools and solutions, for testing and quality assurance of modern intelligent systems, smart mobile apps, and smart machines (smart robots, driverless autonomous vehicles, and intelligent UAVs). With the surge of interest in ChatGPT across the business market, many people have begun to pay attention to the quality of AI application systems and their deployment.
Jerry Gao
San Jose State University, USA
This tutorial delves into the evolution of AI-assisted programming, tracing its roots to E. W. Dijkstra’s seminal idea of computer-assisted programming and to natural language processing (NLP) and probabilistic language models. It highlights the recent transformative impact of modern transformer-based large language models (LLMs) trained on Big Code, which leverage software naturalness to revolutionize tasks such as code generation, completion, translation, and defect detection. Pioneering examples include GitHub Copilot (powered by OpenAI Codex), GPT models, Meta’s Code Llama, Google’s Gemini Code Assist, Amazon CodeWhisperer, Alibaba’s Qwen, and Codeium.

Participants will explore advances in context-aware, multilingual programming models that enhance the adaptability of both local and cloud-based LLMs across diverse ecosystems. Core LLM architectures, their downstream applications, and the challenges of integrating NLP methodologies with software naturalness will be examined. The tutorial highlights reinforcement learning with human feedback, focusing on alignment techniques that improve fairness, safety, and performance in LLM code generation. The session demonstrates AI-assisted programming extensions for Apple’s Xcode and LLM agent development, showcasing tools like Copilot to streamline mobile development and empower participants to evaluate, benchmark, and deploy LLMs effectively.

The tutorial also covers general techniques for benchmarking and evaluating LLMs for AI-assisted programming. Models are assessed using code-specific benchmarks such as HumanEval and CodeNet, which provide standardized datasets for evaluating code generation and completion. Performance metrics such as Pass@k, BLEU, CodeBLEU, and functional correctness are analyzed to quantify the quality of generated code. Real-world effectiveness is gauged through human evaluations and deployment case studies, which provide valuable insights into user experiences and practical challenges. Additionally, advanced evaluation methodologies are discussed, including fine-grained analysis to identify common errors, assess model robustness, and measure performance on adversarial inputs. Comparative studies across programming languages and domains illustrate the adaptability and limitations of various models, including emerging LLM coding agents that demonstrate cutting-edge advances in multilingual programming and cross-domain functionality.

Lastly, LLMs and LLM agents have profound implications for computer science, driving advances in the search for efficient algorithms and automating problem solving in competitive programming. By tackling complex programming challenges, they open new avenues for understanding algorithm design, optimization, and the theoretical foundations of computation.
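As background for the evaluation material above, Pass@k is commonly reported with the unbiased estimator popularized by the OpenAI Codex/HumanEval work: generate n samples per problem, count the c that pass the unit tests, and estimate the probability that at least one of k drawn samples passes. A minimal sketch (not taken from the tutorial materials) follows.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021, HumanEval).

    n: total samples generated for a problem
    c: number of samples that passed the unit tests
    k: evaluation budget (k <= n)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples per problem, 23 pass; report pass@1, pass@10, pass@100
scores = {k: pass_at_k(200, 23, k) for k in (1, 10, 100)}
print(scores)
```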
Chee Wei Tan
NTU Singapore
This tutorial provides a step-by-step, hands-on approach to building a Retrieval-Augmented Generation (RAG) system using popular AI tools such as LangChain, OpenAI’s ChatGPT-4, FAISS, and Streamlit. Participants will learn how to design and implement an end-to-end RAG system that efficiently retrieves information from a custom knowledge base and generates insightful responses using advanced natural language generation models. The tutorial is geared towards data scientists, machine learning engineers, and AI practitioners interested in developing interactive, intelligent applications that require sophisticated question-answering and document retrieval capabilities. By the end of the session, attendees will have a fully functional RAG application that integrates seamlessly with a user-friendly interface.
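The core retrieve-then-generate loop that the tutorial builds out (with LangChain orchestration and a Streamlit front end) can be sketched in a few lines. The snippet below is an illustrative sketch rather than the tutorial’s code: it calls the openai and faiss-cpu packages directly to stay version-agnostic, and the embedding and chat model names are assumptions that can be swapped for whatever your account provides.

```python
# Minimal retrieve-then-generate sketch using faiss + the OpenAI client directly.
# Model names are assumptions; requires OPENAI_API_KEY in the environment.
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Retrieval-Augmented Generation grounds LLM answers in retrieved documents.",
    "Streamlit turns Python scripts into shareable web apps.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Build the vector index over the custom knowledge base
doc_vecs = embed(docs)
index = faiss.IndexFlatL2(doc_vecs.shape[1])
index.add(doc_vecs)

def rag_answer(question: str, k: int = 2) -> str:
    # Retrieve the k nearest chunks, then generate an answer grounded in them
    _, ids = index.search(embed([question]), k)
    context = "\n".join(docs[i] for i in ids[0])
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-completion model works
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(rag_answer("What does RAG do?"))
```

In a full build, the hard-coded document list would be replaced by a chunked document loader and the final print by a Streamlit input box and response pane.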
Partha Deka
Intel, USA
Sensor content in electronic devices is growing, and an increasing number of applications involve battery-powered devices. Sensor applications are typically always-on, which demands high power efficiency across the sense-process-act chain. Today, however, the processors available for handling sensors and processing sensor data are characterized by high power consumption per inference. Much of the inefficiency lies in how sensor data is acquired from the sensors and how the information is relayed within the processing subsystems.
The architectural enhancements needed for these efficiency improvements cannot be achieved without hardware-software co-design. In this tutorial, we formulate requirements for a hierarchical, modular neuromorphic framework that enables concurrent hardware-software co-design in smart sensing Systems-on-Chip. We exploit the synergy of hardware and software to examine omnidirectional dependencies across the entire design stack (from the application, neural-network algorithm, and mapper level down to the system-on-chip, sub-system, and technology-option level), with the goal of optimizing and/or satisfying smart sensing design constraints such as energy efficiency, performance, cost, and time-to-market. In particular, we highlight the advantages of concurrent design and emphasize the synergy between i) a scalable, reconfigurable, segmented architecture that enables real-time, always-on inference on sensor data, essential for most pervasive sensing tasks, and ii) a software development kit that lets the user build and run an end-to-end application pipeline comprising multiple processing stages, with spiking neural network accelerators being one of them. The tutorial aims to delve into contemporary trends in neuromorphic computing, explore its capabilities and challenges, and contemplate its future directions and broader impact within the AI community, industry, and society. The target audience includes research students, early-stage researchers, and practitioners with a background in AI.
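To ground the spiking-neural-network terminology for newcomers, the sketch below simulates a single leaky integrate-and-fire neuron in NumPy: an input current charges a membrane potential that leaks over time and emits a spike whenever it crosses a threshold. This is a generic textbook illustration, not part of the tutorial’s SDK or hardware flow, and all constants are arbitrary.

```python
# Leaky integrate-and-fire (LIF) neuron: a generic textbook sketch, not vendor SDK code.
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron and return a boolean spike train.

    input_current: 1-D array of input drive per time step
    dt:            simulation step (s); tau: membrane time constant (s)
    """
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward 0 while integrating the input
        v += dt / tau * (-v + i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes[t] = True
            v = v_reset            # reset after spiking
    return spikes

# Example: a noisy constant drive for 200 time steps (200 ms at dt = 1 ms)
rng = np.random.default_rng(0)
drive = 1.5 + 0.3 * rng.standard_normal(200)
print("spike count:", lif_spikes(drive).sum())
```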
Amir Zjajo
Innatera Nanosystems B.V.