Tutorials Overview

Visit the website here
Abstract

This tutorial addresses the urgent need for responsible research and development in the rapidly evolving field of Generative AI. Designed for AI researchers and practitioners, it offers a comprehensive exploration of Responsible AI principles tailored specifically to Generative AI technologies. Participants will gain a deep understanding of the ethical implications of advancements in large language models and generative systems, including critical issues such as bias mitigation and privacy preservation. The tutorial goes beyond theoretical discussion by offering practical, research-oriented strategies, innovative methodological frameworks, and hands-on labs. Participants will learn techniques for detecting and mitigating biases in Generative AI models and gain experience applying Responsible AI principles in real-world research scenarios.
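For a flavor of what such bias-detection techniques can look like (a generic illustration, not the tutorial's lab material), the sketch below probes a text generator with counterfactual prompt pairs; the `generate` function is a hypothetical stand-in for any LLM API.

```python
# Generic counterfactual prompt-pair probe (illustrative only).
# `generate` is a hypothetical stand-in for any text-generation API.

def generate(prompt: str) -> str:
    # Replace with a real model call (e.g., an LLM completion endpoint).
    return f"[model output for: {prompt}]"

def probe_pairs(template: str, groups: list[str]) -> dict[str, str]:
    """Fill one template with different demographic terms and collect the
    completions so they can be compared for systematic skew."""
    return {g: generate(template.format(group=g)) for g in groups}

if __name__ == "__main__":
    outputs = probe_pairs(
        "The {group} engineer explained the design because",
        ["male", "female", "nonbinary"],
    )
    for group, text in outputs.items():
        print(group, "->", text)
    # In practice, completions are scored (e.g., sentiment, toxicity,
    # refusal rate) and the scores compared across groups to flag disparities.
```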

Author

Sharmila Devi
Sharmila Devi is a senior AI practitioner at Google with extensive experience leading Generative AI initiatives across industries. Her portfolio includes successful implementations of cutting-edge AI solutions addressing complex business challenges. As a thought leader, she has published research papers and technical blogs in domains such as healthcare, e-commerce, and manufacturing. Passionate about knowledge sharing, she frequently speaks at industry events, offering valuable insights into AI strategy and deployment.

Gopala Dhar
 
Gopala Dhar is an AI engineering lead at Google, renowned for applying state-of-the-art AI solutions at scale. He holds four granted patents and has authored multiple publications spanning software design and hardware manufacturing, including embedded systems. Gopala is also a contributor to several open-source hardware driver projects and is actively involved in advancing practical AI applications in real-world settings.
Visit the website here
Abstract
Assessing AI systems’ trustworthiness has become crucial in today’s world, yet practical evaluation frameworks spanning the entire AI lifecycle remain scarce. Z-Inspection® addresses this gap as a comprehensive ethical AI assessment methodology applicable across business, healthcare, the public sector, and other domains. Grounded in applied ethics, Z-Inspection® aligns with the European Commission’s expert group guidelines on Trustworthy AI, incorporating four key principles: respect for human autonomy, prevention of harm, fairness, and explicability. The framework has evolved into a global initiative promoting ethical AI adoption, and Z-Inspection® is listed in the OECD Catalogue of AI Tools & Metrics, highlighting its importance in the field. This tutorial provides participants with an introduction to the Z-Inspection® framework through interactive, hands-on exercises based on real-world scenarios. Designed for students and professionals involved in the development, deployment, or application of AI systems, the session aims to equip attendees with practical skills in ethical AI evaluation. Upon completion, participants will receive a Z-Inspection® certificate and gain access to an exclusive professional network and ongoing ethical AI initiatives.
Author
Dr. Jesmin Jahan Tithi, AI Research Scientist/Engineer, Intel Labs
Dr. Jesmin Jahan Tithi is an AI Research Scientist/Engineer at Intel Corporation, where she focuses on high-performance computing (HPC) and hardware-software codesign. She received her Ph.D. in Computer Science from Stony Brook University, New York, and her B.Sc. in Computer Science and Engineering from Bangladesh University of Engineering and Technology with Honors. She has also interned at Google, Intel, and the Pacific Northwest National Laboratory. Dr. Tithi is a leading expert in HPC and has made significant contributions to the field. She is the author of over 35 peer-reviewed publications and 13 approved patents, and her work has been featured in top academic conferences and journals. She is a founding member of Z-Inspection®, a certified Z-Inspection® teacher, and the head of education in North America for Z-Inspection®, an assessment process for trustworthy and ethical AI.
Partha Deka, Senior Staff Engineer, Intel Corporation
Partha Deka is a seasoned Data Science Leader with over 15 years of experience in semiconductor supply chain and manufacturing. As a Senior Staff Engineer at Intel Corporation, Partha has led high-impact teams developing AI and machine learning solutions, resulting in substantial cost savings and process optimizations. His notable work includes creating a computer vision system that greatly improved logistics efficiency at Intel, earning his team a finalist spot for the CSCMP Innovation Award. Previously, Partha made key contributions at General Electric, where he applied machine learning to solve complex industrial challenges. He filed several patents, including those on delivery status diagnosis and data throttling, which collectively garnered over 30 citations. A recognized thought leader in the AI community, Partha is a Senior IEEE Member, a published author, and a frequent speaker at industry conferences. He co-authored “XGBoost for Regression Predictive Modeling and Time Series Analysis,” providing comprehensive insights into XGBoost’s advanced applications. He also serves as a reviewer for the NeurIPS conference, further contributing to the advancement of AI research. Partha is actively involved with the Z-Inspection® initiative, ensuring ethical AI assessment and promoting trustworthy AI systems. His expertise continues to shape semiconductor manufacturing through the application of advanced analytics and AI-driven innovation.

Abstract

According to a recent market analysis by Global Market Insights (GMI), the computer vision market is anticipated to exceed USD 40 billion by 2032. With the rapid advance of modern computer vision models and AI technologies, more and more computer vision applications are being developed for real-world deployment. Many quality testing engineers run into challenges when testing and automating smart computer vision systems with existing quality validation methods. This tutorial first provides an in-depth discussion of the issues, challenges, and needs in testing and automation of computer vision systems. It then addresses several hot topics, including computer vision intelligence and feature validation focuses, AI test modeling and analysis, test methods and approaches, and AI-based test generation and augmentation. Moreover, it shares innovative 3D AI test models for different kinds of computer vision intelligence, along with 3D decision table generation. Quality testing coverage criteria and a standard process are also discussed. Finally, a test automation tool and demos are presented.
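As a simplified illustration of test-data augmentation for vision systems (not the 3D AI test models or the tool presented in the tutorial), the sketch below derives perturbed variants of a test image with basic Pillow transformations; file paths and parameters are placeholders.

```python
# Rule-based test-image augmentation sketch (Pillow only); paths and
# parameters are placeholders, not the tutorial's tooling.

from PIL import Image, ImageEnhance, ImageOps

def augment_test_image(path: str) -> list[Image.Image]:
    """Generate perturbed variants of one test image so a vision system
    can be checked for stable behavior under small input changes."""
    base = Image.open(path).convert("RGB")
    return [
        base.rotate(15, expand=True),                       # small rotation
        ImageOps.mirror(base),                              # horizontal flip
        ImageEnhance.Brightness(base).enhance(0.6),         # darker scene
        ImageEnhance.Contrast(base).enhance(1.5),           # higher contrast
        base.resize((base.width // 2, base.height // 2)),   # lower resolution
    ]

# Usage: feed each variant to the system under test and compare its
# predictions with the expected labels for the original image.
variants = augment_test_image("sample_test_image.jpg")  # placeholder path
```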

Who should attend this tutorial?

Test engineers, quality assurance engineers, and managers responsible for quality testing and assurance of modern intelligent systems and AI-powered smart computer vision systems, including mobile and online applications built on modern computer vision models and techniques. Researchers and students interested in AI system testing, automation, and quality assurance are also encouraged to attend.

What will you learn from this tutorial? What does it cover?

Table of contents (outline):

  • Introduction to computer vision and applications
    • Test automation market for computer vision and intelligent applications
    • An overview of computer vision and applications
    • A classification of diverse computer vision and applications
  • What to test for computer vision and applications?
    • Major test focuses and intelligence validation
    • Major challenges, issues, and needs in computer vision validation
    • Adequate quality needs
  • Quality testing process and validation methods
    • Computer vision quality process
    • Different computer vision approaches
    • Model-based quality testing methods for computer vision
  • AI test modeling for intelligent computer vision systems
    • Intelligence-oriented test modeling and analysis for computer vision
    • Intelligence-oriented multiple dimension test models
    • Intelligence-oriented multiple dimension decision test tables
  • Test generation and AI-based test data generation for computer vision applications
    • AI-based test case generation for computer vision images
    • AI-based test data generation and augmentation for computer vision images
    • AI-based test generation for document-based computer vision intelligence
    • AI-based test augmentation for document-based computer vision intelligence
  • Test result validation for intelligent computer vision systems
  • Quality computer vision system validation for QoS system parameters
  • Test automation for intelligent computer vision and applications
  • Quality evaluation metrics and test coverage for computer vision

In addition, Dr. Gao will present two showcases and project demos of sample computer vision test automation.

Author

Jerry Gao
 

Jerry Gao, Professor, Computer Engineering Department and Applied Data Science Department, San Jose State University

Director of Research Center of Smart Technology and Systems

Co-Founder and CTO of ALPS-Touchtone, Inc.

Dr. Jerry Gao is a professor in the Computer Engineering Department and the Applied Data Science Department at San Jose State University. His research interests include Smart Machine Cloud Computing and AI, Smart Cities, Green Energy Cloud and AI Services, AI Test Automation, and Big Data Cyber Systems and Intelligence. He has published three technical books: the first (1998) was the first book on object-oriented software testing, and the second, Testing and Quality Assurance for Component-Based Software, was the first book on component-based software systems. He has around 360 publications in IEEE/ACM journals, magazines, and international conferences. His research work has received over 96K citations on Google Scholar and over 370K reads on ResearchGate. Since 2020, Dr. Gao has served as the chair of the steering committee board for the IEEE International Congress on Intelligent Service-Oriented Systems Engineering (IEEE CISOSE) and of the steering committee board for the IEEE Smart World Congress. He has over 25 years of academic research and teaching experience and over 10 years of industry and management experience in software engineering and IT development.

Dr. Gao and his group have published over 18 research papers on AI testing and automation for modern intelligent systems. Since 2019, Dr. Gao has worked with Dr. Hong Zhu to establish the IEEE AITest international conference series, delivered annually from 2019 to 2023. In addition, Dr. Gao has delivered two keynote speeches on AI testing and automation at international conferences and has presented one tutorial on quality AI testing and automation.

In the last 10 years, Dr. Gao has been a key organizer of several IEEE international conferences and workshops, including IEEE CISOSE 2021-2023, IEEE AITest 2021, IEEE BigDataService 2020, IEEE Smart World Congress 2017, IEEE Smart City Innovation 2017, SEKE 2010-2011, IEEE MobileCloud 2013, and IEEE SOSE 2010-2011.

Jerry Gao’s Google Scholar: https://scholar.google.com/citations?user=vMi9grgAAAAJ&hl=en

Jerry Gao’s ResearchGate:  https://www.researchgate.net/profile/Jerry-Gao

Visit the website here
Abstract

This tutorial delves into the evolution of AI-assisted programming, tracing its roots to E. W. Dijkstra’s seminal idea of computer-assisted programming and to Natural Language Processing (NLP) and probabilistic language models. It highlights the recent transformative impact of modern transformer-based large language models (LLMs) trained on Big Code, leveraging software naturalness to revolutionize tasks like code generation, completion, translation, and defect detection. Pioneering examples include GitHub Copilot (powered by OpenAI Codex), GPT models, Meta’s Code Llama, Google’s Gemini Code Assist, Amazon CodeWhisperer, Alibaba’s Qwen, and Codeium. Participants will explore advancements in context-aware, multilingual programming models that enhance the adaptability of both local and cloud-based LLMs in diverse ecosystems. Core LLM architectures, their downstream applications, and challenges in integrating NLP methodologies with software naturalness will be examined. The tutorial highlights reinforcement learning with human feedback, focusing on alignment techniques to enhance fairness, safety, and performance in code generation by large language models. The session demonstrates AI-assisted programming extensions to Apple’s Xcode and LLM agent development, showcasing tools like Copilot to streamline mobile development and empower participants to evaluate, benchmark, and deploy LLMs effectively.

The tutorial will also focus on general techniques for benchmarking and evaluation of LLMs for AI-assisted programming. Models are assessed using code-specific benchmarks such as HumanEval and CodeNet, providing standardized datasets for evaluating code generation and completion. Performance metrics like Pass@k, BLEU, CodeBLEU, and functional correctness are analyzed to quantify the quality of generated code. Real-world effectiveness is gauged through human evaluations and deployment case studies, which provide valuable insights into user experiences and practical challenges. Additionally, advanced evaluation methodologies are discussed, including fine-grained analysis to identify common errors, assess model robustness, and measure performance on adversarial inputs. Comparative studies across different programming languages and domains illustrate the adaptability and limitations of various models, including emerging LLM coding agents, which demonstrate cutting-edge advancements in multilingual programming and cross-domain functionality.

Lastly, LLMs and LLM agents have profound implications for computer science, driving advancements in the search for efficient algorithms and automating problem-solving in competitive programming. By tackling complex programming challenges, they open new avenues for understanding algorithm design, optimization, and the theoretical foundations of computation.
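For context, the Pass@k metric mentioned above is typically computed with the unbiased estimator popularized by the HumanEval evaluation; the short sketch below shows that calculation with illustrative sample counts.

```python
# Unbiased pass@k estimator used with HumanEval-style benchmarks:
# given n samples per problem, of which c pass the unit tests, estimate
# the probability that at least one of k randomly drawn samples is correct.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """pass@k = 1 - C(n-c, k) / C(n, k), computed in a numerically stable way."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative numbers: 200 samples per problem, 13 of them pass the tests.
print(pass_at_k(n=200, c=13, k=1))    # equals c/n = 0.065
print(pass_at_k(n=200, c=13, k=10))   # higher, since any of 10 samples may pass
```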

Author

Chee Wei Tan
 
Dr. Tan received the M.A. and Ph.D. degrees in Electrical Engineering from Princeton University. He is currently with the College of Computing and Data Science, Nanyang Technological University, Singapore. He was a postdoctoral scholar in the NetLab group at Caltech, a senior fellow of the Science at Extreme Scales program at the Institute for Pure and Applied Mathematics at UCLA, and a visiting faculty member at Tencent AI Lab and Qualcomm R&D (QRC). His research interests are distributed optimization, Generative AI, networks, and edge learning.
Visit the website here
Abstract

This tutorial provides a step-by-step, hands-on approach to building a Retrieval-Augmented Generation (RAG) system using popular AI tools such as LangChain, OpenAI’s ChatGPT-4, FAISS, and Streamlit. Participants will learn how to design and implement an end-to-end RAG system that efficiently retrieves information from a custom knowledge base and generates insightful responses using advanced natural language generation models. The tutorial is geared towards data scientists, machine learning engineers, and AI practitioners interested in developing interactive, intelligent applications that require sophisticated question-answering and document retrieval capabilities. By the end of the session, attendees will have a fully functional RAG application that integrates seamlessly with a user-friendly interface.
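As a rough sketch of the pipeline the tutorial builds, the snippet below wires a FAISS vector store and an OpenAI chat model into a retrieval-augmented question-answering chain. It assumes a LangChain 0.1-style API (imports and class names differ between versions), uses a placeholder model name, and omits the Streamlit interface layer.

```python
# Minimal RAG sketch with LangChain + OpenAI + FAISS. Assumes a LangChain
# 0.1-style API and an OPENAI_API_KEY in the environment; imports and class
# names vary across versions, and the Streamlit UI layer is omitted.

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Index a toy knowledge base into a FAISS vector store.
documents = [
    "RAG pairs a retriever over a knowledge base with a generative model.",
    "FAISS provides fast nearest-neighbor search over dense embeddings.",
]
vector_store = FAISS.from_texts(documents, OpenAIEmbeddings())

# 2. Wire the retriever and the chat model into a question-answering chain.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini"),  # placeholder model name
    retriever=vector_store.as_retriever(search_kwargs={"k": 2}),
)

# 3. Ask a question; the retrieved passages ground the generated answer.
print(qa_chain.invoke({"query": "What does FAISS contribute to a RAG system?"}))
```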

Author

Partha Deka
 

Partha Deka is a seasoned Data Science Leader with over 15 years of experience driving innovation across the semiconductor supply chain and manufacturing sectors. Currently serving as a Senior Staff Engineer at Intel Corporation, Partha has led high-impact teams in developing cutting-edge AI and machine learning solutions, resulting in significant cost savings and process optimizations. Among his notable achievements is the development of a computer vision system that dramatically enhanced logistics efficiency at Intel, leading his team to be recognized as a finalist for the prestigious CSCMP Innovation Award.

Before his role at Intel, Partha made significant contributions at General Electric (GE), where he demonstrated his expertise in data science and machine learning. During his tenure, he filed multiple patents, including Delivery Status Diagnosis for Industrial Suppliers Using Machine Learning and Auto Throttling of Input Data and Data Execution Using Machine Learning and Artificial Intelligence. These patents have received over 30 citations, underscoring their impact and importance in the field.

A recognized thought leader in the AI community, Partha is a Senior IEEE Member, a published author, and a regular speaker at industry conferences. He is the author of the book XGBoost for Regression Predictive Modeling and Time Series Analysis, which covers foundational knowledge to advanced applications in XGBoost, including time series forecasting, feature engineering, model interpretability, and deployment techniques. His expertise has been acknowledged through his role as a paper reviewer for the prestigious NeurIPS conference, where he contributes to advancing AI and machine learning research. His work continues to shape the field, particularly in applying advanced analytics to enhance semiconductor manufacturing processes.

Visit the website here
Abstract

Sensor content in electronic devices is growing, and an increasing number of applications involve battery-powered devices. Sensor applications are typically always-on, which demands high power efficiency across the sense-process-act chain. Today, however, the processors available for handling sensors and processing sensor data are characterized by high power consumption per inference. Much of the inefficiency lies in how data is acquired from sensors and how information is relayed within processing subsystems.

The architectural enhancements needed for efficiency improvements cannot be achieved without hardware-software co-design. In this tutorial, we formulate requirements for a hierarchical, modular neuromorphic framework that enables concurrent hardware-software co-design in smart sensing Systems-on-Chip. We exploit the synergy of hardware and software to examine omnidirectional dependencies across the entire design stack (from the application, neural network algorithm, and mapper levels down to the system-on-chip, sub-system, and technology-option levels) with the goal of optimizing and/or satisfying smart sensing design constraints such as energy efficiency, performance, cost, and time-to-market. In particular, we highlight the advantages of concurrent design and emphasize the synergy between i) a scalable, reconfigurable, segmented architecture that enables real-time, always-on inference of sensor data, essential for most pervasive sensing tasks, and ii) a software development kit that enables the user to build and run an end-to-end application pipeline comprising multiple processing stages, with spiking neural network accelerators being one of them. The tutorial aims to delve into contemporary trends in neuromorphic computing, explore its capabilities and challenges, and contemplate its future directions and broader impact within the AI community, industry, and society. The target audience includes research students, early-stage researchers, and practitioners with a background in AI.
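To make the spiking-neural-network idea concrete, the toy sketch below simulates a single leaky integrate-and-fire neuron; it is purely illustrative and is not part of Innatera's SDK or architecture.

```python
# Toy leaky integrate-and-fire (LIF) neuron; illustrative only and not
# part of Innatera's SDK or silicon.

import numpy as np

def lif_response(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Integrate the input with leak; emit a spike (1) whenever the membrane
    potential crosses the threshold, then reset the potential."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input current
        fired = v >= v_thresh
        spikes.append(int(fired))
        if fired:
            v = v_reset           # reset after the spike
    return spikes

# A weak constant input yields sparse spikes: work happens only when events
# occur, which is the efficiency argument for always-on smart sensing.
print(lif_response(np.full(20, 0.25)))
```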

Author

Amir Zjajo

Affiliation

Innatera Nanosystems B.V.