IEEE World Technology Summit
AI INFRASTRUCTURE

November 12-14, 2024  •  San Jose Convention Center

WTS 2024 Program

We have assembled a stellar roster of speakers to address a wide range of topics suggested by WTS sponsors. Our two-day conference is divided into four sessions: AI Applications, Silicon, Systems, and Security and Standards. (See the At-a-Glance Schedule with Dates and Times.)

Tuesday Evening, Nov 12  •  Event Reception
Wednesday Morning, Nov 13  •  AI Applications
Wednesday Afternoon, Nov 13  •  Silicon
Thursday Morning, Nov 14  •  Systems
Thursday Afternoon, Nov 14  •  Security and Standards

Michael Condry, IEEE Fellow & WTS Chair 
The promise of AI depends on adequate infrastructure to support its huge computational demands. This conference focuses on the needed Silicon (semiconductors), Systems (including power and cooling), and Applications, as well as Security, Standards, and Regulation. Experts in these fields will share insights on what it will take to keep AI moving forward.

IEEE WELCOME

Tom Coughlin, IEEE President
IEEE Industry Engagement

One of my goals this year is to improve IEEE’s engagement with industry and its value to those involved in the practical applications of technology. Toward this end, IEEE is looking to strengthen industry engagement activities throughout the organization. In this talk I will describe various IEEE activities in our Industry Engagement Committee, the IEEE Standards Association, IEEE member and geographic activities, educational activities, and technical activities.

OVERVIEW

David Tennenhouse, Senior Advisor, National Science Foundation
AI Infrastructure: We CAN Close the AI Energy Gap
According to some pundits, growth in the demand for AI training and inference so vastly exceeds Moore’s Law that the demand can only be met with a rapidly growing population of ever more power-hungry data centers that could collectively consume a significant fraction of the world’s electricity. Our job, as the visionaries and engineers of the infrastructure enabling AI, is to ensure that this disproportionate surge in energy demand doesn’t happen, i.e., we must deliver the benefits of AI in ways that are sustainable and economically viable. This keynote will begin with a realistic discussion of the “demand” side of the equation. It will then identify an offsetting set of opportunities for innovation on the “supply” side — opportunities to so vastly improve the effectiveness of AI infrastructure that, in aggregate, they can offset the growth on the demand side. The supply-side discussion will highlight examples across the infrastructure “stack”: in the models, the algorithms, the software, the individual compute nodes, and the scale-out mechanisms used at the data center level.


SESSIONS

AI Applications • Wednesday Morning, November 13

Applications utilizing Artificial Intelligence are growing rapidly, but suitable infrastructure must be in place to enable them. Key growth areas, including Large Language Models, medical applications (consumer and professional), Electronic Design Automation, and Digital Twins, are greatly influencing infrastructure requirements. This session will examine these and other topics to understand their possible growth and infrastructure needs.

CHAIR: Stoyan Nihtianov, IEEE Lifetime Senior Member

Jürgen Weichenberger, VP, Schneider Electric
Empowering Industrial Customers: Practical Applications of Generative AI
Explore the transformative potential of Generative AI with Jürgen Weichenberger, VP AI Strategy & Innovation at Schneider Electric, and get valuable insights into its impact on work processes and knowledge integration. You will see how GenAI can unlock a spectrum of possibilities for industrial applications, from PLC code generation to large vision models for quality, safety, supply chain optimization, and order management. Additionally, we will delve into the advantages and challenges of GenAI in PLC code generation use cases.

Venkat Thanvantri, Corporate VP, Cadence
Transforming Chip to Data Center Design with Generative AI
In this keynote, we will delve into advancements in Generative AI technology and its transformative impact on electronic design technologies and methodologies. Hear how Generative AI is reshaping chip design, optimizing chip performance, and unlocking new possibilities for data center architecture. We will share real-world applications, case studies, and future trends that showcase the potential of Generative AI to drive innovation and efficiency in design workflows.

Dan Isaacs, CTO, Digital Twin Consortium
The Evolution of Digital Twins: Applications and Opportunities for Semiconductor Manufacturing and Other Industries
As the demand for semiconductor devices continues to grow and evolve, so do digital twins. Digital twins are a primary enabler of digital transformation, with the digital twin market projected to exceed one hundred billion dollars by 2030. From cloud to on-premises, industry adoption of digital twins is accelerating across a spectrum of use cases, ranging from informative to performant, including the emergence of AI-infused digital twins.

This session will cover use cases and case studies based on the Digital Twin Consortium’s work product and frameworks, highlighting the value-driven evolution and trends that further fuel innovation. Attendees will learn about the advantages and practical aspects of using AI-infused digital twins to enhance semiconductor manufacturing and other industries, including healthcare and life sciences, driving the next wave of digital transformation.

Yu Cao, CEO, Hermes Microvision Inc., an ASML Company
Applications of AI in Holistic Lithography
Holistic Lithography consists of three key elements: lithography equipment for circuit patterning on the wafer, metrology and inspection to provide measurement data, and computational lithography for building models and performing simulations, enabling optimization and control. With the continued advancement of semiconductor manufacturing, AI techniques based on measurement data have become increasingly important in improving the accuracy and speed of the models involved, complementing approaches based on physical models. We present a few examples of deploying AI in Holistic Lithography and discuss potential next steps.  

Ronjon Nag, Founder, R42 Group
The Future of AI Infrastructure in Medicine
AI is everywhere, and how it will be used in medicine is prominently discussed. There are many holy grail ideas, such as the AI doctor, AI drug discovery, AI clinical trial supervision, and AI-assembled clinical trials. However, how will society deal with these innovations? Can an AI doctor really have empathy? Would it be secure, and how would the FDA regulate these ideas? Are these ideas only academic toys, or could they actually reach the mainstream, and when? This talk will outline where we are, possible directions, and potential roadblocks.

Silicon • Wednesday Afternoon, November 13

The underlying silicon is a critical enabler for computing in Artificial Intelligence and its infrastructure. This session illustrates multiple perspectives on challenges in silicon technologies for AI/ML and their impact on AI/ML solutions. We will consider different aspects, including silicon architectures, development and verification challenges, packaging, and energy consumption and cooling.

CHAIR: Frank Schirrmeister, Executive Director, Strategic Programs, System Solutions, Synopsys

Zane Ball, Corporate VP and General Manager, Intel Data Center Platform Engineering & Architecture (DPEA) Group
Reliability, Availability, and Serviceability in the Age of AI
AI is transforming every aspect of our world. With that transformation and the benefits AI brings comes the insatiable need for high-performance compute and, subsequently, the need to address reliability, availability, and serviceability (RAS) in innovative and unique ways. This keynote will explore the transformative impact of AI on RAS and the latest advancements, challenges, and opportunities in detection, design, and infrastructure. Join us as we uncover how AI is reshaping the future of our ecosystem, and how the critical and foundational work of open standards and the IEEE community is more important than ever to effectively scale to meet these demanding needs.

Huiming Bu, VP, IBM
Semiconductor Technology Innovations to Fuel the Chip Engine for AI Infrastructure
Artificial intelligence (AI) is transforming our world, and the demand for computing capability is increasing at an unprecedented pace. Here we will talk about semiconductor technology innovations in transistors and advanced packaging to meet this ever-growing demand in the era of AI. Transistor architecture breakthroughs and integrated circuit interconnect innovations have enabled the integration of tens of billions of transistors onto a chip the size of a fingernail. New technologies like High NA EUV are driving manufacturing and design research and development, accelerating transistor and interconnect roadmaps to fuel advances in the semiconductor chips required for expanding AI workloads. By synergizing the full stack from logic through the chip package, we can make AI hardware more powerful and more energy efficient.

Woopoung (Vincent) Kim, Corporate EVP, Head of Packaging at Samsung Device Solutions Research America
Customization & Energy-efficiency Improvement with Advanced Packaging for AI Infrastructure
Today’s AI innovation may bring two kinds of challenges to AI Infrastructure: the high performance needed to support innovation, and the energy efficiency needed to be sustainable. The performance challenge leads to the use of faster silicon, more transistors, more interconnects, and faster, bigger memories. The energy challenge may lead to more compact customization with shorter interconnects and more integrated components. Advanced packaging is the enabling technology for AI Infrastructure, providing higher-performance and energy-efficient solutions by shortening interconnects, increasing the number of interconnects between chips, and integrating more components inside the package. Trends in 2.3D/2.5D/3D advanced packaging will be discussed, along with the benefits of using advanced packaging for customized memories.

Sandeep Giri, Sysinfra PMO, Google
Next-generation Interconnect Technologies Driving the Evolution of AI Hypercomputing and TPUs
Scaling computing performance is foundational to advancing the state of the art in ML. Google TPU v5 is powered by key innovations in interconnect technologies and domain-specific acceleration (DSA). We will learn about the inner workings of an AI Hypercomputer. Its performance, scalability, efficiency, and availability make it attractive for large language models. Since its launch, AI teams around the globe have actively used TPU v4 supercomputers for cutting-edge ML research and production workloads across language models, recommender systems, and generative AI.

Alex Starr, Corporate Fellow, AMD
Redefining Limits in Semiconductor Engineering for AI with AI   
This presentation explores the transformative impact of artificial intelligence (AI) on the design and validation of AMD’s AI semiconductor portfolio. Central to AMD’s approach is a “corporate shift left” initiative aimed at prioritizing the early integration of advanced design verification technologies, such as formal verification, hardware emulation, and now the use of AI. By applying various AI techniques to vast amounts of data from simulations, emulation, virtual platforms, specifications, and RTL designs long before production, AMD enables architects and engineers to explore design ideas and rectify potential implementation flaws earlier in the design cycle, significantly enhancing silicon quality and reducing time to market in a world of ever-increasing complexity.

Vojin Zivojnovic, Co-founder and CEO of AGGIOS
Navigating Power, Security and Thermal Considerations for the Future AI Infrastructure Semiconductor Landscape  
The rapid advancement of artificial intelligence (AI) is significantly impacting the power, thermal, and security aspects of electronic devices. This presentation will explore the critical role of energy proportionality in achieving sustainable engineering for AI Infrastructure chips and systems. As node scaling and innovative packaging accelerate, effective power management strategies, including optimization of active states and thermal constraints, are essential for AI Infrastructure performance. This presentation will also discuss architecture options, including FPGA-based AI accelerators, extensible processors, and dedicated accelerators, that enable continuous innovation in power management and energy efficiency to meet the growing demands of AI applications.

Systems • Thursday Morning, November 14

Although there are many elements to providing AI solutions, from servers to edge to client, one of the key current challenges rests in server technology. Servers provide the backbone for the complex computation and data management that AI needs to deliver its services. Power and cooling are critical elements; if they are not available and well managed, we will not be able to provide the computational capacity needed to make AI services happen. This session considers the challenges spanning from server to edge to client, but the primary focus is keeping the servers operating.

CHAIR: Steve Jordan, Jordan Consulting Group

Jessica Bian, Vice President, Grid Services, Grid-X Partners
The Utility Challenges to Deliver Power to Large Data Centers

Peter Panfil, VP of Global Power, Vertiv
Energy Independence and the BYOP Strategy
Join Vertiv’s Peter Panfil to learn about the bring-your-own-power (BYOP) strategy for data center operators. BYOP adopters deploy low- or zero-carbon distributed energy resources (DERs) on site, reducing or eliminating scope 1 and scope 2 emissions, while also reducing dependence on utility power and the friction that currently exists between operators and utilities. This presentation will feature: background on the growth of the data center industry; the relationship between data center operators and utilities; the BYOP ecosystem; key technologies; how it works; the benefits of BYOP; good grid citizenship; future growth; and eco implications. Are you a BYOP candidate?

Paolo Faraboschi, VP and Fellow, HP Enterprise
Reducing the Barriers to Training AI Foundation Models
The world has recently witnessed an unprecedented acceleration in computational requirements for Machine Learning and Artificial Intelligence applications. This spike in demand has imposed tremendous strain on the underlying technology stack in supply chain, GPU-accelerated hardware, software, datacenter power density, and energy consumption. If left on the current technological trajectory, future demands suggest unsustainable spending trends, further limiting the number of market participants, stifling innovation, and widening the technology gap. To address these challenges, this talk examines some necessary changes in the AI training infrastructure throughout the technology ecosystem. These changes require advancements in supercomputing and novel AI training approaches, from high-end software to low-level hardware, microprocessor, and chip design, while improving the energy efficiency required to achieve a sustainable infrastructure. The talk proposes an analytical framework that quantitatively highlights the challenges and points to the opportunities to reduce the barriers to entry for training large language models.

Ali Heydari, Distinguished Engineer, NVIDIA
How Generative AI and Accelerated Compute Are Creating the Next Generation of Liquid Cooled Data Centers    
Higher density, accelerated compute, and AI infrastructure for data centers are pushing the limits of power capacity and cooling technologies. These challenges start at the chip and work their way up to the grid. AI chips will need more advanced cooling technologies than those used today for peak performance. As the use of AI grows, data centers could consume a significant portion of global demand for electricity, so improved efficiency is essential to relieve pressure on the grid. In this presentation we discuss the energy optimization opportunities of air-liquid hybrid cooling as compared to pure air cooling for data centers. A gradual transition from 100% air cooling to 25%–75% air and liquid cooling has been studied to understand the changes in IT, fan, facility, and total data center power consumption. Various system design optimizations, such as supply air temperature (SAT), facility chiller water temperature, economization, and secondary fluid temperature, are considered to highlight the importance of proper setpoint conditions on both the primary and secondary sides.

Matt Davidsaver, Project Manager, GE Vernova
Gas Turbine Prime Power Solutions for Large Datacenters: Low Carbon, Dispatchable Power, with Examples from Other Industries 
Natural Gas Combined Cycle (NGCC) gas turbine power plants provide highly flexible and baseload power for a full range of energy requirements, from a few megawatts to gigawatts. GE Vernova will discuss options for reliable, low-carbon, dispatchable power via similar installations used by other industries with large power needs. We will explore how pre-combustion (hydrogen fuel) and post-combustion (carbon capture) approaches with NGCCs can meet a datacenter’s needs for both 24×7 and low-carbon power.

Security and Standards • Thursday Afternoon, November 14

There are concerns about risks involving both the development and use of AI, and everyone working on AI Infrastructure needs to be mindful of these concerns. With respect to Security, this session considers cybersecurity risks affecting the computation involved in providing AI products and services, as well as possible adverse impacts from the use or abuse of AI; it also considers the potential to use AI to detect and defend against malicious and inadvertent attacks. With respect to Standards, this session includes work by IEEE on the development of standards for AI and also covers international, national, and state government activity to create laws and regulations to guide and control AI.

CHAIR: David Snyder, President, 42TEK LLC

Omar Santos, Distinguished Engineer, Cisco
Cybersecurity for Agentic AI
Many AI deployments have been narrow in scope, focusing on augmenting specific tasks rather than radically reimagining work, but now we are seeing the early signs of true agentic workflows. In this presentation, we’ll explore the cybersecurity risks of agentic AI—intelligent systems capable of autonomous action. As AI agents become more integrated into critical infrastructure and decision-making processes, they pose unique threats. We’ll examine how their autonomy can be exploited for malicious purposes, leading to potential data breaches, service disruptions, and even physical damage. The focus will be on understanding the attack vectors specific to agentic AI, including compromised algorithms, data poisoning, and adversarial attacks. We’ll also discuss mitigation strategies to safeguard against these vulnerabilities, emphasizing the need for robust security protocols, continuous monitoring, and ethical AI design. This talk aims to raise awareness and prepare for the cybersecurity challenges of increasingly autonomous AI systems.  

Jaya Baloo, CSO, Rapid7
Our Secure Quantum Future
This talk will dive into the transformative world of quantum computing and its impact on digital security, specifically its enormous impact on cybersecurity. This session will introduce the fundamentals of quantum technology, examine the potential risks it poses to today’s encryption standards, and discuss the evolution toward quantum-resistant cryptography. Participants will leave with a solid understanding of how to safeguard digital communications against the impending quantum advancements, ensuring a secure future in the quantum age.

Wally Rhines, CEO, Cornami, Inc.
Secure Sharing of GenAI/LLMs with Post Quantum Encryption
Combining proprietary corporate data with LLMs is driving revolutionary change. But the fine-tuning data that corporations are building on top of foundation models is so sensitive that most organizations are reluctant to host the resulting LLMs in the cloud or on other publicly accessible platforms. The evolving solution is the use of fully homomorphic encryption (FHE) to provide quantum-proof protection, so that all queries and computation are limited to data that remains encrypted at all times. It is not practical to encrypt multi-terabyte foundation models, so techniques have been developed to encrypt only the fine-tuning data. This evolution of technology provides the ability to share proprietary models and data without exposing the actual data and, ultimately, the ability to charge users for queries made to the protected model. Dr. Rhines will discuss how the challenges of FHE-encrypted model processing have been overcome and what capabilities will be available in the future.

Richard Tong, Chair, IEEE Artificial Intelligence Standards Committee
Developing and Deploying Industry Standards for Artificial Intelligence: Challenges, Strategies, and Future Directions
The IEEE Computer Society’s Artificial Intelligence Standards Committee (C/AISC) is one of the leading international organizations for the development of industry standards for AI. Established in 2020, C/AISC oversees more than 50 standard working groups covering various aspects of AI standardization, including foundations, governance, horizontal and vertical interoperability, evaluation, security, and domain applications. These standards aim to provide guidelines for the responsible development and deployment of AI across industries. 

This talk will focus on standards development efforts and strategies to meet current challenges, such as the rapid pace of AI advancement and the need for international cooperation and collaboration between industry, government, and international bodies, in order to shape the future of AI standards and ensure the effective, efficient, enabling, safe, secure, and trustworthy development and deployment of AI technologies.

David Sanker, Patent Attorney, SankerIP
AI and Intellectual Property
The U.S. Constitution provides the basis for intellectual property rights in the United States, and the specific laws relating to IP rights have adapted over the past 235 years due to advances in technology and other geopolitical changes. AI creates new challenges for IP rights, as expressed in the U.S. Executive Order on AI and the EU AI Act. Important considerations include: (i) can you patent AI algorithms or inventions that use AI (and if so, how); (ii) can you get intellectual property protection for inventions or creative works that were fully or partially created by AI (and if so, how); (iii) with respect to patents, how does AI affect what is considered “obvious” (and is therefore not patentable); and (iv) how is AI affecting the work of IP attorneys (e.g., drafting new patent applications or drafting responses to office actions). While AI is evolving quickly, laws relating to AI are moving much more slowly. People involved in AI and IP need to understand how to work within the existing laws right now while preparing for how the laws are changing.

Brandie Nonnecke, Founding Director, CITRIS Policy Lab
The Laws, Policies & Regulations Shaping the Future of AI
As AI technologies continue to advance at an unprecedented pace, governments and organizations worldwide are grappling with how to effectively govern their development and deployment. This presentation will delve into key legislative frameworks, regulatory initiatives, and policy trends shaping the future of AI. Learn about the impact of landmark laws and governance strategies, such as the EU AI Act, the NIST AI Risk Management Framework, and recent executive orders from the White House and the State of California. Additionally, this presentation will address the challenges and opportunities in aligning international standards, fostering public-private collaborations, and safeguarding human rights in the age of AI, providing attendees with key insights into the critical legal and regulatory considerations that will define AI’s trajectory in the coming years.  

© Copyright 2024 IEEE – All rights reserved.