DOE, LLNL Star at First Artificial-Intelligence Expo


Lawrence Livermore National Laboratory (LLNL) Director Kim Budil and other LLNL staff joined Department of Energy (DOE) Deputy Secretary David Turk, National Nuclear Security Administration (NNSA) Administrator Jill Hruby, DOE Under Secretary for Science and Innovation Geraldine Richmond, DOE Director of the Office of Critical and Emerging Technologies Helena Fu, U.S. Senate Majority Leader Chuck Schumer and White House Office of Science and Technology Policy Director Arati Prabhakar at the recent Special Competitive Studies Project (SCSP) AI Expo for National Competitiveness.

Held in Washington, D.C., on May 7-8, SCSP's first global technology conference aimed "to convene and build relationships around artificial intelligence (AI), technology and U.S. and allied competitiveness," according to the event's website. The expo drew thousands of registrants from industry, academia and government agencies, including several national laboratories.

LLNL is rapidly expanding its research investments to build transformative AI-driven solutions to critical national security challenges. As it develops these novel scientific AI tools, the Lab is also conducting deliberate research to ensure the solutions are both safe and trustworthy for its high-consequence missions.

"We have conducted significant research over the last decade that shows us the huge potential for AI to transform the full range of LLNL missions. To completely realize this potential, we are working toward a DOE-wide, nation-scale AI effort to support both national security and broader U.S. technoeconomic competitiveness on the global stage," said Brian Spears, who leads LLNL's AI Innovation Incubator (AI3).

Budil and Spears took center stage to discuss the ways LLNL leverages AI tools to improve stockpile science, fusion targets, disease therapies and more. They explained how DOE's unique combination of multimodal data, experimental facilities, high-consequence research and supercomputing power can help shape the national dialogue around AI.

Turk announced DOE's new Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative, which will build foundational AI capabilities tailored for national security needs. Turk described a future where experimental facilities and massive computational resources are coupled via safe, secure foundational AI models to drive science forward - a future already in evidence at LLNL in fusion research, advanced manufacturing, bioresilience and other areas.

"FASST will bring the full force of the national labs to bear on AI technologies, ensuring our competitiveness in the global AI arena. FASST will expand our existing efforts to grow intellectual and hiring pipelines into the national security enterprise and build a community of experts that apply AI to our mission spaces," said Brian Giera, director of LLNL's Data Science Institute (DSI).

With only brief instructions from LLNL researchers Haichao Miao (left) and Peer-Timo Bremer (center), DOE Deputy Secretary David Turk donned VR goggles to interact in real time with LLNL staff in Livermore. (Photo courtesy of SCSP)

Meanwhile, on the exhibition floor, DOE's booth proved popular thanks to two interactive demonstrations developed by LLNL teams. In one demo, virtual reality (VR) goggles and wireless handheld controllers let users manipulate digital twins of parts at LLNL's Advanced Manufacturing Laboratory (AML). Communicating remotely and in real time with postdoctoral researcher Vuthea Chheang at the AML in Livermore, a user could pick up and inspect a part - for example, to check the quality of a metal lattice before it is 3D-printed.

"Attendees experienced firsthand how VR facilitates the intricate inspection process of these complex components, effectively managing and visualizing extensive data streams and accelerating advanced manufacturing processes," said researcher Haichao Miao, who leads VR research at LLNL and created the demo with Chheang.

VR and digital twins go hand in hand. "Together these technologies allow us to perform intuitive inspections of both the manufacturing process and the resulting parts, such as X-ray vision and arbitrary magnification. They provide a shared, collaborative environment between geographically distributed sites," said Peer-Timo Bremer, who serves on the AI3 and DSI advisory councils.

LLNL also demonstrated a tabletop self-driving lab in collaboration with NVIDIA through a partnership built by AI3. Named the Sidekick, the device mimics a high-repetition laser experiment in an inexpensive, easy-to-deploy package that implements the actual experimental control environment without safety or security risks. Initially designed to help researchers in the Advanced Photon Technology program explore self-driving technologies, the Sidekick combines a pulse-shaped laser with custom diagnostics and edge computing technologies. This lets researchers develop AI-optimization playbooks offline that can then be deployed at state-of-the-art experiments without consuming valuable machine time.

The result is a broadly applicable platform that helps researchers explore novel AI-based solutions in areas such as manufacturing, materials design and accelerators. The version on display at the expo was developed by LLNL researchers Abhik Sarkar, Mason Sage and Aldair Gongora together with Scott Feister of California State University, Channel Islands, and NVIDIA.

"AI-enabled self-driving laboratories have been gaining traction in the DOE labs, as they have the potential to accelerate science by several orders of magnitude. Sidekick systems are a step towards making large scientific facilities accessible," Sarkar said.

Bremer added: "Using autonomous AI agents to supervise or even control robotic platforms and fast experiments promises significant advances, yet developing these techniques directly at state-of-the-art facilities is costly and time-consuming, and often poses safety concerns. Sidekick systems address this challenge by providing an equivalent platform that can be deployed by virtually anybody anywhere."

From left: Haichao Miao (LLNL), Joshua Porterfield (DOE), Peer-Timo Bremer (LLNL) and Helena Fu (DOE) at the DOE exhibition booth. (Photo courtesy of SCSP)

The expo came on the heels of LLNL's AI safety workshop on April 19, where industry and academic experts convened alongside the national labs to examine how to balance the potential dangers of AI systems against their promise for innovation, the policy issues the technology raises and the need to build more secure and reliable models. The event drew notable figures, including Turing Award winner and AI pioneer Yoshua Bengio, Reith Lecturer Stuart Russell and UK AI Safety Institute Research Director Yarin Gal. The momentum will continue with summer workshops organized by the DSI and AI3, both aimed at strengthening existing collaborations and creating new ones.

"These dialogues around AI threats and opportunities are critical to our efforts to build innovative tools that promise both incredible transformation as well as safe, reliable results," Spears said. "It's exciting to give the global community a glimpse of the very deep AI thought leadership LLNL is providing for the nation and the world."

- Holly Auten
