ORNL Snags Part of DOE's $67M for AI Science Research


The Department of Energy took a major step in establishing artificial intelligence as a priority in the coming decades.

Specifically, DOE announced a $67 million investment in AI projects from institutions across government and academia as part of its AI for Science initiative, with the Department's Oak Ridge National Laboratory among those leading the way. The funding aims to establish foundational models in research areas such as scientific machine learning, large language models (LLMs) for high-performance computing (HPC) and automated laboratory workflows.

In total, six ORNL-led (or co-led) projects received funding, including:

  • ENGAGE: Energy-Efficient Novel Algorithms and Architectures for Graph Learning
  • DyGenAI: Dynamic Generative Artificial Intelligence for Prediction and Control of High-Dimensional Nonlinear Complex Systems
  • SciGPT: Scalable Foundational Model for Scientific Machine Learning
  • Productive AI-Assisted HPC Software Ecosystem
  • Privacy-Preserving Federated Learning for Science: Building Sustainable and Trustworthy Foundation Models
  • Durban: Enhancing Performance Portability in HPC Software with Artificial Intelligence

The projects were chosen via competitive peer review under the DOE Funding Opportunity Announcement, or FOA, for Advancements in Artificial Intelligence for Science. DOE funding for each project lasts up to three years.

"This announcement is very important for the lab because we've been hearing about the progress of AI for many years now," said William Godoy, senior computer scientist at ORNL. "But we were still working on what AI means for HPC, considering the niche nature of HPC systems."

For Godoy and his team, that means more research into how best to use LLMs on systems like Frontier, the first supercomputer to break the exascale barrier. Godoy said that shortly after the release of ChatGPT, an AI-powered chatbot, many of his colleagues in the national laboratory community began examining how LLMs could be developed in support of DOE's mission.

Godoy will use the new funding for his project to work alongside counterparts at Lawrence Livermore National Laboratory, along with HPC and AI experts from the University of Maryland and Northeastern University, to identify the best strategies for creating LLMs designed specifically for HPC. ORNL's Pedro Valero Lara, a senior computer scientist who works with Godoy, said these LLMs can also be used for programming-language translation, such as translating legacy HPC Fortran codes into more modern and capable C++ codes.

"If I give a piece of code implemented in one particular language, I can ask the LLMs to make the translation from that language to another language," said Valero Lara. "Just by making this translation of code, we can increment the performance by an order of magnitude." He added that building this capability in LLMs for specific HPC targets was part of the larger goal to support HPC more broadly.

Godoy echoed this sentiment, saying the work is intended to strengthen AI-powered collaboration across the national laboratory ecosystem and the future HPC workforce, including interns, who are using LLMs as a ubiquitous new modality for their own learning.

"Our goal is to build synergies across projects because these projects tend to be large, multidisciplinary and complex, so we can be more impactful together," Godoy said. "We are also working with the ORNL-led Durban project to leverage the value of AI for our HPC mission."

The same rings true for other projects that were awarded funding through this latest round of investment from DOE into advancing AI.

Olivera Kotevska, a research scientist in the Computer Science and Mathematics Division at ORNL who leads the Privacy-Preserving Federated Learning for Science project, stressed the importance of supporting this type of work in advancing AI broadly.

"This support enables our team to advance cutting-edge research in privacy-preserving AI, which is crucial for safeguarding sensitive scientific data while fostering collaboration across institutions," Kotevska said. "Broadly, this project positions ORNL at the forefront of developing sustainable, trustworthy AI solutions that can have a wide-reaching impact on scientific discovery and national security. Additionally, it strengthens ORNL's leadership in building trustworthy AI systems for science, benefiting both the lab and the broader scientific community."

Prasanna Balaprakash, director of AI programs at ORNL who leads the lab's AI Initiative, praised ORNL's vast capabilities and deep history in AI research.

"The six awards cover all five areas of the FOA, a unique distinction for ORNL," said Balaprakash. "These awards are a testament to ORNL's AI expertise and capabilities, solidifying its position as a major leader in AI for science. Several of the projects have been supported by ORNL's AI Initiative - a lab-directed research and development investment focused on developing secure, trustworthy and energy-efficient AI solutions to address problems of national importance."

UT-Battelle manages ORNL for the Department of Energy's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science. - Mark Alewine
