More than 3,200 Lawrence Livermore National Laboratory (LLNL) employees participated in the first-ever aiEDGE for Innovation Day on March 26 - an event aimed at empowering and equipping the Lab's workforce to integrate AI into their daily work.
The hybrid event, hosted with industry partners OpenAI and Anthropic, gave employees exclusive access to the companies' latest advanced generative and reasoning AI models. At locations across the Lab and virtually, employees from a broad array of job roles explored the models through demonstrations and breakout sessions focused on real-world applications, evaluating the models for their ability to enhance productivity, streamline operations and accelerate scientific research.
Thinking big with AI
The event's morning session, emceed by AI Innovation Incubator Director Brian Spears, kicked off with a welcome address from LLNL Director Kim Budil. Speaking before a packed auditorium, Budil emphasized the transformative power of AI and encouraged employees to think big, think differently and adopt an explorer's mindset to "push the boundaries of what's possible" in applying AI to both technical and non-technical work.
"This is going to be a foundational shift in all of our national security missions. We will do things differently, and we will do things differently sooner than you might imagine," Budil said. "Our tagline for this year is 'Creating the Future,' and adding this tool to our toolkit is part of how we're going to do that … There is really no limit to what we can accomplish together."
Following a brief talk by LLNL's Chief Information Security Officer Matt Myrick on secure and ethical use of AI, keynote speakers from OpenAI and Anthropic headlined the morning session, emphasizing the rapid evolution of AI tools, the need for user feedback, the prioritization of data privacy and national security, and an ongoing effort to make cutting-edge models accessible for sensitive scientific work.
OpenAI's Chief Product Officer Kevin Weil expressed excitement about partnering with the scientific community, highlighting how OpenAI's GPT-4o and advanced reasoning models - like o1 and its successors - are already accelerating complex research, supporting hypothesis generation and automating routine tasks. He stressed how emerging AI agents can augment researchers' capabilities, simplify workflows and help the U.S. remain a global leader in innovation and national security. A live demo by OpenAI's Felipe Millon further showcased the models' capabilities, including chemistry applications, code generation, image manipulation and the use of agents for deep research.
"We're already working together with Lawrence Livermore and the national labs using ChatGPT for day-to-day unclassified work - but this partnership goes beyond that," Weil said after the event. "We want to help the scientists here at Lawrence Livermore do their vital fundamental work, the deep science and the work connected to national security. We're going to see just how much AI can supercharge the work that scientists at Lawrence Livermore do in their most important capacity."

Anthropic's Head of Global Affairs, Michael Sellitto, delivered a keynote focused on the geostrategic implications of AI development, highlighting the importance of U.S. leadership in frontier model development, Anthropic's classified deployments, and real-world use cases in medicine and software development that demonstrate how AI can fundamentally change how organizations work. A demo from Anthropic's Igor Kofman showed how the company's "Claude Code" - an autonomous AI coding assistant - can understand, navigate and improve complex software repositories, even performing bug fixes, website edits and project contributions with minimal human intervention.
"What we're most excited about is how we can enable some of the world's best scientists to build on top of our models to help us understand what they are good at, and what kinds of things we should fix in the next generation," Sellitto said. "Days like today are a good opportunity to do some quick experimentation, but we really want to scale that up. For the technology to be broadly beneficial for society and really accelerate the work that's done at the national labs, there needs to be a deeper partnership where this is just a daily part of everyone's work."
Since AI models aren't perfect and still require a human "in the loop" to evaluate responses and review them for accuracy, speakers explained, partnering national labs with industry is invaluable: it tests the models on the difficult problems the labs solve every day and feeds the results back to improve the models for future critical science applications.
Breakout sessions explore AI in action
As the day progressed, employees from a wide range of disciplines - given enterprise access to models like OpenAI's GPT-4o and o1 and Anthropic's Claude 3.7 Sonnet - participated in hands-on breakout sessions across the Lab, learning about the models' capabilities and exploring AI's applications to real use cases.
With nearly two dozen tracks, the afternoon offered something for everyone, including sessions on administrative tools, scientific modeling, technical procedure writing and AI-driven programming. Each breakout featured guided demonstrations and discussion delving into how AI could streamline processes, increase efficiency and support the Lab's mission.
LLNL computational physicist Evan Gonzalez attended a session on programming and code development with a goal in mind: using the ChatGPT reasoning models to draft an outline for implementing a particle transport code, a task he said would normally take him weeks.
"Today, I'm looking to understand how to use these things better, see some examples of how other people are working on their tools and hopefully come out with a project draft for my summer student," Gonzalez said. "In future years, I'll hopefully know how to do this even better and make it easier for the students, and I can spend less time prepping for them to code."
Robert Carson, a Lab computational engineer, used Anthropic's Claude Code to build on an existing project. Though the model required some guidance and "a little handholding," Carson said it delivered acceptable starting points that saved him time.
"I think these events are very useful because there are a lot of people here from various different fields, people that maybe aren't as used to it or are trying to learn new tools, and this gives people an actual opportunity to learn," Carson said.

During a popular session on administrative applications, attendees tested the models on tasks like writing emails, drafting job interview questions and creating meeting minutes. Other sessions examined using AI to develop PowerPoint presentations and improve business acumen. In a session on technical writing, employees used and compared the models to create procedural documents.
Sylvia Wu, who performs document control and records management for the Lab's Defense Technologies Engineering Division, said she joined the session to see how AI could make her technical procedure writing more precise and effective. She enjoyed the collaborative nature of the session and found the tools particularly useful for overcoming creative blocks.
"I had something I was struggling through, and I asked the model to help create a flow," Wu said. "I definitely will be using AI more often. It gave me different ways to see things - especially helpful in those times when you're tired or stuck."
"I'm very grateful that they're offering these opportunities, and that management is giving us that time to be able to participate," Wu added.
At a session applying AI to research and scholarly writing, staff scientist Aditya Prajapati used ChatGPT to create a 20-page mock Laboratory Directed Research and Development (LDRD) proposal - complete with literature review, metrics table and experimental plan - in a matter of minutes, work he said would typically take months on a real project.
"It did the grunt work that would've taken me a long, long time," Prajapati said. "If we get access to this internally, that would be a game changer. We have to think big - and also about how to implement the tools and find ways to incorporate this in our workflow, so that we can adapt to this technology and fast track whatever we are doing."
Building momentum for AI at LLNL
Designed to encourage employees to incorporate AI into their everyday work and stay ahead in the rapidly evolving field of AI, aiEDGE for Innovation Day reinforced the goals of the Lab's aiEDGE campaign: the exploration of AI's potential to transform workflows, boost collaboration and drive innovation. Throughout the event, one key theme stood out: AI is not just for scientists or coders - it's a tool with applications across the Lab's entire workforce.
Kathryn Whitaker, LLNL's deputy division leader for Enterprise Application Services, said that, as an IT professional, she values Lab-wide engagement with AI and is eager to explore further applications within operational and support systems.
"AI is such a huge thing that we're all rallying around very quickly here, so it's great to have these events so we can all see how we can utilize the technology," Whitaker said. "We've really seen the power of AI and as a tool what it can provide us as far as speed and efficiency. Today didn't necessarily change my mind about it, but it provided some more ideas around how to use it."
Looking ahead to an AI-fueled future
The event wrapped up with a roundtable discussion led by LLNL's Spears and Chief Technology Officer Greg Herweg, in which participants asked questions, shared experiences and reflected on lessons learned, acknowledging both successes and challenges, including technical hiccups.
Use cases shared during the conversation highlighted both promising applications and current limitations in AI models. Herweg said the Lab is actively exploring pilot programs for tools like ChatGPT, Claude and Microsoft's Copilot, with plans to expand access and capabilities to users over time.
Training materials, including session recordings and slides, were made available for continued learning after the event. Calling for sustained user feedback to help shape future AI efforts at the Lab, Spears discussed the concept of a "ragged frontier" in AI - a riff on the "jagged frontier" popularized by AI thought leader Ethan Mollick - where models perform impressively in some domains but struggle in others.
"We have a homework assignment for ourselves," Spears said. "I would ask everybody to continue, in a call-to-action sense, to think about your missions and recognize also - in that 'ragged frontier' sense - that just because you find a weakness today, that doesn't mean that you couldn't use it for something that's very deep and successful tomorrow. Keep going back and giving yourself a chance to be a better user. Give the tool a chance to get better and be more capable, and let's figure out how we can transform our missions."