New Datasets to Train AI Models to Mimic Scientific Thinking

Simons Foundation

What can exploding stars teach us about how blood flows through an artery? Or swimming bacteria about how the ocean's layers mix? A collaboration of researchers from universities, science philanthropies and national laboratories has reached an important milestone toward training artificial intelligence models to find and exploit transferable knowledge between seemingly disparate fields to drive scientific discovery.

This initiative, called Polymathic AI, uses technology similar to that powering large language models such as OpenAI's ChatGPT or Google's Gemini. But instead of ingesting text, the project's models learn using scientific datasets from across astrophysics, biology, acoustics, chemistry, fluid dynamics and more, essentially giving the models cross-disciplinary scientific knowledge.

"These groundbreaking datasets are by far the most diverse large-scale collections of high-quality data for machine learning training ever assembled for these fields," says Polymathic AI member Michael McCabe , a research engineer at the Flatiron Institute in New York City. "Curating these datasets is a critical step in creating multidisciplinary AI models that will enable new discoveries about our universe."

Today, the Polymathic AI team released two of its open-source training dataset collections to the public — a colossal 115 terabytes in total from dozens of sources — for the scientific community to use to train AI models and enable new scientific discoveries. (For comparison, GPT-3 used 45 terabytes of uncompressed, unformatted text for training, which ended up being around 0.5 terabytes after filtering.)

"The freely available datasets are an unprecedented resource for developing sophisticated machine learning models that can then tackle a wide range of scientific problems," says Polymathic AI member Ruben Ohana , a research fellow at the Flatiron Institute 's Center for Computational Mathematics (CCM). "The machine learning community has always been open-sourced; that's why it's been so fast-paced compared to other fields. We feel that sharing this data open source will benefit the machine learning and scientific communities. It's a win-win situation — you have machine learning that can develop new models, and at the same time, scientific communities can see what machine learning can do for them."

The full datasets are available to download for free from the Flatiron Institute and accessible on HuggingFace, a platform hosting AI models and datasets. The Polymathic AI team provides further information about the datasets in two papers accepted for presentation at the leading machine learning conference, NeurIPS, to be held in December in Vancouver, Canada.
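
For readers who want to explore the collections, the short sketch below shows one common way to stream a HuggingFace-hosted dataset with Hugging Face's open-source datasets library. The repository id is a placeholder for illustration only, not the name of an actual Polymathic AI dataset; the project's HuggingFace pages list the real identifiers.

```python
# Minimal sketch (not official Polymathic AI code): stream a dataset hosted on
# HuggingFace so a multi-terabyte collection never has to be downloaded whole.
from datasets import load_dataset

# Placeholder repository id -- the real dataset names are listed on the
# Polymathic AI pages on HuggingFace.
REPO_ID = "polymathic-ai/example-dataset"

# streaming=True fetches records lazily as you iterate instead of downloading
# the full dataset to disk first.
ds = load_dataset(REPO_ID, split="train", streaming=True)

for i, record in enumerate(ds):
    print(record.keys())   # inspect the fields this dataset exposes
    if i >= 2:             # peek at just a few records
        break
```

Streaming is a pragmatic choice here because the collections together span well over 100 terabytes, far more than most local machines can store.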

"We've seen again and again that the most effective way to advance machine learning is to take difficult challenges and make them accessible to the wider research community," says McCabe. "Each time a new benchmark is released, it initially seems like an insurmountable problem, but once a challenge is made accessible to the broader community, we see more and more people digging in and accelerating progress faster than any individual group could alone."

The Polymathic AI project is run by researchers from the Simons Foundation and its Flatiron Institute, New York University, the University of Cambridge, Princeton University, the French Centre National de la Recherche Scientifique and the Lawrence Berkeley National Laboratory.

AI tools such as machine learning are increasingly common in scientific research and were recognized in two of this year's Nobel Prizes. Still, such tools are typically purpose-built for a specific application and trained using data from that field. The Polymathic AI project instead aims to develop models that are truly polymathic, like people whose expert knowledge spans multiple areas. The project's team itself reflects that intellectual diversity, bringing together physicists, astrophysicists, mathematicians, computer scientists and neuroscientists.

The first of the two new training dataset collections focuses on astrophysics. Dubbed the Multimodal Universe, the dataset contains hundreds of millions of astronomical observations and measurements, such as portraits of galaxies taken by NASA's James Webb Space Telescope and measurements of our galaxy's stars made by the European Space Agency's Gaia spacecraft.

"Machine learning has been happening for around 10 years in astrophysics, but it's still very hard to use across instruments, across missions and across scientific disciplines," says Polymathic AI research scientist Francois Lanusse . "Datasets like the Multimodal Universe are what will allow us to build models that natively understand all of these data and can be used as a Swiss Army knife for astrophysics."

In total, the dataset clocks in at 100 terabytes and was a major undertaking. "Our work, from around a dozen institutes and two dozen researchers, paves a path for machine learning to become a core component of modern astronomy," says Polymathic AI member Micah Bowles, a Schmidt AI in Science Fellow at the University of Oxford. "Assembling this dataset was only possible through a broad collaboration of not only the Polymathic AI team but many expert astronomers from around the world."

The other collection, called the Well, comprises over 15 terabytes of data from 16 diverse datasets. These datasets contain numerical simulations of biological systems, fluid dynamics, acoustic scattering, supernova explosions and other complicated processes. While these diverse datasets may seem disconnected at first, they all require the modeling of mathematical equations called partial differential equations. Such equations pop up in problems related to everything from quantum mechanics to embryo development and can be incredibly difficult to solve, even for supercomputers. One of the goals of the Well is to enable AI models to churn out approximate solutions to these equations quickly and accurately.
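
To make that goal concrete, here is a deliberately simplified sketch of the kind of model such data supports: a neural network trained as a surrogate time-stepper that, given a simulated field at one instant, predicts the field a short time later. The data, architecture and training loop below are toy placeholders, not the Polymathic AI team's models; they only illustrate the general pattern that datasets like the Well are meant to support.

```python
# Illustrative sketch (not Polymathic AI code): a neural "surrogate" that learns
# to advance a simulated field one time step, mapping a snapshot u(t) to the
# next snapshot u(t + dt).
import torch
import torch.nn as nn

# Toy stand-in for simulation data: batches of 1-channel 64x64 fields.
# In practice these pairs would come from stored numerical simulations.
u_t    = torch.randn(32, 1, 64, 64)   # state at time t
u_next = torch.randn(32, 1, 64, 64)   # state at time t + dt (training target)

# A small convolutional network acting as the surrogate time-stepper.
surrogate = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    prediction = surrogate(u_t)          # predicted u(t + dt)
    loss = loss_fn(prediction, u_next)   # compare against the simulation
    loss.backward()
    optimizer.step()

# Once trained on real simulation snapshots, repeatedly applying the surrogate
# rolls the system forward far faster than re-running the numerical solver.
```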

"This dataset encompasses a diverse range of physics simulations designed to address key limitations of current machine [learning] models," says Polymathic AI member Rudy Morel , a CCM research fellow. "We are eager to see models that perform well across all these scenarios, as it would be a significant step forward."

Assembling the data for those datasets posed a challenge, says Ohana. The team collaborated with scientists to gather existing data and create new data for the project. "The creators of numerical simulations are sometimes skeptical of machine learning because of all the hype, but they're curious about it and how it can benefit their research and accelerate scientific discovery," he says.

The Polymathic AI team itself is now using the datasets to train AI models. In the coming months, they will deploy these models on various tasks to see how successful these well-rounded, well-trained AIs are at tackling complex scientific problems.

"Understanding how machine learning models generalize and interpolate across datasets from different physical systems is an exciting research challenge," says Polymathic AI member Régaldo-Saint Blancard , a CCM research fellow.

The Polymathic AI team has begun training machine learning models using the datasets, and "the early results have been very exciting," says Polymathic AI project lead Shirley Ho, a group leader at the Flatiron Institute's Center for Computational Astrophysics. "I'm also looking forward to seeing what other AI scientists will do with these datasets. Just like the Protein Data Bank spawned AlphaFold, I'm excited to see what the Well and the Multimodal Universe will help create." Ho will give a talk at the NeurIPS meeting highlighting the use and incredible potential of the work.


About the Flatiron Institute

The Flatiron Institute is the research division of the Simons Foundation. The institute's mission is to advance scientific research through computational methods, including data analysis, theory, modeling and simulation. The institute comprises centers devoted to problems in astrophysics, biology, mathematics, neuroscience and quantum physics.
