Anthropologist Rodrigo Ochigame studies how AI is changing the practice of scientific research. From astrophysics to mathematics to climate science, they find that the adoption of new AI models is raising questions about what counts as reliable scientific evidence.
In 2019, the world saw the first-ever image of a black hole: an ominous shadow surrounded by a fiery orange ring. In 2023, a team of scientists released a 'sharper' image based on the same data, this time using AI techniques. Because the data, combined from eight astronomical observatories around the Earth, is extremely noisy and limited, scientists must develop computer algorithms and AI models that make many assumptions in order to produce an image at all. Depending on the algorithms and models chosen, the results can look very different.
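To make that dependence concrete, here is a toy sketch in Python. It is a hypothetical illustration, not the Event Horizon Telescope pipeline: a one-dimensional 'image' is reconstructed from sparse, noisy Fourier samples under two different regularising assumptions, and the two reconstructions, while fitting the same data, come out looking different. All signals, parameters, and the smoothness penalty are invented for the example.

```python
import numpy as np

# Toy example (hypothetical, not the EHT pipeline): reconstruct a 1D "image" from
# sparse, noisy Fourier samples under two different assumptions, to show that the
# same data can yield visibly different results.

rng = np.random.default_rng(0)
n = 64
truth = np.zeros(n)
truth[28:36] = 1.0                               # a simple synthetic source

full_vis = np.fft.fft(truth)                     # complete Fourier data (never available in practice)
mask = rng.random(n) < 0.3                       # keep ~30% of frequencies: sparse telescope coverage
noise = 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
measured = np.where(mask, full_vis + noise, 0)   # the noisy "visibilities" actually observed

def reconstruct(lam, iters=3000, lr=0.005):
    """Gradient descent on: data misfit + lam * smoothness penalty."""
    img = np.zeros(n)
    for _ in range(iters):
        resid = np.where(mask, np.fft.fft(img) - measured, 0)
        g_data = np.real(n * np.fft.ifft(resid))                  # adjoint of the masked FFT
        g_smooth = lam * (2 * img - np.roll(img, 1) - np.roll(img, -1))
        img -= lr * (g_data + g_smooth)
    return img

img_a = reconstruct(lam=0.0)    # assumption A: fit the data only; missing frequencies stay zero
img_b = reconstruct(lam=50.0)   # assumption B: strongly prefer smooth images

print("peak A:", img_a.argmax(), " peak B:", img_b.argmax())
print("roughness A:", round(float(np.sum(np.diff(img_a) ** 2)), 3))
print("roughness B:", round(float(np.sum(np.diff(img_b) ** 2)), 3))
```

Both reconstructions are consistent with the same sparse measurements; the visible difference comes entirely from the assumptions, which is precisely the kind of choice that black hole imaging teams must make and defend.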
Anthropologist Rodrigo Ochigame studies how AI is changing the practice of science in diverse fields, including black hole imaging. In addition to interviewing scientists, Ochigame has made alternative black hole images based on the same data. By experimenting with different algorithms and models, Ochigame explores what the black hole images would have looked like if the scientists had made different choices. The larger point is that the adoption of new AI models is raising questions about what should and shouldn't count as reliable scientific evidence.
Besides black hole imaging, Ochigame studies other fields of science that are using AI in novel ways. For example, they examine the work of mathematicians who use AI to discover and prove new theorems, and environmental scientists who use AI to try to predict the impact of climate change on complex ecosystems. All these cases have one thing in common: the scientists cannot build conventional AI models that simply reproduce patterns found in existing data sets. 'I'm interested in fields where conventional "ground truth" data is unavailable', they explain. 'There are no previously trusted images of black holes. And climate models must consider potential extreme scenarios that have never happened before.'
You're looking over the shoulders of scientists using AI in their research, and asking critical questions. Are they OK with that?
'Sometimes people can be hesitant to accept an anthropologist into their midst. But fortunately, I've been welcomed by the people I study. One reason is that I'm not only an observer but also an active participant: I try to contribute to their discussions, and sometimes even build computational models myself. Moreover, the questions I ask are questions that the scientists themselves see as unresolved. They also want to figure out the answers. And scientists can be very open-minded and curious to hear anthropological insights about their own fields.'
What are the limitations of using AI in scientific research?
'It is important not to treat the results of AI models as definitive evidence without first questioning how the models work and where the data came from. For example, the 'sharper' black hole image was generated by a machine-learning model trained on images from simulations; only after that training was the model applied to actual observational data. This is an interesting phenomenon: scientists are increasingly using simulations as training data, as if those simulations were the so-called "ground truth". I'm not opposed to this kind of research, but I think the results should be interpreted cautiously.'
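The general pattern described here, training on simulations and then applying the model to real observations, can be sketched in a few lines of Python. This is a hedged illustration, not the PRIMO algorithm used for the 2023 image, whose details differ: a simple linear model is fitted to simulated signals and their degraded observations, then applied to data it has never seen. The blur kernel, ridge penalty, and signals are all invented for the example.

```python
import numpy as np

# Hedged sketch of "simulations as training data" (not PRIMO itself): a linear
# model learns to "sharpen" blurred 1D signals using only simulations as ground
# truth, then is applied to a "real" observation.

rng = np.random.default_rng(1)
n, n_train = 32, 500

def blur(sig):
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(sig, kernel, mode="same")

# 1. Training set: simulated "true" signals and their blurred, noisy observations.
sims = rng.random((n_train, n))
obs = np.array([blur(s) for s in sims]) + 0.01 * rng.normal(size=(n_train, n))

# 2. Fit a linear deblurring operator by ridge regression (obs -> sims).
lam = 1e-2
W = np.linalg.solve(obs.T @ obs + lam * np.eye(n), obs.T @ sims)

# 3. Apply it to "real" data the model has never seen. The result inherits
#    whatever assumptions were baked into the training simulations.
real_obs = blur(np.where(np.arange(n) % 8 < 2, 1.0, 0.0)) + 0.01 * rng.normal(size=n)
sharpened = real_obs @ W
print(np.round(sharpened, 2))
```

The sharpened output looks more confident than the raw observation, but its reliability depends entirely on how faithful the training simulations were, which is the caution Ochigame raises.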
Does science place too much faith in the use of AI systems right now?
'I would say so. There are many instances where the application of AI is too uncritical. I keep seeing scientific claims that are not justified by the data or the algorithms used. Scientists often have incentives to inflate their claims. The applications of AI that I find completely unjustifiable are so numerous that I prefer to study the more complicated cases. I'm most curious about cases where I cannot easily form an opinion.'
Earlier in your career, you looked into alternatives to the computational models that are nowadays considered fundamental in many scientific fields, such as computer science.
'That was my PhD research, which questioned the supposed universality of the most commonly used formal models of computing and AI, such as mathematical logic, Turing machines, game theory, and neural networks. I found that before those models became orthodox, researchers from around the world had questioned some of their most basic assumptions, and some even developed alternative models of their own. For example, Brazilian mathematicians developed systems of logic that allowed for partial contradictions, and Indian scientists developed nonbinary models of computation. It is not necessarily a goal of my work that people take these unorthodox models and implement them now. But when people are inspired by them and incorporate those ideas into their work in unexpected ways, I'm happy to see that.'
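To give a flavour of the first idea, here is a small sketch of a three-valued logic in which a contradiction does not entail everything. It follows Priest's 'Logic of Paradox' rather than the Brazilian paraconsistent systems of da Costa, which differ in detail, so it should be read only as an illustration of what 'allowing partial contradictions' can mean.

```python
# Minimal illustration of a paraconsistent, three-valued logic (Priest's "Logic
# of Paradox", used here as a stand-in; da Costa's systems differ in detail).
# The value B means "both true and false"; T and B count as "designated"
# (accepted), so a contradiction need not entail everything.

F, B, T = 0, 1, 2          # ordering: false < both < true
designated = {B, T}

def neg(a):
    return 2 - a           # swaps T and F, leaves B fixed

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

# Classical "explosion" says: from p and not-p, any q follows. Look for a
# counterexample: premises designated while the conclusion is not.
counterexamples = [
    (p, q)
    for p in (F, B, T)
    for q in (F, B, T)
    if conj(p, neg(p)) in designated and q not in designated
]
print(counterexamples)     # contains (B, F): the contradiction holds, yet q stays false
```

The point of the sketch is only that explosion fails: a theory can tolerate a localised contradiction without collapsing, which is the kind of basic assumption the researchers Ochigame describes were willing to question.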
Some of Rodrigo Ochigame's alternative images of the black hole are now on display at the Rijksmuseum Boerhaave in Leiden as part of the exhibition 'Towards the Black Hole'.
Header image credits: first M87 black hole image by the Event Horizon Telescope, 2019 (left); M87 black hole image based on the PRIMO machine-learning algorithm by Lia Medeiros et al., 2023 (center); black hole accretion simulation by Hotaka Shiokawa (right).
Text: Jan Joost Aten