When trying to solve problems, artificial intelligence often uses neural networks to process data and make decisions in a way that mimics the human brain.
In his latest research, Binghamton University Assistant Professor Sadamori Kojaku challenges a fundamental assumption in AI circles - that more complex neural networks are always better.
The paper, published in Nature Communications, shows that simple neural networks can find communities in complex networks with theoretical optimality, questioning the common view that more complex models outperform simpler ones.
"What we found was that the training matters, not the programming architecture itself," said Kojaku, who joined the faculty of the Thomas J. Watson College of Engineering and Applied Science's School of Systems Science and Industrial Engineering in Fall 2023.
"There are many ways to teach a neural network, but we found that one of the best teaching methods is contrastive learning, where you present real data and fake data so the neural network is trained to differentiate the two. This simple training achieves optimal performance."
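The idea can be illustrated with a minimal sketch. This is not the paper's model or data; it is a generic, assumed setup: "real" samples have correlated features, "fake" samples are the same data with one column shuffled to destroy the correlation, and a simple one-layer network (logistic regression) is trained to tell the two apart.

```python
import numpy as np

# Minimal contrastive-training sketch (illustrative only; not the paper's
# actual model or data). Real samples have correlated features; fake samples
# are the same data with one column shuffled, destroying the correlation.
rng = np.random.default_rng(0)

n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)              # x2 is correlated with x1
real = np.column_stack([x1, x2])
fake = real.copy()
fake[:, 1] = rng.permutation(fake[:, 1])        # shuffle: correlation destroyed

# Hand the model a product feature so a linear classifier suffices for this
# sketch; a deeper network would learn such interactions on its own.
def featurize(a):
    return np.column_stack([a[:, 0], a[:, 1], a[:, 0] * a[:, 1]])

X = np.vstack([featurize(real), featurize(fake)])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = real, 0 = fake

w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(500):                            # full-batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted probability "real"
    g = (p - y) / len(y)                        # gradient of the logistic loss
    w -= lr * X.T @ g
    b -= lr * g.sum()

acc = ((p > 0.5) == y).mean()
print(f"discriminator accuracy: {acc:.2f}")
```

Because the only difference between the two sets is the correlation structure, the discriminator is forced to learn exactly that structure; in the same spirit, a network trained to separate real networks from randomized ones must internalize the community structure that randomization destroys.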
Understanding how AIs work is fundamental to establishing trust when they make decisions in critical areas such as healthcare or electrical grids.
Right now, the exact route that AIs take to reach their conclusions sits inside what programmers call a "black box." Data input leads to a result, but the pathway between those points can be mysterious.
"Our work unboxes the neural network and then tries to interpret how it works to provide a guarantee that this neural network works optimally for this specific task," Kojaku said. "This is our first work that tries to hammer on the black box."
Also contributing to the paper are Professors Filippo Radicchi, Yong-Yeol Ahn and Santo Fortunato from Indiana University, where Kojaku served as a postdoctoral fellow after earning his PhD at Hokkaido University in Japan and before coming to Binghamton.
Working with Nature Communications - a highly regarded online scientific journal - to publish the research required Kojaku to make 18 months' worth of revisions based on feedback from reviewers.
"Through this experience, I learned that it's sometimes really effective just to be stubborn if I think it's good to fight for the idea," he said. "When I was a student, a professor told us that an idea is like a baby, and you need to defend the idea."
Kojaku's interests include not just AI and neural networks but also complex networks in general, such as social networks, transportation networks, financial networks and other systems whose nodes form densely connected communities.
How a community is structured affects the dynamics of a network - for instance, when rumors propagate or an event resonates throughout the economy. This principle applies to his research into the "science of science," where he explores how discoveries circulate among scientists and lead to technological advances or entirely new branches of research. Many ideas spread in surprisingly unscientific ways, such as through personal contact at conferences or by flowing down the prestige hierarchy from top universities, but rarely the other way around.
"Society is not just a bunch of people, but a bunch of people interacting with each other, and this interaction gives rise to many interesting phenomena," he said. "I'm interested in how innovation happens, how scientific discoveries occur through the interactions between humans, and the environmental factors that give rise to innovation."