The announcement of the artificial intelligence researchers John Hopfield and Geoffrey Hinton as this year's Nobel laureates in physics spurred celebration and consternation over the status of AI in science and society. In Japan, however, another feeling dominates: frustration.
"Japanese researchers should also have won," an editorial in the Asahi Shimbun newspaper proclaimed. Congratulating Hopfield and Hinton, the Japanese Neural Network Society added pointedly: "We must not forget the role played by pioneer Japanese researchers in erecting the foundations of neural network research."
Neural networks are at the centre of contemporary AI. They are models that allow machines to learn on their own, through structures inspired, often only loosely, by the human brain.
So who are these pioneering Japanese AI researchers?
In 1967, Shun'ichi Amari proposed a method of adaptive pattern classification, which enables neural networks to self-adjust the way they categorise patterns, through exposure to repeated training examples. Amari's research anticipated a similar method known as "backpropagation," one of Hinton's key contributions to the field.
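The general idea of such self-adjusting classification can be sketched with a simple error-correcting learner. This is a perceptron-style illustration of the principle, not Amari's actual 1967 formulation; the data and function names are invented for the example:

```python
# A minimal sketch of adaptive pattern classification: the classifier
# adjusts its weights whenever a training example is misclassified,
# so repeated exposure to examples tunes how it categorises patterns.
# Illustrative only; not Amari's specific 1967 method.

def train(samples, labels, lr=0.1, epochs=20):
    """Learn weights for a linear two-class classifier (+1 / -1)."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with the current weights.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                # Error-correcting update: nudge the weights toward
                # the correct answer for this example.
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Linearly separable toy data: label is +1 only for the point (1, 1).
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [-1, -1, -1, 1]
w, b = train(X, y)
```

After training, the learned weights separate the two classes without anyone having specified the decision rule by hand; the rule emerges from the examples.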
In 1972, Amari outlined a learning algorithm (a set of rules for carrying out a particular task) that was mathematically equivalent to the model of associative memory in Hopfield's 1982 paper cited by the Nobel committee. Associative memory allows neural networks to recognise patterns despite partial or corrupted inputs.
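The behaviour of associative memory can be shown with a toy network in the Hopfield style: a pattern is stored in a weight matrix using a Hebbian rule, then recovered from a corrupted copy of itself. This is a minimal sketch of the published model's behaviour, not Amari's or Hopfield's actual code:

```python
# A toy associative memory: store a pattern of +1/-1 values in a
# weight matrix, then recall it from a noisy version of itself.

def store(patterns, n):
    """Hebbian weights: w[i][j] sums the products of bits i and j."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Repeatedly update each unit to agree with its weighted input."""
    s = list(state)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

# Store one 8-bit pattern.
stored = [1, 1, -1, -1, 1, -1, 1, -1]
w = store([stored], len(stored))

# Corrupt two bits, then let the network settle back.
noisy = list(stored)
noisy[0] = -noisy[0]
noisy[3] = -noisy[3]
recovered = recall(w, noisy)
```

Even with two of the eight bits flipped, the network settles back into the stored pattern: the memory is "addressed" by its own corrupted content rather than by a lookup key.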
The North American researchers were working separately from the groups in Japan, arriving at their conclusions independently.
Later, in 1979, Kunihiko Fukushima created the world's first multilayer convolutional neural network. This technology has been the backbone of the recent boom in deep learning, an AI approach that has given rise to neural networks which learn without supervision, through more complex architectures. If this year's Nobel was for "foundational discoveries and inventions that enable machine learning with artificial neural networks," why not award Amari and Fukushima?
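The building block of a convolutional network is the convolution itself: a small filter slides across an image and responds wherever the local patch matches the feature it encodes. A minimal illustration of that one operation (not the Neocognitron's architecture, which stacked many such layers with learned filters):

```python
# Sliding a 2x2 filter over a tiny image. The filter here is a
# hand-written vertical-edge detector; in a convolutional network,
# many such filters are learned from data.

def convolve2d(image, kernel):
    """Valid-mode 2D correlation of 2D lists `image` and `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for r in range(out_h):
        row = []
        for c in range(out_w):
            # Multiply the kernel against the patch beneath it and sum.
            total = sum(kernel[i][j] * image[r + i][c + j]
                        for i in range(kh) for j in range(kw))
            row.append(total)
        out.append(row)
    return out

# An image whose left half is dark (0) and right half bright (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Responds where brightness increases from left to right.
kernel = [
    [-1, 1],
    [-1, 1],
]
response = convolve2d(image, kernel)
```

The output is strongest exactly along the vertical edge, wherever in the image that edge happens to sit, which is what makes convolution well suited to visual pattern recognition.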
One-sided perspectives
The AI community itself has been debating this question. There are cogent arguments as to why Hopfield and Hinton better fit the Nobel "physics" category, and why national balance mattered, given the peace prize went to Japan's Nihon Hidankyō.
Why, then, should we still be worried?
The answer lies in the risks of historical one-sidedness. Our standard account of artificial neural networks is a North Atlantic-based - and, overwhelmingly, North American - history. AI experienced a period of rapid development in the 1950s and 1960s.
By 1970, it had entered an "AI Winter", during which research stagnated. Winter finally changed to spring in the 1980s, through the likes of Hopfield and Hinton. Hinton's links to Google and OpenAI are said to have fed into the current boom in AI based on neural networks.
And yet, it was precisely during this alleged "winter" that Finnish, Japanese, and Ukrainian researchers - among others - established the foundations of deep learning. Integrating these developments into our histories of AI is essential as society confronts this transformative technology. We must expand what we mean when we talk about AI in ways different from the current vision offered by Silicon Valley.
For the past year, Yasuhiro Okazawa, from Kyoto University, Masahiro Maejima, from the National Museum of Nature and Science, Tokyo, and I have led an oral history project centered on Kunihiko Fukushima and the lab at NHK where he developed the Neocognitron, a visual pattern recognition system that became the basis for convolutional neural networks.
NHK is Japan's public broadcaster, equivalent to the BBC. Much to our surprise, we discovered that the context from which Fukushima's research emerged had roots in psychological and physiological studies of television audiences. This led NHK to create, in 1965, a laboratory for the "bionics of vision". Here, television engineers could contribute towards advancing knowledge of human psychology and physiology (how living organisms function).
Indeed, Fukushima saw his own work as dedicated to understanding biological organisms rather than AI in the strict sense. Neural networks were conceived as "simulations" of how visual information processing might work in the brain, and thought to help advance physiological research. The Neocognitron specifically aimed to help settle debates about whether complex sensory stimuli corresponded to the activation of one particular neuron (nerve cell) in the brain, or to a pattern of activation distributed across a population of neurons.
Human approaches
The engineer Takayuki Itō, who worked under Fukushima, characterised his mentor's approach as a "human science". But during the 1960s, American researchers abandoned artificial neural networks based on human models. They cared more about applying statistical methods to large data sets than about the patient study of the brain's complexities. In this way, emulating human cognition became merely a casual metaphor.
When Fukushima visited the US in 1968, he found few researchers who were sympathetic to his human brain-centred approach to AI, and many mistook his work for "medical engineering." His lack of interest in upscaling the Neocognitron with bigger data sets eventually placed him at odds with NHK's increasing demand for applied AI-based technologies, leading to his resignation in 1988.
For Fukushima, developing neural networks was never about their practical use in society, for instance, in replacing human labour and for decision making. Rather, they represented an attempt to grasp what made advanced vertebrates like humans unique, and in this way make engineering more human.
Indeed, as Takayuki Itō noted in one of our interviews, this "human science" approach may lend itself to a closer embrace of diversity. Although Fukushima himself did not pursue this path, Itō's work since the late 1990s has focused on "accessibility" in relation to the cognitive traits of the elderly and disabled. This work also recognises types of intelligence different from mainstream AI research.
Fukushima today keeps a measured distance from machine learning. "My position," he says, "was always to learn from the brain." Compared to Fukushima, AI researchers outside Japan took short cuts. The more that mainstream AI research leaves the human brain behind, the more it yields technologies that are difficult to understand and control. Shorn of its roots in biological processes, AI can no longer be explained: we cannot say why it works or how it makes decisions. This is known as the "black box" problem.
Would a return to a "human science" approach solve some of these problems? Probably not by itself, because the genie is out of the bottle. But amid global concerns about superintelligent AI resulting in the end of humanity, we should consider a global history replete with alternative understandings of AI. It is a history sadly left uncelebrated by this year's Nobel prize in physics.