Neural Network Pioneers Win Physics Nobel

The 2024 Nobel Prize in Physics has been awarded to scientists John Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks".

Author

  • Aaron J. Snoswell

    Research Fellow in AI Accountability, Queensland University of Technology

Inspired by ideas from physics and biology, Hopfield and Hinton developed computer systems that can memorise and learn from patterns in data. Despite never directly collaborating, they built on each other's work to develop the foundations of the current boom in machine learning and artificial intelligence (AI).

What are neural networks? (And what do they have to do with physics?)

Artificial neural networks are behind much of the AI technology we use today.

In the same way your brain has neuronal cells linked by synapses, artificial neural networks have digital neurons connected in various configurations. Each individual neuron doesn't do much. Instead, the magic lies in the pattern and strength of the connections between them.

Neurons in an artificial neural network are "activated" by input signals. These activations cascade from one neuron to the next in ways that can transform and process the input information. As a result, the network can carry out computational tasks such as classification, prediction and making decisions.
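To make this concrete, here is a minimal sketch in Python of such an activation cascade: two small layers of artificial neurons passing signals forward. The weights, inputs and sigmoid activation here are illustrative assumptions, not taken from any real trained system.

```python
import numpy as np

# A toy two-layer "cascade": input signals activate a hidden layer of
# neurons, whose activations in turn drive an output layer.
# All weights and inputs are made-up illustrative values.

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, then squashes the result
    # through a sigmoid to get an activation between 0 and 1.
    return 1.0 / (1.0 + np.exp(-(weights @ inputs + biases)))

rng = np.random.default_rng(42)
x = np.array([0.5, -1.0, 0.25])        # three input signals

w_hidden = rng.normal(0, 0.5, (4, 3))  # connections from inputs to 4 hidden neurons
w_output = rng.normal(0, 0.5, (2, 4))  # connections from hidden to 2 output neurons

hidden = layer(x, w_hidden, np.zeros(4))
output = layer(hidden, w_output, np.zeros(2))
print(output)  # two activations, readable as e.g. class scores
```

It is the values in `w_hidden` and `w_output`, not the neurons themselves, that determine what the network computes.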

Most of the history of machine learning has been about finding ever more sophisticated ways to form and update these connections between artificial neurons.

While the foundational idea of linking together systems of nodes to store and process information came from biology, the mathematics used to form and update these links came from physics.

Networks that can remember

John Hopfield (born 1933) is a US theoretical physicist who has made important contributions to the field of biological physics over his career. However, the physics Nobel prize was awarded for his work developing Hopfield networks in 1982.

Hopfield networks were one of the earliest kinds of artificial neural networks. Inspired by principles from neurobiology and molecular physics, these systems demonstrated for the first time how a computer could use a "network" of nodes to remember and recall information.

The networks Hopfield developed could memorise data (such as a collection of black-and-white images). These images could then be "recalled" by association when the network was prompted with a similar image.
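The mechanism can be sketched in a few lines of code. Below is a toy Hopfield network, with made-up six-pixel "patterns" standing in for Hopfield's black-and-white images: it stores two patterns using a Hebbian rule, then recalls one of them from a corrupted prompt. The patterns and sizes are illustrative assumptions, not Hopfield's original setup.

```python
import numpy as np

# Store two tiny +1/-1 patterns, then recall one from a noisy version.

patterns = np.array([
    [1, -1, 1, -1, 1, -1],
    [1, 1, 1, -1, -1, -1],
])

# Hebbian rule: strengthen the connection between neurons that are
# active together in the stored patterns. No self-connections.
n = patterns.shape[1]
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    # Repeatedly update each neuron to match the sign of its weighted
    # input; the state settles into the nearest stored pattern.
    state = state.copy()
    for _ in range(steps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([1, -1, -1, -1, 1, -1])  # first pattern with one bit flipped
print(recall(noisy))                      # recovers [1, -1, 1, -1, 1, -1]
```

This is "recall by association": the network is not searching a database, but settling into the stored state closest to its prompt.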

Although of limited practical use, Hopfield networks demonstrated that this type of artificial neural network could store and retrieve data in new ways. They laid the foundation for later work by Hinton.

Machines that can learn

Geoffrey Hinton (born 1947), sometimes called one of the "godfathers of AI", is a British-Canadian computer scientist who has made a number of important contributions to the field. In 2018, along with Yoshua Bengio and Yann LeCun, he was awarded the Turing Award (the highest honour in computer science) for his efforts to advance machine learning generally, and specifically a branch of it called deep learning.

The Nobel Prize in Physics, however, is specifically for his work with Terrence Sejnowski and other colleagues in 1984, developing Boltzmann machines.

These are an extension of the Hopfield network that demonstrated the idea of machine learning: a system that lets a computer learn not from a programmer, but from examples of data. Drawing on ideas about energy dynamics from statistical physics, Hinton showed how this early generative model could learn to store data over time by being shown examples of things to remember.
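The physics connection can be sketched directly in code. The toy example below shows only the statistical-physics side of the idea: every network state is assigned an "energy", and neurons switch on and off stochastically according to the Boltzmann distribution, so the network spends most of its time in low-energy states (the ones that encode remembered data). The weights and network size are illustrative assumptions, and the learning rule Hinton and colleagues developed for adjusting the weights from examples is omitted for brevity.

```python
import numpy as np

# Energy-based dynamics of a tiny Boltzmann-style network of binary neurons.

rng = np.random.default_rng(0)
n = 5
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2           # symmetric connections
np.fill_diagonal(W, 0.0)    # no self-connections
b = np.zeros(n)             # biases

def energy(s):
    # Energy of a binary state s (entries 0 or 1), as in statistical physics.
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(s, T=1.0):
    # Each neuron turns on with a probability given by the Boltzmann
    # distribution, so lower-energy configurations become more likely.
    for i in range(n):
        gap = W[i] @ s + b[i]   # energy saved if neuron i switches on
        p_on = 1.0 / (1.0 + np.exp(-gap / T))
        s[i] = 1 if rng.random() < p_on else 0
    return s

s = rng.integers(0, 2, n)
for _ in range(100):
    s = gibbs_step(s)
print(s, energy(s))  # the chain tends to settle in low-energy states
```

Learning, in this picture, means reshaping the energy landscape so that the training examples sit in the valleys.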

The Boltzmann machine, like the Hopfield network before it, did not have immediate practical applications. However, a modified form (called the restricted Boltzmann machine) was useful in some applied problems.

More important was the conceptual breakthrough that an artificial neural network could learn from data. Hinton continued to develop this idea. He later published influential papers on backpropagation (the learning process used in modern machine learning systems) and convolutional neural networks (the main type of neural network used today for AI systems that work with image and video data).
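As a rough illustration of what backpropagation does, here is the single-neuron case in Python: compute a prediction, measure the error, and use the chain rule to nudge each weight downhill. Full backpropagation applies the same rule layer by layer through a deep network; the data, learning rate and iteration count here are made-up assumptions.

```python
import numpy as np

# Backpropagation in miniature: one sigmoid neuron trained by gradient
# descent on a toy task (predict whether the first input is positive).

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))          # 8 toy examples, 3 inputs each
y = (X[:, 0] > 0).astype(float)      # toy target labels
w, b = np.zeros(3), 0.0

def forward(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(2000):
    pred = forward(X, w, b)                       # forward pass
    # Backward pass: the chain rule turns the prediction error into a
    # gradient for every weight, and we step downhill.
    delta = (pred - y) * pred * (1 - pred) / len(y)
    w -= 1.0 * (X.T @ delta)
    b -= 1.0 * delta.sum()

print(np.round(forward(X, w, b)))  # should match y once training converges
```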

Why this prize, now?

Hopfield networks and Boltzmann machines seem whimsical compared to today's feats of AI. Hopfield's network contained only 30 neurons (he tried to make one with 100 nodes, but it was too much for the computing resources of the time), whereas modern systems such as ChatGPT can have millions. However, today's Nobel prize underscores just how important these early contributions were to the field.

While recent rapid progress in AI - familiar to most of us from generative AI systems such as ChatGPT - might seem like vindication for the early proponents of neural networks, Hinton at least has expressed concern. In 2023, after quitting a decade-long stint at Google's AI branch, he said he was scared by the rate of development and joined the growing throng of voices calling for more proactive AI regulation.

After receiving the Nobel prize, Hinton said AI will be "like the Industrial Revolution but instead of our physical capabilities, it's going to exceed our intellectual capabilities". He also said he still worries that the consequences of his work might be "systems that are more intelligent than us that might eventually take control".


Aaron J. Snoswell received funding from OpenAI in 2024.
