
Hopfield and Hinton: How has AI changed our world?


If your jaw dropped while watching the latest AI-generated video, if a fraud detection system saved your bank balance from criminals, or if dictating a text message on the run made your day a little easier, then you have many scientists, mathematicians, and engineers to thank.

But two names stand out for their fundamental contributions to the deep learning technologies that make these experiences possible: Princeton University physicist John Hopfield and University of Toronto computer scientist Geoffrey Hinton.

The two researchers won the Nobel Prize in Physics on October 8, 2024, for their pioneering research in the field of artificial neural networks.

Although artificial neural networks are modeled after biological neural networks, both researchers' work drew on statistical physics, hence the prize in physics.

Anders Irbaeck speaks to the media during the presentation of the 2024 Nobel Prize in Physics on October 8, 2024 in Stockholm, Sweden. (Jonathan Nackstrand/Getty Images)

How neurons calculate

Artificial neural networks originated from the study of biological neurons in living brains. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts proposed a simple model of how a neuron works.

In the McCulloch-Pitts model, a neuron is connected to its neighboring neurons and can receive signals from them. It can then combine those signals to send a signal to other neurons.

But there is a twist: signals from different neighbors can be weighted differently. Imagine you're trying to decide whether to buy the latest best-selling phone. You talk to your friends and ask for their recommendations.

A simple strategy is to collect all of your friends' recommendations and go with whatever the majority says. For example, say you ask three friends, Alice, Bob, and Charlie, and they answer yes, yes, and no, respectively. With two votes for and one against, you decide to buy the phone.

However, some friends may be more trustworthy because they have in-depth knowledge of technical gadgets, so you might decide to give their recommendations more weight.

For example, if Charlie is very knowledgeable, you might count his objection three times; now the decision is not to buy the phone (two votes for, three against).

Conversely, if there is a friend whose judgment on technical matters you completely distrust, you might give that friend a negative weight, so that their yea counts as a nay and their nay counts as a yea.
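To make this concrete, here is a minimal sketch in Python of the weighted vote just described. (The yes/no coding and the specific weights are this article's illustration, not anything from McCulloch and Pitts' original paper.)

```python
# A McCulloch-Pitts-style neuron: weight each incoming signal,
# sum the results, and fire (output 1) only if the total clears a threshold.
def neuron(signals, weights, threshold=0):
    total = sum(s * w for s, w in zip(signals, weights))
    return 1 if total > threshold else 0

# Yes = +1, no = -1. Alice and Bob say yes, Charlie says no.
votes = [1, 1, -1]

# Equal weights: two yeas beat one nay, so you buy the phone.
print(neuron(votes, [1, 1, 1]))   # -> 1 (buy)

# Charlie's opinion counts three times: two yeas versus three nays.
print(neuron(votes, [1, 1, 3]))   # -> 0 (don't buy)

# A distrusted friend gets a negative weight, flipping their vote.
print(neuron(votes, [1, 1, -1]))  # -> 1 (Charlie's nay counts as a yea)
```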

Once you have made your own decision about whether the new phone is a good choice, other friends can in turn ask you for your recommendation.

Similarly, in both artificial and biological neural networks, neurons aggregate the signals from their neighbors and send a signal on to other neurons.

This capability leads to an important distinction: does the network contain cycles? For example, if I ask Alice, Bob, and Charlie today, and tomorrow Alice asks me for my recommendation, then there is a cycle: from Alice to me, and from me back to Alice.

In a recurrent neural network, neurons communicate back and forth rather than in just one direction.
(Jawash/Wikimedia, CC BY-SA)

If the connections between neurons contain no cycles, computer scientists call the network a feedforward neural network. The neurons in a feedforward network can be arranged in multiple layers.

The first layer consists of inputs. The second layer receives signals from the first layer. The last layer represents the output of the network.

If the network does contain cycles, however, computer scientists call it a recurrent neural network, and the arrangement of its neurons can be more complex than in a feedforward network.
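As a rough sketch of the layered arrangement, assuming the same weighted-vote neuron as above (all the weights here are invented for illustration):

```python
# One layer of a feedforward network: each row of weights defines a neuron
# that takes a weighted vote over the outputs of the previous layer.
def layer(inputs, weight_rows, threshold=0):
    return [1 if sum(i * w for i, w in zip(inputs, row)) > threshold else 0
            for row in weight_rows]

inputs = [1, 0, 1]                   # first layer: the inputs
hidden = layer(inputs, [[1, -1, 1],  # second layer: two neurons
                        [-1, 1, 1]])
output = layer(hidden, [[1, 1]])     # last layer: the network's output
print(output)                        # -> [1]; signals flow one way, no cycles
```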

Hopfield networks

The initial inspiration for artificial neural networks came from biology, but advances soon began to come from other fields as well, including logic, mathematics, and physics.

Physicist John Hopfield used ideas from physics to study a particular type of recurrent neural network, now called the Hopfield network. In particular, he studied its dynamics: what happens to the network over time.

Such dynamics are also important when information spreads through social networks. We have all seen memes go viral and echo chambers form on online social networks. These are collective phenomena that ultimately arise from simple exchanges of information between people in the network.

Hopfield was a pioneer in using models from physics, especially those developed to study magnetism, to understand the dynamics of recurrent neural networks. He also showed that their dynamics can give such neural networks a form of memory.
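Here is a minimal sketch of that memory effect, using the textbook formulation of a Hopfield network with plus-or-minus-one neurons and Hebbian weights; the stored pattern is invented for illustration.

```python
import numpy as np

# Store one pattern with the Hebbian rule: the weight between neurons i and j
# is pattern[i] * pattern[j], with no self-connections.
pattern = np.array([1, -1, 1, -1, 1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# Start from a corrupted copy of the pattern (the last neuron is flipped).
state = np.array([1, -1, 1, -1, -1])

# Run the dynamics: every neuron repeatedly takes a weighted vote of its
# neighbors. The state settles back into the stored pattern -- the "memory."
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(state)  # -> [ 1 -1  1 -1  1], the stored pattern recovered
```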

Boltzmann machines and backpropagation

In the 1980s, Geoffrey Hinton, computational neurobiologist Terrence Sejnowski, and others extended Hopfield's ideas to create a new class of models called Boltzmann machines, named after the 19th-century physicist Ludwig Boltzmann.

As the name suggests, the design of these models is rooted in the statistical physics pioneered by Boltzmann.

Unlike Hopfield networks, which can store patterns and correct errors in them like a spell checker, Boltzmann machines can generate new patterns, planting the seeds of the modern generative AI revolution.
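A rough sketch of the generative idea, assuming the standard energy-based formulation (the weights below are invented for illustration): the machine assigns every state an energy and samples states with probability proportional to exp(-energy), so the low-energy patterns its weights favor are generated most often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric weights with zero diagonal define the energy E(s) = -0.5 * s^T W s.
# Units 0 and 1 are coupled to agree; unit 2 is coupled to oppose them.
W = np.array([[ 0.0,  2.0, -1.0],
              [ 2.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

state = rng.choice([-1, 1], size=3)

# Gibbs sampling: repeatedly pick a unit and switch it on with a probability
# that depends on the input it receives from its neighbors.
for _ in range(1000):
    i = rng.integers(3)
    field = W[i] @ state                 # weighted input from the other units
    p_on = 1 / (1 + np.exp(-2 * field))  # P(state[i] = +1 | rest)
    state[i] = 1 if rng.random() < p_on else -1

print(state)  # a generated sample: units 0 and 1 usually agree, unit 2 opposes
```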

Hinton was also part of another innovation of the 1980s: backpropagation. To make artificial neural networks perform interesting tasks, we need to somehow choose appropriate weights for the connections between their neurons.

Backpropagation is a key algorithm that makes it possible to select those weights based on the network's performance on a training dataset. Even so, training artificial neural networks consisting of many layers remained a challenge.
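A minimal sketch of the idea in plain NumPy, on a tiny two-layer network (the XOR task and every number here are invented for illustration; much deeper networks were far harder to train this way, which is the challenge just mentioned):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a task no single neuron can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through each layer to get
    # the gradient of the squared error with respect to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge each weight a small step downhill along its gradient.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```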

In the 2000s, Hinton and his colleagues cleverly used Boltzmann machines to train multilayer networks: first pre-train the network layer by layer, and then apply a fine-tuning algorithm on top of the pre-trained network to further adjust the weights.

Multilayer networks were renamed deep networks, and the deep learning revolution began.


A computer scientist explains machine learning to children, high school students, college students, graduate students, and fellow professionals.

AI pays physics back

The Nobel Prize in Physics demonstrates how ideas from physics have contributed to the development of deep learning. Now deep learning is beginning to give back to physics by accurately and quickly simulating systems ranging from molecules and materials to the entire Earth’s climate.

By awarding the Nobel Prize in Physics to Hopfield and Hinton, the prize committee expressed its hope in humanity's potential to use these advances to promote human well-being and build a sustainable world.

Ambuj Tewari, Professor of Statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.
