Google’s DeepMind is using AI to explore dopamine’s role in learning

Skychain Official Channel
4 min read · May 16, 2018


Deep learning algorithms can outperform humans in a number of areas, from classifying images and reading lips to diagnosing diabetes. But despite such superhuman levels of proficiency, they learn far more slowly than we do: the best machine learning algorithms need hundreds of hours to master classic video games that the average person picks up in just a few.

The gap might have something to do with dopamine, according to research published by DeepMind in the journal Nature Neuroscience.

Recently, AI systems have mastered a range of video games, such as the Atari classics Breakout and Pong. But as impressive as this performance is, AI still relies on the equivalent of thousands of hours of gameplay to reach and surpass the level of human players. In contrast, we can usually grasp the basics of a video game we have never played before in a matter of minutes.

Matt Botvinick — DeepMind and Gatsby Computational Neuroscience Unit, UCL, London, U.K.

Meta-learning, or the process of learning quickly from examples and deriving general rules from those examples over time, is thought to be one of the reasons humans acquire new knowledge more efficiently than their computer counterparts. But the mechanisms behind meta-learning are not yet well understood.

In an attempt to shed light on this process, researchers at DeepMind in London modeled the brain's dopamine-driven learning system using a recurrent neural network.

A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. (Wikipedia.org)
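To make that "internal state" concrete, here is a minimal illustrative sketch of a single vanilla RNN step in Python with NumPy (a toy example for this article, not DeepMind's model): the new hidden state is computed from both the current input and the previous hidden state, which is what gives the network its memory.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state depends on the current input AND the previous
    # hidden state: this carried-over state is the network's "memory".
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for t in range(seq_len):
    x_t = rng.normal(size=input_dim)   # stand-in for one input frame
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (8,) -- a state that summarizes the whole sequence
```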

The network's reward prediction error (the signal that optimizes the algorithm over time through trial and error) stood in for dopamine, the brain chemical that affects emotions, movements, and sensations of pain and pleasure, and is thought to play a key role in learning.
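In reinforcement-learning terms, a reward prediction error is usually written as the temporal-difference error δ = r + γV(s') − V(s): positive when an outcome is better than predicted, negative when it is worse. Here is a minimal sketch of that signal (DeepMind's actual training objective is more involved, but the core teaching signal has this shape):

```python
def reward_prediction_error(reward, value_now, value_next, gamma=0.9):
    # Temporal-difference error: positive when the outcome is better
    # than predicted, negative when worse. Dopamine firing in the brain
    # is widely thought to track a signal of this shape.
    return reward + gamma * value_next - value_now

# Example: a reward of 1.0 arrives where only 0.4 was expected.
delta = reward_prediction_error(reward=1.0, value_now=0.4, value_next=0.0)
print(delta)  # 0.6 -> "better than expected", so predictions get nudged up
```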

The researchers set the system loose on six neuroscientific meta-learning experiments, comparing its performance to that of animals that had been subjected to the same tests. One of the tests, known as the Harlow Experiment, tasked the algorithm with choosing between two randomly selected images, one of which was associated with a reward. In the original experiment, the subjects (a group of monkeys) quickly learned a strategy for picking objects: choose an object at random the first time, then pick the reward-associated object every time thereafter.
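The structure of that strategy is simple enough to simulate. Below is a hypothetical, heavily simplified Harlow-style episode in Python: two novel objects, one of which is always rewarded, and an agent that has already "meta-learned" the rule, guessing once and then exploiting whatever that single outcome reveals.

```python
import random

def harlow_episode(num_trials=6):
    # Two novel "objects" per episode; one is always rewarded.
    rewarded = random.choice(["A", "B"])
    belief = None   # which object the agent believes is rewarded
    total = 0
    for t in range(num_trials):
        choice = belief if belief else random.choice(["A", "B"])
        r = 1 if choice == rewarded else 0
        # One-shot inference: a single outcome reveals the answer.
        belief = choice if r == 1 else ("B" if choice == "A" else "A")
        total += r
    return total

# Trial 1 is a coin flip; trials 2-6 are always correct, so the
# average score over many episodes approaches 5.5 out of 6.
print(sum(harlow_episode() for _ in range(10_000)) / 10_000)
```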

(Figure: DeepMind’s neural network shifts its gaze toward the reward-associated image.)

The algorithm, tested on a virtual computer screen, performed much like the animals, making reward-associated choices among new images it had never seen before. Moreover, the researchers noted, the learning took place within the recurrent neural network itself, supporting the theory that dopamine plays a key role in meta-learning.

The AI system behaved the same way even when the weights (the strength of the connections between network nodes, akin to the amount of influence one firing neuron in the brain has on another) were frozen. In animals, dopamine is believed to reinforce behaviors by strengthening synaptic links in the prefrontal cortex. But the consistency of the neural network's behavior suggests that dopamine also conveys and encodes information about tasks and rule structures, according to the researchers.
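As a rough illustration of what "frozen weights" means, here is a hypothetical PyTorch snippet (an assumption-laden sketch, not DeepMind's code). Once the parameters stop updating, any adaptation that still occurs has to live in the network's activations rather than in its weights:

```python
import torch
import torch.nn as nn

# Hypothetical setup, not DeepMind's actual model: a small RNN whose
# parameters are frozen so no further "synaptic" weight changes occur.
net = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
for p in net.parameters():
    p.requires_grad = False   # freeze all weights

x = torch.randn(1, 5, 4)       # one batch: a sequence of 5 inputs
h0 = torch.zeros(1, 1, 8)      # initial hidden state
out, h_n = net(x, h0)

# Even with frozen weights, the hidden state still changes from input
# to input: any adaptation now lives in the activations, the network
# analogue of dopamine carrying task information rather than rewiring.
print(h_n.shape)  # torch.Size([1, 1, 8])
```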

Neuroscientists have long observed similar patterns of neural activations in the prefrontal cortex, which adapts quickly and flexibly, but they have struggled to find an adequate explanation for why that is, the DeepMind team wrote in a blog post.

The idea that the prefrontal cortex isn’t relying on slow synaptic weight changes to learn rule structures, but is using abstract model-based information directly encoded in dopamine, offers a more satisfactory reason for its versatility.

The idea that AI systems mimic human biology isn’t new, of course. A study conducted by researchers at Radboud University in the Netherlands found that recurrent neural networks can predict how the human brain processes sensory information, particularly visual stimuli. But for the most part, those discoveries have informed machine learning rather than neuroscientific research.

The dopamine study, the paper’s authors wrote, shows that medicine has as much to gain from neural network research as computer science does.

Unfortunately, big-picture changes usually come more slowly in medicine, due to entrenched hospitals and insurers and the diligence required when dealing with a person's well-being.

This work was completed by Jane X. Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Demis Hassabis and Matthew Botvinick.

Download the Nature Neuroscience paper here.

Download the original paper here.

This article was written using information from VentureBeat, DeepMind, Hightech.fm, and Wikipedia.org.

Join Skychain on social media: Twitter, Facebook, Telegram

Egor Chertov, Skychain team

