Google’s DeepMind is using AI to explore dopamine’s role in learning

Recently, AI systems have mastered a range of video games, such as the Atari classics Breakout and Pong. But as impressive as this performance is, AI still relies on the equivalent of thousands of hours of gameplay to reach and surpass the performance of human players. In contrast, we can usually grasp the basics of a video game we have never played before in a matter of minutes.

Matt Botvinick — DeepMind and Gatsby Computational Neuroscience Unit, UCL, London, U.K.
Above: DeepMind’s neural network shifts its gaze toward the reward-associated image.

Neuroscientists have long observed similar patterns of neural activation in the prefrontal cortex, which adapts quickly and flexibly, but have struggled to find an adequate explanation for why that is the case, the DeepMind team wrote in a blog post.

The idea that the prefrontal cortex isn’t relying on slow synaptic weight changes to learn rule structures, but is using abstract model-based information directly encoded in dopamine, offers a more satisfactory reason for its versatility.
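The two-timescale idea above can be illustrated with a toy sketch (this is not DeepMind's model; the bandit task, the `step_size` parameter, and the hill-climbing outer loop are all illustrative assumptions). Within an episode, the agent adapts only through its fast "activity" variables, the value estimates `q`, driven by a dopamine-like prediction error, while its slow parameter stays frozen; across episodes, that slow parameter is tuned gradually:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_episode(step_size, n_trials=200):
    """One episode of a two-armed bandit. The value estimates q are the
    'fast' variables (analogous to recurrent activity): they reset to zero
    each episode and adapt trial by trial, while the 'slow' parameter
    step_size stays fixed for the whole episode."""
    p = rng.permutation([0.2, 0.8])   # arm payoffs reshuffle each episode
    q = np.zeros(2)                   # fast state, reset every episode
    total = 0.0
    for _ in range(n_trials):
        # mostly greedy choice, with occasional random exploration
        a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q))
        r = float(rng.random() < p[a])
        q[a] += step_size * (r - q[a])  # prediction-error update
        total += r
    return total / n_trials            # average reward this episode

# Slow, across-episode learning: crude hill-climbing on step_size,
# analogous to gradual synaptic weight changes in the outer loop.
step_size, best = 0.05, run_episode(0.05)
for _ in range(30):
    cand = float(np.clip(step_size + rng.normal(0, 0.05), 0.01, 1.0))
    score = float(np.mean([run_episode(cand) for _ in range(5)]))
    if score > best:
        step_size, best = cand, score
```

The point of the sketch is the separation of timescales: the agent copes with a freshly shuffled bandit every episode using only its fast state, while the slow parameter changes over many episodes, mirroring the proposed division of labor between prefrontal activity dynamics and slow synaptic learning.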


This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love.
