Hopkins neurologist designed artificial intelligence to work like a human brain

Karl Hille, The Baltimore Sun


BALTIMORE — Artificial intelligence systems designed to physically imitate natural brains can simulate human brain activity before being trained, according to new research from Johns Hopkins University.

“The work that we’re doing brings AI closer to human thinking,” said Mick Bonner, who teaches cognitive science at Hopkins. “What I would like to do is to create an AI system that learns to adapt to the world in ways that are similar to how the human brain does.”

His team published their work in November in Nature Machine Intelligence. Their research was supported by a JHU Catalyst award.

Inspired loosely by the human brain, neural networks are composed of multiple layers of millions of simple processing nodes, according to a Massachusetts Institute of Technology explainer page. These nodes, like neurons, are densely interconnected with nodes in the layers above and below them. Data passes upward from layer to layer: each node weights and combines the inputs it receives from the layer below according to the network’s training, and the transformed result eventually reaches the output layer.
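To make that description concrete, here is a minimal sketch of such a layered network in Python with NumPy. The layer sizes, random weights and ReLU activation are illustrative choices, not details from the Hopkins model or the MIT explainer.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A toy feedforward network: each layer's nodes weight the outputs of the
# layer below and pass the result upward.
rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]          # input, two hidden layers, output
weights = [rng.normal(0, 0.05, (m, n))    # dense connections between layers
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass data through the layers; each node applies its weights."""
    for w in weights:
        x = relu(x @ w)
    return x

x = rng.normal(size=(1, 784))             # e.g., a flattened 28x28 image
print(forward(x).shape)                   # -> (1, 10), the output layer
```

The point of the sketch is simply the data flow: each layer’s output becomes the next layer’s input, and training would normally adjust the weights.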

These models, Bonner said, require data centers that consume as many resources as small cities. To become effective at picking out and imitating patterns, the networks must be trained on massive amounts of categorized data before they can correctly identify new, unseen data.

“The amount of energy consumption for any type of tasks that AI does is much larger than the brain uses for the same type of task,” he said. “We learn from very little. That’s what makes human intelligence very different from artificial intelligence.”

Bonner’s neural network incorporates a very broad base layer with many more neurons than any of the layers it feeds. By traditional AI design wisdom, he said, such a broad base layer shouldn’t provide any additional benefit.

“But it does,” he said. “It gives you a huge boost in performance, especially in the ability to predict human thinking. Each neuron only sees one part of an image, like a single pixel, and it seems like just this constraint alone already breaks down images in a very useful way. Each additional layer can extract increasingly complex features useful for classifying images and segmenting images into similar groupings for object recognition or other purposes.”
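A loose sketch of that constraint follows, assuming a base layer in which every unit is wired to a single pixel and the layer is many times wider than the one it feeds. The pixel counts, expansion factor and random, untrained weights are assumptions for illustration, not the architecture published in Nature Machine Intelligence.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels = 32 * 32         # a small grayscale image, flattened
expansion = 16              # base units per pixel: the base layer is much wider
n_upper = 256               # the smaller layer the base layer feeds

# Each base unit is connected to exactly one pixel (one weight per unit),
# so the base layer has n_pixels * expansion units -- far more than n_upper.
base_weights = rng.normal(0, 1.0, (n_pixels, expansion))

# The upper layer mixes all base units with dense, random (untrained) weights.
upper_weights = rng.normal(0, 1.0 / np.sqrt(n_pixels * expansion),
                           (n_pixels * expansion, n_upper))

def forward(image_flat):
    # Base layer: pixel i drives only its own `expansion` units.
    base = np.maximum(0.0, image_flat[:, None] * base_weights)  # (n_pixels, expansion)
    base = base.reshape(-1)                                     # wide base layer
    # Upper layer: reads the whole wide base layer.
    return np.maximum(0.0, base @ upper_weights)                # (n_upper,)

img = rng.random(n_pixels)
print(forward(img).shape)   # -> (256,)
```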

Bonner’s team first built dozens of unique neural network architectures, then tested them, untrained, with images of objects, people and animals. They compared the results to the brain activity of humans and primates viewing the same images. Their untrained model performed as well as conventional AI systems that had been exposed to millions or billions of images during training.
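The article does not spell out how the models’ responses were compared with brain activity. One common approach in this field is representational similarity analysis, sketched below with randomly generated stand-in data; the array shapes and the use of Spearman correlation are assumptions for illustration rather than the team’s actual analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Hypothetical stand-ins: responses to the same 50 images from an untrained
# network (features) and from recorded brain activity (voxels/neurons).
n_images = 50
model_acts = rng.normal(size=(n_images, 512))    # untrained network features
brain_acts = rng.normal(size=(n_images, 100))    # measured neural responses

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between images."""
    return 1.0 - np.corrcoef(responses)

# Correlate the upper triangles of the two dissimilarity matrices,
# a standard way to score how similarly two systems represent the images.
iu = np.triu_indices(n_images, k=1)
score, _ = spearmanr(rdm(model_acts)[iu], rdm(brain_acts)[iu])
print(f"model-brain representational similarity: {score:.3f}")
```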

“Through evolution, our brains formed to have built-in structures and learning algorithms that are very efficient,” Bonner explained. “It starts with a wiring diagram that’s encoded to maximize efficiency. Before any learning happens, it’s already breaking up information in very useful ways.”

Next, they plan to develop and refine training algorithms modeled after biology to continue improving their deep learning framework.

--------------


©2025 The Baltimore Sun. Visit at baltimoresun.com. Distributed by Tribune Content Agency, LLC.