Thursday, December 16, 2021

Interview: From Neurons to Artificial Visual Systems, with a Dash of Inspired Science Outreach

Article: Seeing Shapes: Understanding brain’s visual system could inform development of better artificial systems
Source: Harvard Medical School
Published: December 7, 2021 

The brain's visual system is central to how we perceive and interpret the world we see. Yet how it gathers and integrates visual information into a cohesive whole remains largely a mystery. Harvard Medicine News interviewed Carlos Ponce, M.D., Ph.D., assistant professor of neurobiology at HMS, about his interest in the visual system and how he uses what he learns to build better computational models. Ponce explains that his research focuses on the brain's ventral stream, the parts of the visual system that analyze and categorize shapes, whether faces, objects, or scenes. He works with macaque monkeys, the experimental animal model whose brain is most similar to our own. Using images as stimuli and electrophysiology recordings, Ponce studies how the responses of neurons approximate our visual perception. As he explains, "The pictures represent a hypothesis, and the neural response is an evaluation of that." However, images selected by human minds cannot fully capture the range of variability and nuance in the visual stimuli the brain encounters in the world; the images we choose as input are limited by our imaginations and biases. This is where advances in computational modeling come in handy. These models not only learn from millions of input pictures but can also generate entirely new images.
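
To make that "picture as hypothesis" logic concrete, here is a toy Python sketch of the basic experimental loop: present a handful of stimuli to a neuron, record its firing rate for each, and rank the stimuli by how strongly they drive the cell. The neuron here is purely simulated, a classic orientation-tuned cell with a made-up Gaussian tuning curve, which is far simpler than the shape-selective ventral-stream neurons Ponce records from, but the show-record-rank logic is the same.

    # Toy illustration: each stimulus is a "hypothesis" about what drives the
    # cell, and the firing rate is the evaluation of that hypothesis.
    # The tuning curve and stimuli below are invented for this demo.
    import numpy as np

    PREFERRED_DEG = 45.0   # the simulated neuron's (unknown-in-practice) preference
    BANDWIDTH_DEG = 20.0

    def firing_rate(orientation_deg):
        """Gaussian tuning curve: peak response at the preferred orientation."""
        d = orientation_deg - PREFERRED_DEG
        return 30.0 * np.exp(-(d ** 2) / (2 * BANDWIDTH_DEG ** 2)) + 2.0  # + baseline rate

    # Present each stimulus, record the response, and rank the stimuli.
    stimuli_deg = [0, 30, 45, 60, 90, 135]
    rates = {s: firing_rate(s) for s in stimuli_deg}
    for s, r in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"grating at {s:3d} deg -> {r:5.1f} spikes/s")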

Image: A computational model linked to visual neurons in the macaque brain incrementally reveals an image of an eye, synthesized by the model from information encoded by visual neurons.
In an earlier paper from 2019, Ponce showed how applying this "cooperation between neurons and machine intelligence" to neurons in a part of the macaque brain that responds to faces produced, starting from initial noise, synthesized images that contained features of a face. "Our discovery was that you can couple computational models to neurons in the macaque brain that are visually responsive, and have the neurons guide the model to create pictures that activate them best," the assistant professor explains. "However, we were puzzled by some of the pictures that were created. Some made a lot of sense, like parts of faces or bodies, but others didn’t look like any one object. Instead, they were patterns that cut across semantic categories...We realized that the neurons in the macaque brain are learning specific motifs that don’t necessarily fit our language. The neurons have a language of their own that is about describing the statistics of the natural world."

In a sequel to that research, published this year, Ponce applied the same approach to both the posterior (upstream) parts of the macaque brain, which process simple objects, and the more anterior (downstream) parts, which process complex shapes. In doing so, he was able to quantify the information coming from these neurons as having an intermediate level of complexity, somewhere between the simplicity of a line and the complexity of a photograph. In other experiments, he found that macaques preferred to look at the parts of pictures that were most similar to the features encoded by their neurons. "That gives us a clue that during development, the brain extracts important patterns from the world and stores those patterns in neurons," he says.
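
The closed loop from the 2019 work, in which neurons steer a generative model toward the pictures that drive them best, can also be caricatured in a few dozen lines of Python. The sketch below is only a stand-in under invented assumptions: the "generator" simply squashes a latent code into a small grayscale image, the "neuron" is simulated as preferring a hidden template, and a simple genetic algorithm plays the role of the optimization that, in the real experiments, is guided by recorded firing rates and a deep generative network.

    # Toy neuron-guided image evolution. Everything here is a stand-in for
    # the real setup: a deep generative network and recorded macaque neurons.
    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM = 16 * 16          # latent code length
    POP_SIZE = 50                 # images "shown" per generation

    def generate_image(code):
        """Stand-in generator: map a latent code to a 16x16 image in [0, 1]."""
        return 1.0 / (1.0 + np.exp(-code.reshape(16, 16)))

    # Hidden "preferred feature" of the simulated neuron (unknown to the loop).
    preferred = generate_image(rng.normal(size=LATENT_DIM))

    def neuron_response(image):
        """Simulated firing rate: higher when the image matches the template."""
        return float(-np.mean((image - preferred) ** 2))

    def evolve(n_generations=200, mutation_scale=0.3):
        codes = rng.normal(size=(POP_SIZE, LATENT_DIM))   # start from noise
        for _ in range(n_generations):
            scores = np.array([neuron_response(generate_image(c)) for c in codes])
            # Keep the quarter of codes whose images the "neuron" liked best...
            parents = codes[np.argsort(scores)[-POP_SIZE // 4:]]
            # ...and refill the population by recombining and mutating them.
            children = []
            for _ in range(POP_SIZE - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                mix = np.where(rng.random(LATENT_DIM) < 0.5, a, b)
                children.append(mix + rng.normal(scale=mutation_scale, size=LATENT_DIM))
            codes = np.vstack([parents] + children)
        best = codes[np.argmax([neuron_response(generate_image(c)) for c in codes])]
        return generate_image(best)

    evolved = evolve()
    print("similarity to the neuron's preferred feature:",
          round(float(np.corrcoef(evolved.ravel(), preferred.ravel())[0, 1]), 3))

Starting from random noise, the population of latent codes drifts toward images resembling whatever the simulated neuron "prefers," the same qualitative behavior described above, where the evolved pictures came to contain face-like features and other motifs.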

When asked what he wants to do next, Ponce answers that he is intrigued by many questions, such as extending his individual-neuron recordings to characterize full populations of neurons, or reconstructing images of what the brain sees from the patterns of activity across those neurons. He is also interested in how clusters of neurons that share a function come to develop where they do in the brain, and he hopes that his approach will help map that topography. Once we can characterize the brain's patterns and networks, he says, we can develop computational models that improve artificial visual systems. Drawing on his own medical training, Ponce connects his research to clinical applications, for example, saving lives through improved screening that "doesn't miss anything." Finally, he points to his own childhood and the scientists who inspired him as the motivation for his science outreach, which aims to steer young students toward careers in science every year.
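
The idea of reconstructing what the brain sees from patterns of neural activity can likewise be illustrated with a generic linear-decoding sketch. This is not the lab's actual method, just a textbook version of the concept under made-up assumptions: a population of neurons is simulated as random linear filters over tiny images, and a ridge-regression decoder is fit to map population responses back to pixels, then tested on held-out images.

    # Generic linear (ridge) decoding demo: responses -> reconstructed pixels.
    # The "brain" here is simulated; no real data are involved.
    import numpy as np

    rng = np.random.default_rng(1)
    N_PIXELS = 8 * 8        # tiny images for the demo
    N_NEURONS = 120         # simulated population size
    N_TRAIN, N_TEST = 500, 5

    # Simulated population: each neuron applies a fixed random filter to the image.
    filters = rng.normal(size=(N_NEURONS, N_PIXELS))

    def population_response(images):
        """Firing rates = filter outputs + measurement noise."""
        return images @ filters.T + 0.1 * rng.normal(size=(len(images), N_NEURONS))

    train_imgs = rng.random((N_TRAIN, N_PIXELS))
    test_imgs = rng.random((N_TEST, N_PIXELS))
    R_train = population_response(train_imgs)
    R_test = population_response(test_imgs)

    # Ridge regression: learn a linear map from responses back to pixels.
    lam = 1.0
    W = np.linalg.solve(R_train.T @ R_train + lam * np.eye(N_NEURONS),
                        R_train.T @ train_imgs)

    reconstructed = R_test @ W
    err = np.mean((reconstructed - test_imgs) ** 2)
    print(f"mean squared reconstruction error on held-out images: {err:.4f}")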


My rating of this study: 🌸🌸🌸🌸🌸

Rose O, Johnson J, Wang B, et al. "Visual prototypes in the ventral stream are attuned to complexity and gaze behavior." Nature Communications 12:6723 (18 November 2021). https://doi.org/10.1038/s41467-021-27027-8
