Shelly Fan reports in Singularity Hub:
DeepMind’s Demis Hassabis once pointed to the human brain as a paramount inspiration for building AI with human-like intelligence. He’s not the only one. The meteoric success of deep learning showcases how insights from neuroscience—memory, learning, decision-making, vision—can be distilled into algorithms that bestow silicon minds with a shadow of our cognitive prowess.
But what about the other way around?
This month, the prestigious journal Nature published an entire series highlighting the symbiotic growth between neuroscience and AI. It’s been a long time coming. At their core, both disciplines are solving the same central problem—intelligence—but coming from different angles, and at different levels of abstraction. In AI, scientists look to crack the mysteries of efficient, effective learning mathematically using the language of machines. In neuroscience, we dissect the ins and outs of the three-pound fatty goop between our ears, trying to understand intelligence by looking at the sole existing proof that the problem is solvable. We’re living proof that AI is possible.
To Dr. Chethan Pandarinath, a biomedical engineer at Georgia Tech in Atlanta who parses brain signals to control machine limbs, the neuro-AI connection is coming full circle.
AI is rapidly gaining ground as an invaluable tool in neuroscience in two main ways. One is technical. Thanks to its ability to churn through extravagant amounts of data to find patterns, AI is increasingly adopted into methods that manage and make sense of brain activity.
The second is perhaps more exciting: as algorithms increasingly evolve brain-like outputs, they become hotbeds to test fundamental, overarching ideas in neuroscience. In some cases, AI abstracts high-level concepts of what we think our brains do, even if how algorithms go about their computation is vastly different from us.
We’ve previously talked a ton about how neuroscience inspires AI. Here are three ways AI is giving back.
1. Wrangling Data
Although mathematical equations are baked into the very beginning of neuroscience, the experimental arm of the field has always relied on observing biological players—receptors, neurotransmitters, signaling molecules—to solve the brain’s mysteries.
Then came the big data era.
Rather than studying individual proteins and brain regions, suddenly neuroscientists had the tools to profile single neurons at their genetic level, or digitally reconstruct massive portions of neural connections. In biochemistry, for example, there’s the rise of “omics,” the brain-wide study of a certain level of biology—genomics, epigenomics, metabolomics, stuff-we’ll-soon-profile-omics. For mapping neural connections, detailing how neurons connect physically is rapidly becoming old-school; the trick now is to further correlate brain atlases to other functional maps such as brain-wide gene expression over increasingly longer timescales. Trying to parse neural signaling—and linking particular activation patterns to sensations, movement, or even specific memories—is occurring at a scale of hundreds of neurons, if not more.
Take brain implants to control robotic arms. Pandarinath, for example, constantly grapples with making sense of signals measured from 200 or so neurons, out of the 10 to 100 million that control arm movements. Here’s where AI comes in: algorithms can help identify underlying structures in the data—even when buried under noise—to extract electrical fingerprints that correlate to specific, minute behaviors. The computer can also figure out how these activation patterns change over time, generating a fine-grained instruction manual for controlling an arm.
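To get a feel for what “finding structure buried under noise” means in practice, here is a minimal Python sketch: synthetic recordings from 200 fake neurons, all driven by a few hidden signals, which plain PCA can recover. Real labs use far more sophisticated tools (Pandarinath’s own LFADS method models the dynamics explicitly), so treat this only as an illustration.

```python
# Minimal sketch: recovering low-dimensional structure from noisy
# multi-neuron recordings with PCA. All data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_neurons, n_timesteps, n_latents = 200, 1000, 3

# Hypothetical ground truth: 3 slow latent signals driving all 200 neurons.
t = np.linspace(0, 10, n_timesteps)
latents = np.stack([np.sin(t), np.cos(0.5 * t), np.sin(2 * t)], axis=1)
mixing = rng.normal(size=(n_latents, n_neurons))

# Observed "recordings": shared latent drive plus heavy per-neuron noise.
recordings = latents @ mixing + rng.normal(scale=2.0, size=(n_timesteps, n_neurons))

# PCA pulls the shared structure back out of the noise.
pca = PCA(n_components=n_latents)
recovered = pca.fit_transform(recordings)
print("variance explained by 3 components:", pca.explained_variance_ratio_.round(3))
```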
Similarly, AI is also massively speeding up research in brain mapping projects and functional brain imaging, which routinely generate terabytes of image files that need to be processed, reconstructed, and annotated. Computer vision is even helping analyze smaller-scale but high-volume images that examine neural death or protein levels. Rather than spending ridiculous man-hours on mindless work, researchers can focus on what’s important—solving the problem—with more accurate results.
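As a toy version of that kind of image analysis, the sketch below (synthetic image, scikit-image library) counts bright cell-like blobs with a threshold plus connected-component labeling; production pipelines are of course far more elaborate.

```python
# Sketch of automated cell counting of the kind used for neural-death
# or protein-level assays. The image here is entirely synthetic.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(1)
img = rng.normal(loc=0.1, scale=0.05, size=(256, 256))  # background noise

# Paint a few bright, roughly cell-sized blobs into the image.
rr, cc = np.ogrid[:256, :256]
for _ in range(20):
    r, c = rng.integers(10, 246, size=2)
    img[(rr - r) ** 2 + (cc - c) ** 2 < 25] += 1.0

# Otsu threshold plus connected components: one label per detected "cell".
mask = img > filters.threshold_otsu(img)
labels = measure.label(mask)
print("cells detected:", labels.max())
```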
As of now, most datasets in neuroscience are in non-standard, proprietary forms buried in local hard drives. As projects like Neurodata Without Borders (NWB), initiated in part by brain-machine interface pioneer Dr. Loren Frank at UCSF, gain steam to standardize data architecture, more data will be uploaded to the cloud with labels that are amenable to machine learning. Translation? There’s going to be a lot more AI-generated brain insights.
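For a flavor of what that standardization looks like in practice, here is a minimal sketch using pynwb, NWB’s reference Python library; all the metadata values below are placeholders.

```python
# Minimal sketch: writing one voltage trace into a standardized NWB file.
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

nwbfile = NWBFile(
    session_description="example session",   # placeholder metadata
    identifier="demo-001",
    session_start_time=datetime.now(timezone.utc),
)

# Store a raw trace with its units and sampling rate attached.
nwbfile.add_acquisition(TimeSeries(
    name="raw_voltage",
    data=np.random.randn(1000),
    unit="volts",
    rate=1000.0,  # Hz
))

with NWBHDF5IO("session.nwb", "w") as io:
    io.write(nwbfile)
```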
2. Solving Senses and Movement
Modern deep learning has its roots in studies examining vision pathways way back in the 1960s. So it’s no surprise that neuroscientists are now using AI to re-examine ideas on how our brains process senses and movements.
A Harvard team led by Dr. Margaret Livingstone, for example, fashioned a biologically based algorithm, XDREAM, to understand the “visual alphabet” of a mysterious type of visual cell. The algorithm eventually cooked up a slew of images that mashed together faces with gratings and abstract shapes into a previously unknown cellular language, giving researchers a solid hypothesis to test further. Another recent study using convolutional neural networks found that, contrary to popular belief, the human visual system naturally embeds information about emotion.
These studies aren’t just theoretical. Understanding how our brains process senses and movement is crucial in making more life-like, mind-controlled prosthetics. Similar to Livingstone, Dr. Daniel Yamins at Stanford University also tackled vision, but with the goal of understanding neural network activity underlying object recognition. He built and trained a deep neural net according to our current understanding of the visual system’s architecture, and found that the network’s neural activity matched biological ones recorded from monkeys as they performed a similar object discrimination task. A few years later, he did the same for the auditory cortex.
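One common recipe for that model-to-brain comparison is representational similarity analysis, which asks whether two systems treat the same set of images as similar or different, regardless of how each one encodes them. Below is a hedged sketch with random stand-in data; Yamins’ actual studies used more elaborate regression-based predictivity metrics.

```python
# Sketch of representational similarity analysis (RSA): compare the
# "geometry" of a network layer and a neural recording. Both data
# matrices here are random stand-ins, not real measurements.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_images = 50

model_acts = rng.normal(size=(n_images, 512))   # network layer activations
neural_resp = rng.normal(size=(n_images, 168))  # recorded neural responses

# Each system's representational geometry: pairwise image dissimilarities.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(neural_resp, metric="correlation")

# Agreement between the geometries, ignoring which units encode what.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA score: {rho:.3f}")
```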
Algorithms that mimic vision or hearing can spur ideas on how the brain solves these tasks. “If you can train a neural network to do it,” said Dr. David Sussillo, a computational neuroscientist at Google Brain, “then perhaps you can understand how that network functions, and then use that to understand the biological data.”
In other words, AI models are acting as virtual brains to guide hypotheses and experiments. Rather than testing hypotheses immediately on animals, AI models can act as a middle stand-in that captures a basic representation of brain activity. These brain simulations allow “dry-runs” to perturb neural activity and observe what happens, without sticking electrodes into people. The idea—though still nascent—is gaining steam and already being commercially pursued.
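A toy version of such a dry run might look like the following: build a small recurrent network, silence (“lesion”) one unit, and measure how the output changes. The weights here are random rather than trained, so only the perturb-and-observe workflow, not the network itself, is meaningful.

```python
# Toy "dry run": lesion one unit of a small recurrent network and
# measure how the readout trajectory changes.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_steps = 50, 200
W = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))
readout = rng.normal(size=n_units)
x0 = rng.normal(size=n_units)  # shared initial state for both runs

def run(lesioned_unit=None):
    """Simulate the network; optionally clamp one unit to zero ('lesion')."""
    x, outputs = x0.copy(), []
    for _ in range(n_steps):
        x = np.tanh(W @ x)
        if lesioned_unit is not None:
            x[lesioned_unit] = 0.0
        outputs.append(readout @ x)
    return np.array(outputs)

intact, lesioned = run(), run(lesioned_unit=7)
print("mean output change from lesioning unit 7:",
      np.abs(intact - lesioned).mean())
```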
The results form a bridge towards linking smart prosthetics with the brain: mind-controlled robotic limbs or exoskeletons, visual or hearing prosthetics that bypass the eyes and ears to directly activate relevant brain regions. Just last week UCSF, with funding from Facebook, unveiled a system that can accurately decode speech by reading brain waves.
3. Cracking Neural Codes
Although much more complicated, the same strategy for solving senses and movement can also help crack more abstract brain functions. For example, mimicking the neural circuits underlying memory in computer chips could potentially off-load memories or other higher cognitive processes onto “memory patches” to be delivered back in old age or after brain damage. DARPA already has such experiments underway. Other experimental brain prostheses include smart implants that monitor neural activity for signs of impending seizures or depressive episodes and deliver a well-timed zap to counteract them before onset.
In both cases, AI is helping neuroscientists crack the so-called neural code—the activation patterns of individual groups of neurons that underlie a thought or behavior. Brain implants have been around for decades, but the latest infusion of AI is making their internal processes, such as identifying electrical spikes of activity, far more effective.
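Spike identification itself can start remarkably simply. The sketch below flags threshold crossings in a synthetic voltage trace, using a robust noise estimate that is a common rule of thumb in the field; real implants add filtering, artifact rejection, and spike sorting on top.

```python
# Sketch of basic spike detection: flag threshold crossings on a
# synthetic extracellular voltage trace.
import numpy as np

rng = np.random.default_rng(4)
fs = 30_000                                   # sampling rate, Hz
trace = rng.normal(scale=10.0, size=fs)       # 1 s of noise, ~10 uV RMS
spike_times = rng.choice(fs, size=30, replace=False)
trace[spike_times] -= 80.0                    # inject downward "spikes"

# Common rule of thumb: threshold at ~4-5x a robust estimate of the
# noise standard deviation (median absolute value / 0.6745).
noise_sd = np.median(np.abs(trace)) / 0.6745
threshold = -4.5 * noise_sd

# A spike is counted where the trace first drops below threshold.
detected = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
print(f"detected {detected.size} threshold crossings")
```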
Finally, one more slightly controversial approach looks to mathematical ideas that drive human-like performance in AI and ask if they’re also present in our brains. Cognitive scientists have long wondered if Bayesian inference, a mathematical way to incorporate evidence into existing ideas, also influences how we perceive the world or make decisions. The observation that AI inspired by the theorem can sometimes mimic human cognition is rekindling that debate.
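The core of that Bayesian idea fits in a few lines. In the classic Gaussian cue-combination model of perception, a prior belief and a noisy observation are merged by weighting each according to its reliability; the numbers below are arbitrary.

```python
# Worked example of Bayesian inference in perception: combine a prior
# belief with a noisy sensory measurement (conjugate Gaussian update).
prior_mean, prior_var = 0.0, 4.0   # prior belief about a stimulus location
obs, obs_var = 2.0, 1.0            # noisy measurement, more reliable here

# The posterior precision-weights both sources of information.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
print(f"posterior: mean={post_mean:.2f}, var={post_var:.2f}")
# The estimate (1.60) sits closer to the more reliable cue, the same
# signature behavior human observers show in perceptual experiments.
```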
Of course, just because AI output resembles the brain doesn’t mean that’s how the brain works. Unlike machine intelligence, our brains are the result of evolutionary pressure. It’s likely that aspects of how we efficiently learn are intimately linked to survival instincts, something that AI models might not be able to capture.
Rather than delivering immediate answers to the brain’s mysteries, AI will more likely provide one or more candidate solutions that neuroscientists can experimentally confirm. Regardless, AI’s impact is already fundamentally changing the neuroscience ecosphere and will only continue to grow.
“I want to be part of that,” said Sussillo.