Cade Metz reports in Wired:
In a “generative adversarial network,” one AI creates realistic images, while a second AI analyzes the results and tries to determine whether the images are real or fake. Because the second AI is working so hard to identify images as fake, the first learns to mimic the real in ways it couldn’t on its own. In the process, these two neural networks can push AI toward a day when computers declare independence from their human teachers.
The day Richard Feynman died, the blackboard in his classroom read: “What I cannot create, I do not understand.”
When Ian Goodfellow explains the research he’s doing at Google Brain, the central artificial intelligence lab at the internet’s most powerful company, he points to this aphorism from the iconic physicist, Caltech professor, and best-selling author. But Goodfellow isn’t referring to himself—or any other human being inside Google. He’s talking about the machines: “What an AI cannot create, it does not understand.”
Goodfellow is among the world’s most important AI researchers, and after a brief stint at OpenAI—the Google Brain competitor bootstrapped by Elon Musk and Sam Altman—he has returned to Google, building a new research group that explores “generative models.” These are systems that create photos, sounds, and other representations of the real world. Nodding to Feynman, Goodfellow describes this effort as an important path to all sorts of artificial intelligence.
“If an AI can imagine the world in realistic detail—learn how to imagine realistic images and realistic sounds—this encourages the AI to learn about the structure of the world that actually exists,” he explains. “It can help the AI understand the images that it sees or sounds that it hears.”
In 2014, while still a PhD student at the University of Montreal, Goodfellow dreamed up an AI technique called “generative adversarial networks,” or GANs, after a slightly drunken argument at a bar. But however beer-soaked its origins, it’s a wonderfully elegant idea: One AI works to create, say, realistic images, while a second AI analyzes the results and tries to determine whether the images are real or fake. “You can think of this like an artist and an art critic,” Goodfellow says. “The generative model wants to fool the art critic—trick the art critic into thinking the images it generates are real.” Because the second AI is working so hard to identify images as fake, the first learns to mimic the real in ways it couldn’t on its own. In the process, these two neural networks can push AI toward a day when computers declare independence from their human teachers.
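In the paper that introduced the idea, this contest is written down as a two-player minimax game: the critic D pushes a shared value function up by judging correctly, while the artist G pushes it down by fooling the critic. In the paper's notation, with z the random noise fed to the generator and D(x) the critic's estimate of the probability that x is real:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]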
Yann LeCun, who oversees AI research at Facebook, has called GANs “the coolest idea in deep learning in the last 20 years.” Deep learning is the breed of AI that’s shifting the direction of all the internet’s biggest companies, including Google, Microsoft, and Amazon, as well as Facebook. Goodfellow’s ideas are still very much under development, but they’ve rapidly spread across the AI community. Many researchers, including LeCun, believe they can lead to “unsupervised learning,” an enormous aspiration in the field of AI research: machines learning without direct help from humans.
Getting It Right
The idea came to Goodfellow at a Montreal bar called Les 3 Brasseurs, or The 3 Brewers. His friend Razvan Pascanu, now a researcher at DeepMind, Google’s other AI lab, had finished his PhD, and many other friends had gathered to see him off. One of them was describing a new research project, an effort to mathematically determine everything that goes into a photograph. The idea was to then feed these statistics into a machine so that it could create photographs on its own. A bit tipsy, Goodfellow said that this would never work—that there were too many statistics to consider, that no one could possibly record them all. But in the moment, he decided there was a better way: Neural networks could teach the machine how to build realistic photos.

A neural network is a complex mathematical system that learns tasks by analyzing vast amounts of data, from recognizing faces in photos to understanding spoken words. Standing there in the bar, Goodfellow decided that while one neural network learned to build realistic photos, a second could play the adversary, trying to determine whether these images were fake and, in essence, feeding its judgments into the first. In this way, he said, it could eventually teach the first neural network to generate fake images indistinguishable from the real thing.
An argument ensued. Goodfellow’s friends were just as adamant that this method wouldn’t work, either. So when he got home that night, he built the thing. “I went home still a little bit drunk. And my girlfriend had already gone to sleep. And I was sitting there thinking: ‘My friends at the bar are wrong!’” he remembers. “I stayed up and coded GANs on my laptop.” The way he tells it, the code worked on the first try. “That was really, really lucky,” he says, “because if it hadn’t worked, I might have given up on the idea.”
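The article doesn't reproduce that night's code, but a minimal sketch of the same two-network loop is short. The version below is an illustrative PyTorch toy, not Goodfellow's original: the "artist" learns to mimic a simple 1-D Gaussian, and the "critic" learns to spot its fakes. The architectures, batch sizes, and learning rates here are arbitrary assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

# The "artist": turns random noise into a fake 1-D data point.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# The "art critic": scores how likely a data point is to be real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label = torch.ones(64, 1)
fake_label = torch.zeros(64, 1)

for step in range(5000):
    real = 4.0 + 1.25 * torch.randn(64, 1)   # "real" data: N(4, 1.25)
    fake = G(torch.randn(64, 8))             # the artist's attempts

    # Train the critic to call real samples "real" and fakes "fake".
    opt_d.zero_grad()
    d_loss = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    d_loss.backward()
    opt_d.step()

    # Train the artist to make the critic call its fakes "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), real_label)
    g_loss.backward()
    opt_g.step()

# If training worked, generated samples should cluster near the real mean, 4.0.
print(G(torch.randn(1000, 8)).mean().item())

The two optimizers alternate: the critic trains on a batch of real samples and detached fakes, then the artist trains against the critic's fresh judgment, which is exactly the feedback loop from the bar.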
He and a few other researchers published a paper describing the idea later that year. In the three years since, hundreds of other papers have explored the concept. In that first paper, the two neural networks worked to produce a system that could generate realistic images of handwritten digits. Now, researchers are applying the idea to photos of everything from cats to volcanoes to entire galaxies. It has even assisted with astronomy experiments and helped simulate particle physics.
But this is still a very difficult thing to pull off. It involves training not just one neural network, but two at the same time. At Google, as he builds a new group focused on GANs and related research, Goodfellow hopes to refine the process. “The main thing that I, as a machine learning researcher, have to contend with is coming up with a way to make them very reliable to train,” he says.
The end result: Services that are far better at not just generating images and sounds but recognizing images and sounds, a path to systems that can learn more with less help from humans. “The models learn to understand the structure of the world,” Goodfellow says. “And that can help systems learn without being explicitly told as much.”
GANs could even deliver unsupervised learning, something that doesn’t really exist today. Currently, a neural network can learn to recognize cats by analyzing several million cat photos, but humans must carefully identify those images and label them as cat photos. People are still very much in the mix, and that’s often a problem, whether the issue is bias or the sheer scale of human labor needed to train an AI. Researchers like LeCun are pushing toward systems that can learn without such heavy human involvement, something that could accelerate the evolution of AI.

But that’s just the start. GANs bring so many other possibilities as well. David Kale, an AI researcher at the University of Southern California, believes that the idea could help him and his fellow researchers build healthcare AI without infringing on patient privacy. Basically, GANs could produce fake healthcare records. Machine learning systems could then train on these fakes rather than the real thing. “Instead of dumping patient records on the internet for everyone to play with, why don’t we train GANs on that data and create an entirely synthetic dataset and make that available to researchers?” Kale says. “And why don’t we do it in a way so that any model trained on that dataset will be indistinguishable from one trained on the original data?”
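A sketch of the pattern Kale describes might look like the following. Everything here is hypothetical: the record format, the field count, and the toy labels are made up, and the generator stands in for one already trained on real patient data.

import torch
import torch.nn as nn

NOISE_DIM, RECORD_DIM = 8, 5   # five numeric fields per record (made up)

# Stand-in for a GAN generator already trained on real patient records,
# exactly as in the sketch above; here its weights are untrained.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(),
                          nn.Linear(32, RECORD_DIM))

# Sample an entirely synthetic dataset. The real records never leave
# the hospital; only the generator's output is shared.
with torch.no_grad():
    synthetic_records = generator(torch.randn(10_000, NOISE_DIM))

# Downstream researchers train an ordinary model on the fakes alone.
labels = (synthetic_records[:, 0] > 0).float().unsqueeze(1)  # toy labels
classifier = nn.Sequential(nn.Linear(RECORD_DIM, 16), nn.ReLU(),
                           nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
bce = nn.BCELoss()
for _ in range(200):
    opt.zero_grad()
    loss = bce(classifier(synthetic_records), labels)
    loss.backward()
    opt.step()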
Though many researchers are exploring the ideas behind GANs, it’s telling that Goodfellow is intent on building his group at Google in particular. He was one of the researchers who left Google for OpenAI, a lab that promised to openly share its research with the world at large. But less than a year later, he returned to Google, because that’s where all his collaborators were. “It’s not fun to spend your whole day in video calls,” he says. “It’s not the best way to get things done.”
Sharing is important. But so too is close collaboration—whether you’re an AI researcher or a neural network.