Mark Bergen reports in Re/code:
Have you ever begun a Google search, only to click on the words the box lays before you? Tagged a friend’s face when Facebook prompted it? Have you spoken to your iPhone?
Well done! You’ve activated Skynet.
Okay, not quite. The artificial intelligence technology behind these tools is neither self-aware nor homicidal. But these tools are driven by a computational technique called machine learning, which is, at its simplest, a way to teach machines to teach themselves. Without the aid of human programming, the AI can process new data.
Within machine learning, there’s a branch called deep learning that makes the process much more powerful. First, it takes the AI a step further: Deep learning trains machines to recognize patterns in the data, then classify and categorize them, all on their very own (so with less engineering labor). And second, it enables the process to digest huge reams of previously unmanageable data.
Get used to the term. As we reported today, deep learning has captured the imagination — and the ample dollars — of Silicon Valley giants, who are now beginning to bake that far-out research into the devices we use every day.

Google was the first to pull deep learning into its research arm, with the Brain team that came out of Google X. The technology now sits behind 100 different teams inside the behemoth.
Machine learning is Google’s lifeblood. It flows behind so much of the company, from search to decisions about its massive data centers to its self-driving cars. The software in the autonomous car (coupled with lasers and cameras) lets the car see other cars on the road, as well as pedestrians, bicyclists and trash cans, and learn what they are. Deep learning accelerates that process.
Google Ventures is backing startups relying on the self-learning AI in new frontiers like video (Clarifai) and satellite imaging (Orbital Insight). And Google even shelled out $400 million for DeepMind, a band of deep-learning wizards whose only output at the time was two research papers (one of them features an algorithm with no prior knowledge teaching itself to beat Atari games).
Now a host of other companies — Facebook, IBM, Amazon, Twitter, Uber, Baidu, even Apple (sources say!) — are racing to catch up to Mountain View, finding ways that deep learning can keep their products innovative and propel their businesses forward.
So what, precisely, is our imminent AI overlord? Allow us.
Dissecting deep learning
Andrew Ng, the Stanford computer scientist behind Google’s deep learning “Brain” team and now Baidu’s chief scientist, boiled down the concept nicely for Re/code’s Kara Swisher last year. Here’s his definition:
It’s a learning technology that works by loosely simulating the brain. Your brain and mine work by having massive amounts of neurons, jam-packed, talking to each other. And deep learning works by having a loose simulation of neurons — hundreds of thousands of millions of neurons — simulated in the computer, talking to each other.

The brain metaphor is imperfect, and it irritates some neural-network experts. But it’s the best we have. In essence, the method behind deep learning is bequeathing machines with layers of artificial neurons that can absorb the world around them, much like we paltry humans did from birth. A toddler meets a puppy, and a parent labels it so; neurons flare. Next time the child sees a dog, it knows that’s a dog. And that’s a dog. That’s a cat.
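To make that layered-neuron picture concrete, here is a minimal sketch in Python: two layers of simulated neurons learning to tell “dog” from “not dog.” Everything in it is invented for illustration (the feature vectors, the labels, the network size); real systems use far larger networks trained on real data.

```python
# A toy version of "layers of neurons talking to each other," in NumPy.
# The labels play the role of the parent naming the puppy.
import numpy as np

rng = np.random.default_rng(0)

# Made-up feature vectors for four animals.
X = np.array([[0.9, 0.8, 0.1, 0.2],   # dog
              [0.8, 0.9, 0.2, 0.1],   # dog
              [0.1, 0.2, 0.9, 0.8],   # cat
              [0.2, 0.1, 0.8, 0.9]])  # cat
y = np.array([[1.0], [1.0], [0.0], [0.0]])  # 1 = dog, 0 = not a dog

# Two layers of artificial neurons with random starting weights.
W1 = rng.normal(size=(4, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each layer's neurons feed the next layer.
    h = sigmoid(X @ W1)   # hidden-layer activations
    p = sigmoid(h @ W2)   # predicted probability of "dog"

    # Backward pass: nudge the weights to shrink the labeling error.
    grad_out = p - y
    grad_W2 = h.T @ grad_out
    grad_W1 = X.T @ ((grad_out @ W2.T) * h * (1 - h))
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

# After training, the network labels the animals correctly (~1, 1, 0, 0).
print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
```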
Like so much tech innovation, the military was a pioneer. Decades ago, governments developed software to let machinery distinguish between a tank and a school bus. An important distinction.
A more benign example: You may have seen Google’s “deep dream” images floating around. Google released them to flaunt the image recognition of its neural nets. Then it open-sourced the tech a week later, granting us a parade of trippy clips, such as one treatment of Terry Gilliam’s nod to Vegas.
Essentially, the engineers ran the deep learning process in reverse, letting the AI go to town on its own projections.
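A rough sketch of that reverse trick, written here in PyTorch (an assumption on my part; Google’s released Deep Dream code used the Caffe framework and a GoogLeNet model, with many refinements): freeze a trained network’s weights and instead adjust the image itself, so that one layer’s neurons fire as strongly as possible.

```python
# Gradient *ascent* on the input image, not the network weights.
# The layer cutoff and step size below are arbitrary illustrative choices.
import torch
from torchvision import models

net = models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in net.parameters():
    p.requires_grad_(False)        # the trained network stays fixed

layers = net[:16]                  # amplify an early-middle layer's activations

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.02)

for step in range(200):
    opt.zero_grad()
    act = img
    for layer in layers:
        act = layer(act)
    loss = -act.pow(2).mean()      # minimizing the negative = maximizing firing
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)           # keep pixels in a displayable range
```

Run repeatedly, feeding each output back in as the next input, and the network’s own projections pile up into the hallucinatory imagery Google showed off.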
Memo Akten, a designer in Turkey, created his own video with the source code and clearly articulated its logic:
This is almost like asking you to draw what you think you see in the clouds, and then asking you to look at your drawing and then draw a new image of what you think you are seeing in your drawing. And repeating this.

But that last sentence was not even fully accurate. It would be accurate if, instead of asking you to draw what you think you saw in the clouds, we scanned your brain, looked at a particular group of neurons which we know responds to a particular pattern, then reconstructed an image based on the firing patterns of those neurons, and gave that image to you to look at. And then we scanned the same neurons again to produce a new image, showed you that, and so on.

It’s not an expert, though. Google’s brain kept trying to attach muscular arms to images of standalone barbells, since those are what are often attached to barbells. And Google had to apologize very publicly when an African-American user of Photos, the new storage app that relies on the algorithmic technique, saw his friend tagged as a gorilla.
Still, the progress the tech has made is enough to get many AI scientists and tech CEOs very giddy indeed.
What does it mean?
Okay, so what does a deep learned machine do?
Deep learning’s particular strength is dealing with what’s called “unsupervised” data. Supervised learning is simpler: Engineers can program a device to talk to you by shoving in inputs about speech, vocal intonations and the like. The trickier part is getting machines to process data at the same rate without this input — i.e., from the mountains of words spoken on YouTube. (Google is working on this, naturally.)
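The split is easy to see side by side. Here is a toy illustration in scikit-learn; the data and models below are invented stand-ins, not anything these companies actually run.

```python
# Supervised vs. unsupervised learning on the same synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of points standing in for two kinds of examples.
a = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
b = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
X = np.vstack([a, b])

# Supervised: we hand the algorithm the answers (labels) up front.
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

# Unsupervised: no labels at all; the algorithm must find the structure itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(clf.predict([[3.1, 2.9]]))              # uses the labels it was given
print(km.predict(np.array([[3.1, 2.9]])))     # uses clusters it discovered
```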
Simply put, this capability lets tech companies do far more with the deep stores of data they already have. Amazon has commerce, Uber and Lyft have transit, Netflix has media consumption. Each company has ambitious recommendation engine machinery. Google uses AI to make search and Android smarter; Facebook uses it to improve social products; Twitter uses it to scrub porn. All of them are vying for new, more efficient (and profitable) ways to sell ads. Google is laying the bricks for using the process on human genetics.
Many of these advances rely on image and speech recognition, the first two prominent appearances of deep learning’s fruits. The next will be around natural-language processing, the umbrella term for using machine smarts to decipher human language, in all its flavors and tongues, spoken and written. Last week, Google announced that it was deploying its artificial neural networks, or its “brain,” to better detect email spam.
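Google hasn’t published the details of that spam model. As a heavily simplified stand-in, the classic bag-of-words approach below (not a neural network) shows the flavor of the task; the five example messages are invented.

```python
# Spam filtering as text classification, sketched with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_msgs = [
    "win a free prize now",
    "limited offer claim your free money",
    "lunch tomorrow at noon?",
    "meeting notes attached",
    "free tickets click now",
]
train_labels = ["spam", "spam", "ham", "ham", "spam"]

# Count word occurrences, then fit a naive Bayes classifier on the counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_msgs, train_labels)

print(model.predict(["claim your free prize"]))       # likely ['spam']
print(model.predict(["are we still on for lunch?"]))  # likely ['ham']
```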
When he spoke to Swisher, Ng showed off Baidu’s gains in the natural-language field: software that instantly rendered the English he spoke into the phone into Mandarin. (It was less effective in noisy places, Ng said.)
One of Google’s untold AI initiatives, judging from a recent research paper, is focused on building a conversational robot that could field IT queries, and metaphysical ones:
“What is the purpose of life?” it is asked. “To live forever.”
Before it can do that, Google and its rivals may need to train their machines to teach themselves lobbying. Facebook’s latest deep learning innovation — the ability to decipher faces when only a fifth of their surface is unveiled — cannot live in France, the native country of its AI head, Yann LeCun.
The European Union has banned it.