Dave Gershgorn reports in Quartz:
Right now it’s easiest to think about an artificial intelligence algorithm as a specific tool, like a hammer. A hammer is really good at hitting things, but when you need a saw to cut something in half, it’s back to the toolbox. Need a face recognized? Train a facial recognition algorithm, but don’t ask it to recognize cows.
Alphabet’s AI research arm, DeepMind, is trying to change that idea with a new algorithm that can learn more than one skill. Algorithms that can learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or even let algorithms apply existing knowledge to new, complex problems. The research, published in Proceedings of the National Academy of Sciences this week, is preliminary, as it only tests the algorithm on playing different Atari games, but it shows that multi-purpose algorithms are actually possible.
The problem DeepMind’s research tackles is called “catastrophic forgetting,” the company writes. If you train an algorithm to recognize faces and then try to train it again to recognize cows, it will forget faces to make room for all the cow knowledge. Modern artificial neural networks use millions of mathematical equations to calculate patterns in data, which could be the pixels that make up a face or the series of words that make up a sentence. These equations are connected in various ways, and some are so interdependent that the network begins to fail when they are even slightly tweaked for a different task. DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less important ones be overwritten.
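In code, the idea amounts to adding a penalty during training on the new task that anchors the parameters that mattered most for the old one. The sketch below is a minimal illustration in PyTorch, not DeepMind’s implementation: it uses the common empirical-Fisher approximation to score parameter importance, and names like fisher_diagonal and ewc_penalty are illustrative, not from the paper.

```python
# Minimal sketch of an elastic-weight-consolidation-style penalty in PyTorch.
# Assumptions: `model` is any nn.Module classifier; `data_loader` yields
# (inputs, targets) batches from the OLD task; function names are our own.
import torch
import torch.nn.functional as F


def fisher_diagonal(model, data_loader):
    """Estimate each parameter's importance to the old task via the
    diagonal of the (empirical) Fisher information: the mean squared
    gradient of the loss with respect to that parameter."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for inputs, targets in data_loader:
        model.zero_grad()
        loss = F.cross_entropy(model(inputs), targets)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    for n in fisher:
        fisher[n] /= len(data_loader)  # average over batches
    return fisher


def ewc_penalty(model, fisher, old_params, strength=1000.0):
    """Quadratic penalty that pulls important parameters back toward the
    values they held after training on the old task. Parameters with a
    large Fisher value are strongly protected; unimportant ones are
    nearly free to change."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return (strength / 2.0) * penalty


# During training on the NEW task, the total loss would be:
#   loss = new_task_loss + ewc_penalty(model, fisher, old_params)
# where `old_params` is a snapshot {n: p.detach().clone()} taken right
# after finishing the old task.
```

The design choice worth noting is that importance is estimated per parameter, so the network never sets aside whole regions for each task; protection is graded, which is what lets the less important “equations” be overwritten.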
The DeepMind paper borrows this idea from research on the mammalian brain, but it hasn’t quite mimicked the brain’s results. The authors concede that when tested on Atari games, one neural network that learns a variety of games doesn’t perform as well as neural networks trained specifically on each game. Further work is needed on deciding which information is important and which isn’t, but DeepMind considers this a large first step in tackling the larger problem.