But what was once conjecture now appears to be fact, at least in certain circumstances. That clarification, however, is significant. Because that's how it starts. Give 'em an inch and they take your job.
But before we give way to the customary paranoia-tinged despair - the technological extrapolation of Wayne's World's 'We're Not Worthy!' - it is perhaps useful to note that we humans are doing the programming that makes this possible. The logical question, then, is at what point research will establish that humans learn best from robots. Now that is going to spark some interesting conversations. JL
Joshua Brustein reports in Business Week:
Earlier this year, a vaguely humanoid robot served juice to a researcher lying on a hospital bed. The robot then uploaded its memory of the experience to a system of cloud servers, essentially a shared global brain. When the next juice-serving robot came along, it had already downloaded the memory and knew where to find the juice and how to get to the bed.
The phenomenon of robots teaching one another is known as transfer learning, and it could prove increasingly useful as more people begin to rely on robots for medical care and other services. A robot facing a row of unfamiliar objects could locate the one it needs, check with the cloud about the best strategy for grasping it, and pick it up even if it hadn’t been trained to do so directly, says Gajan Mohanarajah, who worked with the juice-serving robots while pursuing a Ph.D. at Swiss university ETH Zurich. He’s spent more than four years working to develop the technology as part of RoboEarth, a project undertaken by academics at six European universities and funded by the European Union.
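The upload-then-reuse flow described above can be sketched in a few lines. This is a deliberately toy analogy, not RoboEarth's actual architecture: the "cloud brain" here is just an in-memory store, and every class and method name (`CloudBrain`, `Robot`, `learn_by_doing`, and so on) is hypothetical.

```python
# Toy sketch of transfer learning via a shared "cloud brain".
# All names are hypothetical; RoboEarth's real system is far more involved.

class CloudBrain:
    """A shared memory bank: strategies keyed by (object, task)."""
    def __init__(self):
        self._skills = {}

    def upload(self, key, strategy):
        self._skills[key] = strategy

    def download(self, key):
        return self._skills.get(key)

class Robot:
    def __init__(self, cloud):
        self.cloud = cloud
        self.local_skills = {}

    def learn_by_doing(self, key, strategy):
        # Slow path: acquire the skill locally, then share it.
        self.local_skills[key] = strategy
        self.cloud.upload(key, strategy)

    def perform(self, key):
        # Fast path: check local memory first, then the cloud.
        strategy = self.local_skills.get(key) or self.cloud.download(key)
        if strategy is None:
            raise LookupError(f"no known strategy for {key}")
        return strategy

cloud = CloudBrain()
first = Robot(cloud)
first.learn_by_doing(("juice", "serve"), "approach bed, pour, hand over cup")

second = Robot(cloud)  # never trained on this task
print(second.perform(("juice", "serve")))  # reuses the uploaded skill
```

The point of the sketch is the second robot: it performs the task without ever learning it, because the first robot's experience is already in the shared store.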
The initial RoboEarth project officially ended with the juice experiment, but spinoff groups are working to close the sizable gaps in the development of a multipurpose transfer-learning system. Robots using it would have to be able to communicate in the same language and maintain consistent wireless connectivity. And what if hackers compromise these systems full of patient needs or other sensitive data? “The security aspect has to be dealt with very delicately, in terms of building a public network,” says Mohanarajah, adding that a logical first step would be a series of private networks operated by individual companies or government agencies.
Moritz Tenorth, a researcher at the Institute for Artificial Intelligence at the University of Bremen in Germany, helped develop RoboEarth’s common machine language and is now working on a related project called RoboHow. His goal is to build a system that can take information from the Web, convert it into a format that robots can understand, and upload it to a memory bank in the cloud. The challenge is to include all the steps that a person might not have to think about: Most recipes for pancakes don’t come with instructions on operating a stove or griddle, using a spatula, or identifying the right ingredients.
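The pancake problem above - human instructions omitting steps a robot needs - can be illustrated with a minimal expansion pass. This is a hypothetical sketch, not RoboHow's actual representation: the knowledge base of implicit prerequisites is invented for the example.

```python
# Hypothetical sketch of instruction expansion: human recipes leave
# prerequisites unstated, so a small knowledge base fills them in.

IMPLICIT_STEPS = {  # assumed knowledge base, not RoboHow's real format
    "pour batter": ["locate griddle", "turn on griddle", "pick up bowl"],
    "flip pancake": ["pick up spatula"],
}

def expand(recipe):
    """Insert the prerequisite steps a human recipe would leave unstated."""
    plan = []
    for step in recipe:
        plan.extend(IMPLICIT_STEPS.get(step, []))
        plan.append(step)
    return plan

recipe = ["pour batter", "flip pancake", "serve"]
print(expand(recipe))
# A three-step human recipe becomes a seven-step robot plan.
```

The gap between the three-step recipe and the expanded plan is exactly the knowledge Tenorth's system has to supply.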
If RoboHow succeeds, a robot could eventually be able to access the Internet and find directions for itself. For now, humans are curating the data. “It’s very difficult for a robot to assess the quality of this information. It could be a fictitious text or a piece of malicious code,” says Tenorth. “I’m not sure we want to hook up robots to the open Web right now. I think their heads would explode.”
The field has attracted interest from some big names. Microsoft Research (MSFT) uses transfer learning, mostly experimenting with one-on-one robot teaching. Researchers at Google (GOOG) have been working on cloud-based robotics research for several years. Dystopians will always be reminded of the rise of Skynet, the genocidal artificial-intelligence system from the Terminator movies. Days before Google’s annual developers conference, which ended on June 26, the Verge, a tech news website, ran a fictional account of the company accidentally creating such a machine. Google has acquired a handful of robotics companies in the past year but didn’t talk much about its plans for them at the conference. This didn’t stop a protester from interrupting a presentation to accuse the company of building “robots that kill people.”
Matthew Taylor, an assistant professor of computer science at Washington State University, showed in April that a computer could teach another one how to play Pac-Man just by observing the other computer’s gameplay and offering tips, like a person would. Computers that learned the game from other computers did so more than twice as quickly as they could learn from a human. Over time, they also were able to outperform the computers that taught them. Taylor says he hears the Skynet comparison all the time, but it gives robots too much credit. “It’s a long way off and very unlikely,” he says of a future machine uprising. “But I think it’s good people are asking the question.”
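The tip-giving idea behind Taylor's result can be reduced to a toy example. This is not his Pac-Man experiment - it is a deliberately simplified analogy, with an invented search task standing in for gameplay - but it shows why a limited budget of teacher advice speeds a student up.

```python
# Minimal sketch of action advice (a toy analogy, not Taylor's actual
# experiment): a teacher that already knows the answer spends a limited
# advice budget pointing the student in the right direction.

def solve(target, n_cells, teacher=None, advice_budget=0):
    """Return the number of probes needed to locate `target` in [0, n_cells)."""
    probes = 0
    for cell in range(n_cells):
        if teacher is not None and advice_budget > 0:
            # Teacher spends one piece of advice to suggest the right cell.
            cell = teacher(target)
            advice_budget -= 1
        probes += 1
        if cell == target:
            return probes
    return probes

teacher = lambda target: target  # an already-trained "expert"

unaided = solve(target=7, n_cells=10)                                 # brute-force scan
advised = solve(target=7, n_cells=10, teacher=teacher, advice_budget=1)
print(unaided, advised)  # the advised student needs far fewer probes
```

Even one piece of advice collapses the search; in real action-advice work the interesting questions are when the teacher should spend its budget and how the student generalizes past the advice.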