A Blog by Jonathan Low

 

Jul 14, 2015

Artificial Intelligence? What If the Real Threat Technology Poses Is Artificial Stupidity?

Much of the current literature and commentary focuses on the threat to humans of robots run amok with the intent to kill.

The real danger, as the following article explains, is that they will do more harm through their limitations than through their putative capabilities. JL

Quentin Hardy comments in the New York Times:

The real worry is a computer program rapidly overdoing a single task, with no context. In other words, something really dumb happens, at a global scale. The misunderstanding is thinking that there is only a threat if there is consciousness.
In October, Elon Musk called artificial intelligence “our greatest existential threat,” and equated making machines that think with “summoning the demon.” In December, Stephen Hawking said “full artificial intelligence could spell the end of the human race.” And this year, Bill Gates said he was “concerned about super intelligence,” which he appeared to think was just a few decades away.
But if the human race is at peril from killer robots, the problem is probably not artificial intelligence. It is more likely to be artificial stupidity. The difference between those two ideas says much about how we think about computers.
In the kind of artificial intelligence, or A.I., that most people seem to worry about, computers decide people are a bad idea, so they kill them. That is undeniably bad for the human race, but it is a potentially smart move by the computers.
But the real worry, specialists in the field say, is a computer program rapidly overdoing a single task, with no context. A machine that makes paper clips proceeds unfettered, one example goes, and becomes so proficient that overnight we are drowning in paper clips.
In other words, something really dumb happens, at a global scale. As for those “Terminator” robots you tend to see in scary news stories about an A.I. apocalypse, forget it.
“What you should fear is a computer that is competent in one very narrow area, to a bad degree,” said Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and the president of the Future of Life Institute, a group dedicated to limiting the risks from A.I.
In late June, when a worker in Germany was killed by an assembly line robot, Mr. Tegmark said, “it was an example of a machine being stupid, not doing something mean but treating a person like a piece of metal.”
His institute recently disbursed much of the $10 million that Mr. Musk, the founder of Tesla and SpaceX, gave it to fund research into ways of keeping autonomous programs from going rogue. Yet even Mr. Musk, along with other luminaries in science and tech, like Mr. Hawking and Mr. Gates, seems to be focused on the wrong potential threat.
There is little sense among practitioners in the field of artificial intelligence that machines are anywhere close to acquiring the kind of consciousness where they could form lethal opinions about their makers.
“These doomsday scenarios confuse the science with remote philosophical problems about the mind and consciousness,” Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a nonprofit that explores artificial intelligence, said. “If more people learned how to write software, they’d see how literal-minded these overgrown pencils we call computers actually are.”
What accounts for the confusion? One big reason is the way computer scientists work. “The term ‘A.I.’ came about in the 1950s, when people thought machines that think were around the corner,” Mr. Etzioni said. “Now we’re stuck with it.”
It is still a hallmark of the business. Google’s advanced A.I. work is at a company it acquired called DeepMind. A pioneering company in the field was called Thinking Machines. Researchers are pursuing something called Deep Learning, another suggestion that we are birthing intelligence.
Deep Learning relies on a hierarchical reasoning technique called neural networks, suggesting the neurons of a brain. Comparing a node in a neural network to a neuron, though, is at best like comparing a toaster to the space shuttle.
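To make that concrete, here is a minimal sketch in Python of what a single node in such a network computes, with the weights, bias and inputs invented purely for illustration: a weighted sum of its inputs pushed through a simple nonlinearity, and nothing more.

    # Minimal sketch of one "node" in a neural network: a weighted sum of
    # inputs passed through a simple nonlinearity. The weights, bias and
    # inputs below are arbitrary values chosen only for illustration.
    def node(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return max(0.0, total)  # ReLU: negative sums become zero

    # Three input values, three "learned" weights, one bias.
    print(node([0.5, -1.2, 3.0], [0.8, 0.1, 0.4], 0.2))

However many such nodes are stacked into layers, each one is doing this same arithmetic, which is why comparing one to a biological neuron flatters the node considerably.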
In fairness, the kind of work DeepMind is doing, along with much other work in the burgeoning field of machine learning, does involve spotting patterns, suggesting actions and making predictions. That is akin to the mental stuff people do.
It is among the most exciting fields in tech. There is a pattern-finding race among Amazon, Facebook and Google. Companies including Uber and General Electric are staking much of their future on machine learning.
But machine learning is automation, a better version of what computers have always done. The “learning” is not stored and generalized in the ways that make people smart.
DeepMind made a program that mastered simple video games, but it never took the learning from one game into another. The 22 layers of the neural net it climbs to figure out what is in a picture do not operate much like human image recognition, and the system is still easily defeated.
Moving out of that stupidity to a broader humanlike capability is called “transfer learning.” It is at best in the research phase.
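As a hedged illustration of why that transfer is hard, and not anything DeepMind actually runs: imagine a program whose "knowledge" of a game is stored as scores attached to the exact situations it has seen there. Those scores mean nothing for situations from a different game, so the program starts over from zero.

    # Hedged sketch, not DeepMind's code: a program's "knowledge" of one game
    # stored as scores for the specific situations it has encountered there.
    # The situation names are invented for this example.
    breakout_scores = {
        "ball_above_paddle": 1.0,
        "ball_past_paddle": -1.0,
    }

    def best_guess(situation, scores):
        # The program can only look up situations it has already seen.
        return scores.get(situation)  # unknown situation: no opinion at all

    print(best_guess("ball_above_paddle", breakout_scores))   # 1.0
    print(best_guess("pac_man_near_ghost", breakout_scores))  # None: nothing carries over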
“People in A.I. know that a chess-playing computer still doesn’t yearn to capture a queen,” said Stuart Russell, a professor of computer science at the University of California, Berkeley. He is also on the Future of Life Institute’s board and is a recipient of some of Mr. Musk’s grant money. He seeks mathematical ways to ensure dumb programs don’t conflict with our complex human values.
“What the paper clip program lacks is a background value structure,” he said. “The misunderstanding is thinking that there is only a threat if there is consciousness.”
