Izabella Kaminska comments in the Financial Times:
Do the interests of an elite club of Silicon Valley billionaires really align with the interests of humanity?
Cynics might suggest this is mostly an attempt to stop the advantages of artificial intelligence being monopolised by a single corporation or person.

“Silicon Valley in move to keep AI safe”, Financial Times, December 12

Is artificial intelligence really something we should be worrying about?
Have you heard the old joke?

No — what?
That scientists have been claiming artificial intelligence is 40 years away for about 40 years already.

Right. So why all the fuss?
High-profile entrepreneurs and scientists have been sounding the alarm about the existential risk artificial intelligence poses to humanity. Elon Musk, the crusader behind electric-car company Tesla, tweeted last year that artificial intelligence was “potentially more dangerous than nukes”. And the physicist Stephen Hawking has warned that the “development of full artificial intelligence could spell the end of the human race”.

What’s the sudden risk factor?
Advances in deep learning systems — a family of machine-learning techniques — have enabled computers to take on tasks that previously only humans could perform. These breakthroughs relate to the ability to detect nuanced differences in the world, such as recognising visual patterns or sounds, and then to respond fluidly to them. In many cases this has created a convincing impression of conscious action. And because these systems look and feel intelligent — even though there is no real intelligence underpinning any of it — people are growing antsy.
So are these breakthroughs being misrepresented?
Computer scientists will tell you that the machine-learning approaches being used by the likes of Google’s DeepMind are not that new. What is new is the processing speed and the size of the data sets being crunched by the algorithms. Mr Musk and Prof Hawking are among those who believe that an autonomous “superintelligence” program could sneak up on us, since such a system might learn from itself at an exponential rate.

But is it fair to make such an assumption?
Scientists such as Simon Stringer at the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence are sceptical of the idea that exponential growth in computing power, combined with machine-learning strategies, will lead to the sort of artificial intelligence that could pose a threat to humans. These models have many applications — the ability to trade on markets or to navigate self-driving cars — but they don’t solve the problem of replicating consciousness, and all of them depend on humans for their intellect. A truly conscious system would have to understand the sensory world in the same way the human brain does. Mr Stringer’s work is focused on achieving this, but he doesn’t believe it will result in a superintelligence. He’s aiming for the intelligence of a rat.
Isn’t it wise to hedge the risk just in case?
Mr Musk and Silicon Valley financier Peter Thiel certainly think so, which is why they and others have announced a $1bn project to advance digital intelligence in the way that is most likely to benefit humanity as a whole. Their project, OpenAI, will be a non-profit movement that conducts open-source artificial intelligence research and shares its findings with everyone. Other funders include Amazon Web Services and Infosys, as well as individual investors such as Sam Altman, president of Y Combinator, a California-based start-up incubator.

Do the interests of an elite club of Silicon Valley billionaires really align with the interests of humanity?
Cynics might suggest this is mostly an attempt to stop the advantages of artificial intelligence being monopolised by a single corporation or person. A plan to freely share artificial intelligence technology isn’t the same as stopping the knowledge of how to create an artificial intelligence system from being used by a bad actor, or as preventing an autonomous artificial intelligence from annihilating humankind. At best it allows for the simultaneous emergence of many types of artificial intelligence, with the good ones that like humans protecting us from the bad ones that don’t.
But what if they all collude against mankind?
Exactly. But that’s an inconvenient question for Silicon Valley billionaires.
How come?
If artificial intelligence really poses an existential risk to humanity, the best thing would be to suspend all development. But think what a global moratorium on artificial intelligence development — the technology world’s equivalent of the Paris climate deal — would do to technology companies’ valuations.
They’d become the new fossil fuel companies. So no big surprise if they form a lobby group to promote responsible development.