Technology has begun to play a far more significant role in our lives than earlier thought leaders might have imagined. The courts are struggling to keep up with the extension of physical behaviors and attitudes to the realm of the digital. This is, at some level, a manifestation of the need for our administrative and bureaucratic systems to reflect the reality of the lives they were established to help manage. But beneath that surface, the growing capabilities that technology has given machines are raising questions about who should be judging whom.
Driverless cars, unmanned drone aircraft, interconnected home electronics, algorithmic management of investments, education and health care: all raise issues about how to define what behaviors are appropriate and who should decide. Even more troubling - if only because of their seeming impossibility - are concerns about whether devices can or should be able to make moral decisions. Whose morality, exactly, and by whose leave?
In giving so much power to the machines that make our lives more convenient, we have inadvertently - or perhaps indifferently - ceded authority to them. The question now is whether they can or will listen - and whether, to the extent that we differ, humans will have the power to assert their views. To say nothing of what power and whose views. JL
Anjana Ahuja comments in the Financial Times:
It will be an interesting year for the X-47B. The new unmanned aircraft, developed by Northrop Grumman, will be put through its paces on a US warship to check it can do all the things existing aircraft can: take off and land safely, maintain a holding pattern, and “clear the deck” for the next aircraft in just 90 seconds.
This uber-drone is perhaps the most advanced autonomous machine in existence, possibly outshining Google’s driverless car. Both examples of artificial intelligence will have spurred signatories to the open letter on AI just published by the Future of Life Institute, a collective of influential thinkers on the existential risks to humanity.

The letter is timely, given that the technology is outpacing society’s ability to deal with it. Who, for example, is liable if a driverless car crashes? This is unclear, even though four US states have given the legal go-ahead for testing on public roads. The UK is likely to grant similar approval this year. Legal clarity is necessary for consumer confidence, on which a future industry depends.

And what if a driverless car, in order to avoid a potentially fatal collision, has to mount the pavement? Should it be installed with ethics software so that, given the choice between mowing down an adult or a child, it opts for the adult?
These are the longer-term, more challenging questions posed by AI, and society, rather than Silicon Valley investors, should dictate how quickly they are answered. If we are to give robots morality, then whose morals should be burnt into the machines? What about sabotage?
The idea of a moral machine fascinates because in an age where machines can already do much of what humans can — drive, fly aircraft, run, recognise images, process speech and even translate — there are still capacities, such as moral reasoning, that elude them.
Indeed, as automation increases, that omission might itself be immoral. For example, if drones become our battlefield emissaries, they may have to make decisions that human operators cannot anticipate or code for. And those decisions might be “moral” ones, such as which one of two lives to save. Scientists at Bristol Robotics Laboratory showed last year that a robot trained to save a person (in reality another robot) from falling down a hole was perfectly able to save one but struggled when faced with two people heading holewards. It sometimes dithered so long that it saved neither. Surely a robot programmed to save one life is better than a perplexed robot that can save none?
So artificial morality seems a natural, if controversial, next step. In fact, five universities, including Tufts and Yale, are researching whether robots can be taught right from wrong. But this is happening in a regulatory vacuum. Ryan Calo, law professor at the University of Washington, has proposed a Federal Robotics Commission to oversee developments.

As the industry flourishes, we must have some means of holding researchers — many of them in rich, private corporations — to account. But any scrutiny should also challenge our assumptions about the superiority of human agency. Google Chauffeur might not instinctively avoid a pedestrian but it will not fall asleep at the wheel. A robot soldier, equipped with a moral code but devoid of emotion, will never pull the trigger in fear, anger or panic. A more automated world might, in a strange way, be a more humane one.
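The hole-rescue dilemma Ahuja describes is easy to picture in code. The sketch below is not the Bristol lab's software: the positions, speed and deadline are invented, and walking towards the hole is reduced to a simple countdown. It is only meant to show why a rescuer that keeps re-deciding between two equally urgent people can shuttle back and forth and reach neither, while one that commits to a single target saves at least that one.

# A deliberately simplified sketch of the dithering-rescuer problem described
# above. Nothing here comes from the Bristol Robotics Laboratory experiment:
# the positions, speed and deadline are invented, and "walking towards the
# hole" is reduced to a shared countdown.

REACH = 0.5  # the robot counts as having saved a person once it is this close


def simulate(choose_target, deadline=8, robot_speed=1.0):
    """Run one rescue episode and return the set of people saved in time."""
    robot = 0.0                      # robot starts midway between the two
    people = {"A": -5.0, "B": 5.0}   # positions on a line, equally far away
    saved = set()

    for t in range(deadline):        # after `deadline` steps both have fallen
        target = choose_target(t, people)
        step = max(-robot_speed, min(robot_speed, people[target] - robot))
        robot += step                # move towards the currently chosen person
        if abs(robot - people[target]) <= REACH:
            saved.add(target)        # reached in time: this person is safe
            del people[target]
            if not people:
                break
    return saved


def committed(t, people):
    """Commit to A and only consider B once A is safe."""
    return "A" if "A" in people else "B"


def dithering(t, people):
    """With two equally urgent people the robot never settles: it switches
    target every step and shuttles back and forth in the middle."""
    names = sorted(people)
    return names[t % len(names)]


print("committed :", simulate(committed))   # {'A'}: one person is saved
print("dithering :", simulate(dithering))   # set(): the robot saves no one

Run as written, the committed policy reaches A with time to spare but cannot then cross to B before the deadline, while the dithering policy oscillates around the midpoint and saves no one, which is exactly the trade-off Ahuja's question points at.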
2 comments:
Of course machines work better than human beings, as they have greater efficiency, power and energy.
Well yes, but can they make moral judgments within the context of the culture in which they operate? And how to proceed in a global economy where culture and interpretation can be diametrically opposed?