People, it seems, dislike being watched, especially by large beeping technological devices that evince no appreciation for the special attention to which every human believes they are entitled, attention that explains, rewards, or forgives whatever behavior they exhibit. Research suggests that simply putting what appear to be eyes or other human features on a robot may increase positive feelings towards it, and presumably reduce attacks.
Aaron Krumins reports in ExtremeTech:
Human behavior is closely tied to the feeling of being observed. A 300-pound security robot (pictured above) found itself on the receiving end of a drunken attack by an irate Californian. The man launched himself headlong into the droid, temporarily putting it out of commission. The company making these robots has reported a string of similar incidents.
Amid all the fulminating about an approaching robot apocalypse, a strange and disturbing counter-trend is taking place: the rise of violence against robots. While fears of robots often turn upon suppositions that are still largely unproven, such as whether robots will achieve super-intelligence or come to dominate the workplace, violence against robots is fast accruing a solid and tangible corpus of evidence. At least three important questions emerge from considering it: is it a real trend, is it worth caring about, and what, if anything, should be done about it?
The trend
Robots have long been a common feature of the manufacturing world, where they can be found tirelessly churning out Tesla’s latest supercar, assembling Samsung’s refrigerators, or laboring over a whole galaxy of other commodities. Only more recently have robots found themselves performing jobs in the public eye — jobs like delivering pizza and patrolling malls. Perhaps because of that exposure, it’s reasonable to expect a certain amount of vandalism. Even so, the manner and substance of these attacks is already taking a disturbing turn.
Last week, on a spring day in Mountain View, California, a 300-pound security robot (pictured above) found itself on the receiving end of a drunken attack by an irate Californian. During the incident, the man launched himself headlong into the droid, taking it to the ground and temporarily putting it out of commission. The company making these security robots, Knightscope, has reported a string of similar incidents. While some, like an attempt by a throng of juveniles to spray-paint the droid, seem fairly innocuous, other incidents portend something more ominous.
Another example: In 2014, two Canadian researchers undertook a social experiment to better understand how robots would integrate in society. They sent an innocuous-looking robot on a hitchhiking mission, traveling across Canada and the United States. The happy droid was able to traverse the Canadian frontier unmolested, but upon reaching the United States, it met a gruesome and untimely end. Found mangled, with its arms torn off, the hitchhiking bot seems to have served its purpose — revealing the dark underbelly of human-robot relations.
Does it matter?
To some, this spate of violence against robots may seem unspectacular, if not a little comic. Is there really cause for concern? Despite the bad press they have received at the hands of luminaries like Elon Musk and, most recently, Alibaba CEO Jack Ma, robots and AI are likely humanity’s best hope for overcoming many of the ills currently bedeviling society. In a recent phone interview, Martial Hebert, director of the Robotics Institute at Carnegie Mellon University, lamented the lack of attention paid to the ways robots stand to improve society.
Take, for instance, world hunger. Automated farming technology and smarter algorithms can drastically increase crop yields in developing countries, and John Deere’s autonomous tractor is already making inroads in this direction. There’s also the glaring lack of competent medical coverage in certain parts of the world. The advent of machine-learning algorithms for diagnosing disease could have a major impact by providing a cheap source of medical expertise to people with little or no access to health care.
Closer to home, the looming elder-care crisis in places like the United States and Japan has created a demand for algorithms and robots to mind our seniors. Already, K4Connect’s data analytics and automation software is enabling senior living communities to monitor residents and tailor services to them, while Toyota’s HSR robot can perform simple manual tasks like opening curtains and bringing water to a bedridden person.
So wherever one stands on a potential robot apocalypse, at least in the near term it’s likely robots will play a key role in solving many of society’s problems. How humans respond and react to their robotic assistants will be a question of increasing importance. In that light, the recent uptick in violence against robots is more than a little unsettling. If people choose to rail against robots rather than embrace them (not literally, though that is certainly a possibility given the advent of sexbots), the transition to a smart, AI-driven society will likely be a tumultuous one.
Can it be changed?
If we accept the premise that robotic technology and AI should be cautiously embraced, albeit with stringent standards to ensure safety and privacy, then what can be done to stem the tide of violence against our mechanical cousins? Education is an obvious answer, and important steps are already being taken in this direction. If you have just a few minutes, Deeplearning.TV offers a number of simple explanatory videos that can give almost anyone a grasp of emerging trends in AI, and ExtremeTech’s own writers have authored several excellent expositions on the topic. Given the wealth of information on the internet, and the widespread open-sourcing of the relevant technologies, it’s a lack of motivation rather than a lack of teaching material that will keep newcomers in the dark.
But apart from education, several other measures may prove useful in saving robots from misguided human aggression. Much has been made of the so-called uncanny valley, the idea that when robots too closely resemble the human form, the eeriness factor goes through the roof. Yet there might also be benefits to fashioning robots in our likeness. None of the robots that suffered recent attacks possessed anything closely resembling human features, not even a pair of stylized eyes. And why should they? While the Knightscope security droid does carry cameras, in fact a small arsenal of them, there’s no obvious reason these cameras should take the form of the mammalian eye.
However, much recent research suggests human behavior, especially the kind we find exemplary, is closely tied to the feeling of being observed by our fellow humans. We may think of morality as something that transcends biology, something existing in the domain of pure reason or religious ethics. But the weight of evidence suggests that the impulse towards charity and altruism is, in fact, an artifact of our biology.
A study published in Biology Letters demonstrated that merely placing a picture of human eyes on a donation box in a university coffee room nearly tripled the amount people paid, compared with control images. The sight of another pair of human eyes is often enough to stimulate charitable behavior. And most people, that is, people who are not clinical psychopaths, have a deeply instinctual aversion to injuring other humans. This instinct is why we flinch at the sight of another person slamming their finger in a door. It is not some kind of learned kindness; it’s biology. It also explains why most soldiers have a deep aversion to firing their weapons at other humans, despite being trained to do so. Simply putting a pair of human-like eyes on a robot, or giving it a more human shape, could thereby reduce the impulse towards robot-bashing.
Equipping robots with a more human form may conjure up scenes from The Terminator. But the irony is that these measures, far from being a harbinger of robot domination, could be exactly the design choices needed to protect robots from misguided human aggression.