Adrienne LaFrance reports in The Atlantic:
What building a robot in a person's image can reveal about identity and humanity. Computer scientists who focus on machine learning have all kinds of examples of how a computer's way of seeing is a surprise to the human who programmed it.
On a hot Saturday evening in early August, an endearing Canadian robotics project came to a grisly end in Philadelphia.
A talking robot, HitchBot, had been decapitated and dismembered, its smooth bucket body separated from blue pool-noodle limbs. The child-sized bot was designed to keep up with humans in simple conversations, automatically take photos, and track its own location via GPS, but it relied on people to physically move across the country. Sporting cheery yellow wellington boots and matching gloves, HitchBot had already hitchhiked across the Netherlands, Canada, and Germany without incident—only to meet a violent end in the United States city known, sometimes ironically, for its brotherly love.
Public outcry was swift. Philadelphians were embarrassed. People were angry. The headlines reflected all this: “Hitchbot is murdered in Philadelphia,” “Innocent Hitchhiking Robot Murdered by America,” “Who Killed Hitchbot?” People weren’t just looking for the vandal; they were looking for a killer. The robot was mourned.
HitchBot’s demise, of course, reveals more about humans than it does about robots. That was the point of the experiment from the start. “It’s a very important question to say, do we trust robots? In science, we sometimes flip around questions and hope to gain new insight,” Frauke Zeller, HitchBot’s co-creator, told The Salem News back in July. So this time the question was: Can robots trust humans? The answer was predictable. Not always.
Humans have grieved robots before, in part because we are hardwired to look for meaning. Robots are an extension of human endeavors, often built to do things we cannot—they crawl Mars for us, handle nuclear waste, and roll along the ocean floors. It occurs to me that I’ve written at least two obituary-like tributes for robots, once when an actual robot died, and once when a robot I thought was a robot turned out to be a human.
The human tendency to blur the line between robots and people isn’t just about seeing robots as humans, generally; it’s about building robots in our likenesses, specifically. Building robot versions of oneself is a thing people do a lot now, in part because there are robots everywhere online. The majority of web traffic is driven by bots, which can send and reply to emails, answer security questions, post comments, tweet, chat, and more. Last year, Twitter estimated that up to 23 million active accounts may be automated bots.

Five years ago, one spambot in particular gained a cult following on Twitter. The user @horse_ebooks appeared to be a bot designed to promote a line of ebooks. Only the bot was slightly broken and apparently abandoned—just falling apart enough that the account still tweeted automatically, but the messages were strange and marvelous phrases scraped from around the web and peppered with bizarre punctuation. In 2013, it was revealed that @horse_ebooks was actually run by humans as a kind of performance art. But before that, the software engineer Jacob Harris built his own version of the bot, a @horse_ebooks-style Twitter account made of material from The New York Times, where he worked at the time.
The idea, basically, is to write code telling a bot to scrape a bunch of language from a desired source—in Harris’s case, mostly quotes from Times articles—then re-order those words to form new, semi-garbled sentences. Harris also made a bot version of himself, again designed to tweet in the disjointed style of @horse_ebooks, by grabbing and reorganizing material from his personal Twitter account. Several of these sorts of bots, or “ebooks accounts,” as they’re informally called, have since appeared on Twitter.
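Neither the article nor Harris publishes the bot's code, but the scrape-and-reshuffle approach described above is essentially a word-level Markov chain: record which words follow which in the source text, then walk those transitions to emit new, semi-garbled sentences. The sketch below is one minimal way that could look; the source_tweets list is a hypothetical stand-in for whatever corpus an ebooks bot actually scrapes.

```python
# Minimal sketch of an "ebooks"-style generator: learn which words follow
# which in a source corpus, then wander that chain to produce new phrases.
import random
from collections import defaultdict

# Hypothetical corpus; a real bot would scrape tweets or article quotes.
source_tweets = [
    "Robots are an extension of human endeavors",
    "The robot was designed to keep up with humans in simple conversations",
    "People were not just looking for a vandal they were looking for a killer",
]

# Map each word to the words observed to follow it.
chain = defaultdict(list)
for line in source_tweets:
    words = line.split()
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)

def generate(max_words=12):
    """Start from a random word and follow the chain until it dead-ends."""
    word = random.choice(list(chain))
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate())
```

Because the chain only remembers one word of context, the output keeps the local texture of the source while losing its overall sense, which is roughly the @horse_ebooks effect these accounts imitate.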
One of those bots, made by a former colleague of mine, Tom Meagher, is based on my Twitter account. (He explains his process, inspired by Harris, here.) Most of what @adriennelaf_ebx tweets is nonsense, but there’s a certain essence about it that’s unmistakably me. Or, at least, the few friends of mine who follow the account have told me that they routinely see a tweet by robot me and mistakenly assume it’s me me. Weirdly, I am delighted by this confusion.
I suspect that delight comes from the notion that, amid the nonsense, there is something familiar. That a tiny flicker of truth or authenticity, a little spotlight on the way a person actually talks, maybe even a glimmer of who a person is, can be reproduced so simply. (That being said, the musings of a person’s robot doppelgänger are about as interesting to other people as the details of an odd dream—which is to say, usually not very.)

There are many other projects that similarly explore the line between human and machine. Patrick Hogan, at the website Fusion, designed a chatbot based on transcripts he’d salvaged from the hard drive of a computer he used to chat online when he was a teenager. He figured out how to chat with his past self based on a robot built from those records. (“Hello, teen version of me,” Hogan wrote. “Stop staring at me,” his teen-bot replied.) Hogan also built a group of bots, based on presidential debate transcripts, that are designed to argue with one another ad infinitum. On Twitter, too, there are bots that chat with one another, interrupt each other, and bots that generate pixelated cats and digital art on demand. There are also bots that randomly tweet made-up fantasy story plots and satirical think-piece headlines.