Noam Cohen reports in The New Yorker:
When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short, simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives. In other words, they are an especially useful tool, considering how politics is played today.
On July 1st, California became the first state in the nation to try to reduce the power of bots by requiring that they reveal their “artificial identity” when they are used to sell a product or influence a voter. Violators could face fines under state statutes related to unfair competition. Just as pharmaceutical companies must disclose that the happy people who say a new drug has miraculously improved their lives are paid actors, bots in California—or rather, the people who deploy them—will have to level with their audience.
“It’s literally taking these high-end technological concepts and bringing them home to basic common-law principles,” Robert Hertzberg, a California state senator who is the author of the bot-disclosure law, told me. “You can’t defraud people. You can’t lie. You can’t cheat them economically. You can’t cheat ’em in elections.”
California’s bot-disclosure law is more than a run-of-the-mill anti-fraud rule. By attempting to regulate a technology that thrives on social networks, the state will be testing society’s resolve to get our (virtual) house in order after more than two decades of a runaway Internet. We are in new terrain, where the microtargeting of audiences on social networks, the perception of false news stories as genuine, and the bot-led amplification of some voices and drowning-out of others have combined to create angry, ill-informed online communities that are suspicious of one another and of the government.
Regulating bots should be low-hanging fruit when it comes to improving the Internet. The California law doesn’t even ban them outright but, rather, insists that they identify themselves in a manner that is “clear, conspicuous, and reasonably designed.”
But the path from bill to law was hardly easy. Initial versions of the legislation were far more sweeping: large platforms would have been required to take down bots that didn’t reveal themselves, and all bots were covered, not just explicitly political or commercial ones. The trade group the Internet Association and the digital-rights group the Electronic Frontier Foundation, among others, mobilized quickly in opposition, and those provisions were dropped from the draft bill.
Opposition to the bot bill came both from the large social-network platforms that profit from an unregulated public square and from adherents to the familiar libertarian ideology of Silicon Valley, which sees the Internet as a reservoir of unfettered individual freedom. Together, they try to block government encroachment. As John Perry Barlow, an early cyberlibertarian and a founder of E.F.F., said to the “Governments of the Industrial World” in his 1996 “Declaration of Independence of Cyberspace”: “You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”
The point where economic self-interest stops and libertarian ideology begins can be hard to identify. Mark Zuckerberg, of Facebook, speaking at the Aspen Ideas Festival last week, appealed to personal freedom to defend his platform’s decision to allow the microtargeting of false, incendiary information. “I do not think we want to go so far towards saying that a private company prevents you from saying something that it thinks is factually incorrect,” he said. “That to me just feels like it’s too far and goes away from the tradition of free expression.”
In Aspen, Zuckerberg was responding to a question about why his platform declined to take down an altered video that was meant to fool viewers into thinking that Nancy Pelosi was slurring her speech. In an interview last year with Recode, he tried to explain why Facebook allows Holocaust deniers to spread false conspiracy theories.
To be clear, Facebook isn’t the government (yet). As a private company, it can and does take down speech it doesn’t like—nude pictures, for example. What Zuckerberg was describing was the kind of political speech he believes the government should protect and the policy he wants Facebook to follow.
The first bots were chatbots, and they couldn’t hide their artificiality. When they were invented, back in the nineteen-sixties, they weren’t capable of manipulating their users. Most bot creators worked in university labs and didn’t conjure these programs to exploit the public. Today’s bots have been designed to achieve specific goals by appearing human and blending into the cacophony of online voices. Many have been commercialized or politicized.
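To see why those early programs couldn’t pass as human for long, consider how they worked. Here is a minimal sketch, not from Cohen’s article, of the keyword-matching approach used by nineteen-sixties chatbots such as Joseph Weizenbaum’s ELIZA; the rules and replies below are invented for illustration.

```python
# Hypothetical ELIZA-style chatbot: match a keyword, return a canned template.
# Invented for illustration; not the actual ELIZA script.
RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you think you feel sad?",
    "bot": "Yes, I am a program, not a person.",
}

def reply(message: str) -> str:
    """Return the canned response for the first rule whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    return "Please, go on."  # default deflection when nothing matches

print(reply("I feel sad about my mother."))  # -> "Tell me more about your family."
```

A handful of canned deflections holds up for a sentence or two, which is why the artificiality of these programs was so easy to spot; today’s bots, by contrast, hide in short, high-volume formats where templated text blends into the cacophony.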
In the 2016 Presidential campaign, bots were created to support both Donald Trump and Hillary Clinton, but pro-Trump bots outnumbered pro-Clinton ones five to one, by one estimate, and many were dispatched by Russian intermediaries. Twitter told a Senate committee that, in the run-up to the 2016 election, fifty thousand bots that it concluded had Russian ties retweeted Trump’s tweets nearly half a million times, which represented 4.25 per cent of all his retweets, roughly ten times the level of Russian bot retweets supporting Clinton.
Bots also gave Trump victories in quick online polls asking who had won a Presidential debate; they disrupted discussions of Trump’s misdeeds or crude statements; and they relentlessly pushed dubious policy proposals through hashtags like #draintheswamp.
They have also aided Trump during his Presidency. Suspected bots created by unidentified users drove an estimated forty to sixty per cent of the Twitter discussion of a “caravan” of Central American migrants headed to the U.S., a story that the President and his supporters pushed prior to the 2018 midterm elections. Trump himself has retweeted accounts that praise him and his Presidency, and which appear to be bots. And last week a suspected bot network was discovered to be smearing Senator Kamala Harris, of California, with a form of “birtherism” after her strong showing in the first round of Democratic-primary debates.
The problem with attempts to regulate bots, the E.F.F. and other critics argue, is that many of us see them only as a destructive tool. They contend that bots are a new medium of self-expression that is in danger of being silenced. In a letter to the California Assembly, the organization argued that the bot-labelling bill was overly broad and “would silence or diminish the very voices it hopes to protect.” The code behind the bots is written by people, they note, and the government should have to meet the toughest standards if it attempts to regulate their speech in any way.
Bots certainly can be beneficial. There are bots created by artists that find interesting patterns in society; bots that inform us about history; bots that ask searching questions about the relationship between people and machines by pretending to be human. “Just because a statement is ultimately ‘made’ by a robot does not mean that it is not the product of human creation,” Madeline Lamo, then a fellow at the University of Washington Tech Policy Lab, and Ryan Calo, a University of Washington law professor, wrote in “Regulating Bot Speech,” a recent article for the UCLA Law Review, which questions the California law.
Jamie Lee Williams, a lawyer at E.F.F. who analyzed the bot law, called the original measure a perilous reduction in free-speech rights. “What scares me a lot,” she said, “is this idea that First Amendment protections are too great and we should whittle it back and relax our standards and allow more government restrictions on speech—giving the government the power to police speech is a dangerous thing.”
In the end, E.F.F. was pleased enough by the changes to the bill to move from opposition to neutrality. Neutrality was as far as the organization would go, Williams commented. “I don’t think E.F.F. would ever come out in support of rules like this,” she said. “There are a lot of good bots.”
Hertzberg, the state senator who authored the legislation, told me that he was glad that the changes to the bill before passage were related to the implementation of the law, rather than to its central purpose of requiring that bots reveal themselves to the public when used politically or commercially. A lawyer by training, Hertzberg said that he resented the accusation that he didn’t care about First Amendment concerns. “There is no effort in this bill to have a chilling effect on speech—zero,” he said. “The argument you go back to is, Do bots have free speech? People have free speech. Bots are not people.”