We could ascribe this to popular delusion (well, maybe not outside the US...) but, as the following article explains, the transition from the 'wisdom of crowds' to viral nonsense may have more to do with the design of the algorithms that run social networks than simple psychology. They are constructed to reinforce the tendencies of the convinced or gullible or both in order to generate more traffic and attention. And while that may be totally super when it comes to commerce, the coders-in-chief may need to begin thinking about how to scroll society back from the edge. Unless, of course... JL
Renee DiResta reports in Fast Company:
After an anonymous source alleged that Facebook's Trending News algorithm (and human staff) was intentionally hiding conservative news from the social network, all hell broke loose. Congressional hearings have been called. Whether the reports are right—and whether hearings are justified—underneath the uproar is a largely unspoken truth: The algorithms that drive social networks are shifting the reality of our political systems—and not for the better.
The filter bubble—the idea that online recommendation engines learn what we like and thus keep us only reading things we agree with—has evolved. Algorithms, network effects, and zero-cost publishing are enabling crackpot theories to go viral, and—unchecked—these ideas are impacting the decisions of policy makers and shaping public opinion, whether they are verified or not.
Fiction As Fact
First, it is important to understand the technology that drives the system. Most algorithms work simply: web companies try to tailor their content (which includes news and search results) to match the tastes and interests of readers. However, as online organizer and author Eli Pariser says in the TED Talk that popularized the idea of the filter bubble: "There's a dangerous unintended consequence. We get trapped in a 'filter bubble' and don't get exposed to information that could challenge or broaden our worldview."
Facebook's news feed and personalized search deliver results that are tailored just to us, because a social network's business is to keep us interested and happy. Feeling good drives engagement and more time spent on a site, and that keeps a user targetable with advertisements for longer. Pariser argues that this nearly invisible editing of the Internet limits what we see—and that it will "ultimately prove to be bad for us and bad for democracy."
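To make the incentive concrete, here is a minimal sketch of an engagement-optimized feed ranker in Python. The weights and field names are invented for illustration, not any platform's actual formula; the point is what the objective function does and does not reward:

# Illustrative sketch of engagement-optimized feed ranking.
# The weights and features below are assumptions for demonstration,
# not any real platform's formula.
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often the user interacts with this author (0-1)
    topic_match: float       # similarity to topics the user already engages with (0-1)
    engagement_rate: float   # clicks/likes/shares per impression so far (0-1)

def feed_score(post: Post) -> float:
    # Every term rewards agreement and prior engagement; nothing in the
    # objective rewards accuracy or viewpoint diversity -- that omission
    # is the filter bubble in code form.
    return (0.4 * post.author_affinity
            + 0.3 * post.topic_match
            + 0.3 * post.engagement_rate)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=feed_score, reverse=True)

if __name__ == "__main__":
    feed = [Post(0.9, 0.8, 0.7), Post(0.1, 0.2, 0.95), Post(0.5, 0.5, 0.5)]
    for p in rank_feed(feed):
        print(round(feed_score(p), 2), p)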
In his 1962 book, The Image: A Guide to Pseudo-Events in America, former Librarian of Congress Daniel J. Boorstin describes a world where our ability to technologically shape reality is so sophisticated, it overcomes reality itself. "We risk being the first people in history," he writes, "to have been able to make their illusions so vivid, so persuasive, so ‘realistic’ that they can live in them."
Since Pariser's TED Talk, we've reached the point where social networks are used as primary news outlets. Seeking news from traditional sources—newspapers and magazines—has been replaced by a new model: getting all of one's news from trending stories on social networks. The people we know best are the most likely to influence us, because we trust them. Their ideas and beliefs shape ours. And the tech behind social networks is built to enhance this. Where "proximity" used to mean knowing people next door or down the street, it now includes online communities. It's easier than ever to find like-minded people independently of geography.
Once a user joins a single group on Facebook, the social network will suggest dozens of others on that topic, as well as groups focused on tangential topics that people with similar profiles also joined. That is smart business. However, with unchecked content, it means that once people join a single conspiracy-minded group, they are algorithmically routed to a plethora of others. Join an anti-vaccine group, and your suggestions will include anti-GMO, chemtrail watch, flat Earther (yes, really), and "curing cancer naturally" groups. Rather than pulling a user out of the rabbit hole, the recommendation engine pushes them further in. We are long past merely partisan filter bubbles and well into the realm of siloed communities that experience their own reality and operate with their own facts.
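A hedged sketch of how that routing can emerge from a simple co-membership recommender, a toy version of item-based collaborative filtering; the members, group names, and scoring are invented for illustration:

# Toy item-based recommender: suggest groups that co-occur with the
# user's current memberships. Members and group names are invented.
from collections import Counter

memberships = {
    "alice": {"anti-vaccine", "anti-GMO", "chemtrail watch"},
    "bob":   {"anti-vaccine", "curing cancer naturally"},
    "carol": {"anti-GMO", "chemtrail watch", "flat Earth"},
    "dave":  {"anti-vaccine", "anti-GMO", "flat Earth"},
}

def suggest_groups(user_groups: set[str], k: int = 3) -> list[str]:
    # Count how often other groups co-occur with the user's groups
    # across all members; nothing asks whether a group is factual.
    scores: Counter[str] = Counter()
    for groups in memberships.values():
        if groups & user_groups:
            for g in groups - user_groups:
                scores[g] += 1
    return [g for g, _ in scores.most_common(k)]

print(suggest_groups({"anti-vaccine"}))
# e.g. ['anti-GMO', 'chemtrail watch', ...] -- joining one
# conspiracy-minded group pulls in suggestions for the rest.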
Zero-Cost Publishing Is Shifting Us From "Crowd Wisdom" To "Viral Nonsense"
Another tech trend fueling this issue is the ability to publish ideas online at no cost, and to gather an audience around those ideas. It's now easier than ever to produce content specifically designed to convince people who may be on the fence or "curious" about a particular topic. This is an especially big issue when it comes to violent extremism and pseudoscience. Self-publishing has eliminated all the checks and balances of reputable media—fact-checkers, editors, distribution partners.
Social publishing platforms have made all of us content creators. And this is a wonderful, tremendously valuable innovation that enables talented, or traditionally voiceless, people to be heard. We believe in the wisdom of crowds. Inherent in our platform designs is the conviction that good content will get noticed, and the rest will stagnate, unseen, in lonely corners of the web. But the increasing proliferation of fringe content is beginning to suggest that this is no longer as true as it once was.
Social platforms—in their effort to keep users continually engaged (and targeted with relevant ads)—are designed to surface what’s popular and trending, whether it’s true or not. Since nearly half of web-using adults now get their news from Facebook in any given week, what counts as "truth" on our social platforms matters. When nonsense stories gain traction, they’re extremely difficult to correct. And stories jump from platform to platform, reaching new audiences and "going viral" in ways and at speeds that were previously impossible.
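As an illustration of why "trending" and "true" are independent properties: a typical trending signal is just share velocity relative to a baseline. The thresholds below are assumptions made for the sketch, not any platform's real parameters:

# Sketch of a velocity-based trending signal: a story "trends" when its
# share rate spikes relative to its own recent baseline. Truth never
# enters the calculation. Thresholds are illustrative assumptions.
def is_trending(shares_last_hour: int,
                avg_shares_per_hour: float,
                spike_ratio: float = 3.0,
                min_volume: int = 100) -> bool:
    if shares_last_hour < min_volume:
        return False
    baseline = max(avg_shares_per_hour, 1.0)
    return shares_last_hour / baseline >= spike_ratio

# A fabricated health rumor and a sober correction get identical treatment:
print(is_trending(shares_last_hour=5000, avg_shares_per_hour=200))  # True
print(is_trending(shares_last_hour=40, avg_shares_per_hour=10))     # False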
Many Brazilians, for example, think the Brazilian government is lying to them about Zika causing birth defects, though they aren’t quite sure whether they should be worried about vaccines, Monsanto, chemtrails, or GMO mosquitoes as the true cause.
People in Portland, Oregon, voted in 2013 to reject a plan to fluoridate the city's water—a common practice to improve dental health. Depending on which opposition group you asked, fluoridation was either a technique used by fascist regimes to pacify their citizens or a toxic chemical that causes cancer.
Likewise, last year residents of a town in Bastrop County, Texas, became convinced that a routine military training exercise known as Jade Helm 15 was a secret government plot to impose martial law and confiscate Texans' firearms. The uproar grew so loud that it reached the desk of Texas governor Greg Abbott, who then legitimized the conspiracy theory by announcing that the Texas State Guard would monitor the exercise.
You might ask: Isn't this simply an artifact of reality, reflected online? Maybe many of us just weren't exposed to this "other" world before, and are only now coming into contact with it thanks to the Internet?
That is one possibility. But the Internet doesn’t just reflect reality anymore; it shapes it. The mere fact of these theories being online and discoverable helps create this phenomenon.
How Big Is This Issue? Look At The Statistics
The problem is that social-web activity is notorious for an asymmetry of passion. On many issues, the most active social media voices are the conspiracist fringe. The majority of people know that vaccines don't cause autism, and that 9/11 was not an inside job. They don't dedicate hours to creating content or tweeting to reinforce the obvious. But passionate truthers and extremists produce copious amounts of content in their commitment to "wake up the sheeple." Last month, for example, a study looked at the relative percentages of pro-vaccine vs. anti-vaccine content on Pinterest and Instagram; 75% of the immunization-related pinned content was opposed to vaccines. This was a dramatic shift from studies of social networks in the early 2000s, when the percentage of negative content was estimated at around 25%.
This asymmetry of passion, and the resulting proliferation of nonsense on social channels, is particularly strong where pseudoscience is concerned. Consider the Food Babe, an anti-GMO "food safety activist" who boasts 1 million Facebook fans and a committed #foodbabearmy on Twitter dedicated to harassing companies (such as the Girl Scouts) to get them to remove ingredients that are hard to pronounce. When refutations, corrections, or takedowns of her often misinformed agenda are published in the mainstream media, her followers dig in more, convinced that the pushback is because they’ve struck a nerve in Big Agriculture or Big Food, or because the reporter is "bought."
Activism spawned from these online conspiracy groups wastes time and money, and it's increasing. In a recent interview, California Republican Representative Devin Nunes said that 90% of the communication he receives from constituents is conspiracy-theorist nonsense, up from approximately 10% when he took office in 2003. It's impacting the political process on everything from zoning laws (fears of UN Agenda 21) to public health policy (water fluoridation). In Hawaii last month, for example, lawmakers killed a simple procedural bill that would have allowed the state to more quickly adopt federal guidelines on administering vaccines in case of an outbreak—because outraged residents claimed that vaccines were responsible for Zika (and, of course, for autism).
There are plenty of explanations for why conspiracy theories take hold. These range from declining trust in leaders and institutions to proportionality bias (a belief that big events must have big causes) to projection and more. The predominant factor—confirmation bias, the tendency to use information to confirm what you already believe—is in many ways made worse, not better, in a world where more, not less, information is available, thanks to Google and the Internet.
How Do We Fix This? Address The Underlying Tech
Ultimately, we need our best minds playing both offense and defense if we are going to reduce the prevalence and impact of conspiracist influence, both online and in real life. How do we do that?
We need a shift in how we design our ever-evolving social platforms. The very structure and incentives of social networks have brought us to this point, and the platform designers themselves should be considering what steps they can take to bring us back. Perhaps we should have more conversations about ethics in design—and maybe these Facebook allegations will kick that off. Platform product design is having a dramatic impact on public policy, and the effects are only going to get stronger. What responsibility do the designers of those products have to civil discourse?
Platforms have the power here. Some have begun to introduce algorithms that warn readers that a share is likely a hoax or satire. Google is investigating the possibility of "truth ranking" for web searches, which is promising. These are great starting points, but regulation by algorithm has its own set of advantages and pitfalls. The primary concern is that turning companies into arbiters of truth is a slippery slope, particularly where politically rooted conspiracies are concerned.
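To see both the promise and the pitfall at once, consider a toy version of such a warning layer. The domain list, scores, and threshold below are placeholders invented for the sketch; a real system would combine many signals, and the "unknown source" branch is exactly where the arbiter-of-truth problem lives:

# Minimal sketch of a hoax-warning layer: flag shares whose source
# credibility score falls below a threshold. The domains, scores, and
# threshold are placeholder assumptions, not a real ranking system.
from urllib.parse import urlparse

SOURCE_CREDIBILITY = {
    "example-news.com": 0.9,        # hypothetical vetted outlet
    "totally-real-cures.net": 0.1,  # hypothetical hoax site
}

def hoax_warning(url: str, threshold: float = 0.5) -> str | None:
    domain = urlparse(url).netloc.lower()
    score = SOURCE_CREDIBILITY.get(domain)
    if score is None:
        return None  # unknown source: the hard, contested case
    if score < threshold:
        return "Warning: this link may be a hoax or satire."
    return None

print(hoax_warning("http://totally-real-cures.net/miracle-cure"))

Even in this toy form, someone has to decide what goes in the credibility table and where the threshold sits, which is the slippery slope the paragraph above describes.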
But we have to start somewhere. As Eli Pariser said: "We really need you to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. We need you to make sure that they're transparent enough that we can see what the rules are that determine what gets through our filters." The Internet isn’t separate from the real world; building the web we want is building the future we want.