Kalev Leetaru comments in Forbes:
The online world is becoming an ever-more-important piece of our societal existence. In the US, people have become so glued to their phones that 10% of all pedestrian emergency room visits now stem from cellphone use, and not a week goes by without another report of what has become “death by cellphone.” A recent Ofcom report suggests that 39% of Americans agree or strongly agree with the statement “I am happy to provide personal information online to companies as long as I get what I want,” the highest of the nine countries sampled, while 70% of respondents either agreed to or were indifferent about the commercial use of their personal information in return for free services. There is even a growing countermovement lamenting the role of humans in the digital economy “not as creators of value, but as the content … we are the content … we are the data … as you use a smartphone, your smartphone gets smarter, but you get dumber.” Facebook has become such a part of our lives that the slightest outage is front-page news and Time magazine writes an entire article about a brief technical glitch.
At the core of this digital economy lies a new generation of electronic gatekeepers: a mixture of highly sophisticated computer algorithms, tended and programmed by a small cadre of technical elites, that determine what we see in the online world. With nearly two thirds of Americans getting their news at least in part from social media platforms like Twitter and Facebook, the decisions made by those algorithms wield an increasingly outsized influence on American life.
Perhaps most famously, the two platforms attracted outsized attention in 2014 at the height of the Ferguson protests, when Facebook’s algorithms filtered out coverage of the violent protests and replaced it with happier imagery of the ALS ice bucket challenge.
The influence of Facebook’s algorithms over what users see was reinforced by an April 2012 post by Facebook itself noting that at the time just 16% of content posted to the network was seen by others and that companies should pay to “sponsor” their posts to ensure they are seen. Facebook constantly changes the weights that go into its algorithms, meaning what is widely disseminated today may face a penalty tomorrow. Users found this out the hard way when Facebook suddenly deemphasized posts containing photographs, prioritizing posts containing video instead, in keeping with the company’s migration to video-rich content. By the end of 2014 a typical photo post on Facebook was seen by just 4 out of every 100 followers, a text-only post by just under 6 out of every 100, and a video post by around 9 out of every 100.
In other words, there is so much content being generated on Facebook that users only see a small and ever-shrinking fraction of it, meaning that paid advertising and the decisions of Facebook’s own algorithms determine an ever-growing fraction of what we ultimately see when we log in. With Facebook pushing aggressively to become a primary distributor of news content, these algorithms will even play a substantial role in which world events we become aware of. Nor is it just social networks: research has suggested that search algorithms have the power to change the outcome of a presidential election.
Facebook was recently in the news when its free internet service was temporarily shuttered in India and then Egypt for violating the countries’ net neutrality laws by offering free access only to a small number of websites determined by Facebook itself. Twitter abruptly terminated and then recently restored access to Politwoops, which tracks politicians’ deleted tweets. Twitter initially responded to the spread of Islamic State propaganda on its platform with a pledge to uphold “the ability of users to share freely their views — including views that many people may disagree with or find abhorrent.” Yet it subsequently began quietly deleting ISIS accounts, and last week it reversed course and formally amended its terms of use to ban “hateful conduct.”
In a world in which the President of Turkey commonly labels his political opponents “terrorists or traitors,” and in which other governments deem criticism of themselves hate speech punishable by jail sentences, this creates an uncertain landscape for the future of freedom of speech online. In fact, just days after announcing its new policy, Twitter made headlines when it banned a political activist after allegedly confusing his common last name and his use of Arabic in his tweets with those of a wanted terror leader.
The role of social media networks like Facebook and Twitter in determining what we see online has growing implications for censorship and for what counts as acceptable speech and ideas online. The Electronic Frontier Foundation operates an entire website devoted to tracking censorship and content restrictions by major online platforms. The site was first created after Facebook removed all links to a call for Palestinian freedom, while in another case Facebook blocked all posts about a study claiming immigrants were taking up newly available jobs in the US. In both cases Facebook cited “abusive” content as the reason for the removal. More recently, Facebook blocked posts about the Indian Prime Minister’s visit to the UK, only to reverse itself a few hours later with the terse statement “The content was mistakenly captured by our spam filter and has now been restored. We are sorry for the error and inconvenience caused.” While almost certainly a technical glitch, it is fascinating to note that blocking all mentions worldwide of a meeting between two heads of state was deemed by Facebook a mere “inconvenience.”
Facebook has been criticized for enforcing specific standards of female behavior and modesty in how it handles censorship and takedowns of imagery of men versus women. Twitter recently censored imagery of the Paris attacks at the request of the French government, while Facebook removed fictionalized posts that mirrored alleged government misconduct in Venezuela, inserting itself squarely into the debate over the freedom of political speech. This past November Facebook not only banned all links to rival upstart Tsu, but even allegedly scrubbed its network retroactively of every past link to the site, arguing that they were spam.
While there will always be mistakes and disagreements when reviewing content at the scale of Twitter and Facebook, the use of automated review mechanisms, the limited contact options, and the lack of an appeals process make these removals particularly problematic. Both networks rely heavily on fully automated removal algorithms that can easily be spoofed, yet offer few explanatory details when posts or accounts are removed and few mechanisms for communicating with their staff. This lack of transparency and communication is a growing hallmark of the digital world. In fact, T-Mobile made headlines recently when it silently reduced the resolution of videos viewed from certain websites as a way to ease its own data costs, without providing any notification or warning to users.
The algorithms that make removal decisions, and the human judgments that drive them, operate as black boxes without any accountability to the outside world. Because Facebook and Twitter are private commercial services, users have no legal “right” to publish any given content on them, and thus there is little recourse when the services decide to remove a particular piece of content or ban links to certain services from their platforms.
Users also trade their personal information and privacy for access to these services. This was perhaps most famously demonstrated in 2007 when a 15-year-old girl from Dallas, Texas, found her photograph plastered across billboards in Australia, portrayed in a very negative context, all because her church youth counselor had posted the photograph to an online photo service without fully understanding the rights that the click-through legal contract assigned to the image. According to some estimates, more than half the Internet’s economic value comes from the collection and reselling of private data on individuals in the form of advertising and marketing. In fact, collecting user data is so critical to modern social networks that in its official announcement of its new HTML5 video player, Facebook devoted an entire section to how it ensured that it will still be able to capture every bit of user interaction with videos on its network.
The World Wide Web that powers these services was originally designed as a global decentralized network in which freedom of speech reigned supreme and no single entity controlled what was published online. The ability of anyone anywhere to create a website and make it available to the entire world meant that what was prohibited speech in one country could simply be hosted on a server in a different country to ensure that no single government was able to control the free flow of information around the world.
Fast-forward to today, and the web is rapidly centralizing into the hands of a small number of companies that exercise absolute power over what exists on their platforms, deciding what is allowable speech and what will be censored, with criteria that can change in an instant.
It is not just social media platforms that suffer from these issues. Apple’s App Store has generated a myriad of its own headlines. A Pulitzer-winning cartoonist had his app banned in 2010 because his cartoons frequently ridiculed public figures, while apps chronicling drone strikes and offering virtual walkthroughs of major news events have all found themselves on the banned side of Apple’s review.
In 2009 Apple banned an ebook reader because it was theoretically possible to use the app to locate an explicit book, then reinstated it after extensive press coverage of the ban. In 2011 Apple banned an app that chronicled issues in the smartphone supply chain, including references to worker suicides at one of the factories that manufacture Apple’s devices. At the time Apple’s developer guidelines provided no clear definition of what would cause an app to be banned, offering only the vague warning of “content or behavior that we believe is over the line … what line you ask? … you will … know it when you cross it.”
Moreover, it isn’t just content that is banned. Companies often ban competing products. When streaming video service Meerkat began to compete with a soon-to-be-launched Twitter service, Twitter simply banned it, giving the company less than two hours’ notice. In 2012 Twitter launched its own “official” interfaces and sharply curtailed or banned a wide swath of common applications, suddenly upending in a single evening an entire developer ecosystem that had grown up around its services. Even Apple has banned applications that “duplicat[e] the functionality” of existing Apple services.
The founding vision and dream of the internet as an unfiltered and truly democratic and distributed utopia in which any and every idea could thrive and information could flow freely across geographic and cultural boundaries is devolving into a centralized digital world consolidated into a handful of walled gardens governed by commercial entities wielding absolute power over what is allowed and what is undesirable.
In the words of Jennifer Granick, we are beginning to see a web “shift[ing] from liberator to oppressor … the end of the Internet Dream.”