Neima Jahromi reports in The New Yorker:
YouTube now attracts a monthly audience of two billion people. Every minute, users upload five hundred hours of video. Provocateurs take advantage of data vacuums, and the automated systems within a social platform can be co-opted to work at cross purposes. The increasing efficiency of the recommendation system drew toxic content into the light and changed the tenor of the platform. “Bullshit is infinitely more difficult to combat than it is to spread.” Algorithmic tweaks don’t change the basic structure of YouTube, a structure that encourages the mass uploading of videos from unvetted sources, and that structure may be fundamentally incompatible with a healthy civic discourse.
Earlier this year, executives at YouTube began mulling, once again, the problem of online speech. On grounds of freedom of expression and ideological neutrality, the platform has long allowed users to upload videos endorsing noxious ideas, from conspiracy theories to neo-Nazism. Now it wanted to reverse course. “There are no sacred cows,” Susan Wojcicki, the C.E.O. of YouTube, reportedly told her team. Wojcicki had two competing goals: she wanted to avoid accusations of ideological bias while also affirming her company’s values. In the course of the spring, YouTube drafted a new policy that would ban videos trafficking in historical “denialism” (of the Holocaust, 9/11, Sandy Hook) and “supremacist” views (lauding the “white race,” arguing that men were intellectually superior to women). YouTube planned to roll out its new policy as early as June. In May, meanwhile, it started preparing for Pride Month, turning its red logo rainbow-colored and promoting popular L.G.B.T.Q. video producers on Instagram.
On May 30th, Carlos Maza, a media critic at Vox, upended these efforts. In a Twitter thread that quickly went viral, Maza argued that the company’s publicity campaign belied its lax enforcement of the content and harassment policies it had already put in place. Maza posted a video supercut of bigoted insults that he’d received from Steven Crowder, a conservative comedian with nearly four million YouTube followers; the insults focussed on Maza’s ethnicity and sexual orientation. When Crowder mentioned Maza in a video, his fans piled on; last year, Maza’s cell phone was bombarded with hundreds of texts from different numbers which read “debate steven crowder.” Maza said that he’d reported the behavior to YouTube’s content moderators numerous times, and that they had done nothing.
On Twitter and his YouTube channel, Crowder insisted that, in labelling Maza a “lispy queer” and a “token Vox gay atheist sprite,” he had been trying to be funny. Maza’s supporters, meanwhile, shared screenshots of ads that had run before Crowder’s videos, suggesting that, because YouTube offers popular video producers a cut of ad revenue, the company had implicitly condoned Crowder’s messages. YouTube said it would investigate. A week later, it tweeted that Crowder hadn’t violated its community guidelines in any of the videos that Maza highlighted. The next day, it announced its new policy, which included a warning that the company would no longer share ad revenue with YouTubers who repeatedly brushed up against its rules. Then it announced that Crowder would be cut off from the platform’s ad dollars.
The news made no one happy. Maza said that he wanted Crowder’s channel removed completely; conservatives, including the Republican senator Ted Cruz, complained about censorship. YouTube employees, siding with Maza, began denouncing their bosses on Twitter and in the press. “It’s a classic move from a comms playbook,” Micah Schaffer, a technology adviser who wrote YouTube’s first community guidelines, told me. “Like, ‘Hey, can we move up that launch to change the news cycle?’ Instead, it made it worse. It combined into a Voltron of bad news.” (A YouTube spokesperson said that the launch date was not in response to any individual event.) Former colleagues deluged Schaffer, who had left the company
in 2009, with bewildered e-mails and texts. (A typical subject line: “WTF is Going on at YouTube?”) Sitting in a dentist’s office, he started typing a response on his phone, trying to lay out what he thought had gone wrong at the company.
Schaffer told me that hate speech had been a problem on YouTube since its earliest days. Dealing with it used to be fairly straightforward. YouTube was founded, in 2005, by Chad Hurley, Steve Chen, and Jawed Karim, who met while working at PayPal. At first, the site was moderated largely by its co-founders; in 2006, they hired a single, part-time moderator. The company removed videos often, rarely encountering pushback. In the intervening thirteen years, a lot has changed. “YouTube has the scale of the entire Internet,” Sundar Pichai, the C.E.O. of Google, which owns YouTube, told Axios last month. The site now attracts a monthly audience of two billion people and employs thousands of moderators. Every minute, its users upload five hundred hours of new video. The technical, social, and political challenges of moderating such a system are profound. They raise fundamental questions not just about YouTube’s business but about what social-media platforms have become and what they should be.
Perhaps because of the vast scale at which most social platforms operate, proposed solutions to the problem of online hate speech tend to be technical in nature. In theory, a platform might fine-tune its algorithms to deëmphasize hate speech and conspiracy theories. But, in practice, this is harder than it sounds. Some overtly hateful users may employ language and symbols that clearly violate a site’s community guidelines—but so-called borderline content, which dances at the edge of provocation, is harder to detect and draws a broad audience. Machine-learning systems struggle to tell the difference between actual hate speech and content that describes or contests it. (After YouTube announced its new policies, the Southern Poverty Law Center complained that one of its videos, which was meant to document hate speech, had been taken down.) Some automated systems use metadata—information about how often a user posts, or about the number of comments that a post gets in a short period of time—to flag toxic content without trying to interpret it. But this sort of analysis is limited by the way that content bounces between platforms, obscuring the full range of interactions it has provoked.
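As a rough illustration of the metadata-only approach described above, here is a minimal sketch, in Python, of a flagger that looks only at posting frequency and comment bursts, never at the content itself. The field names and thresholds are invented for illustration and are not YouTube’s.

```python
from dataclasses import dataclass

@dataclass
class PostMetadata:
    """Hypothetical behavioral metadata for one upload -- no content, only activity."""
    uploads_last_24h: int      # how often the account has posted recently
    comments_first_hour: int   # size of the early comment pile-on
    channel_age_days: int      # how established the account is

def flag_for_review(meta: PostMetadata) -> bool:
    """Flag a post for human review using behavioral signals alone.

    The thresholds are invented; a real system would learn them from
    labeled data and combine many more signals.
    """
    rapid_fire_posting = meta.uploads_last_24h > 20
    sudden_comment_pile_on = meta.comments_first_hour > 500
    new_account = meta.channel_age_days < 7
    # A brand-new channel posting constantly, or any video drawing an
    # instant pile-on, is suspicious before anyone interprets the video.
    return (rapid_fire_posting and new_account) or sudden_comment_pile_on

# Example: a week-old channel whose video drew 800 comments in its first hour.
print(flag_for_review(PostMetadata(uploads_last_24h=3,
                                   comments_first_hour=800,
                                   channel_age_days=6)))  # True
```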
Tech companies have hired thousands of human moderators to make nuanced decisions about speech. YouTube also relies on anonymous outside “raters” to evaluate videos and help train its recommendation systems. But the flood of questionable posts is overwhelming, and sifting through it can take a psychological toll. Earlier this year, YouTube described its efforts to draw more heavily on user feedback—survey responses, likes and dislikes—to help identify “quality” videos. And yet, in a 2016 white paper, the company’s own engineers wrote that such metrics aren’t very useful; the problem is that, for many videos, “explicit feedback is extremely sparse” compared to “implicit” signals, such as what users click on or how long they watch a video. Teen-agers, in particular, who use YouTube more than any other kind of social media, often respond to surveys in mischievous ways.
Business challenges compound the technical ones. In a broad sense, any algorithmic change that dampens user engagement could work against YouTube’s business model. Netflix, which is YouTube’s chief rival in online video, can keep subscribers streaming by licensing or crafting addictive content; YouTube, by contrast, relies on user-generated clips, strung together by an automated recommendation engine. Programmers are always tweaking the system, and the company is reluctant to disclose details. Still, a 2018 white paper outlined the general principle at that time: once someone starts watching a video, the engine is designed to “dig into a topic more deeply,” luring the viewer down the proverbial rabbit hole. Many outside researchers argue that this system, which helped drive YouTube’s engagement growth, also amplified hate speech and conspiracy theories on the platform. As the engine dug deeper, it risked making unsavory suggestions: unearth enough videos about the moon landing and some of them may argue that it was faked.
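The “dig into a topic more deeply” behavior can be caricatured in a few lines of code. The sketch below uses a made-up four-video catalogue and plain cosine similarity as a stand-in for whatever model YouTube actually runs; it only illustrates why repeatedly maximizing similarity to the last video watched can walk a viewer from a mainstream clip toward a fringe one.

```python
from math import sqrt

# A toy catalogue: each video is a bag-of-topics vector (weights invented).
CATALOGUE = {
    "apollo_11_documentary":  {"moon": 1.0, "history": 0.8},
    "how_rockets_work":       {"moon": 0.4, "science": 1.0},
    "moon_landing_questions": {"moon": 0.9, "conspiracy": 0.6},
    "moon_landing_was_faked": {"moon": 0.8, "conspiracy": 1.0},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend_next(just_watched: str, already_seen: set) -> str:
    """Pick the unseen video most similar to the one just watched."""
    target = CATALOGUE[just_watched]
    candidates = [v for v in CATALOGUE if v not in already_seen]
    return max(candidates, key=lambda v: cosine(CATALOGUE[v], target))

# Starting from a mainstream documentary, pure topic-deepening drifts
# toward the conspiracy clip within a couple of hops.
seen, current = {"apollo_11_documentary"}, "apollo_11_documentary"
for _ in range(3):
    current = recommend_next(current, seen)
    seen.add(current)
    print(current)
```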
Francesca Tripodi, a media scholar at James Madison University, has studied how right-wing conspiracy theorists perpetuate false ideas online. Essentially, they find unfilled rabbit holes and then create content to fill them. “When there is limited or no metadata matching a particular topic,” she told a Senate committee in April, “it is easy to coördinate around keywords to guarantee the kind of information Google will return.” Political provocateurs can take advantage of data vacuums to increase the likelihood that legitimate news clips will be followed by their videos. And, because controversial or outlandish videos tend to be riveting, even for those who dislike them, they can register as “engaging” to a recommendation system, which would surface them more often. The many automated systems within a social platform can be co-opted and made to work at cross purposes.
Technological solutions are appealing, in part, because they are relatively unobtrusive. Programmers like the idea of solving thorny problems elegantly, behind the scenes. For users, meanwhile, the value of social-media platforms lies partly in their appearance of democratic openness. It’s nice to imagine that the content is made by the people, for the people, and that popularity flows from the grass roots.
In fact, the apparent democratic neutrality of social-media platforms has always been shaped by algorithms and managers. In its early days, YouTube staffers often cultivated popularity by hand, choosing trending videos to highlight on its home page; if the site gave a leg up to a promising YouTuber, that YouTuber’s audience grew. By spotlighting its most appealing users, the platform attracted new ones. It also shaped its identity: by featuring some kinds of content more than others, the company showed YouTubers what kind of videos it was willing to boost. “They had to be super family friendly, not copyright-infringing, and, at the same time, compelling,” Schaffer recalled, of the highlighted videos.
Today, YouTube employs scores of “partner managers,” who actively court and promote celebrities, musicians, and gamers—meeting with individual video producers to answer questions about how they can reach bigger audiences, giving them early access to new platform features, and inviting them to workshops where they can network with other successful YouTubers. Since 2016, meanwhile, it has begun paying socially conscious YouTubers to create videos about politically charged subjects, through a program called Creators for Change. “In this instance, it’s a social-impact group,” Paul Marvucic, a YouTube marketing manager, explained. “We’re saying, ‘We really believe in what you guys are saying, and it’s very core to our values.’ ”
The question of YouTube’s values—what they are, whether it should have them, how it should uphold them—is fraught. In December of last year, Sundar Pichai, the C.E.O. of Google, went before Congress and faced questions about social media’s influence on politics. Democrats complained that YouTube videos promoted white supremacy and right-wing extremism; Republicans, in turn, worried that the site might be “biased” against them, and that innocent videos might be labelled as hate speech merely for containing conservative views. “It’s really important to me that we approach our work in an unbiased way,” Pichai said.
And yet the Creators for Change program requires YouTube to embrace certain kinds of ideological commitments. This past fall, for an audience of high-school and college students, YouTube staged a Creators for Change event in the Economic and Social Council chamber at the United Nations. The occasion marked the seventieth anniversary of the Universal Declaration of Human Rights, and five “ambassadors” from the program joined Craig Mokhiber, the director of the New York office of the U.N. High Commissioner for Human Rights, onstage. “The U.N. is not just a conference center that convenes to hear any perspective offered by any person on any issue,” Mokhiber said. Instead, he argued, it represents one side in a conflict of ideas. In one corner are universal rights to housing, health care, education, food, and safety; in the other are the ideologies espoused by Islamophobes, homophobes, anti-Semites, sexists, ethno-nationalists, white supremacists, and neo-Nazis. In his view, YouTube needed to pick a side. He urged the YouTubers onstage to take the ideals represented by the U.N. and “amplify” them in their videos. “We’re in the middle of a struggle that will determine, in our lifetime, whether human dignity will be advanced or crushed, for us and for future generations,” he said.
Last year, YouTube paid forty-seven ambassadors to produce socially conscious videos and attend workshops. The program’s budget, of around five million dollars—it also helps fund school programs designed to improve students’ critical-thinking skills when they are confronted with emotionally charged videos—is a tiny sum compared to the hundreds of millions that the company reportedly spends on YouTube Originals, its entertainment-production arm. Still, one YouTube representative told me, “We saw hundreds of millions of views on ambassadors’ videos last year—hundreds of thousands of hours of watch time.” Most people encountered the Creators for Change clips as automated advertisements before other videos.
The Mumbai-based comedian Prajakta Koli, known on YouTube as MostlySane, sat beside Mokhiber in the U.N. chamber. Around four million people follow her channel. Her videos usually riff on the irritating people whom she encounters in her college cafeteria or on the pitfalls of dating foreigners. “No Offence,” a music video that she screened at the Creators for Change event, is different. As it begins, Koli slouches in her pajamas on the couch, watching a homophobe, a misogynist, and an Internet troll—all played by her—rant on fictional news shows. A minute later, she dons boxing gloves and takes on each of them in a rap battle. After the screening, Koli said that she had already begun taking on weighty subjects, such as divorce and body shaming, on her own. But it helped that YouTube had footed the production and marketing costs for “No Offence,” which were substantial. The video is now her most watched, with twelve million views.
On a channel called AsapScience, Gregory Brown, a former high-school teacher, and his boyfriend, Mitchell Moffit, make animated clips about science that affects their viewers’ everyday lives; their most successful videos address topics such as the science of coffee or masturbation. They used their Creators for Change dollars to produce a video about the scientifically measurable effects of racism, featuring the Black Lives Matter activist DeRay Mckesson. While the average AsapScience video takes a week to make, the video about racism had taken seven or eight months: the level of bad faith and misinformation surrounding the topic, Brown said, demanded extra precision. “You need to explain the study, explain the parameters, and explain the result so that people can’t argue against it,” he said. “And that doesn’t make the video as interesting, and that’s a challenge.” (Toxic content proliferates, in part, because it is comparatively easy and cheap to make; it can shirk the burden of being true.)
YouTube hopes that Creators for Change will have a role-model effect. The virality of YouTube videos has long been driven by imitation: in the site’s early days, clips such as “Crazy frog brothers” and “David After Dentist” led fans and parodists to reënact their every move. When it comes to political videos, imitation has cut both ways. The perceived popularity of conspiracy videos may have led some YouTubers to make similar clips; conversely, many Creators for Change ambassadors cite other progressive YouTubers as inspirations. (Prajakta Koli based her sketches on those of Lilly Singh, a sketch-comedy YouTuber who has also spoken at the United Nations.) In theory, even just broadcasting the idea that YouTube will reward social-justice content with production dollars and free marketing might encourage a proliferation of videos that denounce hate speech.
And yet, on a platform like YouTube, there are reasons to be skeptical about the potential of what experts call “counterspeech.” Libby Hemphill, a computer-science professor at the University of Michigan’s Center for Social Media Responsibility, studies how different kinds of conversations, from politics to TV criticism, unfold across social media; she also prototypes A.I. tools for rooting out toxic content. “If we frame hate speech or toxicity as a free-speech issue, then the answer is often counterspeech,” she explained. (A misleading video about race and science might be “countered” by the video made by AsapScience.) But, to be effective, counterspeech must be heard. “Recommendation engines don’t just surface content that they think we’ll want to engage with—they also actively hide content that is not what we have actively sought,” Hemphill said. “Our incidental exposure to stuff that we don’t know that we should see is really low.” It may not be enough, in short, to sponsor good content; people who don’t go looking for it must see it.
Theoretically, YouTube could fight hate speech by engineering a point-counterpoint dynamic. In recent years, the platform has applied this technique to speech about terrorism, using the “Redirect Method”: moderators have removed terrorist-recruitment videos while redirecting those who search for them to antiterror and anti-extremist clips. (YouTube doesn’t pay people to create the antiterror videos, but it does handpick them.) A YouTube representative told me that it has no plans to redirect someone who searches for a men’s-rights rant, say, to its Creators for Change–sponsored feminist reply. Perhaps the company worries that treating misogynists the way that it treats ISIS would shatter the illusion that it has cultivated an unbiased marketplace of ideas.
One way to make counterspeech more effective is to dampen the speech that it aims to counter. In March, after a video of a white-supremacist mass shooting at a mosque in Christchurch, New Zealand, went viral, Hunter Walk, a former YouTube executive, tweeted that the company should protect “freedom of speech” but not “freedom of reach.” He suggested that YouTube could suppress toxic videos by delisting them as candidates for its recommendation engine—in essence, he wrote, this would “shadowban” them. (Shadow-banning is so-called because a user might not know that his reach has been curtailed, and because the ban effectively pushes undesirable users into the “shadows” of an online space.) Ideally, people who make such shadow-banned videos could grow frustrated by their limited audiences and change their ways; videos, Walk explained, could be shadow-banned if they were linked to by a significant number of far-right Web havens, such as 8chan and Gab. (Walk’s tweets, which are set to auto-delete, have since disappeared.)
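Walk’s proposal amounts to filtering the candidate pool rather than deleting videos. A minimal sketch, assuming invented domain names and an invented link threshold, might look like the following; nothing here describes a system YouTube has actually built.

```python
# Hypothetical list of sites whose inbound links count against a video.
FLAGGED_REFERRERS = {"8chan.example", "gab.example"}
LINK_THRESHOLD = 25   # invented cutoff for "a significant number" of links

def is_shadow_banned(inbound_links: dict) -> bool:
    """Return True if the video should be delisted from recommendations.

    inbound_links maps referring domains to link counts; the video stays
    watchable via its direct URL either way -- only its "reach" is curtailed.
    """
    flagged = sum(count for domain, count in inbound_links.items()
                  if domain in FLAGGED_REFERRERS)
    return flagged >= LINK_THRESHOLD

def recommendation_candidates(videos: dict) -> list:
    """Filter the candidate pool the ranking engine is allowed to draw from."""
    return [vid for vid, links in videos.items() if not is_shadow_banned(links)]

videos = {
    "cooking_tutorial": {"reddit.example": 40},
    "borderline_rant":  {"8chan.example": 30, "gab.example": 12},
}
print(recommendation_candidates(videos))  # ['cooking_tutorial']
```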
Shadow-banning is an age-old moderation tool: the owners of Internet discussion forums have long used it to keep spammers and harassers from bothering other users. On big social-media platforms, however, this kind of moderation doesn’t necessarily focus on individuals; instead, it affects the way that different kinds of content surface algorithmically. YouTube has published a lengthy list of guidelines that its army of raters can use to give some types of content—clips that contain “extreme gore or violence, without a beneficial purpose,” for example, or that advocate hateful ideas expressed in an “emotional,” “polite,” or even “academic-sounding” way—a low rating. YouTube’s A.I. learns from the ratings to make objectionable videos less likely to appear in its automated recommendations. Individual users won’t necessarily know how their videos have been affected. The ambiguities generated by this system have led some to argue that political shadow-banning is taking place. President Trump and congressional Republicans, in particular, are alarmed by the idea that some version of the practice could be widely employed against conservatives. In April, Ted Cruz held a Senate subcommittee hearing called “Stifling Free Speech: Technological Censorship and the Public Discourse.” In his remarks, he threatened the platforms with regulation; he also brought in witnesses who accused them of liberal bias. (YouTube denies that its raters evaluate recommendations along political lines, and most experts agree that there is no evidence for such a bias.)
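Whatever one makes of the politics, the mechanism in dispute is easy to caricature: a rater-trained classifier estimates how borderline a video is, and that estimate scales down an otherwise engagement-driven ranking score. The sketch below is a hypothetical two-stage ranker, not YouTube’s implementation; every number in it is invented.

```python
def recommendation_score(engagement: float, borderline_prob: float,
                         penalty_strength: float = 0.9) -> float:
    """Combine predicted engagement with a rater-derived borderline probability.

    engagement       -- model's estimate of watch time / clicks (higher = stickier)
    borderline_prob  -- classifier output trained on rater labels, 0.0 to 1.0
    penalty_strength -- how aggressively borderline content is demoted

    The video is never removed; its score is simply scaled down so it
    rarely wins a slot on the recommendation shelf.
    """
    return engagement * (1.0 - penalty_strength * borderline_prob)

# A riveting but borderline clip vs. a moderately engaging ordinary one.
borderline = recommendation_score(engagement=0.95, borderline_prob=0.8)   # ~0.27
ordinary   = recommendation_score(engagement=0.60, borderline_prob=0.05)  # ~0.57
print(borderline < ordinary)  # True: the "stickier" video loses the slot
```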
Among Cruz’s guests was Eugene Kontorovich, a law professor at George Mason University. In his testimony, Kontorovich pondered whether regulation could, in fact, address the issue of bias on search or social platforms. “Actually enforcing ideological neutrality would itself raise First Amendment questions,” he said. Instead, he argued, the best way to address issues of potential bias was with transparency. A technique like shadow-banning might be effective, but it would also stoke paranoia. From this perspective, the clarity of the Creators for Change program adds to its appeal: its videos are prominently labelled.
Engineers at YouTube and other companies are hesitant to detail their algorithmic tweaks for many reasons; among them is the fact that obscure algorithms are harder to exploit. But Serge Abiteboul, a computer-science professor who was tasked by the French government to advise legislators on online hate speech, argues that verifiable solutions are preferable to hidden ones. YouTube has claimed that, since tweaking its systems in January, it has reduced the number of views for recommended videos containing borderline content and harmful misinformation by half. Without transparency and oversight, however, it’s impossible for independent observers to confirm that drop. “Any supervision that’s accepted by society would be better than regulation done in an opaque manner, by the platforms, themselves, alone,” Abiteboul said.
Looking back over the history of YouTube, Micah Schaffer thinks he can see where the company made mistakes. Before YouTube, he had worked at a Web site that showcased shocking videos and images—gruesome accidents, medical deformities. There he saw how such material can attract a niche of avid users while alienating many others. “Bikinis and Nazism have a chilling effect,” he said. YouTube sought to distinguish itself by highlighting more broadly appealing content. It would create an ecosystem in which a large variety of people felt excited about expressing themselves.
The company featured videos it liked, banned others outright, and kept borderline videos off the home page. Still, it allowed some toxic speech to lurk in the corners. “We thought, if you just quarantine the borderline stuff, it doesn’t spill over to the decent people,” he recalled. “And, even if it did, it seemed like there were enough people who would just immediately recognize it was wrong, and it would be O.K.” The events of the past few years have convinced Schaffer that this was an error. The increasing efficiency of the recommendation system drew toxic content into the light in ways that YouTube’s early policymakers hadn’t anticipated. In the end, borderline content changed the tenor and effect of the platform as a whole. “Our underlying premises were flawed,” Schaffer said. “We don’t need YouTube to tell us these people exist. And counterspeech is not a fair burden. Bullshit is infinitely more difficult to combat than it is to spread. YouTube should have course-corrected a long time ago.”
Some experts point out that algorithmic tweaks and counterspeech don’t change the basic structure of YouTube—a structure that encourages the mass uploading of videos from unvetted sources. It’s possible that this structure is fundamentally incompatible with a healthy civic discourse. “It’s not that Jesus came down and said, ‘You must suck up five hundred hours of content per minute, every day,’ ” Sarah T. Roberts, an expert on commercial content moderation at the University of California, Los Angeles, said. “That’s something that they came up with, that they facilitated. We’re inside the parameters of the potentials and possibilities that have been meted out in the architecture and in the economics of these platforms. It’s only been a decade and a half, at most, and it’s so second nature.”
YouTube has denied that rabbit holes leading toward radicalization exist on its platform; it also says that, despite what researchers claim, “extreme” videos are not more engaging or algorithmically favored. At the same time, the company has said that it has tuned its recommendation systems to redirect users who search for borderline content about breaking news toward more authoritative sources. (What counts as “authoritative” among mainstream outlets such as Fox News and CNN is itself a sticking point.) As the Times reported, in a recent article about the apparent power of alt-right YouTubers, the company also appears to have tweaked its recommendation system to push users to watch videos on new and varied subjects. (In theory, helping viewers find new interests will keep them engaged in the long run.) Recently, it also announced that it would give greater control over recommendations to its users, who will now be able to more easily prevent individual YouTube channels from popping up among their suggested videos.
There are commercial reasons, it turns out, for fighting hate speech: according to a survey by the Anti-Defamation League, fifty-three per cent of Americans reported experiencing online hate or harassment in 2018—rates of bigoted harassment were highest among people who identified as L.G.B.T.Q.—and, in response, many spent less time online or deleted their apps. A study released last year, by Google and Stanford University, defined toxic speech as a “rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” As part of the Creators for Change program, YouTube has drawn up lesson plans for teachers that encourage students to “use video to find your voice and bring people together.” Teen-agers posting videos disputing toxic ideas are engaged users, too.
I asked YouTube’s representatives why they didn’t use the Redirect Method to serve Creators for Change videos to people who search for hate speech. If they valued what their ambassadors had to say, why wouldn’t they disseminate those messages as effectively as possible? A representative explained that YouTube doesn’t want to “pick winners.” I brought that message back to Libby Hemphill, the computer-science professor. “I wish they would recognize that they already do pick winners,” she said. “Algorithms make decisions we teach them to make, even deep-learning algorithms. They should pick different winners on purpose.” Schaffer suggested that YouTube’s insistence on the appearance of neutrality is “a kind of Stockholm syndrome. I think they’re afraid of upsetting their big creators, and it has interfered with their ability to be aggressive about implementing their values.”
The Creators for Change program is open about its bias and, in that respect, suggests a different way of thinking about our social platforms. Instead of aspiring, unrealistically, to make them value-neutral meeting places—worldwide coffee shops, streaming town squares—we could see them as forums analogous to the United Nations: arenas for discussion and negotiation that have also committed to agreed-upon principles of human dignity and universal rights. The U.N., too, has cultivated celebrity “ambassadors,” such as David Beckham, Jackie Chan, and Angelina Jolie. Together, they promote a vision of the world not merely as it is—a messy place full of violence and oppression—but as we might like it to be. Perhaps this way of conceptualizing social platforms better reflects the scale of the influence they wield.
After YouTube removed ads from Steven Crowder’s channel, he gathered his co-hosts and his lawyer and started a live stream—also on YouTube—to respond. YouTube could keep their money, he said, so long as he could “keep the reach” of his four million followers. (That reach draws in money from fans, who buy merchandise, and sponsors.) Crowder also claimed that YouTube’s partner managers had courted him years ago, saying that they wanted more conservative voices on the platform. He went on, “If they said, ‘Listen, we’ve changed our minds, you cannot say anything that offends anybody and we don’t want conservatives here’—you know what? I’d walk off into the sunset.”
Almost certainly, this claim is a bluff. “YouTube has a complete monopoly on video hosting, and they know it,” Lindz Amer, a queer and nonbinary YouTuber, told the Guardian, after the controversy with Carlos Maza. When Amer tried to leave YouTube for another platform, Vimeo, their average audience size went from a hundred thousand views per video to five.
Recently, Gregory Brown, of AsapScience, also expressed his disappointment with the company. “We opened ourselves up to unbelievable hate after coming out on @YouTube,” he wrote, on Twitter, of his relationship with his co-host, Moffit. “But we ignore it, because we love educating people about science on the platform.” When I spoke to him a few days later, he told me that he was “trying to be as empathetic to YouTube as possible—which, at times, is in conflict with what I really feel.” It hurt to know that it was possible to “make money, right now, on YouTube, by being homophobic.” But Brown also said that he recognized the complexity created by YouTube’s global reach. “YouTube is trying to moderate and keep their values while trying to make money off, literally, the entire world,” he said. At the same time, he continued, “Being able to educate people on this huge scale is so important to me. I still feel that YouTube is the best place to do it.”
Steve Chen, the YouTube co-founder, recalled the platform’s early days, when he often intervened to highlight videos that could only be found on YouTube and that the algorithm might not pick up. He might place amateur footage of the destruction wrought by Hurricane Katrina, or documentation of a racist attack at a bus stop in Hong Kong, on the home page. “I remember a video that only had twenty-five views,” he told me. “I said, ‘There’s no way that this piece of content from some photographer in Africa would ever be viewed from around the world without YouTube, so I’m going to feature this.’ ” Creators for Change, he explained, sounded like a “more evolved version of what I was trying to do on the home page—trying to showcase, encourage, and educate users that this is the kind of content that we want on YouTube.” But the technological developments that have made the platform’s global reach possible have also made such efforts more difficult. “Regardless of the proportions or numbers of great content that YouTube wants on the site, it’s the way the algorithm works,” he said. “If you’re watching this type of content, what are the twenty other pieces of content, among the billions, that you’re going to like most? It probably won’t be choosing out of the blue.” Chen, who left YouTube a decade ago, told me that he doesn’t envy the people who have to decide how the system should work. “To be honest, I kind of congratulate myself that I’m no longer with the company, because I wouldn’t know how to deal with it,” he said.
Brown, for his part, wanted the platform to choose a point of view. But, he told me, “If they make decisions about who they’re going to prop up in the algorithm, and make it more clear, I think they would lose money. I think they might lose power.” He paused. “That’s a big test for these companies right now. How are they going to go down in history?”