Digitally transmitted pings, buzzes, scores, pop-up questions, and the like attract attention and, usually, a reaction, which was exactly the point of applying such stimuli. But the longer-term question is whether the broader societal outcomes already becoming evident are worth the short-term benefits to the companies that set these behavior-altering processes in motion. JL
Tyler Bettilyon reports in OneZero:
Feedback is a powerful determinant of human behavior. We respond to social pressure, blinking red lights, and new scores. Whatever gets measured gets managed. Measuring impact necessitates measuring and managing users. Companies will send you measurements, multiple times per day. “As attention gets more competitive, we have to crawl deeper down the brainstem to get you addicted to getting attention from other people.” Most of us cannot help but be affected. We’ve become so accustomed to this feedback loop that the lack of notifications is a signal of its own. “First we shape our tools, thereafter they shape us.”
This morning, as all mornings, I opened Twitter, LinkedIn, and Medium. I know I’ve been in a writing slump on Medium because I only had nine notifications this morning. I probably haven’t been doing enough social media marketing either: only three notifications on Twitter. On LinkedIn I had eight notifications, but somehow none of them were actually related to me. Maybe LinkedIn is trying to tell me something. Duolingo is definitely trying to tell me something: I broke my language-learning streak.

Last week, while traveling, I took five Lyfts and provided 25 stars. Some say I’m too generous, but unless something goes seriously wrong during the drive I give five out of five. I’m not about to put someone’s livelihood at risk because I don’t like their music or because they’re not an effusive conversationalist. I also gave surveys to 15 students via Google Forms last week. Their average satisfaction with my class was evidently 3.9 out of 4. I wonder if the students treat my survey the same way I treat Lyft drivers, but I’m grateful for the ego boost either way. I also wonder if Google learned as much about my students as I did: maybe they’ll be served an ad for a competitor’s computer science class.

Feedback is a powerful determinant of human behavior. We respond to social pressure, blinking red lights, and new high scores. An old managerial adage claims that whatever gets measured gets managed. Of course, a zillion companies want you to prioritize their products and services over everything else. They will send you various measurements, multiple times per day, often encircled in an urgency-inducing red bubble, begging you to log back into their app. Most of us cannot help but be affected by this feedback, often without ever realizing it. As the media theorist Marshall McLuhan is often credited with observing, “first we shape our tools, and thereafter they shape us.”

Much has been said about the enormous data extraction operation that has been evolving online. Entire books, such as security expert Bruce Schneier’s Data and Goliath and Shoshana Zuboff’s The Age of Surveillance Capitalism, are filled with disturbing revelations of widespread data mining. But the cycle doesn’t just end once the data is harvested. Obviously, we can thank surveillance capitalism for the relentless onslaught of “relevant” ads we see every day. But these companies also deliver some of those measurements back to us as feedback. The injection of data back into our lives creates a pervasive and subtle motive that now guides significant aspects of everything we do.

Quora asks me, based on my profile, “Can you answer this algorithms question for Gurdeep?” I don’t respond, but I feel a little guilty about it. Sorry, Gurdeep.

Facebook tells me, “You made Jason mad!” Shortly thereafter, Facebook shows me one of Jason’s posts. The post makes me mad. I’m low on willpower and, unable to resist the urge, I type back: “Fuck you, Jason.” We don’t speak anymore and I can’t tell if I’m better or worse for it.

Strava tells me I just set a personal record. I feel pride as I kick off my shoes and look ahead to my next run.

A friend bemoans the dating apps that tell him how desirable he is, in precise, quantified detail. I feel sympathy; he feels insecure.

If you’re looking at a screen, it’s probably given you some unsolicited feedback in the last few minutes. Hell, your phone has probably interrupted you (buzz buzz, please look at me!) to deliver some algorithmically generated advice.
We’ve become so accustomed to this feedback loop that even the lack of notifications is a signal of its own. Really, no one liked that photo? Ha, that cowardly internet stranger doesn’t have the chutzpah to argue with me, the obviously braver internet stranger. Why hasn’t anyone matched with me yet?

This feedback loop has, at times, made me anxious AF. Habitually checking my phone. Nervous that someone might have digitally demanded my attention. Excited that someone might have paid attention to me. Worried I might offend someone I love. Giddy that I might have received praise. I was never a complete wreck, but my relationship with some apps and services has certainly made me feel less healthy, happy, and whole overall.

Amid all that anxiety, I slowly broke up with Facebook over several years, first turning off notifications, then removing the app from my phone completely. The occasional joy of connecting with an old friend kept me coming back for a long time. But I finally deleted my Facebook account in 2019, as the flood of scandals demonstrated how little Facebook actually cared about helping me “connect” with my “friends.”

In moments of candor, and among people they trust, even the most faux-zen CEOs of Silicon Valley will admit that the purpose of their business is to make money. The mission is shareholder value, and the “mission” is just an exercise in product-market fit. In the case of social media, the money comes from our attention. Tristan Harris, a former design ethicist at Google and founder of the Center for Humane Technology, has been one of the most prominent voices criticizing the vicious competition for our attention on such platforms.

Speaking to Congress in June, Harris told legislators, “as attention gets more competitive, we have to crawl deeper down the brainstem to your identity and get you addicted to getting attention from other people.” He compared design tactics used by tech companies to slot machines — pull to refresh and you might win a new notification! — and described infinite-scroll features as an attempt to “remove the stopping cue,” keeping you in the app. He described people who — like 2017 me — were becoming “obsessed with the constant feedback they get from others.”

Social media use, especially at high levels, has been tied to rising levels of anxiety and depression among teenagers. Some research has even established causality, finding that those who deactivate their social media accounts spend more time socializing in person, which has fairly well-established mental health benefits, and that intentionally limiting social media use caused significant reductions in anxiety among participants.

This isn’t earth-shattering news. Like smokers, many social media users realize the downsides while continuing to use the product. I tried an informal experiment: in a new private browsing window I opened Google, typed “how to quit,” and let Google’s data-hungry oracle open a window into humanity’s soul.

Unlike cigarette companies, social media operations do not have a chemically addictive substance to boost sales. But they do have an unnerving stockpile of psychological ammunition to leverage against our fragile minds. Harris and Zuboff both describe these data stockpiles as a form of power imbalance. Facebook, Twitter, and Google have an almost alchemical ability to transmute billions of data points into an algorithmic engine optimized for engagement.
Harris likens this process to a virtual voodoo doll, with notifications and new content playing the role of a pushpin: what happens if I poke at this bias, or that desire?

In 2014, Facebook got caught with the pushpin still in the doll. Researchers at Facebook intentionally manipulated the news feeds of more than 680,000 people. Some were shown more negative posts, others were shown more positive posts, and the resulting paper confirmed that emotions are contagious online: those shown increased negativity displayed increased negativity in their own posts, and vice versa. Facebook never asked any users for consent to participate in this experiment. In a pattern that has become quite familiar since 2014, the researcher at Facebook was “very sorry.”

Contemporaneous reporting in the wake of the 2014 scandal revealed what probably should have been obvious: Facebook was running tons of experiments on its platform. It would be financially irresponsible for Facebook not to measure the impact of changes to its design and algorithms, and measuring that impact necessitates measuring and managing its users. The business model of Facebook, Twitter, Google, and others relies on engagement. The risk of competition mandates that those companies encourage their users to stay on their platforms and use their services frequently.

Dr. William Brady, a postdoctoral fellow at Yale University, has found that moral and emotional content is especially gripping on these platforms and that outrage is among the most powerful emotions for spreading content online. There are at least two psychological factors driving this phenomenon. First, there are personal motivations related to humanity’s ever-present group orientation. Brady told me that when we express outrage or joy about an issue connected to morality, we “signal our own attitudes and beliefs to our ingroup.” His work has found that emotional content tends to spread much better within — rather than between — distinct social groups online.

The second, Brady told me, is that we might be innately wired to respond to emotional content. “There is also something about moral and emotional content that our perception system is attuned to,” he explained. “When people express emotions, it’s important that we know how to navigate that emotion in the social world. This draws us into that content.”

I asked if human psychology alone was enough to explain the swampy state of social media. Brady said he isn’t ready to take that for granted: “If we want to understand why this content is spreading, we also have to understand the specific design of social media and how it might amplify this content. We get rewarded from our group in this immediate and quantifiable way, so it’s possible that this social feedback could amplify social content.” But he cautioned that his work hasn’t clearly established the impact of algorithms and design decisions in this process, although he is currently doing research along those lines.

These perspectives also help explain a paradoxical finding from a 2018 study: following people you disagree with actually increased polarization. Tristan Harris says the platforms are cynically built to maximize engagement. Brady and others have established that outrageous content is highly engaging. If you buy those arguments, it stands to reason that the voodoo specialists at Facebook and Twitter would want to push their needles straight into our amygdalae.
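None of these companies publish their ranking code, so this can only be a sketch, but the basic shape of an engagement-optimized feed is easy to illustrate. Here is a deliberately toy version in Python; every field name and weight is invented for illustration and is not any platform’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # a model's guess at how likely you are to click
    predicted_replies: float  # a model's guess at how likely you are to reply
    predicted_outrage: float  # a model's guess that the post provokes outrage

def engagement_score(post):
    # Score a post purely by the interaction it is expected to generate.
    # Nothing here rewards accuracy, civility, or well-being; if outrage
    # reliably produces reactions, outrage gets ranked up.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_replies
            + 3.0 * post.predicted_outrage)

def rank_feed(posts):
    # Order the feed so the most "engaging" posts appear first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("A measured take on local zoning policy", 0.02, 0.01, 0.01),
    Post("You won't BELIEVE what they did now!!!", 0.08, 0.05, 0.30),
])
print(feed[0].text)  # the outrage bait wins the top slot
```

The point of the sketch is the objective function: nothing in it asks whether the top post is true, kind, or good for you, only whether you are likely to react to it.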
The platform operators aren’t the only ones interested in our eyeballs. Advertisers, influencers (who are advertisers disguised as your friends), manipulation mercenaries like Cambridge Analytica, and state-sponsored propagandists have all built their own efforts on top of social media firms’ engagement-optimization machines. Dr. Darren Linvill, an associate professor at Clemson University, studies digital disinformation, especially disinformation spread by state actors. Linvill commented in a recent article that professional trolls “understand how to harness our biases” and “know what pressure points to push and how best to drive us to distrust our neighbors.”

Dr. Linvill told me that “absolutely the design of the system fuels disinformation,” from the near-frictionless creation of anonymous accounts to the data-hungry engagement algorithms. He also explained that data collection is a kind of double-edged sword: the data that gives Twitter and Facebook the power to manipulate us is the same data that helps researchers like Linvill identify and study trolls.

Linvill reminded me that “if you’re going to police a platform, you need resources.” Those resources include money, person-power, and data. He also expressed deep frustration with Facebook and Google, which are enormously well resourced but hoard their data troves like jealous dragons. Similarly, the algorithms that process this data are guarded trade secrets. Without access to the algorithms and the data, it’s hard to quantify their impact on the spread of content. But I’ll go out on a limb and speculate that it’s not trivial.

In my view, none of this is inherent to social media, the internet as a whole, or even to a “data-driven” lifestyle generally. The worst aspects highlighted here are the specific result of business models predicated on one-sided data extraction, intense competition for our limited attention, and a cynical reduction of people to the metrics they generate.

Personally, I loved Facebook for many years. From 2008 to 2012, I widely extolled the virtues of connecting with friends and family scattered around the globe. I felt it broadened my horizons, gave me a more cosmopolitan mindset, and certainly exposed me to many new and interesting ideas. It’s not hard to imagine versions of social media where that’s all still true, but it is hard to imagine the techie robber barons of our new gilded age building such systems out of the goodness of their hearts.

In his testimony before Congress, Harris suggested that many of these services could be engineered in prosocial ways. Imagine if Facebook gave you a notification to “give your eyes a rest” after you’ve been in the app for 30 minutes straight. Instagram recently announced it will start asking “Are you sure you want to post that?” when a message is similar to ones that have already been flagged or reported by users.

Similarly, I like that Duolingo nags me a little when I stop practicing. Yes, it is an example of what Harris calls “persuasive technology,” and it does serve to keep me engaged with the app, but it’s also nagging me to learn something and keep practicing. A spokesperson for Duolingo told me that the company “discourage[s] binging Duolingo lessons in favor of shorter practice sessions on a regular basis.” They do run tests, and collect your data as a result, but they use those results to maximize learning outcomes.
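To make that difference in objectives concrete, here is a minimal, purely hypothetical sketch of an experiment judged by what users retained rather than how long they stayed in the app. The metric and numbers are invented; this is not Duolingo’s actual pipeline:

```python
from statistics import mean

# Hypothetical results from a randomized test of a new lesson format:
# each value is a user's quiz score one week after the lesson.
control_scores = [0.61, 0.72, 0.55, 0.68, 0.70, 0.64]  # current format
variant_scores = [0.70, 0.74, 0.66, 0.71, 0.78, 0.69]  # new format

def ship_variant(control, variant, min_lift=0.03):
    # Ship the new format only if it measurably improves what users
    # learned, not how long it kept them in the app.
    return mean(variant) - mean(control) >= min_lift

print(ship_variant(control_scores, variant_scores))  # True: retention improved
```

The pipeline is identical to an engagement experiment; only the metric being maximized differs.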
While Duolingo’s revenue is derived from advertising, the spokesperson indicated that they do not give data collected within their app to Google, which serves the ads.

Limiting our engagement with these apps and media ecosystems, and approaching them with intentionality, can alleviate some of the risks. One study found that limiting social media use to 30 minutes per day had positive impacts on well-being. But there are risks to checking out completely as well: one study found that while political polarization decreased among those who stopped using social media, so did factual news knowledge. The Center for Humane Technology maintains a long list of suggestions for taking control of your own engagement.

Harris also argued in his testimony to Congress that regulating these companies like fiduciaries — as we do with lawyers, financial advisors, and doctors — could compel them to abandon the “race to the bottom of the brainstem” by forcing them to build their services with the best interests of their customers at heart. The Electronic Frontier Foundation, a nonprofit focused on digital civil liberties, also endorses the idea of treating companies that collect significant customer data as “information fiduciaries.” The EFF makes the case that, like the financial advisors who manage your money, those who collect data “owe a duty of care, meaning they must act competently and diligently to avoid harm to their customers.”

It’s clear to me that some data collection and processing has incredible upside. In the hands of good-faith organizations with better data management protocols, I believe the processing of personal data can absolutely improve our lives. In some cases, as in health care, such efforts can even lengthen and save our lives. Little nudges and notifications can be deployed toward prosocial goals.

But this upside is cold comfort in a world where behemoth corporations surveil our every action, without disclosure or consent, and store the results indefinitely in Mark Zuckerberg’s simmering cauldron of neural-network fodder, bubbling endlessly while data scientists try to alchemize our data into gold.

The technology behemoths of the last decade haven’t taken their responsibility to their customers and society nearly as seriously as they have taken their fiduciary responsibility to their shareholders. Without an external incentive — like new legislation or a massive loss of market share from customer rejection — Facebook, Google, Twitter, and the rest will continue down the path of extraction and exploitation. We can try to individually resist these companies’ digital voodoo, but it’s past time we regulated the practice as well.