And, in fact, the highly paid tech workforce may suffer a double blow: not only loss of financial benefit for their work, but loss of jobs due to generative AI's ability to do those techie things like write code. JL
Wendy Liu reports in The Atlantic (image: Sarah Grillo, Axios):
OpenAI may still claim that it aims to “benefit humanity as a whole,” but its top brass will benefit the most. The result is a disparity between who does the work that enables these AI models to function and who gets to control and profit from them. AI research resembles a digital “enclosure of the commons,” whereby the informational heritage of humanity—a collective treasure that cannot really be owned by anyone—is seen by corporations primarily as a source of potential profit. This is the Silicon Valley model in a nutshell: in exchange for our data, we get free platforms. (And) it has reminded Silicon Valley of a fundamental truth: Tech workers are just workers.

Silicon Valley churns out new products all the time, but rarely does one receive the level of hype that has surrounded the release of GPT-4. The follow-up to ChatGPT can ace standardized tests, tell you why a meme is funny, and even help do your taxes. Since the San Francisco start-up OpenAI introduced the technology earlier this month, it has been branded as “remarkable but unsettling,” and has led to grandiose statements about how “things will never be the same.”
But actually trying out these features for yourself—or at least the ones that have already been publicly released—does not come cheap. Unlike ChatGPT, which captivated the world because it was free, GPT-4 is currently only available to non-developers through a premium service that costs $20 a month. OpenAI has lately made other moves to cash in on its products too. Last month, it announced a partnership with the consulting firm Bain & Company to help automate marketing campaigns and customer-service operations for its clients. And just a few weeks ago, the start-up announced a paid service that would allow other companies to integrate its technology into their own products, and Instacart, Snapchat, and Shopify have already done so.
By next year, OpenAI—a company that was basically unknown outside of tech just a few months ago—expects to rake in $1 billion in annual revenue. And it’s not the only company seeing dollar signs during this AI gold rush. Relatively new start-ups such as Anthropic now have billion-dollar valuations, while Alphabet and Meta have been breathlessly touting their AI investments. Every company wants an AI to call its own, just as they wanted social networks a decade ago or search engines in the decade before. And like those earlier technologies, AI tools can’t entirely be credited to corporate software engineers with six-figure salaries. Some of these products require invaluable labor from overseas workers who make far, far less, and every chatbot is created by ingesting books and content that have been published on the internet by a huge number of people. So in a sense, these tools were built by all of us.
The result is an uncomfortable disparity between who does the work that enables these AI models to function and who gets to control and profit from them. This sort of disparity is nothing new in Silicon Valley, but the development of AI is shifting power further away from those at the bottom at a time when layoffs have already resulted in a sense of wide-ranging precarity for the tech industry. Overseas workers won’t reap any of these profits, nor will the people who might have aspects of their work—or even their entire jobs—replaced by AI, even if their Reddit posts and Wikipedia entries were fed into these chatbots. Well-paid tech workers might eventually lose out too, considering AI’s coding abilities. In the few months since OpenAI has blown up, it has reminded Silicon Valley of a fundamental truth that office perks and stock options should never have been able to disguise: Tech workers are just workers.
The tech industry as a whole may be unabashedly profit-driven despite its lofty rhetoric, but OpenAI wasn’t at first. When the start-up was founded in December 2015, it was deliberately structured as a nonprofit, tapping into a utopian idea of building technology in a way that was, well, open. The company’s mission statement expresses that its aim is “to benefit humanity as a whole,” noting that “since our research is free from financial obligations, we can better focus on a positive human impact.”
The goal might have been worthy, considering all that could go wrong with true artificial intelligence, but it didn’t last. In 2019, citing the need to raise more money for its inventions, OpenAI reconfigured itself into a “capped-profit” company—an uneasy hybrid between for-profit and nonprofit in which returns to investors are capped at 100 times their initial investment. It has since acted like any other growth-hungry start-up, eager to raise its valuation at every turn. In January, Microsoft dropped $10 billion into OpenAI as part of a deal that gives Microsoft a license to use its technology (hello, Bing), while also providing the start-up with the immense computing resources needed to power its products. That sum creates an inherent tension between OpenAI’s stated commitment and investors’ desire to make good on their investments. The company’s original rhetoric of creating “public goods” bears little resemblance to a Bain partnership oriented around “hyperefficient content creation.” (When reached for comment, a spokesperson for OpenAI did not answer my question about how the company’s latest moves fit within its broader mission.)
This turn toward profit couldn’t possibly compensate for all the labor that contributed to OpenAI’s products. If the outputs of large language models such as GPT-4 feel intelligent and familiar to us, it’s because they are derived from the same content that we ourselves have used to make sense of the world, and perhaps even helped create. Genuine technical achievements went into the development of GPT-4, but the resulting technology would be functionally useless without the input of a data set that represents a slice of the combined insight, creativity, and, well, stupidity of humanity. In that way, modern AI research resembles a digital “enclosure of the commons,” whereby the informational heritage of humanity—a collective treasure that cannot really be owned by anyone—is seen by corporations primarily as a source of potential profit. This is the Silicon Valley model in a nutshell: Google organizes the world’s information in a manner that allows it to reap enormous profits through showing us ads; Facebook does the same for our social interactions. It’s an arrangement that most of us just accept: In exchange for our data, we get free platforms.
But even if our internet posts are now data that can be turned into profit for AI companies, people who contributed more directly have been more directly exploited. Whereas some researchers at OpenAI have made nearly $2 million a year, OpenAI reportedly paid outsourced workers in Kenya less than $2 an hour to identify toxic elements in ChatGPT’s training data, exposing them to potentially traumatic content. The OpenAI spokesperson pointed me to an earlier statement to Time that said, “Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content.”
Certainly, these global labor disparities are not unique to OpenAI; similar critiques of outsourcing practices have been leveled at other AI start-ups, in addition to companies, such as Meta, that rely on content moderation for user-generated data. Nor is this even a tech-specific phenomenon: Labor that is seen as simple is outsourced to subcontractors in the global South working under conditions that would not be tolerated by salaried employees in the West.
To recognize that these problems are larger than any one company isn’t to let OpenAI off the hook; rather, it’s a sign that the industry and the economy as a whole are built on an unequal distribution of rewards. The immense profits in the tech industry have always been funneled toward the top, instead of reflecting the full breadth of who does the work. But the recent developments in AI are particularly concerning given the potential applications for automating work in a way that would concentrate power in the hands of still fewer people. Even the same class of tech workers who are currently benefiting from the AI gold rush may stand to lose out in the future. Already, GPT-4 can create a rudimentary website from a simple napkin sketch, at a moment when workers in the broader tech industry have been taking a beating. In the less than four months between the release of ChatGPT and GPT-4, mass layoffs were announced at large tech companies, including Amazon, Meta, Google, and Microsoft, which laid off 10,000 employees just days before announcing its multibillion-dollar investment in OpenAI. It’s a tense moment for tech workers as a class, and even well-paid employees are learning that they can become expendable for reasons that are outside their control.
If anything, the move to cash in on AI is yet another reminder of who’s actually in charge in this industry that has spawned so many products with enormous impact: certainly not the users, but not the workers either. OpenAI may still claim that it aims to “benefit humanity as a whole,” but surely its top brass will benefit the most.