A Blog by Jonathan Low


Jul 3, 2024

Are Growing Lawsuits About AI Training Data Changing VCs' GenAI Economics?

AI training data costs were rising even before the slew of recent lawsuits made it apparent that the happy days of unimaginably vast profit margins, which so excited venture and other investors, might be coming to an end. 

On top of concerns about AI's slower-than-expected corporate uptake and the dearth of trained, skilled employees to implement it, the growing realization - most worrisome of all - that there simply might not be enough new data available for training has raised questions about what effect this may have on growth prospects as well as on investors' anticipated returns. JL 

Melissa Heikkila reports in MIT Technology Review:

The generative AI boom is built on scale. The more training data, the more powerful the model. But there’s a problem. AI companies have pillaged the internet for training data, and many data set owners have started restricting the ability to scrape. We’ve also seen a backlash against the AI practice of indiscriminately scraping online data, as users opt out of making their data available for training and lawsuits from artists, writers, and the New York Times claim that AI companies take their IP without consent or compensation. Thanks to the scarcity of quality data and demand pressure, data owners now have leverage. The music industry’s lawsuit sends the loudest message yet: High-quality training data is not free. 

The generative AI boom is built on scale. The more training data, the more powerful the model. 

But there’s a problem. AI companies have pillaged the internet for training data, and many websites and data set owners have started restricting the ability to scrape their websites. We’ve also seen a backlash against the AI sector’s practice of indiscriminately scraping online data, in the form of users opting out of making their data available for training and lawsuits from artists, writers, and the New York Times, claiming that AI companies have taken their intellectual property without consent or compensation. 

Last week three major record labels—Sony Music, Warner Music Group, and Universal Music Group—announced they were suing the AI music companies Suno and Udio over alleged copyright infringement. The music labels claim the companies made use of copyrighted music in their training data “at an almost unimaginable scale,” allowing the AI models to generate songs that “imitate the qualities of genuine human sound recordings.” 

 

But this moment also sets an interesting precedent for all of generative AI development. Thanks to the scarcity of high-quality data and the immense pressure and demand to build even bigger and better models, we’re in a rare moment where data owners actually have some leverage. The music industry’s lawsuit sends the loudest message yet: High-quality training data is not free. 

It will likely take a few years at least before we have legal clarity around copyright law, fair use, and AI training data. But the cases are already ushering in changes. OpenAI has been striking deals with news publishers such as Politico, the Atlantic, Time, the Financial Times, and others, exchanging publishers’ news archives for money and citations. And YouTube announced in late June that it will offer licensing deals to top record labels in exchange for music for training. 

These changes are a mixed bag. On one hand, I’m concerned that news publishers are making a Faustian bargain with AI. For example, most of the media houses that have made deals with OpenAI say the deal stipulates that OpenAI cite its sources. But language models are fundamentally incapable of being factual and are best at making things up. Reports have shown that ChatGPT and the AI-powered search engine Perplexity frequently hallucinate citations, which makes it hard for OpenAI to honor its promises.   

It’s tricky for AI companies too. This shift could lead them to build smaller, more efficient models, which are far less polluting. Or they may fork out a fortune to access data at the scale they need to build the next big one. Only the companies most flush with cash, and/or with large existing data sets of their own (such as Meta, with its two decades of social media data), can afford to do that. So the latest developments risk concentrating power even further into the hands of the biggest players. 

On the other hand, the idea of introducing consent into this process is a good one—not just for rights holders, who can benefit from the AI boom, but for all of us. We should all have the agency to decide how our data is used, and a fairer data economy would mean we could all benefit. 
