A Blog by Jonathan Low

 

Sep 3, 2024

The Threat to OpenAI and ChatGPT From Open-Source Competition Is Growing

Cheaper, faster and more accessible has almost always won in tech - and AI increasingly appears to be no exception. JL

Christopher Mims reports in the Wall Street Journal:

OpenAI, maker of ChatGPT, will face tougher competition than ever in the AI market. Much of that new competition is coming from startups that promise to undercut OpenAI’s services with ones that could be cheaper to use, and better at certain narrow tasks. Many experts believe we will eventually rely on a variety of AIs—some from closed providers like OpenAI and Google, others from open-source challengers. That mix will determine whether it was worth it for companies to spend tens of billions of dollars building advanced AIs. Retaining customers could be an issue for OpenAI. Most companies do not want to be beholden to a single AI vendor, and for now, switching between them is relatively easy.

Apple, Nvidia and Microsoft are in talks to invest in OpenAI, maker of ChatGPT, at precisely the moment when it’s become apparent that the company will face tougher competition than ever in the burgeoning artificial-intelligence market.

Much of that new competition is coming from startups that promise to undercut OpenAI’s services with ones that could be cheaper to use, and also better at certain narrow tasks.

At least one tech giant sees promise in the new crop of AI startups. Mark Zuckerberg, chief executive of Facebook’s parent, Meta Platforms, is positioning his company as a champion for the little guys, letting outside developers use Meta’s cutting-edge AI model, Llama, free of charge. Google has also released an open-source AI that’s not nearly as capable as Meta’s.

In a July letter, Zuckerberg argued that this open-source approach “will ensure that more people around the world have access to the benefits and opportunities of AI” without concentrating power in the hands of tech giants. 

Open-source software can be used commercially by pretty much anyone. Examples include the Android operating system, developed by Google but available for any manufacturer to use in mobile devices without paying.

That stands in contrast to the more typical, “closed” approach taken by companies that control who can use their software. Microsoft, for instance, charges manufacturers a licensing fee to install its Windows operating system on their computers. Apple doesn’t let other companies use its iPhone or Mac operating systems.

For the most part, OpenAI falls into this latter camp—charging end users and companies to access its most powerful models.

Many experts believe we will eventually all rely on a variety of AIs—some from closed providers like OpenAI and Google, others from the kind of open-source challengers Zuckerberg is championing. The nature of that mix will determine whether it was worth it for companies to spend tens of billions of dollars building advanced AIs.

The most recent example of that investment: Apple and Nvidia are in talks to join Microsoft in investing in OpenAI’s next round of financing, which would value the company at $100 billion.

Meanwhile, open-source AI is catching up to the big early movers, especially in day-to-day business uses that demand consistent performance and low costs.

Meta announced Thursday that versions of Llama have been downloaded nearly 350 million times by software developers and tinkerers—10 times what that number was a year ago. An apples-to-apples comparison of those numbers with ChatGPT isn’t possible, but OpenAI says the ChatGPT service now has 200 million weekly active users.

For many everyday applications, AIs that are trained to do only specific tasks can be better and cheaper to run, says Julien Launay. His startup, Adaptive ML, uses Llama to train small, customized AIs for companies. These smaller AIs can be more easily customized by users than giant, closed AIs like ChatGPT, he adds.

DoorDash, Shopify, Goldman Sachs and Zoom are among the companies that have said they use open-source AIs for tasks ranging from customer service to summarizing meetings.

Procore Technologies, which makes a platform for managing complicated construction projects, is a good example of how a variety of both closed and open AIs can be used for real-world tasks, from estimating costs to coordinating actual building work.

AI can be helpful at lots of points in that process. At the start of this year, when Procore first rolled out features that used large language models, the company relied on OpenAI’s ChatGPT, accessed through Microsoft’s cloud platform, says Rajitha Chaparala, vice president of product and AI at Procore. This kind of AI used to be costly, but prices have plummeted in the past 12 months.

Now, however, Procore has built software that makes it easy to use just about any AI throughout its system.

This illustrates one way retaining customers could be an issue for OpenAI. Most companies do not want to be beholden to a single AI vendor, and for now at least, switching between them is relatively easy. This pits OpenAI’s models, even if they’re getting more affordable, against ones that its customers are now able to develop on their own.
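The article doesn’t detail how Procore built this, but the general pattern it describes—a thin abstraction layer so application code isn’t tied to any one model vendor—can be illustrated with a minimal Python sketch. Every name below is hypothetical and for illustration only; none of it reflects Procore’s or any vendor’s actual API.

    # Hypothetical sketch of a provider-agnostic model layer.
    # All class and function names are illustrative, not a real API.
    from abc import ABC, abstractmethod


    class ChatModel(ABC):
        """The minimal interface the application codes against."""

        @abstractmethod
        def complete(self, prompt: str) -> str:
            ...


    class HostedClosedModel(ChatModel):
        """Adapter for a closed, hosted model (would call a vendor SDK)."""

        def complete(self, prompt: str) -> str:
            # In a real system this would call the vendor's API client.
            return f"[closed-model answer to: {prompt}]"


    class SelfHostedOpenModel(ChatModel):
        """Adapter for open-weights models served on your own infrastructure."""

        def complete(self, prompt: str) -> str:
            # In a real system this would hit a self-hosted inference endpoint.
            return f"[open-model answer to: {prompt}]"


    def summarize_meeting(model: ChatModel, transcript: str) -> str:
        # Application code depends only on the interface, so swapping vendors
        # is a configuration change rather than a rewrite.
        return model.complete("Summarize this meeting:\n" + transcript)


    if __name__ == "__main__":
        model: ChatModel = SelfHostedOpenModel()  # or HostedClosedModel()
        print(summarize_meeting(model, "Alice: ship Friday. Bob: agreed."))

The point of the pattern is that the cost of switching lives in a single adapter rather than throughout the codebase, which is why, as the article notes, moving between vendors stays relatively easy.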

Open-source AI may make sense when running on individual devices like new AI-enabled PCs and smartphones, says an OpenAI spokesman. In general, the company welcomes competition for any of its services, he adds, because it is confident it is best positioned to deliver the capabilities, prices and performance that software developers want.

All this competition from open-source AIs grows the pool of engineers who know how to use AI, which translates into growing demand for OpenAI’s services as well, the spokesman says.

An even greater level of transparency than most open-source AI models offer will be necessary when it comes to AI systems for sensitive fields like medicine and insurance, argues Ali Farhadi, CEO of the Allen Institute for Artificial Intelligence. In February, the Allen Institute, a nonprofit research group that aims to solve the world’s problems through AI, released its own open-source AI, and took the unusual step of also releasing all the data on which it was trained, and all the steps involved in tuning the model to offer better answers.

When it comes to bigger-picture concerns about safety, opinions vary as to which approach is more likely to stave off worst-case AI scenarios like the malign, all-powerful intelligence of Skynet in the “Terminator” movies. Advocates of closed systems say they have the resources and control to help prevent misuse of the technology, and that open-source tools can be abused by bad actors. Open-source backers say their systems are subject to public scrutiny, making it easier to detect problems and address unintentional harms that would be difficult to spot in closed systems.

Long before we get to the point of battling sentient AI overlords, though, those making systems of more modest power will have to prove their enormous investments are worthwhile.
