A Blog by Jonathan Low

 

Sep 28, 2024

The Reason So Many Senior Executives Are Leaving OpenAI

The departures are outgrowths of tensions within the company, and with its most prominent investors, over its strategic direction and ethical standards. 

The split is between employees who want to grow AI while maintaining public-safety standards and CEO Altman and investors whose primary focus is on generating as much money as possible. As is usual in tech, the latter group has asserted its dominance despite growing hesitation on the part of corporations and consumers to adopt AI, given its many well-known problems. OpenAI's leaders seem to feel they should forge ahead to grab market share and worry about problems later, if ever. JL

Deepa Seetharaman reports in the Wall Street Journal:

CTO Mira Murati is one of more than 20 OpenAI researchers and executives who have quit this year, including several co-founders. The exits are public eruptions of tensions that have been growing in the company behind ChatGPT since CEO Sam Altman returned following his brief ouster last year. Some tensions are related to conflicts between OpenAI’s original mission to develop AI for the public good and new initiatives to deploy moneymaking products. Others relate to chaos and infighting among executives worthy of a soap opera. Current and former employees say OpenAI has rushed product announcements and safety testing, and lost its lead over rivals. The company’s leadership ranks have been depleted.

In less than two years, OpenAI has gone from a little-known nonprofit lab working on obscure technology to a world-famous business whose chief executive is the face of the artificial-intelligence revolution.

That change is tearing the company apart.

On Wednesday, OpenAI’s chief technology officer became the latest high-profile executive to announce an exit, departing as the company prepares to become a for-profit corporation. The exits are public eruptions of tensions that have been growing in the company behind ChatGPT since CEO Sam Altman returned following his brief ouster last year. 

Some tensions are related to conflicts between OpenAI’s original mission to develop AI for the public good and new initiatives to deploy moneymaking products. Others relate to chaos and infighting among executives worthy of a soap opera.

CTO Mira Murati is one of more than 20 OpenAI researchers and executives who have quit this year, including several co-founders.

Mira Murati is the latest high-profile executive to announce an exit from OpenAI. Photo: Slaven Vlasic/Getty Images

Current and former employees say OpenAI has rushed product announcements and safety testing, and lost its lead over rival AI developers. They say Altman has been largely detached from the day-to-day—a characterization the company disputes—as he has flown around the globe promoting AI and his plans to raise huge sums of money to build chips and data centers for AI to work.

OpenAI has also been evolving into a more normal business, as Altman has described it, since his return. The company, which has grown to 1,700 employees from 770 last November, this year appointed its first chief financial officer and chief product officer. It added people with corporate and military backgrounds to its board of directors. It is seeking to raise $6.5 billion from backers including Microsoft, Apple and Nvidia. And OpenAI is increasingly focused on building out its product offerings, which some longtime employees say takes focus from pure research.

Some at the company say such developments are needed for OpenAI to be financially viable, given the billions of dollars it costs to develop and operate AI models. And they argue AI needs to move beyond the lab and into the world to change people’s lives. 

Others, including AI scientists who have been with the company for years, believe infusions of cash and the prospect of massive profits have corrupted OpenAI’s culture.

One thing nearly everyone agrees on is that maintaining a mission-focused research operation and a fast-growing business within the same organization has resulted in growing pains.

“It’s hard to do both at the same time; product-first culture is very different from research culture,” said Tim Shi, an early OpenAI employee who is now chief technology officer of AI startup Cresta. “You have to attract different kinds of talent. And maybe you’re building a different kind of company.”

Altman has been in Turin, Italy, for Italian Tech Week as this week’s events unfolded. Speaking in a fireside chat there Thursday, he denied that employee departures were related to the restructuring plans and said, “I think this will be hopefully a great transition for everyone involved, and I hope OpenAI will be stronger for it, as we are for all of our transitions.”

OpenAI’s CFO sent a letter to investors Thursday saying the company is on track to close its funding round by next week and would host a series of calls afterward to introduce them to key leaders from its product and research teams.

Altman denied that employee departures were related to the restructuring plans. Photo: Costantino Sergi/Zuma Press

OpenAI’s focus on making steady improvements to ChatGPT and other products has borne fruit. Its annualized revenue—a projection of yearly receipts based on recent results—recently hit about $4 billion, more than triple from the same time last year. It is still losing billions a year, however.

Continued growth will depend on maintaining its technological edge. The company’s next foundational model, GPT-5—expected to be a major leap in its development—has faced setbacks and delays. Meanwhile, rival companies have launched AI models roughly on par with what OpenAI is offering. Two of them, Anthropic and Elon Musk’s xAI, were started by former OpenAI leaders.

The intensifying competition has frustrated researchers who valued working at OpenAI because it was the perceived leader in the space. An OpenAI spokeswoman declined to respond to most specific points in this article. “We don’t agree with these characterizations, but recognize that evolving from an unknown research lab into a global company that delivers advanced AI research to hundreds of millions of people in just two years requires growth and adaptation,” she said, adding that Altman has been very engaged in company strategy and hiring and has driven the build-out of its product division.

“We are deeply committed to our mission and are proud to release the most capable and safest models in the industry,” she said.

Wall Street Journal owner News Corp has a content-licensing partnership with OpenAI.

The following account is based on interviews with current and former employees of OpenAI, as well as people close to the company.

A failed reunion

OpenAI employees call Altman’s firing and unfiring last November “the blip” because it lasted just a few days. 

But the blip’s repercussions are still working their way through the company.

The first sign was the sudden absence of one of OpenAI’s co-founders and most respected research scientists, Ilya Sutskever.

It was Sutskever who delivered the news to Altman that he had been fired, and who then publicly apologized for his role in the ouster. He never returned to work in the office.

In May, Sutskever resigned. Soon after, Jan Leike, who co-led a safety team with Sutskever, quit as well. OpenAI executives worried their departures might trigger a larger exodus and worked to get Sutskever back. 

OpenAI has focused on making steady improvements to ChatGPT and other products. Photo: Andrey Rudakov/Bloomberg News

Murati and President Greg Brockman told Sutskever that the company was in disarray and might collapse without him. They visited his home, bringing him cards and letters from other employees urging him to return.

Altman visited him as well and expressed regret that others at OpenAI hadn’t found a solution.

Sutskever indicated to his former OpenAI colleagues that he was seriously considering coming back. But soon after, Brockman called and said OpenAI was rescinding the offer for him to return.

Internally, executives had run into trouble determining what Sutskever’s new role would be and how he would work alongside other researchers, including his successor as chief scientist. 

 

Soon after, Sutskever launched a new company focused on developing the most advanced AI, without the distraction of releasing products along the way. Called Safe Superintelligence, it has raised $1 billion.

Sutskever hasn’t publicly commented on the circumstances of his departure. 

In a May 17 post on X, Leike said: “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point…over the past years, safety culture and processes have taken a back seat to shiny products.”

He went to work for Anthropic.

Rushed launches

This spring, tensions flared up internally over the development of a new AI model called GPT-4o that would power ChatGPT and business products. Researchers were asked to do more comprehensive safety testing than initially planned, but given only nine days to do it. Executives wanted to debut 4o ahead of Google’s annual developer conference and take attention from their bigger rival.

The safety staffers worked 20-hour days and didn’t have time to double-check their work. The initial results, based on incomplete data, indicated GPT-4o was safe enough to deploy. 

But after the model launched, people familiar with the project said a subsequent analysis found the model exceeded OpenAI’s internal standards for persuasion—defined as the ability to create content that can persuade people to change their beliefs and engage in potentially dangerous or illegal behavior. 

 

The team flagged the problem to senior executives and worked on a fix. But some employees were frustrated by the process, saying that if the company had taken more time for safety testing, they could have addressed the problem before it got to users.

The OpenAI spokeswoman said that the higher-risk indicators the team detected were erroneously elevated by a flaw in the methodology, and that GPT-4o was safe to deploy under the company’s criteria. OpenAI “continues to be confident in 4o’s medium risk assessment,” she said. 

The rush to deploy GPT-4o was part of a pattern that affected technical leaders like Murati.

The CTO repeatedly delayed the planned launches of products including search and voice interaction because she thought they weren’t ready. 

Other senior staffers also were growing unhappy.

John Schulman, another co-founder and top scientist, told colleagues he was frustrated over OpenAI’s internal conflicts, disappointed in the failure to woo back Sutskever and concerned about the diminishing importance of its original mission.

In August, he left for Anthropic.

High-level drama

In addition to the other executive departures, one of Altman’s key lieutenants—Brockman—is on sabbatical.

Brockman is seen as a longtime loyalist to OpenAI. When the company was founded in 2015, it operated out of his living room. Later, he got married at the company’s offices on a workday.

But as OpenAI grew, his management style caused tension. Though president, Brockman didn’t have direct reports. He tended to get involved in any projects he wanted, often frustrating those involved, according to current and former employees. They said he demanded last-minute changes to long-planned initiatives, prompting other executives, including Murati, to intervene to smooth things over.

OpenAI President Greg Brockman and Altman at an event in Seoul last year. Photo: Seongjoon Cho/Bloomberg News

For years, staffers urged Altman to rein in Brockman, saying his actions demoralized employees. Those concerns persisted through this year, when Altman and Brockman agreed he should take a leave of absence.

Brockman wrote on X last month, “I’m taking a sabbatical through the end of year. First time to relax since co-founding OpenAI 9 years ago.” He is expected to return. 

But the company’s leadership ranks have been depleted. On the same day Murati resigned, OpenAI’s chief research officer and vice president of research left as well.

Altman now needs to strengthen his executive team, try to close a multibillion-dollar fundraising round vital to the company’s ability to keep operating, and begin the complex process of converting a nonprofit organization into a for-profit company. Investors in the new round will be able to pull back their money if OpenAI doesn’t complete the conversion within two years. 

And he has to do it all while keeping up morale at a company beset by very public crises and challenges.

One OpenAI employee on the company’s technical team wryly posted on X Wednesday night that, “Today I have made the difficult decision to stay at OpenAI.”
