But the larger issue here appears to be the inherent conflict between tech's founding 'don't be evil' aspirational tenets and the reality that developments like AI can make a lot of people insanely rich, the ethics and dangers be damned. The claque on OpenAI's board who fired Altman were increasingly concerned by his desire to build a huge tech company which could compete with Google, Meta and Apple. This growth obsession included Altman soliciting investments from Middle East monarchs - and perhaps other global autocrats - not known for their concern about the niceties of human rights. The ultimate technical cause of his firing was the dysfunctional governance structure of OpenAI's board, a warning to all leaders. Ultimately, though, the issue was one of values, mission and money. And recent history has been brutally clear about where tech always ends up on that one. JL
Gerrit de Vynck and colleagues report in the Washington Post, and India Today Tech reports:
Why would OpenAI fire him? He was the face of the company and the top name in AI. The answer lies in a larger rift in the field, where a race to dominate the market competes with a drive to keep AI from advancing beyond human control: a disagreement over safety versus profit. Altman was perceived to be pushing OpenAI too aggressively, at the expense of safety. Some employees had expressed concerns about his focus on consumer products and driving up revenue, which they saw as at odds with the company’s mission to develop AI that benefits all of humanity. A more concrete concern was that Altman had not been forthcoming about his fundraising with autocratic regimes in the Middle East, which could use OpenAI’s technology to enable human rights abuses.
Under Altman, OpenAI built the pioneering AI chatbot ChatGPT, which has more than a billion visits.
“We are working hard to get back on track,” the person, who spoke on the condition of anonymity to discuss private matters, said of talks related to Altman’s return.
Altman learned that he was being fired in a Google Meet on Friday. According to a post on X by OpenAI co-founder and president Greg Brockman, who quit the company in solidarity with Altman, the news was delivered by Ilya Sutskever, the company’s chief researcher. The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter.
The schism between Altman and Sutskever mirrors a larger rift in the world of advanced AI, where a race to dominate the market has been accompanied by a near-religious movement to prevent AI from advancing beyond human control. While questions still remain about what spurred the board’s decision to oust Altman, growing tensions had become impossible to ignore as Altman rushed to launch products and build the next big technology company.
As rumors swirled around the reason behind Altman’s firing, OpenAI’s board has remained silent. But according to a person familiar with the board’s proceedings, who spoke on the condition of anonymity to discuss sensitive matters, the real safety concern was that Altman had not been forthcoming about his aggressive fundraising strategies with autocratic regimes in the Middle East, who could use OpenAI’s artificial intelligence technology to build digital surveillance systems or enable human rights abuses.
OpenAI declined to comment on Altman’s fundraising activities.
On Saturday, OpenAI’s investors were already trying to woo Altman back. “Khosla Ventures wants [Altman] back at [OpenAI] but will back him in whatever he does next,” Vinod Khosla, one of the company’s investors, said in a post on X. Altman and Brockman could not be reached for comment.
Some OpenAI employees declared their support for Altman and his potential return Saturday evening. After the just-departed CEO tweeted “i love the openai team so much,” dozens of staffers, including top executives, flooded X with retweets of his message, adding heart emojis in different colors and other messages of appreciation. Tech leaders and onlookers following the boardroom drama interpreted the simultaneous outpouring as a signal to the board and to OpenAI investors that they could face mass resignations if Altman wasn’t brought back.
Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by Chief Operating Officer Brad Lightcap that was obtained by The Washington Post.
“We still share your concerns about how the process has been handled,” Lightcap said in the memo. “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”
Altman’s ouster also caught rank-and-file employees within OpenAI off-guard, according to a person familiar with internal conversations, who spoke on the condition of anonymity to discuss private conversations. The staff is “still processing it,” the person said.
In text messages that were shared with The Post, some OpenAI research scientists said Friday afternoon that they had “no idea” Altman was going to be fired, and described being “shocked” by the news. One scientist said they were learning about what happened with Altman’s ouster at the same time as the general public.
Over the past year, some OpenAI employees have expressed concerns with Altman’s focus on building consumer products and driving up revenue, which some of those employees saw as being at odds with the company’s original mission to develop AI that would benefit all of humanity, said a person familiar with employees’ thinking, who spoke on the condition of anonymity. Under Altman, OpenAI had been aggressively hiring product development employees and building up its consumer offerings. Its technology was being used by thousands of start-ups and larger companies to run AI features and products that are already being pitched and sold to customers.
During the company’s first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.
To the tech industry, that announcement was viewed as OpenAI wanting to become a major player on its own and not limiting itself to building AI models for other companies.
“This is not your standard start-up leadership shake-up. 10,000’s of start-ups are building on OpenAI,” Aaron Levie, CEO of cloud storage company Box, said on X. “This instantly changes the structure of the industry.”
OpenAI started as a nonprofit research lab launched in 2015 to safely build superhuman AI and keep it away from corporations and foreign adversaries. Believers in that mission bristled against the company’s transformation into a juggernaut start-up that could become the next big name in Big Tech.
Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”
“My hope is that we can do a lot more good for the world than just become another corporation that gets that big,” D’Angelo said in the interview. He did not respond to requests for comment.
Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity: Helen Toner, the director of strategy and foundational research grants for the Center for Security and Emerging Technology at Georgetown University, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corp. earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety. McCauley is also involved in the work.
Toner occupies the board seat once held by Holden Karnofsky, a former hedge fund executive and CEO of Open Philanthropy, which invested $30 million in OpenAI to gain a board seat and influence the company toward AI safety. Karnofsky, who is married to Anthropic co-founder Daniela Amodei, left the board in 2021 after Amodei and her brother Dario Amodei, who both worked at OpenAI, left to launch Anthropic, an AI start-up more focused on safety.
OpenAI’s board had already lost its most powerful outside members in the past several years. Elon Musk stepped down in 2018, with OpenAI saying his departure was to remove a potential conflict of interest as Tesla developed AI technology of its own. LinkedIn co-founder Reid Hoffman, who also sits on Microsoft’s board, stepped down as an OpenAI director in March, citing a conflict of interest after starting a new AI start-up called Inflection AI that could compete with OpenAI. Shivon Zilis, an executive at Musk’s brain-interface company Neuralink and one of his closest lieutenants, also left in March.
With the departures of Altman and Brockman, OpenAI is being governed by four members: Toner, McCauley, D’Angelo and Sutskever, whom OpenAI paid $1.9 million in 2016 when he joined the company as its first research director, according to tax filings. Independent directors don’t hold equity in OpenAI.
Sutskever helped create AI software at the University of Toronto called AlexNet, which classified objects in photographs with more accuracy than any previous software had achieved, laying much of the foundation for the field of computer vision and deep learning.
He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, he suggested, but with deeper insight and the ability to learn faster than humans.
At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people — because those data centers,” he said, “they will be really quite powerful.”
OpenAI has a unique governing structure, which it adopted in 2019. It created a for-profit subsidiary that allowed investors a return on the money they invested into OpenAI, but capped how much they could get back, with the rest flowing back into the company’s nonprofit. The company’s structure also allows OpenAI’s nonprofit board to govern the activities of the for-profit entity, including the power to fire its chief executive.
Microsoft, which has invested billions of dollars in OpenAI in exchange for special access to its technology, doesn’t have a board seat. Altman’s ouster was an unexpected and unpleasant surprise, according to a person familiar with internal discussions at the company who spoke on the condition of anonymity to discuss sensitive matters. A Microsoft spokesperson declined to comment on the prospect of Altman returning to the company. On Friday, Microsoft said it was still committed to its partnership with OpenAI.
As news of the circumstances around Altman’s ouster began to come out, Silicon Valley circles turned their anger toward OpenAI’s board.
“What happened at OpenAI today is a board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” Ron Conway, a longtime venture capitalist who was one of the attendees at OpenAI’s developer conference, said on X. “It is shocking, it is irresponsible, and it does not do right by Sam and Greg or all the builders in OpenAI.”
At OpenAI’s office in San Francisco’s Mission district on Sunday, a handful of employees arrived, declining to speak to reporters waiting outside. Altman tweeted a photo of himself frowning while wearing a visitor badge inside OpenAI’s office, writing: “first and last time i ever wear one of these,” suggesting that he intended to return.
On Saturday the world of tech was in a heightened state of excitement after the OpenAI board fired the company’s CEO, Sam Altman. The move was, to say the least, unexpected. And sudden. Altman, whose profile had risen to that of an industry visionary after the success of ChatGPT, was in a way OpenAI itself. He was the face of the company and, by virtue of his position, considered the top figure in the AI world.
So why would OpenAI fire someone like him? Well, there are public reasons and there are reasons behind the scenes. The public reasons are stated in the blog post OpenAI published, the same post in which the company appointed its chief technology officer, Mira Murati, as interim CEO. OpenAI said:
"Mr Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."
As far as corporate announcements go, this is quite blunt and brutal, hinting that something nasty has happened at OpenAI. Either Altman did something the board believes is indefensible, or there has been a clash of ideologies between him and the board.
No one knows for sure, though. The facts, however, are these: Altman is out at OpenAI, and so is co-founder Greg Brockman, who backed him.
Beyond the facts there is buzz. And it suggests that Altman was fired by the OpenAI board over disagreements on two matters: safety and profit (or rather, his insistence on profit).
Kara Swisher, the well-sourced Silicon Valley journalist, tweeted as much. She said, "As I understand it, it was a “misalignment” of the profit versus nonprofit adherents at the company. The developer day was an issue."
Some others have hinted at the same. The issue seems to be the gap between the values on which OpenAI was founded and what the company has become in the ChatGPT era. OpenAI was set up as a non-profit. Elon Musk, one of the original co-founders, has recently criticised the company for deviating from its non-profit mission and turning to a for-profit model. Musk himself exited OpenAI a few years ago over disagreements about the direction of the company.
Altman, it is believed, was a thoroughly for-profit CEO. He believed that OpenAI must have a viable business model and must push to create products and services that would let it become a tech giant.
Then there is the safety part. Chatter on social media from people clued into the Silicon Valley ecosystem suggests that Altman was pushing OpenAI too fast and too aggressively, undermining the safety of ChatGPT and other services.
Just days ago, at its developer day, OpenAI came out with tools that let people create their own custom AI systems. The rush for the feature was tremendous, and it apparently broke something within the company’s systems. ChatGPT went down for hours and then struggled, leading Altman to tweet that "we are pausing new ChatGPT Plus sign-ups for a bit… the surge in usage post devday has exceeded our capacity and we want to make sure everyone has a great experience."
Irrespective of why Altman is out at OpenAI, the latest developments send a shockwave through the world of AI. They may also come as a relief to companies like Google and Elon Musk’s xAI, which are building their own AI systems and have so far found it difficult to keep pace with the relentless march of OpenAI and ChatGPT.