Over the past year, Sam Altman has led OpenAI to the adult table of the technology industry. Thanks to its hugely popular chatbot, ChatGPT, the San Francisco startup has been at the center of an artificial intelligence boom, and Altman, OpenAI's chief executive, has become one of the most recognizable people in technology.
But that success generated tensions within the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Altman and nine others, was increasingly concerned that OpenAI's technology could be dangerous and that Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Sutskever, a member of the company's board of directors, also objected to what he considered his diminished role within the company, according to two of the people.
That conflict between the rapid development of AI and concerns about its safety came into sharp focus Friday afternoon, when Altman was pushed out of his job by four of the six members of OpenAI's board of directors, led by Sutskever. The move surprised OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders said the split was as significant as when Steve Jobs was ousted from Apple in 1985.
The ouster of Altman, 38, drew attention to a long-standing divide in the AI community between people who believe AI is the biggest business opportunity in a generation and others who fear that moving too quickly could be dangerous. And the ouster showed how a philosophical movement built on the fear of AI had become an inescapable part of tech culture.
Since ChatGPT launched almost a year ago, artificial intelligence has captured the public's imagination, with hopes that it could be used for important work such as drug research or helping to teach children. But some AI scientists and political leaders worry about its risks, such as jobs being lost to automation or autonomous warfare that grows beyond human control.
Fears that AI researchers were building something dangerous have been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build it.
OpenAI's board of directors has not offered a specific reason for ousting Altman, other than to say in a blog post that it did not believe he was communicating honestly with it. OpenAI employees were told Saturday morning that his dismissal had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practices," according to a message seen by The New York Times.
Greg Brockman, another co-founder and the company's president, resigned in protest Friday night. OpenAI's director of research did the same. By Saturday morning, the company was in chaos, according to a half-dozen current and former employees, and its roughly 700 workers were struggling to understand why the board had made its decision.
“I’m sure all of you are feeling confusion, sadness, and maybe some fear,” Brad Lightcap, OpenAI’s chief operating officer, said in a memo to OpenAI employees. “We are completely focused on getting through this, moving toward resolution and clarity and getting back to work.”
Altman was asked to join a board meeting via video at noon San Francisco time on Friday. There, Sutskever, 37, read a script that closely resembled the blog post the company published minutes later, according to a person familiar with the matter. The post said Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."
But in the hours that followed, OpenAI employees and others focused not only on what Altman may have done, but also on the way the San Francisco startup is structured and on the extreme views about the dangers of AI that have been embedded in the company's work since its creation in 2015.
Sutskever and Altman could not be reached for comment Saturday.
In recent weeks, Jakob Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to the company’s director of research. After previously serving in a position below Sutskever, he was promoted to a position alongside Sutskever, according to two people familiar with the matter.
Pachocki resigned from the company Friday night, the people said, shortly after Brockman. Earlier in the day, OpenAI had said Brockman was being removed as board chairman and would report to the new interim chief executive, Mira Murati. Other Altman allies, including two senior researchers, Szymon Sidor and Aleksander Madry, have also left the company.
Brockman said in a post on X, formerly Twitter, that although he was the chairman of the board, he was not part of the board meeting at which Altman was removed. That left Sutskever and three other board members: Adam D'Angelo, chief executive of the question-and-answer site Quora; Tasha McCauley, a senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology.
They could not be reached for comment Saturday.
McCauley and Toner have ties to the Rationalist and Effective Altruism movements, a community that is deeply concerned that AI could one day destroy humanity. Today's artificial intelligence technology cannot destroy humanity, but this community believes that as the technology grows increasingly powerful, those dangers will emerge.
In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to form a new AI company called Anthropic.
Sutskever increasingly aligned himself with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and immigrated to Canada as a teenager. As an undergraduate at the University of Toronto, he helped achieve a breakthrough in an artificial intelligence technology called neural networks.
In 2015, Sutskever left his job at Google and helped found OpenAI alongside Altman, Brockman and Tesla's chief executive, Elon Musk. They built the lab as a nonprofit and said that, unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build what is called artificial general intelligence, or AGI, a machine that can do anything the brain can do.
Altman transformed OpenAI into a for-profit company in 2018 and negotiated a $1 billion investment from Microsoft. Such enormous sums of money are essential to building technologies like GPT-4, which was released earlier this year. Since its initial investment, Microsoft has put another $12 billion into the company.
The company remained governed by the nonprofit board. Investors like Microsoft receive profits from OpenAI, but those profits are capped. Any money above the cap is funneled back into the nonprofit.
Seeing the power of GPT-4, Sutskever helped create a new Superalignment team within the company that would explore ways of ensuring that future versions of the technology would not do harm.
Altman was open to those concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, Altman flew to the Middle East to meet with investors, according to two people familiar with the matter. He sought as much as $1 billion in funding from SoftBank, the Japanese technology investor led by Masayoshi Son, for a potential OpenAI venture that would build a hardware device for running AI technologies like ChatGPT.
OpenAI is also in talks over a "tender offer" financing that would allow employees to cash out their shares in the company. That deal would value OpenAI at more than $80 billion, nearly triple its valuation of about six months ago.
But the company's success appears to have only heightened concerns within OpenAI that something could go wrong with AI.
"It doesn't seem at all implausible that we will have computers — data centers — that are much smarter than people," Sutskever said on a podcast on Nov. 2. "What would such AIs do? I don't know."
Kevin Roose and Tripp Mickle contributed reporting.