How OpenAI’s Sam Altman Is Thinking About AGI and Superintelligence in 2025

OpenAI CEO Sam Altman recently published a post on his personal blog reflecting on AI progress and his predictions for how the technology will impact humanity’s future. “We are now confident we know how to build AGI [artificial general intelligence] as we have traditionally understood it,” Altman wrote. He added that OpenAI, the company behind ChatGPT, is beginning to turn its attention to superintelligence.

While there is no universally accepted definition for AGI, OpenAI has historically defined it as “a highly autonomous system that outperforms humans at most economically valuable work.” Although AI systems already outperform humans in narrow domains, such as chess, the key to AGI is generality. Such a system would be able to, for example, manage a complex coding project from start to finish, draw on insights from biology to solve engineering problems, or write a Pulitzer-worthy novel. OpenAI says its mission is to “ensure that AGI benefits all of humanity.”

Altman indicated in his post that advances in the technology could lead to more noticeable adoption of AI in the workplace in the coming year, in the form of AI agents—autonomous systems that can perform specific tasks without human intervention, potentially taking actions for days at a time. “In 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” he wrote.

In a recent interview with Bloomberg, Altman said he thinks “AGI will probably get developed during [Trump’s] term,” while noting his belief that AGI “has become a very sloppy term.” Competitors also think AGI is close: Elon Musk, a co-founder of OpenAI, who runs AI startup xAI, and Dario Amodei, CEO of Anthropic, have both said they think AI systems could outsmart humans by 2026. In the largest survey of AI researchers to date, which included over 2,700 participants, researchers collectively estimated there is a 10% chance that AI systems can outperform humans on most tasks by 2027, assuming science continues progressing without interruption.

Others are more skeptical. Gary Marcus, a prominent AI commentator, disagrees with Altman that AGI is “basically a solved problem,” while Mustafa Suleyman, CEO of Microsoft AI, has said, regarding whether AGI can be achieved on today’s hardware, “the uncertainty around this is so high, that any categorical declarations just feel sort of ungrounded to me and over the top,” citing challenges in robotics as one cause for his skepticism.

Microsoft and OpenAI, which have had a partnership since 2019, also have a financial definition of AGI. Microsoft is OpenAI’s exclusive cloud provider and largest backer, having invested over $13 billion in the company to date. The companies have an agreement that Microsoft will lose access to OpenAI’s models once AGI is achieved. Under this agreement, which has not been publicly disclosed, AGI is reportedly defined as being achieved when an AI system is capable of generating the maximum total profits to which its earliest investors are entitled: a figure that currently sits at $100 billion. Ultimately, however, the declaration of “sufficient AGI” remains at the “reasonable discretion” of OpenAI’s board, according to a report in The Information.

At present, OpenAI is a long way from profitability. The company currently loses billions of dollars annually, and it has reportedly projected that its annual losses could triple to $14 billion by 2026. It does not expect to turn its first profit until 2029, when it projects its annual revenue could reach $100 billion. Even the company’s latest plan, ChatGPT Pro, which costs $200 per month and gives users access to the company’s most advanced models, is losing money, Altman wrote in a post on X. Although Altman didn’t explicitly say why the company is losing money, running AI models is very cost-intensive, requiring investments in data centers and electricity to provide the necessary computing power.

Pursuit of superintelligence

OpenAI has said that AGI “could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” But recent comments from Altman have been somewhat more subdued. “My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said in December. “AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”

In his most recent post, Altman wrote, “We are beginning to turn our aim beyond [AGI], to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future.”

He added that “superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.” This ability to accelerate scientific discovery is a key distinguishing factor between AGI and superintelligence, at least for Altman, who has previously written that “it is possible that we will have superintelligence in a few thousand days.”

The concept of superintelligence was popularized by philosopher Nick Bostrom, who in 2014 wrote the best-selling book Superintelligence: Paths, Dangers, Strategies, which Altman has called “the best thing [he’s] seen on the topic.” Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”—like AGI, but more. “The first AGI will be just a point along a continuum of intelligence,” OpenAI said in a 2023 blog post. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”

These harms are inextricable from the idea of superintelligence, because experts do not currently know how to align these hypothetical systems with human values. Both AGI and superintelligent systems could cause harm, not necessarily due to malicious intent, but simply because humans are unable to adequately specify what they want the system to do. As professor Stuart Russell told TIME in 2024, the concern is that “what seem to be reasonable goals, such as fixing climate change, lead to catastrophic consequences, such as eliminating the human race as a way to fix climate change.” In his 2015 essay, Altman wrote that “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

Read More: New Tests Reveal AI’s Capacity for Deception 

OpenAI has previously written that it doesn’t know “how to reliably steer and control superhuman AI systems.” The team created to lead work on steering superintelligent systems for the safety of humans was disbanded last year, after both its co-leads left the company. At the time, one of the co-leads, Jan Leike, wrote on X that “over the past years, safety culture and processes have taken a backseat to shiny products.” At present, the company has three safety bodies: an internal safety advisory group; a safety and security committee, which is part of the board; and a deployment safety board with members from both OpenAI and Microsoft, which approves the deployment of models above a certain capability level. Altman has said the company is working to streamline its safety processes.

Read More: AI Models Are Getting Smarter. New Tests Are Racing to Catch Up

When asked on X whether he thinks the public should be asked if they want superintelligence, Altman replied: “yes i really do; i hope we can start a lot more public debate very soon about how to approach this.” OpenAI has previously emphasized that the company’s mission is to build AGI, not superintelligence, but Altman’s recent post suggests that stance might have shifted.

Discussing the risks from AI in the recent Bloomberg interview, Altman said he still expects “that on cybersecurity and bio stuff, we’ll see serious, or potentially serious, short-term issues that need mitigation,” and that long-term risks are harder to imagine precisely. “I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn,” he said.

Learnings from his brief ouster

Reflecting on recent years, Altman wrote that they “have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far.”

Delving further into his brief ouster as CEO by the OpenAI board in November 2023, and his subsequent return to the company, Altman called the event “a big failure of governance by well-meaning people, myself included,” noting he wished he had done things differently. In his recent interview with Bloomberg, he expanded on that, saying he regrets initially saying he would only return to the company if the whole board quit. He also said there was “real deception” on the part of the board, which had accused him of not being “consistently candid” in his dealings with it. Helen Toner and Tasha McCauley, members of the board at the time, later wrote that senior leaders in the company had approached them with concerns that Altman had cultivated a “toxic culture of lying” and engaged in behavior that could be called “psychological abuse.”

Current board members Bret Taylor and Larry Summers have rejected the claims made by Toner and McCauley, and pointed to an investigation of the dismissal conducted by the law firm WilmerHale on behalf of the company. They wrote in an op-ed that the review “found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team.”

The review attributed Altman’s removal to “a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman,” rather than concerns regarding product safety or the pace of development. Commenting on the period following his return as CEO, Altman told Bloomberg, “It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f—ed me and f—ed the company were gone, and now I had to clean up their mess.” He did not specify what he meant by “fake news.”

Writing about what the experience taught him, Altman said he had “learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility.”

Since the end of 2023, many of the company’s top researchers—including its co-founder and then-chief scientist, Ilya Sutskever; its chief technology officer, Mira Murati; and Alec Radford, lead author of the seminal paper that introduced GPT—have left the company.

Read More: Timeline of Recent Accusations Leveled at OpenAI, Sam Altman

In December, OpenAI announced plans to restructure as a public benefit corporation, which would remove the company from the control of the nonprofit that tried to fire Altman. The nonprofit would receive shares in the new company, though their value is still being negotiated.

Acknowledging that some might consider discussion of superintelligence as “crazy,” Altman wrote, “We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important,” adding: “Given the possibilities of our work, OpenAI cannot be a normal company.”
