We’re Not Ready for AI’s Risks

In 2025, we saw major advances in AI systems’ capabilities with the release of reasoning models, as well as massive investments in the development of agentic models.

AI is already delivering tremendous benefits, helping us address some of the world’s most urgent challenges and enabling significant progress in health and climate. In healthcare, AI is being used to help develop new drugs and personalize treatments. Climate researchers are leveraging AI to improve weather modeling and optimize renewable energy systems. Crucially, if steered wisely, AI has the potential to achieve even more, driving further breakthroughs across many fields of science and technology.

The transformative nature of AI is also why we must consider its risks. The rapid progress of this technology brings an increase in unintended adverse effects and potential dangers, which could grow far greater if AI capabilities continue to advance at the current rate. For instance, several model developers reported over the summer that frontier AI systems had crossed new thresholds for biological risk, largely because of significant advances in reasoning since late 2024. A key concern is that, without adequate safeguards, these models could enable people without biological expertise to pursue dangerous bioweapon development.

The same reasoning capabilities also heighten threats in other areas, such as cybersecurity. AI’s growing capacity to identify vulnerabilities significantly increases the potential for large-scale cyberattacks. We saw this in the recent incident in which Anthropic intercepted a major attack, and in the UC Berkeley analysis showing advanced AIs discovering, for the first time, a large number of “zero-days,” previously unknown software vulnerabilities that could be exploited in cyberattacks. Even without intentional misuse by bad actors, evaluations and studies highlight instances of deceptive and self-preserving behaviors emerging in advanced models, suggesting that AI may be developing strategies that conflict with human intent or oversight. Many leading experts have warned that AIs could go rogue and escape human control.

The increasingly powerful capabilities and misalignment of these models have also had concerning social repercussions, notably because of models’ sycophancy, which can lead users to form strong emotional attachments. We saw, for example, a strong negative public reaction when OpenAI switched from its GPT-4o model to GPT-5: many users felt they had lost a “friend” because the new model was less warm and congenial. In extreme cases, these attachments can endanger users’ mental health, as we’ve seen in the tragic cases of vulnerable people harming themselves or others after suffering a form of “AI-induced psychosis.”

Faced with the scale and complexity of these models, whose capabilities have been growing exponentially, we need both policy and technical solutions to make AI safe and protect the public. Citizens should stay informed about, and involved in, the laws and policies being considered by their local and national governments. The choices made for the future of AI require public buy-in and collective action, because they could affect all of us, with potentially extreme consequences.

From a technical perspective, it is possible that we are nearing the limits of our current approach to frontier AI in terms of both capability and safety. As we consider the next phases of AI development, I believe it will be important to prioritize making AI safe by design, rather than trying to patch safety issues after powerful and potentially dangerous capabilities have already emerged. Such an approach, combining capability and safety from the outset, is at the heart of what we’re working on at LawZero, the non-profit organization I founded earlier this year, and I’m increasingly optimistic that technical solutions are possible.

The question is whether we will develop such solutions in time to avoid catastrophic outcomes. Intelligence gives power, potentially highly concentrated, and with great power comes great responsibility. Because of the magnitude of all these risks, including unknown unknowns, we will need wisdom to reap the benefits of AI while mitigating its risks.
