The Gap Between Open and Closed AI Models Might Be Shrinking. Here’s Why That Matters

Today’s best AI models, like OpenAI’s ChatGPT and Anthropic’s Claude, come with conditions: their creators control the terms on which they are accessed, to prevent them from being used in harmful ways. This is in contrast with ‘open’ models, which can be downloaded, modified, and used by anyone for almost any purpose. A new report by non-profit research organization Epoch AI found that the open models available today are about a year behind the top closed models.

“The best open model today is on par with closed models in performance, but with a lag of about one year,” says Ben Cottier, lead researcher on the report.

Meta’s Llama 3.1 405B, an open model released in July, took about 16 months to match the capabilities of the first version of GPT-4. If Meta’s next-generation AI, Llama 4, is released as an open model, as is widely expected, this gap could shrink even further. The findings come as policymakers grapple with how to deal with increasingly powerful AI systems, which have already been reshaping information environments ahead of elections across the world, and which some experts worry could one day be capable of engineering pandemics, executing sophisticated cyberattacks, and causing other harms to humans.

Researchers at Epoch AI analyzed hundreds of notable models released since 2018. To arrive at their results, they measured the performance of top models on technical benchmarks—standardized tests that measure an AI’s ability to handle tasks like solving math problems, answering general knowledge questions, and demonstrating logical reasoning. They also looked at how much computing power, or compute, was used to train them, since that has historically been a good proxy for capabilities, though open models can sometimes perform as well as closed models while using less compute, thanks to advancements in the efficiency of AI algorithms. “The lag between open and closed models provides a window for policymakers and AI labs to assess frontier capabilities before they become available in open models,” Epoch researchers write in the report.
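As a rough illustration of that approach, the sketch below compares release dates and benchmark scores to estimate how many months an open model trails the earliest closed model whose score it matches or exceeds. The model names and numbers are invented for illustration only; this is not Epoch AI’s data or code.

```python
from datetime import date

# Invented benchmark scores (0-100) and release dates, purely for illustration;
# this is not Epoch AI's actual data or methodology.
closed_models = [
    ("closed-A", date(2023, 3, 1), 70.0),
    ("closed-B", date(2024, 5, 1), 82.0),
]
open_models = [
    ("open-X", date(2024, 7, 1), 71.0),
    ("open-Y", date(2024, 12, 1), 80.0),
]

def months_between(earlier, later):
    """Approximate number of months separating two dates."""
    return (later - earlier).days / 30.44

def estimate_lag(open_entry, closed_entries):
    """Months between an open model's release and the earliest closed model it matches or beats."""
    _, released, score = open_entry
    matched = [c for c in closed_entries if c[2] <= score and c[1] <= released]
    if not matched:
        return None  # the open model hasn't caught up to any earlier closed model
    earliest = min(matched, key=lambda c: c[1])
    return months_between(earliest[1], released)

for entry in open_models:
    lag = estimate_lag(entry, closed_models)
    label = f"lag of about {lag:.0f} months" if lag is not None else "no earlier closed model matched"
    print(entry[0], label)
```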

Read More: The Researcher Trying to Glimpse the Future of AI

But the distinction between ‘open’ and ‘closed’ AI models is not as simple as it might appear. While Meta describes its Llama models as open source, they do not meet the new definition published last month by the Open Source Initiative, which has historically set the industry standard for what constitutes open source. The new definition requires companies to share not just the model itself, but also the data and code used to train it. While Meta releases its model “weights”—the long lists of numbers that allow users to download and modify the model—it does not release either the training data or the code used to train the models. Before downloading a model, users must agree to an Acceptable Use Policy that prohibits military use and other harmful or illegal activities, although once models are downloaded, these restrictions are difficult to enforce in practice.
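For a sense of what releasing the weights means in practice, here is a minimal sketch of downloading and running an open model with the Hugging Face transformers library. The repository ID is an assumption (check the model card for the current name), and gated models such as Llama require accepting Meta’s license on Hugging Face and authenticating (for example with huggingface-cli login) before the download will succeed.

```python
# Minimal sketch: downloading open weights with the Hugging Face transformers library.
# The repository ID below is an assumption (check the model card for the exact name),
# and gated models like Llama require accepting Meta's license and logging in first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # assumed ID; the 405B variant needs far more hardware

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are on disk they can be run, fine-tuned, or modified locally,
# which is why a public release cannot be meaningfully recalled.
inputs = tokenizer("Open models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```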

Meta says it disagrees with the Open Source Initiative’s new definition. “There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today’s rapidly advancing AI models,” a Meta spokesperson told TIME in an emailed statement. “We make Llama free and openly available, and our license and Acceptable Use Policy help keep people safe by having some restrictions in place. We will continue working with OSI and other industry groups to make AI more accessible and free responsibly, regardless of technical definitions.”

Making AI models open is widely seen as beneficial because it democratizes access to the technology and drives innovation and competition. “One of the key things that open communities do is they get a wider, geographically more-dispersed, and more diverse community involved in AI development,” says Elizabeth Seger, director of digital policy at Demos, a U.K.-based think tank. Open communities, which include academic researchers, independent developers, and non-profit AI labs, also drive innovation through collaboration, particularly in making technical processes more efficient. “They don’t have the same resources to play with as Big Tech companies, so being able to do a lot more with a lot less is really important,” says Seger. In India, for example, “AI that’s built into public service delivery is almost completely built off of open source models,” she says.

Open models also enable greater transparency and accountability. “There needs to be an open version of any model that becomes basic infrastructure for society, because we do need to know where the problems are coming from,” says Yacine Jernite, machine learning and society lead at Hugging Face, a company that maintains the digital infrastructure where many open models are hosted. He points to the example of Stable Diffusion 2, an open image generation model that allowed researchers and critics to examine its training data and push back against potential biases or copyright infringements—something impossible with closed models like OpenAI’s DALL-E. “You can do that much more easily when you have the receipts and the traces,” he says.

Read More: The Heated Debate Over Who Should Control Access to AI

However, the fact that open models can be used by anyone creates inherent risks: people with malicious intent can use them to cause harm, for example by producing child sexual abuse material, and they can also be exploited by rival states. Last week, Reuters reported that Chinese research institutions linked to the People’s Liberation Army had used an old version of Meta’s Llama model to develop an AI tool for military use, underscoring the fact that, once a model has been publicly released, it cannot be recalled. Chinese companies such as Alibaba have also developed their own open models, which are reportedly competitive with their American counterparts.

On Monday, Meta announced it would make its Llama models available to U.S. government agencies, including those working on defense and national security applications, and to private companies supporting government work, such as Lockheed Martin, Anduril, and Palantir. The company argues that American leadership in open-source AI is both economically advantageous and crucial for global security.

Closed proprietary models present their own challenges. While they are more secure, because access is controlled by their developers, they are also more opaque. Third parties cannot inspect the data on which the models are trained to search for bias, copyrighted material, and other issues. Organizations using AI to process sensitive data may choose to avoid closed models due to privacy concerns. And while these models have stronger guardrails built in to prevent misuse, many people have found ways to ‘jailbreak’ them, effectively circumventing these guardrails.

Governance challenges

At present, the safety of closed models is primarily in the hands of private companies, although government institutions such as the U.S. AI Safety Institute (AISI) are increasingly playing a role in safety-testing models ahead of their release. In August, the U.S. AISI signed formal agreements with Anthropic to enable “formal collaboration on AI safety research, testing and evaluation.”

Because of the lack of centralized control, open models present distinct governance challenges—particularly in relation to the most extreme risks that future AI systems could pose, such as empowering bioterrorists or enhancing cyberattacks. How policymakers should respond depends on whether the capabilities gap between open and closed models is shrinking or widening. “If the gap keeps getting wider, then when we talk about frontier AI safety, we don’t have to worry so much about open ecosystems, because anything we see is going to be happening with closed models first, and those are easier to regulate,” says Seger. “However, if that gap is going to get narrower, then we need to think a lot harder about if and how and when to regulate open model development, which is an entire other can of worms, because there’s no central, regulatable entity.”

For companies such as OpenAI and Anthropic, selling access to their models is central to their business model. “A key difference between Meta and closed model providers is that selling access to AI models isn’t our business model,” Meta CEO Mark Zuckerberg wrote in an open letter in July. “We expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency.”

Measuring the abilities of AI systems is not straightforward. “Capabilities is not a term that’s defined in any way, shape or form, which makes it a terrible thing to discuss without common vocabulary,” says Jernite. “There are many things you can do with open models that you can’t do with closed models,” he says, emphasizing that open models can be adapted to a range of use-cases, and that they may outperform closed models when trained for specific tasks.

Ethan Mollick, a Wharton professor and popular commentator on the technology, argues that even if there were no further progress in AI, it would likely take years for these systems to be fully integrated into our world. With new capabilities being added to AI systems at a steady rate—in October, frontier AI lab Anthropic introduced the ability for its model to directly control a computer, still in beta—the complexity of governing this technology will only increase.

In response, Seger says that it is vital to tease out exactly what risks are at stake. “We need to establish very clear threat models outlining what the harm is and how we expect openness to lead to the realization of that harm, and then figure out the best point along those individual threat models for intervention.”
