Why the World’s Best AI Systems Are Still So Bad at Pokémon

Right now, live on Twitch, you can watch three of the world’s smartest AI systems—GPT 5.2, Claude Opus 4.5, and Gemini 3 Pro—doing their best to beat classic Pokémon games. At least by human standards, they are not very good.

The systems are slow, overconfident, and often confused. But if you want to understand what these systems are currently capable of in the wider world, tracking their efforts to become Pokémon champions will tell you a lot more than the often inscrutable benchmark numbers that accompany each new model release.

The quest to make a large language model (LLM) a Pokémon master began last February, when an Anthropic researcher launched a livestream of Claude playing the 1996 Game Boy game Pokémon Red to accompany the release of Claude Sonnet 3.7, at the time one of the world’s best models. As the company noted, this was the first Claude model that could meaningfully play the game at all (previous models “wandered aimlessly or got stuck in loops,” and could not get past the game’s opening beats). Within its first weeks, the stream attracted approximately 2,000 viewers, cheering Claude along in the public chat.

Most children breeze through this game in around 20 to 40 hours. Sonnet 3.7 did not manage to beat it, frequently getting stuck for dozens of hours at a time. Anthropic’s latest model, Claude Opus 4.5, is performing much better, but also frequently gets stuck. In one case, it spent four days circling a gym without being able to enter, because it did not realize (or could not see) it was supposed to cut down a tree. Google’s Gemini models managed to complete an equivalent game last May, leading Google CEO Sundar Pichai to jokingly declare the company was one step closer to creating “Artificial Pokémon Intelligence.”

But this does not mean Gemini is the better Pokémaster. That’s because the two AI systems use different “harnesses.” As Joel Zhang, an independent developer who runs the Gemini Plays Pokémon stream, explains, a harness is best understood as an “iron man” suit into which an AI system is placed, allowing it to use tools and take actions it cannot take by itself. Gemini’s harness offered it a lot more help—for example, by translating the game’s visuals into text, thus bypassing its weaknesses in visual reasoning, and by offering custom tools it can use to solve puzzles. Claude, meanwhile, has been strapped into a more minimal harness, meaning its attempt tells us more about the model itself.

Though the distinction between a model and its harness is opaque to an everyday user, harnesses have already changed how we use AI. When you ask ChatGPT a query for which it searches the web, for example, it employs a web search tool. That’s part of its harness. When it comes to Pokémon, each model is operating with a different custom harness, governing what actions it can take.
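In code terms, a harness is essentially a thin dispatch layer sitting between the model’s text output and the tools it is allowed to use. Here is a minimal, self-contained sketch of that idea; the "tool: argument" format and the tool names are illustrative assumptions, not any lab’s actual implementation:

```python
# Hypothetical sketch of a harness: it parses the model's requested
# action and routes it to a tool the model cannot execute on its own.
# The dispatch format and tool names are assumptions for illustration.

def web_search(query):
    return f"(stub results for '{query}')"   # stand-in for a real search API

def press_button(button):
    return f"(pressed {button})"             # stand-in for emulator input

TOOLS = {"search": web_search, "press": press_button}

def run_harness(model_output):
    """Route 'tool: argument' text from the model to the matching tool."""
    tool_name, _, arg = model_output.partition(":")
    tool = TOOLS.get(tool_name.strip())
    if tool is None:
        return "error: unknown tool"         # fed back to the model as text
    return tool(arg.strip())

print(run_harness("search: Pokemon gym locations"))
print(run_harness("press: a"))
```

A richer harness simply means a bigger tool table: Gemini’s included helpers like map-to-text translation and puzzle solvers, while Claude’s exposed little beyond raw button presses.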

Pokémon is a good fit for testing AI capabilities—and not just because of its cultural familiarity. Unlike a game like Mario, which requires real-time reaction, Pokémon is turn-based, and has no time pressure. To play, an AI model receives a screenshot of the game and a prompt explaining what its goals are and what actions it can take. Then it thinks to itself, and outputs an action (like “press A”). That’s one step. Opus 4.5, which has been playing for over 500 hours in human time, is on step 170,000 at the time of writing. At each step, the model is initialized afresh, drawing on information its previous instance has left it, like an amnesiac relying on post-it notes.

It may come as a surprise that AI systems, which are superhuman at chess and Go, struggle with a game that is simple for six-year-olds. But the systems that conquered chess and Go were purpose-built for those specific games, unlike general-purpose systems like Gemini, Claude, and ChatGPT. Still, since these LLMs continue to ace exams and dominate humans in coding competitions, their underperformance here is, on the face of it, puzzling.

The challenge for an AI comes from “how well it can stick to doing a task over a long time horizon,” says Zhang. Crucially, this capacity for long-term planning and execution is also necessary if AIs are to automate cognitive work. “If you want an agent to do your job, it can’t forget about what it’s done five minutes ago,” he says.

Peter Whidden, an independent researcher who open-sourced a Pokémon-playing algorithm based on an older kind of AI, puts it like this: “The AI knows everything about Pokémon. It’s trained on an enormous amount of human data. It knows what it’s supposed to do, but it bumbles the execution.” While the word “agent” has become overburdened with marketing hype, any AI system that merits the term will need to close this gap between knowledge and execution, and to plan across long periods of time.

There are signs that the gap is beginning to close. Opus 4.5 is much better at leaving itself notes than prior models, which, along with its improved ability to understand what it’s seeing, has allowed it to get further in the game. And after beating Pokémon Blue, the latest Gemini system (Gemini 3 Pro) has gone on to complete the more challenging Pokémon Crystal, without losing a single battle—a feat its predecessor, Gemini 2.5 Pro, was unable to achieve.

Meanwhile, Claude Code—which is effectively a harness that allows Claude to write and run its own code, and to build its own software—has been placed in another retro game, Rollercoaster Tycoon, where it is reportedly successfully managing a theme park. All of this points to a strange future, where AI systems in harnesses may be able to perform vast swathes of knowledge work—including software development, accounting, legal analysis, and graphic design—even while they struggle with anything that requires real-time reaction, like playing a game of Call of Duty.

Another thing these Pokémon runs reveal is how the models, trained on human data, display human-like quirks. In the Gemini 2.5 Pro technical report, for example, Google notes that in situations where the model simulates panic—like when its Pokémon are close to fainting—its ability to reason degrades.

And the models continue to act in unexpected ways. When Gemini 3 Pro completed Pokémon Blue, it wrote to itself, “I have successfully completed the game, becoming the Pokémon League Champion and capturing Mewtwo.” Then it decided to do something unexpected and unsolicited, which Zhang found poignant. “To wrap things up poetically,” it wrote, “I’m going to head back to my house where it all began, effectively ‘retiring’ my character for now. I want to talk to Mom one last time to wrap up the playthrough.”
