The Ancient Chinese Game That Led to the AI Boom

March 30, 2026

Thore Graepel may have been the first human to be vanquished by a superintelligence. In 2015, on his first day as a researcher at Google DeepMind, he was challenged to play against the earliest iteration of AlphaGo—a computer program developed by DeepMind that would prove so effective at the ancient Chinese game of weiqi (or Go, as it is commonly known in the West) that it changed how humans play it, and then upended the field of AI itself.

When Graepel faced it, AlphaGo was just a “baby” project, as he put it to me, and he was an accomplished amateur player. But it still took him down. Then, the following year, AlphaGo—now fully developed—plowed through a number of human champions, ultimately crushing Lee Sedol, widely considered the best player in the world, with a match score of 4–1. This month marked the tenth anniversary of that victory.

For decades, developing a program that could play Go at an elite level was an infamous problem in computer science. Many considered it unsolvable—far harder than developing a similar program for chess, in which the supercomputer Deep Blue beat the world champion in 1997. In Go, two players take turns placing stones on a 19-by-19 grid, and their moves are relatively unrestricted. In chess, played on a far smaller board, each piece is constrained—a rook can move only along ranks and files, a bishop only diagonally—but a Go stone can be placed on any open intersection. The number of possible Go positions is so high that it cannot be easily expressed in words; it is higher than the number of atoms in the observable universe, and orders of magnitude higher than the number of possible chess games. Today, the technical frameworks and approaches that allowed an algorithm to excel at this board game have translated fairly directly into bots that can write advanced code, help tackle open problems in mathematics, and replicate scientific discoveries from scratch.

Generative AI is living in AlphaGo’s shadow. Beyond the actual models, “conceptual things emerged from the whole AlphaGo experience which essentially entered the AI vocabulary,” Pushmeet Kohli, the vice president of science and strategic initiatives at Google DeepMind, told me. In many ways, Go and chess provide ideal templates for understanding how the AI boom has unfolded—and a guide for what it may yet wreak.

DeepMind’s innovation was to essentially pair two algorithms: one AI model to propose moves and a second model to judge whether a move is good or not, allowing the system to devote computational resources to planning the sequences of moves most likely to result in victory. AlphaGo then played itself thousands of times, improving from every mistake through a training process known as reinforcement learning. Today’s frontier AI labs recently faced an analogous problem: Large language models such as ChatGPT could spit out lucid sentences and paragraphs, but when faced with challenging tasks in computer science, physics, and other areas that would require a human to really think, they stumbled in the dark. That began to change in late 2024 with the advent of so-called reasoning models, an approach that now underlies all of the top bots from OpenAI, Google DeepMind, and Anthropic. And the idea behind these reasoning models “is surprisingly similar to AlphaGo,” as Noam Brown, a researcher at OpenAI, recently put it.
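The division of labor between the two models can be sketched in a few lines of code. This is a toy illustration, not DeepMind's method: the game (start at 0, add 1, 2, or 3 per move, land exactly on a target), the "policy," and the "value" function are all invented here, but the shape is the same—the policy proposes, the value function judges, and the search spends its effort only on moves the pair rates highly.

```python
import math

TARGET = 12
MOVES = (1, 2, 3)

def policy(state):
    """Toy policy head: a softmax preference over the legal moves."""
    remaining = TARGET - state
    scores = {m: -abs(remaining - 2 * m) for m in MOVES if m <= remaining}
    total = sum(math.exp(s) for s in scores.values())
    return {m: math.exp(s) / total for m, s in scores.items()}

def value(state):
    """Toy value head: +1 for a win, otherwise penalize distance from TARGET."""
    return 1.0 if state == TARGET else -(TARGET - state) / TARGET

def plan(state, steps=8):
    """Pick each move by combining the policy's prior with the value estimate."""
    for _ in range(steps):
        if state == TARGET:
            break
        priors = policy(state)
        state += max(priors, key=lambda m: priors[m] + value(state + m))
    return state
```

In a real system each function is a deep neural network and the search explores many branches, but the principle—let one model narrow the options so another can judge them cheaply—is the one AlphaGo introduced.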

The intuition behind chatbot reasoning is to have AI models work out a solution step-by-step, using a scratch pad of sorts, and then evaluate steps along the way to change course or start over as needed—very much like the two-part approach used by AlphaGo. The training method for these reasoning chatbots is the same as well: reinforcement learning. An algorithm can play lots of games of Go or attempt to solve lots of difficult math problems, then learn from its mistakes when it loses or errs. Today’s best AI models “can be traced back to some degree to the AlphaGo work,” Graepel said.
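The propose-and-evaluate loop can be made concrete with another invented toy (this is not how any lab's model actually works): the "problem" is to write down digits that sum to exactly 15, a proposer suggests the next step on the scratch pad, a checker evaluates the partial work, and a failed check discards that line of reasoning so the loop can start over.

```python
import random

def propose_step(partial, rng):
    """Stand-in for a model proposing the next step on the scratch pad."""
    return rng.randint(1, 9)

def check(partial, target):
    """Stand-in for the evaluator: continue, declare success, or reject."""
    total = sum(partial)
    if total == target:
        return "done"
    return "bad" if total > target else "ok"

def reason(target=15, seed=0, max_attempts=1000):
    rng = random.Random(seed)
    for _ in range(max_attempts):
        partial = []
        while True:
            partial.append(propose_step(partial, rng))
            verdict = check(partial, target)
            if verdict == "done":
                return partial
            if verdict == "bad":
                break  # abandon this line of reasoning and start over
    return None
```

The crucial move is the `break`: rather than barreling ahead on a doomed chain of steps, the system notices the error and backtracks—exactly the behavior the scratch pad makes possible.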

Perhaps the most crucial insight shared between AlphaGo and the chatbot-reasoning breakthrough is a twist on the AI industry’s central dogma, the “scaling laws.” Traditionally, AI companies improved their large language models by training them on more data and with more computing power. In the case of AlphaGo and reasoning models, researchers realized that they could scale another dimension: having the program devote more time and computing power to a task, akin to how harder problems typically take humans more time to solve. For bots, this meant planning more and longer sequences of moves or using more words to “reason” through a tough coding task. That this approach would work wasn’t guaranteed. “It could happen that you give them more time and they spend more time just getting confused,” Kohli said.
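One simple form of this test-time scaling can be sketched with an invented toy task: sample candidate answers, score each one, and keep the best. With the same underlying "model," a larger sampling budget can only match or improve the best answer found—the extra compute buys a better search, not retraining.

```python
import random

def candidate(rng):
    """Stand-in for the model generating one candidate answer."""
    return rng.uniform(0.0, 10.0)

def score(x):
    """Stand-in for an evaluator; the (hypothetical) ideal answer is 7."""
    return -(x - 7.0) ** 2

def best_of_n(n, seed=0):
    """Spend n samples of test-time compute and keep the best-scoring answer."""
    rng = random.Random(seed)
    return max((candidate(rng) for _ in range(n)), key=score)

cheap = best_of_n(4)        # small test-time budget
expensive = best_of_n(256)  # 64x the budget, same underlying "model"
```

Kohli's caveat is what the toy leaves out: this works only because the scorer reliably recognizes a better answer. If the evaluator is confused, more samples just buy more confusion.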

After the success of AlphaGo, DeepMind made a successor program called AlphaZero. Whereas AlphaGo was initially shown a number of human Go matches as a baseline, AlphaZero became dominant at a number of games—Go, chess, and so on—purely by playing itself, with zero prior knowledge, and learning from each game. That an AI model essentially taught itself, very rapidly, to surpass the abilities of any human at multiple games might suggest that similarly rapid advances for today’s chatbots are on the horizon. By this logic, models could essentially figure out ways to improve themselves. But the success of AlphaGo and AlphaZero more likely signals obstacles ahead. The most important ingredient in AlphaGo was the simplicity with which one could measure success—win or lose—and thus give the machine feedback to improve.
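Self-play in the AlphaZero spirit can be sketched on a toy game invented for illustration—Nim, where players alternately take one or two stones from a pile and whoever takes the last stone wins. The learner below starts with zero knowledge and improves using only the win/lose signal at the end of each game, the unambiguous feedback that made the board-game setting so tractable.

```python
import random
from collections import defaultdict

def train(pile=7, games=5000, seed=0):
    """Learn Nim move values purely from self-play win/lose outcomes."""
    rng = random.Random(seed)
    wins = defaultdict(float)  # wins[(state, move)]: games won after this move
    plays = defaultdict(int)   # plays[(state, move)]: times this move was tried
    for _ in range(games):
        state, history = pile, []
        while state > 0:
            moves = [m for m in (1, 2) if m <= state]
            if rng.random() < 0.2:  # occasionally explore a random move
                move = rng.choice(moves)
            else:  # otherwise exploit the current win-rate estimates
                move = max(moves, key=lambda m: wins[(state, m)] / (plays[(state, m)] or 1))
            history.append((state, move))
            state -= move
        # Whoever took the last stone won; credit that player's moves.
        for i, (s, m) in enumerate(reversed(history)):
            plays[(s, m)] += 1
            if i % 2 == 0:  # every other move, counting back from the winning one
                wins[(s, m)] += 1
    return wins, plays
```

After a few thousand games the table encodes the right instincts—for instance, that taking both stones from a two-stone pile always wins, while taking one always loses. The catch, and the article's point, is that this recipe depends entirely on a crisp win/lose signal; most real-world tasks offer nothing so clean.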

With board games, “we were always operating in a specific environment where the rules of the game were known,” Kohli said. “The systems of today are expected to operate in a much more general environment.” Reasoning models have found success mostly in areas that still have a relatively clear rubric for evaluation: whether an AI-written program works as intended, for instance, or whether an AI-written proof holds up. Instilling any notion of a more general intelligence in a machine will be a far more challenging problem than conquering even Go.

DeepMind has been able to design evaluations for more abstract ideas, for instance by orchestrating several AI agents to act as a team of virtual “scientists” that will rank hypotheses about problems in biology. But even that system operates within a relatively constrained domain of biological reasoning and literature. It’s unlikely that any lab will come up with a single way to evaluate “general intelligence” that can be used to train a bot AlphaGo style, let alone one as straightforward as winning or losing a board game.

Still, the progress the AlphaGo approach has yielded for AI models in a number of scientific domains is impressive—so much so that, a decade after AI conquered humanity’s hardest board game, the nation is now in a frenzy over whether AI is about to first overhaul the economy and then unsettle the purpose of being human at all.

Once again, chess and Go might offer guides. As a result of improving via self-play, AlphaGo and AlphaZero developed not only superhuman ability but also inhuman style, using tactics and strategies no human had previously considered. These AI strategies did not destroy the human pursuits of chess and Go; they reignited new waves of human creativity and strategy. The most optimistic analogy for today’s more broadly useful AI systems would be that they also, rather than providing a wholesale replacement for humans, will function as a sort of complementary intelligence. Biologists, mathematicians, and computer scientists are already finding ways in which today’s AI models are not simply speeding up their work but qualitatively changing the kinds of questions humans can ask and the discoveries we can make.

Of course, the business proposition of generative AI is quite the opposite: that products such as ChatGPT and Claude Code can automate huge swaths of white-collar work, help students cheat their way through school, and allow humans to live mostly without thinking. Perhaps C-suite executives, like AI researchers, can learn a lesson from Go and chess. Like any sport, chess and Go are worthwhile because of human struggles and storylines, champions made and toppled, the very fact that people are doomed to be imperfect but always striving to become just a bit better. And rather than replacing human chess masters or destroying the sport and pastime, chess-playing AI models have helped the business of chess boom.

Likewise, employees, managers, students, professors—really all of us—are always learning, and learning by failing, or at least we should be. That is useful and worth preserving in plain economic terms. Nobody becomes world-class at anything without at some point being rather terrible at it, and allowing novices who might be less capable than a bot to build up skills is the only way you get experts with human judgment and abilities that surpass any AI. But more important than that economic rationale is an existential one: To grow or help another do so is a beautiful thing. Some might call it being human.

Matteo Wong is a staff writer at The Atlantic.