What chess can teach us about the AI economy
Chess survived machine supremacy. It grew more professional and more demanding. Much of knowledge work may be about to do the same.
For a moment, chess looked like a warning from the future. Once computers could beat the best humans, what was the point of the human game? If a machine could always find the stronger move, perhaps chess would become pointless. That is not what happened. Chess survived machine supremacy. It grew more professional and more demanding. Research on elite play suggests that human decision-making kept improving through both the late-1990s engine shock and the later arrival of deep-learning systems. Another long-run study finds evidence of “population-level learning” in the game.
That matters because much of today’s argument about artificial intelligence is, in essence, the same one chess faced a generation ago. Once a machine surpasses a person at the core task, many assume the human role must vanish. In chess it did not. What changed was what people valued. Fans still care about style and nerve. Nobody watches top-level chess because humans can outcalculate computers. They watch because the contest is human.
Prediction is not decision
Something similar is likely in much of the economy: in many fields, machine superiority at the technical core will not eliminate demand for people. It will shift attention towards judgement. As Ajay Agrawal, Joshua Gans and Avi Goldfarb argue, AI is best understood first as a prediction technology. Prediction matters, but it is only one ingredient in decision-making.
When the floor rises
Chess offers a second lesson, and a less comforting one. Engines did not merely automate analysis. They made analysis so good, and so cheap, that the standard of competent play rose for everyone who took the game seriously. Openings became deeper. Dubious ideas were exposed at home before they ever reached the board. The average strong player today is better armed than his counterpart a generation ago, not because human brains evolved, but because the tools did. One study finds that chess computers improved players by acting as scalable training partners. Another argues that AI changed the sources of competitive advantage in chess rather than abolishing them.
That pattern is already visible outside the game. In a field experiment involving 5,172 customer-support agents, a generative-AI assistant raised productivity by about 15% on average, with especially large gains for less experienced workers. In an experiment on professional writing tasks, ChatGPT cut completion times by roughly 40% and improved quality by 18%. And in three field experiments with software developers at Microsoft, Accenture and another large firm, researchers found a roughly 26% increase in completed tasks. This is not a world in which skill stops mattering. It is a world in which the floor rises quickly, and in which being merely a little better than average becomes a weaker moat.
Where value moves
As engines improved, the scarce edge in chess moved elsewhere. Cheap calculation made raw move-finding less valuable. Other things counted for more: steering the game into positions one understood, preparing well, managing the clock, keeping one’s nerve, deciding when to trust instinct and when to trust silicon. The same shift is likely in knowledge work. If machines become cheap producers of code, legal drafts, marketing copy and first-pass analysis, value will move towards framing the problem, spotting what matters in the output and recognising when the machine is confidently wrong.
That fits a broader trend in labour markets. David Deming finds that decision-making has become increasingly important as routine work has been automated away and the remaining jobs have grown more open-ended. AI is likely to accelerate that shift.
Centaur economics
It also helps explain why human-machine teams matter more than either enthusiasts or pessimists suppose. Centaur chess, in which humans work with engines, once looked like a curiosity. It now looks more like a preview. The lesson was not that humans plus machines always beat machines. Often they do not. The lesson was that interface, judgement and process matter. A strong player using a machine badly can lose to a weaker one using it well. In many industries, the winners may turn out to be neither old-style professionals nor stand-alone systems, but organisations that learn how to combine machine speed with human sense.
The squeezed middle
Chess offers one harsher lesson, too. The middle gets squeezed. Once powerful analysis became cheap, the merely competent player lost some of his old advantage. Preparation spread. Mistakes once hidden by fog were laid bare. Life became harder for the good-but-not-exceptional professional. Generative AI seems likely to do something similar to white-collar work. Research on labour-market exposure estimates that around 80% of American workers could see at least 10% of their tasks affected by large language models, and about 19% could see at least half their tasks affected. That does not mean those jobs disappear. It does suggest that routine competence will become more abundant, and cheaper, in many domains. (You can check your own exposure with the AI job risk quiz, or see how your area compares in the location rankings.)
Authorship still matters
Yet abundance does not settle everything. Engines can identify the best move in chess, but spectators still prefer some players to others. They admire certain temperaments, styles and instincts. People do not consume only outcomes; they care about authorship. That has an economic counterpart. In experiments on creative writing, generative AI helped people produce stories judged more creative and enjoyable, with the biggest gains going to the weakest writers. But it also made those stories more similar to one another. If competent output becomes plentiful, a distinctive voice may become more valuable precisely because it is rarer.
Access first, then use
There is one final parallel. In chess, the strongest tools were at first unevenly distributed. Early advantages went to those with money, access and technical fluency. Only later did strong engines become ordinary kit. AI seems likely to follow the same path, though with much larger stakes. Studies of AI adoption in America show that early uptake has been highly uneven, concentrated in larger firms, high-growth start-ups and a handful of “superstar” cities. Other work on market structure points in the same direction: foundation models involve high fixed costs and strong economies of scale, which can favour concentration near the frontier. Early gains may therefore accrue mainly to the firms that own the biggest models or can deploy them fastest. Later, if the tools spread more widely, advantage will shift from access to use.
More demanding, not pointless
Chess is not a complete map of the AI economy. The world is messier than 64 squares. But it remains a useful rehearsal for what happens after a machine becomes better than a human at a task once thought deeply human. The answer is not that people retreat into some untouched realm of authenticity. Nor is it that they vanish. They adapt. Standards rise. Easy advantages disappear. New bottlenecks emerge. The human role shifts to where scarcity remains.
That is the likeliest shape of AI’s next phase. Machines will do more of the visible work. They will flatten some kinds of expertise and cheapen plenty of middling output. But they will also raise the return on older human strengths that are harder to formalise: knowing which problem is worth solving, which trade-off is acceptable, which result can be trusted, what style is wanted and whose judgement carries weight. Chess did not become pointless when computers mastered it. It became more demanding. Much of knowledge work may be about to do the same.
Sources
- Bilalić et al., “The role of AI in transforming elite human performance”
- Ribeiro et al., “Move-by-Move Dynamics of the Advantage in Chess Matches Reveals Population-Level Learning of the Game”
- Agrawal, Gans and Goldfarb, “Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction”
- Gaessler et al., “Training with AI: Evidence from chess computers”
- Krakowski et al., “Artificial intelligence and the changing sources of competitive advantage”
- Brynjolfsson, Li and Raymond, “Generative AI at Work”
- Noy and Zhang, “Experimental evidence on the productivity effects of generative artificial intelligence”
- Cui et al., “The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers”
- Deming, “The Growing Importance of Decision-Making on the Job”
- Eloundou et al., “GPTs are GPTs: Labor market impact potential of LLMs”
- Doshi and Hauser, “Generative AI enhances individual creativity but reduces the collective diversity of novel content”
- McElheran et al., “AI Adoption in the United States”
- Korinek and Suh, “Scaling and Market Structure in Artificial Intelligence”