In a recent study, the authors analysed 30 years of chess games and drew conclusions about the effect of AI on training for decision-making. On the one hand, practising with AI helps some players get better; on the other hand, it fails to teach one particular aspect of strategic thinking: exploiting an opponent’s failure. What are the strategic implications of using AI for training?
The research
Part of strategic thinking is dynamic: reacting to others’ moves. Learning how to react often means experiencing sequences of actions and responses and learning from them. In many contexts, it’s hard or impossible to practise that sequence (think of a market entry strategy for a blockbuster drug or movie). That’s why some argue that AI could be one way to teach strategic thinking, with an AI playing the role of the other participants.
To test this hypothesis, Gaessler and Piezunka analysed millions of chess games over several decades and compared players’ performance before and after they competed and trained against chess computers. Their main conclusions are the following:
- the more access players have to computers for training, the higher their performance gets
- the weakest players see their performance increase more than stronger players do
- where and when there are too few players to practise against, access to computer chess increases players’ performance, suggesting that chess computers are a substitute for human training partners
- players with access to chess computers are less successful in exploiting an opponent’s blunder to win the game
AI does help players learn, and AI training benefits the weakest performers most. However, because AI is never tired, never makes mistakes and never miscalculates, training with AI doesn’t teach how to exploit an opponent’s blunder.
Implications
We can derive from the research three implications regarding the source of competitive advantage.
The first implication of these results is that AI can be a good substitute for human trainers when human training is scarce. Practising decision-making with a computer can be a good way to learn.
The second implication is that if training with AI scales and many companies use it to improve their teams’ strategic thinking capabilities, the gap between low-performing and high-performing companies will narrow, eroding the advantage of the formerly high-performing ones.
The third implication is that there will be a premium for the companies and teams trained in what AI cannot (yet) teach: the ability to exploit the blunders of boundedly rational opponents.
What to do
Considering its positive effect on performance, exposing the lowest-performing teams within the company to AI for practice seems a natural opportunity to explore. Other recent studies have reached similar conclusions about AI’s particular effect on the lowest performers. This would imply identifying those teams within the company, identifying the situations where they could use AI for practice, and defining guidelines and procedures. For example, several observers have described using solutions such as ChatGPT to play the role of a competitor or a client in order to fine-tune a proposition or make a decision.
Additionally, exposing the best teams to human interactions and helping them spot and identify human blunders is a good way to maintain an advantage over teams trained only with AI, which are less likely to spot and leverage a blunder. This implies developing distinctly human skills such as emotional intelligence.
Last, and it’s a radical one, acting irrationally could be a good way to trump competitors’ teams trained with AI. Think of the Madman strategy Nixon used during the Cold War, or of the character played by Ricky Gervais in The Invention of Lying, who benefits from being the first human able to lie in a world where everyone else can only tell the truth.