How can we differentiate signal from noise in generative AI? On the one hand, McKinsey predicts that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy. On the other hand, we have plenty of studies (even from McKinsey) which conclude that, when a new digital technology appears, value capture is much harder to achieve than the predictions suggest. The problem is that decision-makers need to find the right moment to invest in a new technology: not too early, because it is costly; not too late, because others might have placed the right bets first. Which drivers should we monitor? Is now the right time? How can we know?
Failed promises of digital technologies
We have been through this already, several times: a new digital technology emerges and consultants publish reports on 1) the disruption it might trigger, 2) the value creation opportunities, and 3) the market size and structure. Generative AI is no exception: six months after the public release of ChatGPT, those reports have been published, and the question is: should we care? The development of two earlier digital technologies, big data and the Internet of Things, calls for caution in following the predictions.
In five years, the promised value creation from big data was divided by three.
In 2011, McKinsey published a report, Big data: The next frontier for innovation, competition, and productivity, which triggered a lot of interest and signalled the beginning of a major market opportunity. The report listed very attractive value creation opportunities: a retailer "(…) could increase its operating margin by more than 60%"; the US healthcare sector "could create more than $300 billion in value every year"; "in the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data"; and "users of services enabled by personal-location data could capture $600 billion in consumer surplus."
Time passed, and neither retailers, nor healthcare corporations, nor government administrators saw the value creation promises turn into reality. So much so that McKinsey released another report five years later to understand the gap. In 2016, The age of analytics: Competing in a data-driven world was released, highlighting that, for example, "manufacturing, the public sector, and health care have captured less than 30 percent of the potential value we highlighted five years ago." As for the reasons behind this difficulty in capturing the promised value, they are mainly organisational and human: "The biggest barriers companies face in extracting value from data and analytics are organizational; many struggle to incorporate data-driven insights into day-to-day business processes. Another challenge is attracting and retaining the right talent—not only data scientists but business translators who combine data savvy with industry and functional expertise."
From 2015 to 2021, the estimated value created by the Internet of Things halved
In their 2015 study, The Internet of Things: Mapping the value beyond the hype, the authors estimated "a potential economic impact of as much as $11.1 trillion per year in 2025" for IoT applications in nine settings. Here again, six years later and probably after some pushback from clients, another study aimed to understand the gap between the prediction and the reality. In 2021, The Internet of Things: Catching up to an accelerating opportunity was published. Similarly, only half of the value identified in the first report had been captured, and an analysis of the gap is presented: five factors are dampening both the at-scale adoption of IoT solutions and their impact: change management, interoperability, installation, cybersecurity, and privacy.
Drivers to monitor
Let’s get back to generative AI: what can we do with the current predictions? In The economic potential of generative AI: The next productivity frontier, McKinsey estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually. A series of detailed estimates by function and industry follows. Perhaps the previous experiences with big data and IoT made the authors more cautious about affirming the potential, and the executive summary ends on a nuanced note: "a full realization of the technology’s benefits will take time, and leaders in business and society still have considerable challenges to address. These include managing the risks inherent in generative AI, determining what new skills and capabilities the workforce will need, and rethinking core business processes such as retraining and developing new skills."
The question for decision-makers is twofold: how big will the scale of change be, and when will it happen? Four drivers should be monitored to answer these two questions: technological development, changes in organisation and processes, environmental footprint, and regulation.
Technological development and progress are likely to support the spread and scale of generative AI solutions. Chip capacity continues to develop at a rapid pace, and companies in the industry are investing significant amounts of money to keep up with Moore’s Law. Cloud providers have scaled their operations and developed customised solutions for generative AI companies. Storing data and running models on ever-larger datasets is becoming cheaper and more accessible. In addition, companies have already massively explored and invested in AI. Because it mobilises the same type of resources, generative AI is likely to penetrate more easily than digital innovations that appeared less connected to previous waves (blockchain or the metaverse, for example).
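To make that pace concrete, here is a minimal sketch of what "keeping up with Moore's Law" implies for compute capacity, assuming capacity doubles roughly every two years (the classic rule of thumb, which is of course a simplification of real hardware trends):

```python
# A back-of-the-envelope view of compute growth under a Moore's Law
# assumption: capacity doubles roughly every two years.
DOUBLING_PERIOD_YEARS = 2

def capacity_multiplier(years: float) -> float:
    """Relative compute capacity after `years`, starting from 1x today."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (2, 5, 10):
    print(f"In {horizon} years: ~{capacity_multiplier(horizon):.0f}x today's capacity")
# In 2 years: ~2x, in 5 years: ~6x, in 10 years: ~32x
```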
However, the mere accessibility of a technology does not automatically translate into productivity gains. The economist Robert Solow identified what he called the productivity paradox. In 1987, a decade into the computer revolution, he observed that productivity growth had actually slowed down: "You can see the computer age everywhere," he wrote, "but in the productivity statistics." Older examples, such as the arrival of electricity on assembly lines, show a significant lag between the emergence of a technology and its impact on work. Generative AI is likely to follow the same path, and it will take time for organisations to reorganise themselves to seize the opportunities offered by the technology.
More and more voices are being raised about the environmental footprint of generative AI solutions. Beyond the direct footprint of devices, the impact of these solutions on energy consumption and CO2 emissions is significant. It is estimated that training GPT-3 produced 500 tonnes of CO2. Even the relatively more efficient BLOOM took 433 MWh of power to train, which would be enough to power the average American home for 41 years. And the consumption doesn't stop once the model is trained: at each query, the model runs and consumes electricity. Whether for cost reasons or because of regulations, this could represent a headwind for the development of generative AI solutions.
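A quick back-of-the-envelope check of that comparison, assuming an average American home consumes roughly 10,600 kWh per year (an assumption based on EIA figures; the exact value varies by source and year):

```python
# Sanity check of the BLOOM figure cited above.
BLOOM_TRAINING_KWH = 433_000       # 433 MWh reported for training BLOOM
AVG_US_HOME_KWH_PER_YEAR = 10_600  # assumed average; varies by source and year

years = BLOOM_TRAINING_KWH / AVG_US_HOME_KWH_PER_YEAR
print(f"Equivalent to powering an average US home for ~{years:.0f} years")
# -> Equivalent to powering an average US home for ~41 years
```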
Countries all over the world are now drafting regulations to control the uses of generative AI solutions, and depending on how these discussions conclude, they could foster or hamper both the technical developments and the business environment around generative AI. The AI Act under discussion in the European Union exemplifies the tradeoff between user protection, geopolitics, and business development.
How to navigate
The previous waves of digital technologies taught us that adoption rates vary significantly across industries, countries, functions, and company sizes. This means that companies ought to implement highly context-specific weak-signal measures to determine whether it is the right time to invest. For these measures, McKinsey's studies fall short.
Most importantly, all reports and research share a common conclusion: value capture is only made possible by change and strategic alignment. On the one hand, technologies are sources of competitive advantage when they are well connected with the other dimensions of the organisation (offering, organisation, skills, …); no technology replaces strategic thinking. On the other hand, to deliver on the value creation promises, structures and processes might need to be redefined to leverage the advantages of the technology.
In a recent article published by Harvard Business Review, two consultants from Innosight, a strategy consultancy co-founded by the late Harvard Business School professor Clayton Christensen, offer a different perspective: to initiate change, it is best to rely on in-company rather than external data.
As I described in another blog post, the authors propose a two-fold approach:
- Rely on internal data to gain an understanding of the changes underway. This data could be qualitative (comments from customers and from new hires, who often have greater exposure to changes in the industry). It could also be quantitative, such as changes in customer behaviour over a recent period, or an analysis of discourse on social media or specialised forums, which often offer a wealth of valuable signals (a minimal sketch of such an analysis follows this list).
- Make room for debate on emerging realities and statistically unrepresentative facts, for example by asking a group of decision-makers to position themselves on a scale representing the need for change. This allows those who see an urgent need for change, and who are often rare, to share their point of view.
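As an illustration of the quantitative side, here is a minimal sketch of a weak-signal count over customer comments. The sample data, the quarterly grouping, and the SIGNAL_TERMS list are all hypothetical and would need to be adapted to the company's own ecosystem and data sources:

```python
from collections import Counter
from datetime import date

# Hypothetical sample of timestamped customer comments; in practice these
# would come from a CRM export, support tickets, or forum scraping.
comments = [
    (date(2023, 1, 12), "Can your tool integrate with ChatGPT?"),
    (date(2023, 3, 3), "We are piloting generative AI for drafting reports."),
    (date(2023, 5, 21), "Does the roadmap include an LLM-based assistant?"),
    (date(2023, 5, 30), "Pricing question about the enterprise plan."),
]

# Terms treated as weak signals of generative AI adoption (an assumption;
# the list should reflect the vocabulary of the company's ecosystem).
SIGNAL_TERMS = ("generative ai", "chatgpt", "llm", "copilot")

def quarterly_signal_counts(comments):
    """Count comments mentioning any signal term, grouped by quarter."""
    counts = Counter()
    for day, text in comments:
        if any(term in text.lower() for term in SIGNAL_TERMS):
            quarter = f"{day.year}-Q{(day.month - 1) // 3 + 1}"
            counts[quarter] += 1
    return dict(sorted(counts.items()))

print(quarterly_signal_counts(comments))  # -> {'2023-Q1': 2, '2023-Q2': 1}
```

A rising count quarter over quarter is exactly the kind of statistically unrepresentative but discussable fact the second step of the approach is meant to put on the table.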
Applying this approach to generative AI would mean, first, putting the technology in perspective against the company's strategy; second, sourcing qualitative information on its adoption in the company's business ecosystem (clients, suppliers, competitors, partners); and third, organising internal controversy and debate on whether and how to follow suit. This will gradually bring to light the significance of the situation and the distinctive strategy to implement.
Photo by Randy Tarampi on Unsplash