The European Commission recently released a proposal for a regulation on Artificial Intelligence (AI). Though there is still a long process ahead before the regulation is enforced, it will impact Datanomics, i.e. the value creation opportunities and competition dynamics related to data and analytics. What would change should the regulation be implemented as described? What are the possible strategies?
The regulation’s main highlights
The 120+ page document describes the rationale, scope, content and enforcement process of a proposed regulation on Artificial Intelligence (AI). The objective is both to “preserve EU’s technological leadership” and “to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles.”
In other words, it aims to strike a fair balance between corporate benefits and citizen rights, while providing a regulatory framework that enables the development of AI technologies and markets.
Below are the main aspects of the regulation that would have an impact on Datanomics:
- A regulation not limited to European companies: similarly to the GDPR, the AI Regulation will apply to both public and private actors inside and outside the EU if the AI system is placed on the market in the EU or its use affects people located in the EU.
- A risk-based approach defining three levels: the AI Regulation distinguishes between uses of AI that create an unacceptable risk, a high risk, and low or minimal risks.
- Some applications explicitly prohibited (unacceptable risk): AI systems or applications that manipulate human behavior to circumvent users’ free will, and systems that allow “social scoring” by governments. In particular, unrestricted facial recognition in public places is prohibited (with three notable exceptions: victim search, threat prevention, and suspect identification for crimes).
- High-risk systems are accepted and explicitly listed. AI systems are classified as high-risk if they are intended to be used as safety components of a product, or where stand-alone AI systems may affect fundamental rights, such as in law enforcement, employment or essential services. The list explicitly covers the use of AI inside companies for hiring or for assessing workers for promotion. It also includes AI that affects access to “essential” services, such as the granting of credit.
- More transparency and control expected for high-risk systems. A risk management system shall be established, implemented, documented and maintained. Stricter governance practices should be implemented for training, validation and testing data sets to limit bias. Events should be documented and recorded automatically so that they can be reviewed if an investigation is conducted (a minimal sketch of such audit logging follows this list). AI systems should be overseen by humans.
- Lighter constraints for AI systems with low or minimal risk: the EC proposes to encourage and facilitate the creation of codes of conduct to foster the voluntary application of the requirements that apply to high-risk AI systems, covering for instance technical specifications and solutions, or environmental sustainability.
- Users should be informed when they interact with AI systems such as chatbots, and when their emotions are detected and used. Similarly, content generated through automated means (deepfakes) should be labelled as such.
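To make the documentation and logging obligations more concrete, here is a minimal sketch, in Python, of what automatic audit recording could look like for a high-risk system. All names here (the log_decision function, the credit-scoring example, the record fields) are hypothetical illustrations, not something prescribed by the regulation.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail: every prediction of a high-risk AI system is
# recorded with its inputs, output, model version and the human overseer,
# so that decisions can be reviewed if an investigation is conducted.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_decision(model_version: str, inputs: dict, output, operator: str) -> None:
    """Append one audit record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_overseer": operator,  # ties in the human-oversight requirement
    }
    logging.info(json.dumps(record))

# Usage: a (hypothetical) credit-scoring model logs each decision it makes.
log_decision(
    model_version="credit-scorer-1.4.2",
    inputs={"income": 42000, "loan_amount": 15000},
    output={"granted": False, "score": 0.31},
    operator="analyst_007",
)
```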
This regulation is not isolated and should be read alongside other regulations and initiatives led by the European Commission that aim to ensure fair market conditions within Europe and fair treatment of European users by platforms and digital players, in particular the General Data Protection Regulation (2018) on data protection and privacy, and the Digital Services Act (2020) and Digital Markets Act (2020) on platform regulation.
Possible impacts on Datanomics
It is too early to tell exactly what consequences will emerge from this new regulation, for two reasons. First, the regulation is not in force yet: it will go through a series of discussions and revisions at the European level, and we can anticipate a lot of lobbying, which will certainly make the initial proposal evolve. Second, it is hard to predict the reactions of the various stakeholders, in particular the users of digital services.
However, we can foresee some changes regarding value creation opportunities, the software and service industry, and the battle between incumbents and digital giants.
- One way of creating value with data is to trade it as a commodity. Because the regulation puts more pressure on the quality of the training data sets used by AI systems, it is likely to widen the business opportunities for trading high-quality data sets, in particular the market for data labelling.
- Another way of creating value with data is to use it as a lever to improve the performance of a business model. Biases embedded in AI systems may previously have limited market opportunities, for example by excluding some clients from certain offers. It is therefore not unlikely that new markets will emerge from less biased AI systems (a toy bias check is sketched after this list).
- Creating value with data has a direct cost (collection, storage, analysis, exchange), and the regulation is likely to increase the direct costs of maintaining high-risk AI systems for European users. The regulation lists a number of resource-consuming obligations for those systems: implementation of risk management practices, documentation of the systems, human supervision, and recording of logs.
- Regarding the software and service industry, the regulation creates new business opportunities for companies able to provide digital products and services that meet the expectations for high-risk systems, in particular data sets and governance mechanisms that limit bias when training AI systems. Conversely, since real-time facial recognition is banned in public spaces except for specific law-enforcement purposes, we can expect a curb in the demand for such solutions in Europe, which is likely to limit the development of strong European capabilities in this domain.
- Competition dynamics would be affected. First, the increased cost of maintaining high-risk AI systems will favour big players, which can support the costs and finance the risk. Second, stricter requirements will call for more investment in specific capabilities, which will impose trade-offs for incumbents regarding resource allocation. It is likely that partnerships between technology providers and incumbents will continue to extend, with value-sharing mechanisms varying according to the bargaining power of the parties.
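To illustrate what “less biased” could mean in practice, here is a toy sketch of one common fairness check, the demographic parity gap, i.e. the difference in positive-decision rates between two groups. The data, function names and groups are hypothetical, and real bias governance would of course rely on many more metrics than this single one.

```python
# Toy bias check: compare approval rates between two client groups.

def positive_rate(decisions: list[bool]) -> float:
    """Share of positive decisions in a list of boolean outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Approval decisions produced by a (hypothetical) model for two client groups.
group_a = [True, True, False, True, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.0%}")  # 40%: a flag for human review
```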
Strategic actions
From the previous analysis, we can identify several non-exclusive strategic actions:
- Lobby to influence what will be decided in the enforced regulation, or to be excluded from it. In particular, the list of high-risk AI systems will certainly be a direct target of lobbying efforts. It is also likely that some companies will try to argue that parts of their digital services do not fall under the definition of AI, similarly to Uber explaining that it is not in the transport business but in the software business.
- Secure the critical resources and capabilities associated with the evolution of the regulation. As an example, recruiting experts on ethics and bias reduction and partnering with universities on research programs may be a good way to prepare, especially since the European Commission is far from alone in putting bias at the top of the list of issues to be solved.
- For incumbents: cooperate with competitors to improve bargaining power with technology providers, for example by pooling funding to support the emergence of a new player, or by collaborating on purchasing practices through a dedicated joint venture.
- Decide to go beyond the regulation and claim a “best-of-class” AI system (according to the criteria stated in the regulation). This could in particular be the case for low or minimal risk AI systems, where developing a strong positioning could pay off.
These actions take time to translate into results, and some require major shifts in positioning and resources. In that process, limited experiments and tests, as well as original partnerships, could turn out to be efficient ways of navigating this uncertain context.