
Artificial Intelligence, from opportunity to regulation

27 April 2021


On 21 April 2021, the European Commission proposed a regulation on Artificial Intelligence in the European Union. The text, still awaiting approval by the European Parliament and the Council of the EU, is expected to be adopted within the year.

Artificial Intelligence at the heart of the debate

At the beginning of 2018, the so-called “Villani” report laid the first foundations of the French strategy on Artificial Intelligence (AI). Its objectives were to create a network of institutes dedicated to AI, to set up a dedicated supercomputer and to make research careers attractive enough to stem the flight of talent to the American giants. At the end of 2018, the French AI plan was officially announced, following the broad lines of the Villani report: €1.5 billion “of public money dedicated over the entire five-year period to develop artificial intelligence” through various means (supercomputers, funding for PhDs, funding for specialised start-ups, etc.). At the same time, Germany announced its own AI plan: €3 billion to propel itself into the leading group of nations, hoping to overtake American and Chinese hegemony. The AI race was on in Europe, and other European countries would follow France and Germany's lead, announcing dedicated plans of their own in the years that followed.

In 2020, it was therefore not surprising to see the European Commission seize on the subject by publishing its white paper on AI, which sets out objectives around four pillars:

  • Creating favourable conditions for the development of AI
  • Making the EU an attractive area for innovation
  • Building strategic leadership in high-impact sectors
  • Ensuring that AI is used for the benefit of citizens

However, as expected, after the euphoric period of investment comes the need to regulate artificial intelligence, a need accelerated in particular by several autonomous-vehicle accidents involving AI systems.

A Commission proposal to regulate Artificial Intelligence

On 21 April 2021, the European Commission proposed the first legal framework for AI. This new regulation aims to guarantee European citizens that they can trust AI, by anticipating the risks induced by the use of automated systems. Facial recognition – and biometric identification more broadly – and systems affecting high-stakes areas such as education or democracy are particularly targeted.

Furthermore, the European Commission intends to guarantee homogeneous rules throughout the territory of the Member States, in order to avoid fragmentation of the market. These rules, subject to approval by the Council of the EU and the European Parliament, will apply to all economic actors, regardless of nationality, throughout the European Union. Their application will be ensured by national authorities, designated by each member country and responsible for implementing the new regulation. Finally, a European Artificial Intelligence Board will centralise the information reported by the national authorities.

Non-compliance with this new regulation may result in fines, capped as follows (a worked example follows the list):

  • Up to €30M or 6% of total worldwide annual turnover (whichever is higher) for prohibited practices or non-compliance with the data governance requirements
  • Up to €20M or 4% of total worldwide annual turnover (whichever is higher) for non-compliance with any other requirement of the regulation
  • Up to €10M or 2% of total worldwide annual turnover (whichever is higher) for supplying incorrect, incomplete or misleading information to the competent authorities
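
To illustrate the arithmetic, here is a minimal sketch in Python; the turnover figure and the tier chosen are invented for the example:

    def fine_ceiling(annual_turnover_eur: float, rate: float, floor_eur: float) -> float:
        """Maximum applicable fine: the higher of a fixed floor and a
        percentage of total worldwide annual turnover."""
        return max(floor_eur, rate * annual_turnover_eur)

    # A provider with EUR 2bn turnover engaging in a prohibited practice
    # falls under the 6% / EUR 30M tier: the ceiling is EUR 120M.
    print(fine_ceiling(2e9, rate=0.06, floor_eur=30e6))  # 120000000.0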

In practice, a wide range of systems are affected

The definition of AI given by the European Commission is very broad, making it possible to legally cover a wide range of systems:

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods such as deep learning
  • Logic- and knowledge-based approaches, including knowledge representation, inductive programming, knowledge bases, inference and deduction engines, symbolic reasoning and expert systems
  • Statistical approaches, Bayesian estimation, search and optimisation methods

These systems are then classified according to four levels of risk (see the triage sketch after the list):

  • Unacceptable risk: obvious threats to citizens, such as social scoring by public authorities. These systems will simply be banned from European territory.
  • High risk: critical infrastructure, education and examinations, employment and recruitment, essential public and private services (including loan systems), border management, justice and democracy. These systems will be heavily regulated.
  • Limited risk: AI dialogue systems (chatbots) and AI-generated images, videos and other content. These systems will be subject to transparency obligations, so as not to mislead users about the origin of the content.
  • Minimal risk: AI in video games, spam filters, etc. These systems, which according to the Commission represent more than 90% of cases, will see no change as a result of the new regulation.
[Figure: risk pyramid of AI systems]
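
For a provider, the first practical step is to triage its systems against these four tiers. Below is a minimal sketch of such a triage in Python; the use-case labels and the mapping are illustrative assumptions on our part, not the regulation's formal annexes:

    # Illustrative mapping from use case to risk tier; the authoritative
    # classification is set by the regulation and its annexes.
    RISK_TIERS = {
        "social_scoring": "unacceptable",
        "recruitment_screening": "high",
        "loan_approval": "high",
        "customer_chatbot": "limited",
        "spam_filter": "minimal",
    }

    OBLIGATIONS = {
        "unacceptable": "banned from the EU market",
        "high": "heavy regulation: risk management, oversight, documentation",
        "limited": "transparency: users must know they are facing an AI",
        "minimal": "no new obligation",
    }

    def triage(use_case: str) -> str:
        tier = RISK_TIERS.get(use_case, "unknown")
        return f"{use_case}: {tier} -> {OBLIGATIONS.get(tier, 'manual review needed')}"

    print(triage("loan_approval"))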

An exception to this risk-based classification is made for biometric identification systems (notably facial recognition). The main reason given by the European Commission is the excessive risk of statistical bias in these algorithms, which can produce serious false positives at rates that differ by gender, ethnic origin or age.

These systems will simply be prohibited on European territory, except in three specific use cases:

  • The search for victims of crime and missing children
  • Responding to an imminent threat of a terrorist attack
  • Detection and identification of perpetrators of serious crimes
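
To make the bias concern concrete, the sketch below compares false positive rates across demographic groups, the kind of disparity the Commission points to; the data and group labels are invented for illustration:

    import pandas as pd

    # Invented predictions from a hypothetical identification system.
    df = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B", "B"],
        "y_true": [0, 0, 1, 0, 0, 0, 1],
        "y_pred": [0, 1, 1, 1, 1, 0, 1],
    })

    # False positive rate per group: the share of true negatives that the
    # system wrongly flags. A large gap between groups signals bias.
    negatives = df[df["y_true"] == 0]
    print(negatives.groupby("group")["y_pred"].mean())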

Future developments

Three major areas of development are foreseen for high-risk systems.

1. Risk assessment

High-risk systems will need a formalised policy for assessing the risk of error. This policy should be iterative, evolving over the life cycle of the system. Three key elements should be put in place (see the sketch after this list):

  • The upstream definition of metrics to quantify the errors expected from the system
  • The estimation of the risks that may be induced in the event of a system error
  • The assessment of other possible risks that may be detected after the system has been put into production
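
As an illustration of the first two elements, here is a minimal sketch; the metrics and thresholds are our assumptions, to be set by each provider's own risk policy:

    from sklearn.metrics import precision_score, recall_score

    def evaluate_release(y_true, y_pred, min_recall=0.90, min_precision=0.80):
        """Compare a model's error profile against metrics defined upstream.
        The thresholds encode how much risk an error is allowed to induce."""
        recall = recall_score(y_true, y_pred)
        precision = precision_score(y_true, y_pred)
        return {"recall": recall, "precision": precision,
                "acceptable": recall >= min_recall and precision >= min_precision}

    # Re-run on each retraining and periodically in production, so that the
    # assessment stays iterative over the system's life cycle.
    print(evaluate_release([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))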

2. Validation of the datasets

The datasets used for training should meet certain criteria, including: “be relevant, representative, error-free and complete. Have adequate statistical properties, including, where appropriate, all profiles or groups of people with whom the system is intended to be used”.¹

While this principle of validating datasets is valuable, it should be remembered that datasets will always be somewhat biased, often partially incomplete, and will very rarely have perfect statistical properties. The application of this rule will therefore probably rely more on good faith than on formal, absolute criteria. It remains fundamental, however, to debias these algorithms as much as possible, as already discussed in our article “Can algorithms be racist”.
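
In practice, part of this validation can be automated. Below is a minimal sketch of such checks; the column names and thresholds are illustrative assumptions:

    import pandas as pd

    def validate_dataset(df: pd.DataFrame, group_col: str, min_share: float = 0.05):
        """Flag completeness and representativeness issues in a training set.
        These are red flags to investigate, not proof that data is error-free."""
        issues = []
        # Completeness: columns with missing values.
        missing = df.isna().mean()
        for col, share in missing[missing > 0].items():
            issues.append(f"column '{col}': {share:.0%} missing values")
        # Representativeness: under-represented groups of people.
        shares = df[group_col].value_counts(normalize=True)
        for group, share in shares[shares < min_share].items():
            issues.append(f"group '{group}': only {share:.0%} of the data")
        return issues

    df = pd.DataFrame({"age": [25, 41, None, 33], "group": ["A", "A", "A", "B"]})
    print(validate_dataset(df, "group", min_share=0.30))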

3. Transparency, interpretability and humans in the loop

The third regulatory issue put forward by the European Commission revolves around the transparency of models and their interpretability. In practice, high-risk systems must be understandable to their human users, avoiding the “black box” effect, where a result is produced without access to the reasoning that led to it. The financial world has already taken this on board, for example in credit allocation models.
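
As an illustration (a minimal sketch with invented credit-style features), an inherently interpretable model such as a logistic regression exposes the reasoning behind each decision through its coefficients, where a black-box model would only return a score:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented toy data: [income in kEUR, debt ratio] -> loan repaid (1) or not (0).
    X = np.array([[30, 0.6], [55, 0.2], [42, 0.4], [28, 0.8], [60, 0.1], [35, 0.5]])
    y = np.array([0, 1, 1, 0, 1, 0])

    model = LogisticRegression().fit(X, y)

    # Each coefficient states how a feature moves the decision, so the
    # outcome can be explained and challenged by a human.
    for name, coef in zip(["income_keur", "debt_ratio"], model.coef_[0]):
        print(f"{name}: {coef:+.3f}")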

With this principle of transparency also comes the principle of placing humans at the heart of the AI process. Operational staff must not only be capable of using AI systems, but also of understanding their results and questioning them. This principle of empowering and upskilling business teams is particularly important to us at eleven, and is fundamental to the successful adoption of AI as a means to an end. To read more on the subject: Interpretability of Machine Learning models.

Good practices to be generalised

In reality, this new regulation was widely expected and will probably not change much in the day-to-day life of AI system providers, but it will set common foundations for a more closely supervised AI. As most providers of high-risk systems already have these principles in mind, no drastic change of direction is expected for the industries concerned.

However, it is interesting to see these good practices being formalised and gradually generalised to all systems in production. This is what the European Commission is encouraging with the introduction of voluntary codes of conduct, which consist of applying these principles of risk analysis, bias assessment and transparency to all systems.

For our part, these are fundamental principles that we will continue to apply on a daily basis with our clients and which you can find here.

 

Morand Studer, Simon Georges-Kot, Jean Sauvignon

 

Notes and references

1. Free translation. Source: New rules for Artificial Intelligence – Q&As (europa.eu)
