
Interpretability of machine learning models

21 January 2021

Data science

Machine learning models that process large amounts of data greatly improve the performance of predictions. Nevertheless, these models raise many questions about their interpretability, which can lead to rejection by the business lines or customers who use them. Data scientists who wish to deploy such models must therefore propose a rigorous approach to improving the understanding of their results, an essential condition for their successful implementation within companies.

On January 17th, 1991, Operation Desert Storm began, which pitted a coalition of 35 states, led by the United States, against Iraq. After the first successful air raids, the coalition decided to launch a ground assault. To everyone’s amazement, when the American tanks opened fire, they pointed their guns at their allies and began to pound them, breaking up the coalition in the process.

 

It is necessary to have a good understanding of machine learning models

This piece of political fiction is rooted in the consequences that could have followed from a misinterpreted machine learning model. During the 1990s, the American army tested an automatic enemy-tank detection technology based on image-recognition learning algorithms. Within the training sample, however, the most discriminating factor for detecting the presence of an enemy tank was the color of the sky: photos showing a landscape with an enemy tank had been taken in good weather, while those without had been taken in bad weather. With detection models calibrated in this way, a sunny day or a simple storm would have been enough to put a military coalition at risk… This example highlights the need to understand machine learning models well in order to use them correctly.

This is all the more true in an era when algorithms are taking an ever more important place in our daily lives: credit granting, dating sites, route planning, medical diagnoses, etc. This multiplication of algorithms raises many questions: how were they built? How do they work? How do they explain their decisions? The answers form a relatively new but expanding field of research. These questions must be taken seriously by companies wishing to leverage such tools, at the risk of seeing their customer relationships and the business lines’ support for data projects deteriorate.

To illustrate the point, let us take the example of the algorithms used by banks to determine their customers’ borrowing capacity. Let’s put ourselves in the shoes of a young investor, whom we will call Daniel, looking for his first real estate investment. Daniel goes to the bank to find out the terms of a loan and provides a fairly wide range of personal data (age, salary, marital status, etc.).

To his great surprise, his bank advisor tells him that he is not eligible for a loan. The bank has recently rolled out new machine-learning-based credit allocation software that lets advisors grant credit to their customers “with just a few clicks and with unparalleled accuracy”, relying on the latest state-of-the-art techniques in artificial intelligence. However, the performance of the algorithm was favored over its interpretability, which leaves the bank advisor in a bind: he cannot explain to Daniel which factors counted against his file.

How to reconcile performance and interpretability?

This example illustrates the trade-off faced by any machine learning development project: what is the right balance between performance and interpretability, and where should the line be drawn? Modeling is generally characterized by an inverse relationship between the two: the more performant a model, the less interpretable it tends to be.

The universe of machine learning models can thus be divided into models that are interpretable by nature (linear regressions, decision trees, etc.) and so-called “black box” models (random forests, neural networks, etc.).
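To make this distinction concrete, here is a minimal sketch on a purely synthetic dataset with hypothetical feature names (an illustration only, not a real credit model): the coefficients of a logistic regression can be read directly, variable by variable, whereas a random forest has no such directly readable parameters and calls for the interpretability techniques discussed below.

```python
# Interpretable by nature vs. "black box": a minimal illustration on a
# synthetic credit-style dataset (feature names are hypothetical).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["age", "salary", "savings", "seniority", "debt_ratio"]

# Interpretable by nature: each coefficient tells how a variable shifts the score.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# Black box: hundreds of deep trees, no single set of readable parameters.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(f"Random forest built from {len(forest.estimators_)} trees")
```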

 

The success of a Machine Learning project within a company is based on the following five golden rules:

  1. The model must solve a clearly identified problem corresponding to a business need;
  2. The model must capture reality as faithfully as possible, without overfitting, and must generalize well;
  3. The model must be explainable and easily adopted by business teams in order to gain their support;
  4. The model must be adapted to the requirements of the end customer; and
  5. The model must meet the requirements of the regulator.

Improving the interpretability of machine learning models is one of the main levers available to data science teams to meet these criteria for successful project development. It makes it possible to move beyond the interpretability vs. performance dilemma, which would otherwise work against potentially more performant models.

Our previous example illustrates the need for a good understanding of machine learning models: the bank advisor is unable to explain the model’s result to the client, who is left puzzled by the bank’s decision, and the pre-existing trust-based relationship between the bank and its client deteriorates.

Understanding and explaining models is therefore one of the major challenges in machine learning projects. What is the process to be followed to achieve this? What are the existing solutions? To answer these questions, two categories of techniques emerge: global interpretability and local interpretability.

Global interpretability seeks to identify the most important variables of the model through a careful analysis of each variable’s contribution to the model’s output. What is its contribution to the model’s performance? What is the relationship between each variable and the model’s output? This analysis must be complemented by a critical look at the economic meaning of the behavior of the main variables. Global interpretability should ultimately (1) improve business experts’ understanding of the model and thereby (2) foster their ownership of its results.
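One common way to quantify each variable’s contribution to performance is permutation importance: shuffle one variable at a time and measure how much the model’s score drops. Below is a minimal sketch using scikit-learn on an assumed synthetic dataset with hypothetical feature names; it illustrates the idea rather than a production implementation.

```python
# Permutation importance: shuffle one variable at a time and measure the
# drop in score (synthetic data and hypothetical feature names).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
names = ["age", "salary", "savings", "seniority", "debt_ratio"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Rank variables by the average drop in score when they are shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"{names[i]}: {result.importances_mean[i]:.4f} "
          f"(+/- {result.importances_std[i]:.4f})")
```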

A method commonly used for global interpretability is the Partial Dependence Plot (PDP): the variable of interest is successively fixed at a series of values while the other variables keep their observed values, and the model’s predictions are averaged at each step. Running these simulations reveals how the variable behaves in the model; applying the same methodology to every variable then shows the impact of each one on the output.
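As a sketch of the idea, reusing the fitted model, test set and hypothetical feature names from the previous snippet, the partial dependence of a single variable can be computed by hand as follows; in practice, scikit-learn’s sklearn.inspection module also provides ready-made partial dependence utilities.

```python
# Hand-rolled partial dependence sketch, reusing `model`, `X_test` and
# `names` from the previous snippet.
import numpy as np

def partial_dependence_curve(model, X, feature_idx, n_points=20):
    """Average predicted score when one variable is forced to each grid value."""
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), n_points)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value                              # freeze the variable at `value`
        averages.append(model.predict_proba(X_mod)[:, 1].mean())   # simulate and average
    return grid, averages

grid, avg = partial_dependence_curve(model, X_test, feature_idx=1)  # e.g. "salary"
for value, score in zip(grid, avg):
    print(f"{names[1]} = {value:+.2f} -> average predicted score {score:.3f}")
```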

Reducing the gap between data science and business

In our example, global interpretability allows the bank to understand precisely which criteria and variables the model uses to estimate the credit risk associated with a given type of profile. This control of risks and models is now essential for European banks, particularly vis-à-vis the regulator, who is increasingly demanding about how banking risks are calculated[1].

Local interpretability, on the other hand, seeks to decipher the behavior of the model at the level of a single individual by identifying the impact and local contribution of each variable. This approach makes it easier to communicate and justify the algorithm’s results to the end user.

One of the methods commonly used for local interpretability is the Shapley value method, which quantifies the contribution of each variable to the difference between an individual prediction and the average of all predictions. In Daniel’s example, this helps to highlight the strengths and weaknesses of his file: the bank advisor could explain that age, salary and level of savings were the variables that contributed most to the final decision on his loan application (a minimal code sketch is given at the end of this article).

Big data projects in companies often aim to improve and automate the operational chain, or to deliver a more frictionless customer experience through a simplified and unified journey. Nevertheless, failing to take the expectations of business teams, customers and regulators into account early in a project’s development can lead to its failure, especially when the project relies on “black boxes”.

The interpretability of models therefore offers an essential opportunity to reduce the gap between data science and business. In this respect, devoting part of their efforts to interpretability methods should eventually allow data science teams to secure the acceptance of more performant models. Interpretability is thus one of the key factors for the successful implementation of decision algorithms in companies.
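As an illustration of the Shapley-value approach described above, here is a minimal sketch assuming the open-source shap package is installed and reusing the model, test set and hypothetical feature names from the earlier snippets; the contributions shown estimate how each variable pushes one applicant’s score above or below the average prediction.

```python
# Shapley-value sketch with the open-source `shap` package (assumed installed),
# reusing `model`, `X_test` and `names` from the earlier snippets.
import shap

def positive_class_score(data):
    """Scalar output to explain: the predicted probability of the positive class."""
    return model.predict_proba(data)[:, 1]

explainer = shap.Explainer(positive_class_score, X_test)  # model-agnostic Shapley estimator
explanation = explainer(X_test[:1])                       # explain the first applicant

# Positive values push the prediction above the average, negative values below.
print(f"Average prediction (baseline): {explanation.base_values[0]:.3f}")
for name, contribution in zip(names, explanation.values[0]):
    print(f"{name}: {contribution:+.4f}")
```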

 

Morand STUDER, Pietro TURATI and Clément TEQUI

 

[1] Basel III: Finalising post-crisis reforms, Bank for International Settlements, December 2017
