
Let's talk about explainable anti-fraud AI

WBOY · 2023-04-11 20:46:12


In recent years, artificial intelligence has developed rapidly and has become a powerful innovation tool across countless use cases in many industries. With that capability, however, comes great responsibility. Thanks to AI and machine learning, anti-fraud technology is becoming more precise and evolving faster than ever: real-time scoring lets businesses identify fraud instantly. Yet AI/ML-driven decision-making also raises concerns about transparency, and when ML models operate in high-stakes environments, the need for interpretability follows.

As the number of critical decisions made by machines continues to grow, explainability and understandability become increasingly important. As technology researcher Tim Miller puts it, interpretability is the degree to which a human can understand the cause of a decision. Improving the interpretability of ML models is therefore crucial to building trustworthy automated solutions.

Developers, consumers, and business leaders alike should understand how fraud-prevention decisions are made and what they mean. Yet any ML model with more than a handful of parameters is difficult for most people to reason about. The explainable-AI research community has repeatedly shown, however, that thanks to interpretability tools, black-box models need not remain black boxes. With these tools, users can understand, and place more trust in, the ML models used to make important decisions.

SHAP of Things

SHAP (SHapley Additive exPlanations) is one of the most widely used model-agnostic explanation tools today. It computes Shapley values from cooperative game theory, fairly distributing the influence of each feature over the prediction. When we fight fraud with ensemble methods on tabular data, SHAP's TreeExplainer algorithm makes it possible to obtain exact local explanations in polynomial time, whereas explanations for neural networks can only be approximated. That is a major advance.
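As a minimal sketch of what this looks like in code, the example below fits a scikit-learn gradient-boosted classifier on synthetic "transaction" data and asks TreeExplainer for local explanations. The dataset, labels, and model choice are illustrative assumptions, not taken from the article.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                     # toy "transaction" features
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)      # stand-in fraud label

model = GradientBoostingClassifier().fit(X, y)

# For tree ensembles, TreeExplainer yields exact Shapley values in polynomial time.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])         # local explanations for 5 rows
print(shap_values.shape)                           # (5 predictions, 4 features)
```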

White box usually refers to a rules engine that computes fraud scores from predefined rules. Black boxes and white boxes fundamentally produce their results in different ways: a black box derives its output from what the machine has learned from data, while a white box scores according to predefined rules. We can improve both approaches by playing them off each other, for example by adjusting the rules based on fraud patterns uncovered by the black-box model.

Combining a black-box model with SHAP helps us understand the model's global behavior and reveals the main features the model relies on to detect fraudulent activity. It can also expose undesirable biases in the model; for example, a model might discriminate against certain demographic groups. Global model interpretation can surface such situations and thereby help prevent inaccurate predictions.
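As a hedged illustration of such a global view, the sketch below averages the magnitude of SHAP values across many synthetic transactions and ranks the features. If a sensitive attribute (here, a hypothetical customer_age column) ranked near the top, that would be a flag to investigate possible bias. The data, feature names, and model are assumptions for demonstration only.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["amount", "hour", "device_age", "customer_age"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)      # stand-in fraud label

model = GradientBoostingClassifier().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Global importance: average magnitude of each feature's contribution.
for name, imp in sorted(zip(feature_names, np.abs(shap_values).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name:>12s}: {imp:.3f}")
```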

Additionally, it helps us understand the individual predictions made by the model. During the debugging process of ML models, data scientists can observe each prediction independently and interpret it accordingly. Its feature contributions can help us perceive what the model is doing, and we can develop further from these inputs. By leveraging SHAP, end users can not only obtain the basic features of the model, but also understand how each feature (in which direction) affects the model output fraud probability.
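The sketch below shows what such a single-prediction explanation can look like: for one synthetic transaction, it lists each feature's SHAP contribution and whether it pushes the fraud score up or down. Everything here (data, feature names, model) is an illustrative assumption.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["amount", "hour", "device_age", "customer_age"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)      # stand-in fraud label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

i = 0                                              # transaction to inspect
contribs = explainer.shap_values(X[i:i + 1])[0]    # one SHAP value per feature
for name, value, c in sorted(zip(feature_names, X[i], contribs),
                             key=lambda t: -abs(t[2])):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name} = {value:.2f} {direction} the fraud score by {abs(c):.3f}")
```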

Confidence Coefficient

Finally, SHAP helps win customers' confidence by building trust in a well-performing model. Generally speaking, we trust a product more when we understand how it works; people do not like what they do not understand. Interpretation tools let us look inside the black box so we can understand it better and trust it more. And by understanding the model, we can keep improving it.

The Explainable Boosting Machine (EBM) is an alternative to the combination of gradient-boosted ML models and SHAP. It is the flagship model of InterpretML (Microsoft's open-source interpretability toolkit) and a so-called glass box: the name reflects the fact that it is interpretable by construction. According to its documentation, "EBMs are generally as accurate as state-of-the-art black-box models while remaining completely interpretable. Although EBMs are slower to train than other modern algorithms, they are extremely compact and fast at prediction time." Local Interpretable Model-agnostic Explanations (LIME) is another good tool for black-box interpretation, though it is more popular for models on unstructured data.
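The sketch below shows roughly what training an EBM looks like with the interpret package; the data and feature names are assumptions, and the explanation objects it produces can be rendered interactively in a notebook.

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                     # toy "transaction" features
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)      # stand-in fraud label

# EBM is a glass-box model: an additive combination of per-feature shape
# functions, so its structure is interpretable by construction.
ebm = ExplainableBoostingClassifier(
    feature_names=["amount", "hour", "device_age", "customer_age"]
)
ebm.fit(X, y)

global_expl = ebm.explain_global()                 # term importances and shapes
local_expl = ebm.explain_local(X[:3], y[:3])       # per-prediction breakdowns
print(ebm.predict_proba(X[:3])[:, 1])              # predicted fraud probabilities
# In a notebook, interpret.show(global_expl) renders an interactive dashboard.
```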

By leveraging the tools above, together with transparent data, organizations can make decisions with confidence. All stakeholders need to know how their tools arrive at their results. Understanding black-box ML, and the techniques that can be combined with it, helps organizations see how those results are produced and how they support business goals.

Comments

To humans, the unknown is often frightening and hard to trust. An AI/ML decision-making model is like a "black box": we can see its structure, but we cannot see how it works inside, let alone judge how reliable its results are. This makes applying AI and ML especially difficult in high-stakes settings such as fraud prevention. Interpretability tools are gradually making the "black box" transparent, which goes a long way toward dispelling users' doubts and concerns, while also creating the conditions for the "black box" itself to improve.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn to have it deleted.