
In today's artificial intelligence environment, what is explainable AI?

WBOY · 2023-04-11


As artificial intelligence (AI) becomes more sophisticated and widely adopted in society, one of the most critical sets of processes and methods is explainable AI, sometimes referred to as XAI.

Explainable AI can be defined as:

A set of processes and methods that help human users understand and trust the results of machine learning algorithms.

As you might guess, this interpretability matters a great deal. AI algorithms now drive decisions in many domains, which brings the risk of bias, faulty models, and other problems. By enabling transparency through explainability, organizations can truly harness the power of artificial intelligence.

Explainable AI, as the name suggests, helps describe an AI model, its impact and potential biases. It also plays a role in describing model accuracy, fairness, transparency and the outcomes of AI-driven decision-making processes.

Today’s AI-driven organizations should always adopt explainable AI processes to help build trust and confidence in AI models in production. In today’s artificial intelligence environment, explainable AI is also key to being a responsible enterprise.

Because today's artificial intelligence systems are so advanced, humans often cannot trace how an algorithm arrived at its result. The process becomes a "black box" that cannot be understood. When these unexplainable models are built directly from data, no one can say exactly what is going on inside them.

By using explainable AI to understand how an AI system operates, developers can verify that the system works as intended. It also helps ensure that models comply with regulatory standards and gives stakeholders the opportunity to challenge or change a model.

Differences between explainable AI and conventional AI

Explainable AI uses specific techniques and methods to help ensure that every decision in the ML process is traceable and explainable. Conventional AI, by contrast, often uses ML algorithms to produce results without any way to fully understand how the algorithm arrived at them. That makes accuracy difficult to check and leads to a loss of control, accountability, and auditability.

Benefits of Explainable AI

There are many benefits for any organization looking to adopt Explainable AI, such as:

  • Faster results: Explainable AI enables organizations to systematically monitor and manage models to optimize business results. Model performance can be continuously evaluated and improved, and model development fine-tuned.
  • Reduce risk: By adopting an explainable AI process, you can ensure that the AI model is explainable and transparent. Regulatory, compliance, risk and other needs can be managed while minimizing the overhead of manual inspections. All of this also helps reduce the risk of unintentional bias.
  • Build trust: Explainable AI helps build trust in production AI. AI models can be put into production quickly, interpretability can be guaranteed, and the model evaluation process can be simplified and made more transparent.
Explainable AI Technologies

There are several XAI technologies that any organization should consider, and they fall into three main approaches: prediction accuracy, traceability, and decision understanding.

The first approach, prediction accuracy, is key to the successful use of artificial intelligence in day-to-day operations. Simulations can be run and the XAI output compared with the results in the training data set, which helps determine the accuracy of predictions. One of the more popular techniques for this is Local Interpretable Model-Agnostic Explanations (LIME), which explains individual classifier predictions by fitting a simple, interpretable surrogate model around each prediction; a hedged sketch using LIME appears below.
The second approach is traceability, which is achieved by constraining how decisions can be made and establishing a narrower scope for machine learning rules and features. One of the most common traceability techniques is DeepLIFT (Deep Learning Important FeaTures), which compares the activation of each neuron to a reference activation and assigns contribution scores, exposing traceable links between activated neurons and showing how they depend on one another; a hedged sketch appears below.
The third approach is decision understanding, which differs from the first two in that it is people-centered. Decision understanding involves educating the organization, especially the teams that work with the AI, so that they understand how and why the AI makes decisions. This approach is critical to building trust in the system.
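
To make the prediction-accuracy approach concrete, below is a minimal sketch of applying LIME to a tabular classifier and reading off the local explanation for one prediction. The iris dataset and the random-forest model are stand-ins chosen purely for illustration, and the lime and scikit-learn packages are assumed to be installed; this is not the article's own example.

```python
# pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Placeholder model standing in for whatever classifier is being audited.
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a simple local surrogate around one prediction and list the feature weights.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Comparing these local weights with what is known about the training data is one way to sanity-check that predictions are being made for sensible reasons.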
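
For the traceability approach, the following is a rough sketch using the DeepLift attribution method as implemented in the Captum library (one of several implementations of DeepLIFT-style attribution). The tiny network, the random input and the all-zeros reference baseline are placeholders for illustration only; torch and captum are assumed to be installed.

```python
# pip install torch captum
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Placeholder network standing in for the model being audited.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.rand(1, 4)      # one example to explain (random stand-in data)
baseline = torch.zeros(1, 4)   # the reference input that activations are compared against

# Pick the predicted class, then attribute it back to the input features
# relative to the reference activations.
target = int(model(inputs).argmax(dim=1))
attributions = DeepLift(model).attribute(inputs, baselines=baseline, target=target)
print(attributions)
```

Each attribution score links the output back to an input feature relative to the reference, which is the kind of traceable chain this approach aims for.
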
Explainable AI Principles

To better understand XAI and its principles, the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, has set out four principles of explainable AI:

  • AI systems should provide evidence, support, or reasoning for each output.
  • AI systems should give explanations that users can understand.
  • The explanation should accurately reflect the process used by the system to achieve its output.
  • AI systems should only operate under the conditions for which they were designed and should not provide output when they lack sufficient confidence in the results.

These principles can be further organized as:

  • Meaningful: To satisfy the meaningfulness principle, users should be able to understand the explanations they are given. This also means that, because different types of users work with an AI algorithm, multiple explanations may be needed. For example, in the case of a self-driving car, one explanation might read: "The AI classified the plastic bag on the road as a rock and therefore took action to avoid hitting it." While this example works for the driver, it is not very useful for an AI developer trying to correct the problem; the developer needs to understand why the misclassification occurred.
  • Explanation accuracy: Unlike output accuracy, explanation accuracy concerns whether the AI algorithm accurately explains how it arrived at its output. For example, if a loan-approval algorithm explains a decision as being based on the applicant's income when it was in fact based on the applicant's place of residence, that explanation is inaccurate (a sketch of one way to check this appears after this list).
  • Knowledge limits: An AI system can reach its knowledge limits in two ways: the input falls outside the system's expertise, or the system's confidence in its answer is too low. For example, if a system built to classify bird species is given a picture of an apple, it should be able to explain that the input is not a bird. If the system is given a blurry picture, it should report that it cannot identify the bird in the image, or that its identification has very low confidence (a minimal sketch of this idea appears after this list).
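
As a rough illustration of the explanation-accuracy principle, the sketch below compares the feature a hypothetical loan explanation claims to rely on against the feature the model measurably relies on, using scikit-learn's permutation importance. The synthetic data, the feature names and the loan scenario are all invented for illustration.

```python
# pip install scikit-learn
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "residence_years", "existing_debt"]  # hypothetical features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0).astype(int)  # decisions secretly driven by "residence_years"

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The explanation offered to applicants claims the decision rests on income.
claimed_feature = "income"

# Measure which feature the model actually depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
actual_feature = feature_names[int(np.argmax(result.importances_mean))]

print(f"claimed: {claimed_feature}, measured: {actual_feature}")
if claimed_feature != actual_feature:
    print("Explanation-accuracy problem: the stated reason does not match model behaviour.")
```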
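
The knowledge-limits principle can be sketched as a simple reject option: return a label only when the model's confidence clears a threshold, and otherwise decline to answer. The class names, probabilities and the 0.8 threshold below are hypothetical.

```python
import numpy as np

def classify_with_knowledge_limits(probabilities, class_names, threshold=0.8):
    """Return a label only when the model is confident enough; otherwise decline."""
    probabilities = np.asarray(probabilities)
    best = int(np.argmax(probabilities))
    if probabilities[best] < threshold:
        return "unable to identify: confidence too low"
    return class_names[best]

# Hypothetical softmax output from a bird-species classifier shown a blurry photo.
print(classify_with_knowledge_limits([0.34, 0.33, 0.33], ["robin", "sparrow", "finch"]))
# -> unable to identify: confidence too low
```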

The role of data in explainable AI

One of the most important components of explainable AI is data.

According to Google, regarding data and explainable AI, "an AI system is best understood through the underlying training data and training process, as well as the resulting AI model." This understanding relies on the ability to map a trained AI model to the exact data set used to train it, and on the ability to examine that data closely.
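
One way to preserve that mapping between a trained model and the exact data behind it is to record a fingerprint of the training data alongside the model. The sketch below is a hedged illustration rather than a prescribed workflow; the file paths and the "model card" format are invented.

```python
import hashlib
import json
from pathlib import Path

def dataset_fingerprint(path: str) -> str:
    """Hash the training file so the exact data behind a model can be identified later."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def save_model_card(model_path: str, data_path: str, notes: str) -> None:
    """Write a small JSON record tying the model artifact to its training data."""
    card = {
        "model_file": model_path,
        "training_data": data_path,
        "training_data_sha256": dataset_fingerprint(data_path),
        "notes": notes,
    }
    Path(model_path + ".card.json").write_text(json.dumps(card, indent=2))

# Hypothetical paths; in practice these point at the real artifacts.
# save_model_card("loan_model.pkl", "loans_2023Q1.csv", "source: internal lending records")
```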

To enhance the interpretability of the model, it is important to pay attention to the training data. The team should identify the source of the data used to train the algorithm, the legality and ethics of obtaining the data, any potential bias in the data, and what steps can be taken to mitigate any bias.

Another key aspect of data and XAI is that data irrelevant to the system should be excluded: it must not find its way into the training set or the input data.

Google recommends a set of practices for achieving explainability and accountability:

  • Plan ahead for the options available to pursue explainability
  • Treat explainability as a core part of the user experience
  • Design the model to be interpretable
  • Choose metrics that reflect the end goal and the end task
  • Understand the trained model
  • Communicate explanations to the people who use the model
  • Conduct extensive testing to ensure AI systems work as expected

By following these recommended practices, organizations can ensure the implementation of explainable AI. This is key for any AI-driven organization in today’s environment.
