
Practicing Responsible AI Deployment: Four Principles


Artificial intelligence (AI) is transforming every industry, with more than one-third of organizations now using AI either extensively or on a limited basis. But like any technology, AI carries significant economic and social risks, such as the spread of unethical bias, the dilution of accountability, and violations of data privacy.


To avoid these risks and deploy AI responsibly, both regulators and industry have a responsibility to develop processes and standards for the practitioners and users working around the technology. To that end, the team at the Ethical AI and ML Institute has put together the Responsible AI Principles, which help practitioners ensure these principles are embedded by design into the infrastructure and processes surrounding production AI and machine learning systems.

This article breaks down four of the eight principles: bias assessment, explainability, human augmentation, and reproducibility.

Bias Assessment

In a sense, AI models are inherently biased, because they are designed to treat relevant answers differently. That is because intelligence, at its core, is the ability to recognize and act on the patterns we see in the world. When developing AI models, we try to replicate exactly this ability, encouraging the AI to discover patterns in the data it is fed and to develop biases accordingly. For example, a model that studies protein chemistry data will inherently be biased toward proteins whose structures fold in a certain way, and can thereby discover which proteins are useful in medical use cases.

Therefore, we should be careful when we talk about AI bias. When bias in AI comes up as a topic, we are generally referring to bias that is undesirable or unreasonable, such as bias based on a protected characteristic like race, gender, or national origin.

But why do AI models produce unethical biases? The answer lies in the data they are fed. A model will ultimately reflect the biases present in the training data it saw before deployment, so if that data is unrepresentative or incorporates pre-existing biases, the resulting model will eventually reflect them. As they say in computer science, "garbage in, garbage out."

Teams must also create processes and procedures to properly identify any undesirable bias in the AI training data, in the training and evaluation of the model itself, and across the model's operational lifecycle; a minimal example of such a check is sketched below. If you are deploying AI, a good reference is the Ethical AI and Machine Learning Institute's eXplainable AI framework, which we cover in more detail next.
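As a hedged illustration of one such check, the sketch below computes per-group selection rates from a model's predictions and flags a large disparity for human review. The column names and the 80% "four-fifths rule" threshold are illustrative assumptions, not part of the article or the Institute's framework.

```python
# A minimal bias-assessment sketch, assuming a pandas DataFrame of model
# predictions alongside a protected attribute. Names and threshold are
# illustrative only.
import pandas as pd

# Hypothetical audit data: one row per applicant.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   0,   0,   1,   0,   1],
})

# Selection rate (share of positive predictions) per group.
rates = df.groupby("group")["predicted"].mean()
print(rates)

# Disparate-impact ratio: lowest selection rate over highest.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")

# Flag for human review if the ratio falls below the illustrative threshold.
if ratio < 0.8:
    print("Potential undesirable bias detected; escalate for review.")
```

Checks like this belong at every stage listed above: on the raw training data, on model outputs during evaluation, and on live predictions once the model is in production.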

Explainability

To ensure that an AI model is fit for purpose, the involvement of relevant domain experts is also important. These people can help teams make sure the model is evaluated with the right performance metrics, not just statistically driven accuracy metrics. It is worth emphasizing that domain experts include not only technical experts, but also experts in the social sciences and humanities relevant to the use case.
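As a small illustration of looking beyond raw accuracy, the sketch below reports several standard scikit-learn metrics side by side. The labels and predictions are placeholders; which metrics actually matter (and how to weigh them) is exactly the judgment the domain experts above should supply.

```python
# A minimal sketch of evaluating a model on more than accuracy alone,
# using scikit-learn's standard metrics. Data here is placeholder only.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # cost of false positives
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")     # cost of false negatives
print(f"F1:        {f1_score(y_true, y_pred):.2f}")
```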

For that expert review to be effective, it is also important to ensure that the model's predictions can be interpreted by the relevant domain experts. Yet advanced AI models often use state-of-the-art deep learning techniques, which cannot simply explain why a particular prediction was made.

To overcome this difficulty, organizations typically achieve machine learning explainability by leveraging a variety of techniques and tools that can decipher the predictions of AI models.
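As one hedged example of such a tool (the article does not prescribe a specific one), the sketch below uses the open-source SHAP library to attribute a tree model's predictions to its input features, which a domain expert can then sanity-check.

```python
# A minimal post-hoc explainability sketch using the SHAP library on a
# tree-based scikit-learn model; dataset and model are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute SHAP values: per-feature contributions to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Summarize which features drive the model's predictions overall.
shap.summary_plot(shap_values, X.iloc[:100])
```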

After explainability comes the operationalization of the AI model, which is when relevant stakeholders investigate and monitor it. The lifecycle of an AI model only truly begins once it is properly deployed to production. Once up and running, a model can suffer performance degradation from external pressures, whether concept drift or changes in the environment in which the model operates.
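As a sketch of what such monitoring can look like, the snippet below compares a feature's live distribution against its training-time distribution using the Population Stability Index (PSI). The 0.2 alert threshold is a common rule of thumb, not something the article specifies.

```python
# A minimal drift-monitoring sketch: Population Stability Index (PSI)
# between the training distribution and live production data.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples of one feature; higher means more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

train_feature = np.random.normal(0.0, 1.0, 10_000)  # training-time distribution
live_feature = np.random.normal(0.5, 1.2, 10_000)   # drifted production data

score = psi(train_feature, live_feature)
if score > 0.2:  # common rule-of-thumb threshold, assumed here
    print(f"PSI = {score:.3f}: significant drift, investigate the model.")
```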

Human Augmentation

When deploying AI, it is critical to first assess the needs of the original, non-automated process, including outlining the risks of adverse outcomes. This allows for a deeper understanding of the process and helps identify the areas that may require human intervention to reduce risk.

For example, an AI that recommends meal plans to professional athletes carries far fewer high-impact risk factors than an AI model that automates the back-end loan approval process for a bank, which means the need for human intervention is smaller in the former than in the latter. When teams identify potential risk points in their AI workflows, they can consider implementing a human-in-the-loop (HITL) review process.

HITL ensures that, after a process is automated, there are still touch points where human intervention is required to check the results, making it easier to provide corrections or reverse decisions when necessary. The process can involve a team of technical experts and industry experts (for example, an underwriter for a bank loan, or a nutritionist for meal planning) who evaluate the decisions made by the AI model and ensure they adhere to best practices; a minimal sketch of such a checkpoint follows.
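The sketch below shows one way such a checkpoint might be wired up, assuming a hypothetical loan-approval model that emits a confidence score: confident decisions are applied automatically, while the rest are routed to a reviewer queue. The threshold, names, and queue are illustrative assumptions.

```python
# A minimal human-in-the-loop (HITL) sketch: low-confidence model decisions
# are escalated to a human reviewer instead of being auto-applied.
from dataclasses import dataclass

@dataclass
class Decision:
    application_id: str
    approved: bool
    confidence: float

REVIEW_THRESHOLD = 0.90        # illustrative cutoff
review_queue: list[Decision] = []

def route(decision: Decision) -> str:
    """Auto-apply confident decisions; escalate the rest to a human expert."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto-approved" if decision.approved else "auto-declined"
    review_queue.append(decision)  # e.g. an underwriter reviews these
    return "sent to human review"

print(route(Decision("loan-001", approved=True, confidence=0.97)))
print(route(Decision("loan-002", approved=False, confidence=0.62)))
```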

Reproducibility

Reproducibility refers to a team's ability to run an algorithm repeatedly on the same data points and get the same result each time. It is a core component of responsible AI, because it is critical for verifying that a model's previous predictions can be reproduced when it is re-run at a later stage.

Naturally, reproducibility is difficult to achieve, in large part because of the inherent complexity of AI systems: the output of an AI model can vary with a range of contextual factors, such as:

  • The code used to run AI inference
  • Weights learned from the data used
  • Environment, infrastructure and configuration to run the code
  • Inputs and input structures provided to the model

This is a complex issue, especially when AI models are deployed at scale and countless other tools and frameworks have to be taken into account. To address it, teams need to develop robust practices that help control the factors above, and implement tooling that improves reproducibility; one such control is sketched below.
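As one concrete control over the second factor in the list above (the learned weights), the sketch below pins the common sources of randomness in a Python training stack. The PyTorch calls are an assumption on our part, since the article names no framework; full determinism on GPU hardware typically needs additional configuration beyond this.

```python
# A minimal reproducibility sketch: fix the random seeds that influence
# learned weights so that reruns produce the same results.
import os
import random

import numpy as np
import torch  # assumed framework; the article does not name one

def set_seed(seed: int = 42) -> None:
    """Pin the common sources of randomness for a training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Fail loudly on operations that have no deterministic implementation.
    torch.use_deterministic_algorithms(True)

set_seed(42)
```

Seeding addresses only one of the four factors; the inference code, the runtime environment, and the model inputs each need their own controls, such as version pinning and containerized environments.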

Key Takeaways

By following the high-level principles above, industries can ensure they use AI according to best practice. Adopting such principles is critical to ensuring that AI reaches its full economic potential without disempowering vulnerable groups, reinforcing unethical biases, or undermining accountability. Instead, AI can be a technology we use to drive growth, productivity, efficiency, innovation, and the greater good for all.

