
How to avoid the risks of artificial intelligence?

PHPz | 2023-04-21 14:01:08


Artificial intelligence systems are becoming increasingly common as businesses harness this emerging technology to automate decision-making and improve efficiency.

If your business is implementing a large-scale AI project, how should you prepare? Here are three of the most important risks associated with AI, and how to prevent and mitigate them.

1. From Privacy to Security

People care deeply about their privacy, and facial recognition AI is advancing rapidly, raising ethical concerns about privacy and surveillance. For example, the technology could allow companies to track users' behavior, and even their emotions, without consent. The U.S. government recently proposed an "Artificial Intelligence Bill of Rights" to prevent AI technology from causing real harm that goes against core values, including the basic right to privacy.

IT leaders need to tell users what data is being collected and obtain their consent. Beyond this, proper training and careful handling of data sets are critical to preventing data leaks and security breaches.

Test AI systems to ensure they achieve their goals without unintended consequences, such as allowing hackers to use fake biometric data to access sensitive information. Implementing oversight of an AI system enables a business to stop or reverse its actions when necessary.
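The oversight idea above can be sketched in code: a thin wrapper around a scoring model that keeps an audit log, can halt automated decisions so pending cases go to human review, and can flag a past decision for reversal. The model, threshold, and action names here are illustrative assumptions, not details from the article.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class OverseenModel:
    """Wraps a scoring function with an audit log and a reversible halt switch."""
    score: Callable[[Dict[str, float]], float]
    threshold: float = 0.5
    halted: bool = False
    log: List[Tuple[Dict[str, float], float, str]] = field(default_factory=list)

    def decide(self, features: Dict[str, float]) -> str:
        s = self.score(features)
        if self.halted:
            decision = "escalate_to_human"  # system paused: no automated decisions
        else:
            decision = "approve" if s >= self.threshold else "deny"
        self.log.append((features, s, decision))  # every decision is auditable
        return decision

    def halt(self) -> None:
        """Stop automated decisions; subsequent cases go to human review."""
        self.halted = True

    def reverse_last(self) -> str:
        """Flag the most recent automated decision for manual reversal."""
        features, s, decision = self.log[-1]
        reversed_decision = "deny" if decision == "approve" else "approve"
        self.log[-1] = (features, s, f"reversed->{reversed_decision}")
        return reversed_decision
```

The key design point is that the halt switch and reversal path exist outside the model itself, so a business can intervene without retraining or redeploying anything.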

2. From Opaque to Transparent

Many artificial intelligence systems that use machine learning are opaque, which means it is not clear how they make decisions. For example, an extensive study of mortgage data shows that predictive AI tools used to approve or deny loans are less accurate for minority applicants. The opacity of the technology violates the “right to explanation” of applicants who have been denied loans.

When an enterprise’s AI/ML tool makes an important decision for its users, it needs to ensure that they are notified and given a complete explanation as to why the decision was made.

An enterprise’s AI team should also be able to track the key factors that led to each decision and diagnose any errors along the way. Internal employee-facing documentation and external customer-facing documentation should explain how and why the AI system works the way it does.
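Tracking the key factors behind a decision can be sketched simply for a linear scoring model: each feature's signed contribution (weight times value) is ranked by magnitude, giving the factors to report back to the applicant. The weights and feature names are illustrative assumptions; real systems would use model-appropriate attribution methods.

```python
from typing import Dict, List, Tuple

def explain_decision(weights: Dict[str, float],
                     features: Dict[str, float],
                     top_n: int = 3) -> List[Tuple[str, float]]:
    """Return the top signed feature contributions for one linear-model decision."""
    contributions = {
        name: weights.get(name, 0.0) * value
        for name, value in features.items()
    }
    # Rank by absolute contribution so strong negative factors also surface.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]
```

For example, a denied applicant could be told which factors weighed most heavily against them, satisfying the "right to explanation" the article describes.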

3. From Bias to Fairness

A recent study shows that artificial intelligence systems trained on biased data reinforce patterns of discrimination, ranging from under-representation of minorities in medical research to reduced participation of minority scientists, and even a lower willingness among minority patients to take part in research.

People need to ask themselves: if an unintended consequence occurred, whom would it affect? Would it affect all users equally, or only certain groups?

Look carefully at historical data to assess whether any potential bias was introduced or mitigated. An often overlooked factor is the diversity of a company's development teams: more diverse teams tend to produce fairer processes and outcomes.
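One concrete way to examine historical data for bias is to compare outcome rates across groups and compute a disparate-impact ratio. The record format and the widely used "four-fifths" threshold of 0.8 are assumptions for illustration, not details from the article.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def approval_rates(records: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Compute per-group approval rates from (group, approved) pairs."""
    totals: Dict[str, int] = defaultdict(int)
    approved: Dict[str, int] = defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: Dict[str, float]) -> float:
    """Ratio of the lowest group's approval rate to the highest's.

    Values below ~0.8 (the "four-fifths rule") are a common red flag
    that historical data may encode bias.
    """
    return min(rates.values()) / max(rates.values())
```

Running this check on historical decisions before training a model helps surface the kind of bias the mortgage study above found, rather than discovering it after deployment.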

To avoid unintended harm, organizations need to ensure that all stakeholders from AI/ML development, product, audit, and governance teams fully understand the high-level principles, values, and control plans that guide the organization’s AI projects. Obtain independent assessments to confirm that all projects are aligned with these principles and values.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.