
Expert opinion: AI without supervision will produce ethical bias

WBOY
2023-04-09

Transparency often plays a key role in ethical business dilemmas: the more information we have, the easier it is to determine which outcomes are acceptable and which are not. If the books are misstated, who made the accounting mistake? If data is breached, who was responsible for protecting it, and did they act correctly?


But what happens when we look for the clear source of a bug or problem and can’t find anyone? This is where artificial intelligence raises unique ethical considerations.

Artificial intelligence shows great potential in organizations, but it is still very much a solution in search of a problem. The technology is widely misunderstood, and practical applications have yet to be fully realized within enterprises. Coupled with the fact that many companies lack the budget, talent, and vision to apply AI in a truly transformational way, AI remains far from critical mass and is prone to misuse.

But just because AI may not be hyper-visible in day-to-day business, that doesn’t mean it doesn’t have a role to play somewhere within your organization. Like many other ethical dilemmas in business, ethical lapses in artificial intelligence often occur in the shadows. Whether intentional or not, the consequences of an AI project or application pushing ethical boundaries can be nightmarish. The key to avoiding ethical lapses in AI is to have corporate governance in place on the project from the start.

Building AI with Transparency and Trust

By now we are all familiar with popular examples of AI gone wrong. Soap dispensers that fail to work for dark-skinned users, pulse oximeters that are more accurate for white patients, and algorithms that predict whether convicted criminals will reoffend are all stories of (arguably unintentional) AI bias.

Not only do these situations generate bad headlines and social media backlash, but they also undermine more legitimate AI use cases that will not be possible if the technology continues to be viewed with distrust. For example, in healthcare alone, AI has the potential to improve cancer diagnosis and flag patients at high risk of readmission for additional support. Unless we learn to build people's trust in AI, we won't see the full benefits of these powerful solutions.

When I talk about AI with peers and business leaders, I advocate for building transparency and governance into AI efforts from the beginning. More specifically, here are my suggestions:

1. Ethical AI cannot happen in a vacuum

If not implemented properly, AI applications can have significant ripple effects. This often happens when a single department or IT team starts experimenting with AI-driven processes without oversight. Is the team aware of the ethical implications that might occur if their experiment goes wrong? Does the deployment comply with the company's existing data retention and access policies? Without supervision, it is difficult to answer these questions.

And, without governance, it may be harder to assemble the stakeholders needed to correct an ethical lapse if one does occur. Oversight should not be seen as a barrier to innovation, but as a necessary check to ensure that AI operates within certain ethical boundaries. Oversight should ultimately fall to the organization's Chief Data Officer, or to the Chief Information Officer if no CDO role exists.

2. Always Have a Plan

The worst headlines we see about AI projects going wrong often have one thing in common: the companies involved were not prepared to answer questions or explain decisions when problems arose. Supervision can solve this problem. When an understanding of AI and a healthy philosophy exist at the highest levels of an organization, it's less likely to be blindsided by problems.

3. Due diligence and testing are mandatory

With more patience and more testing, many classic examples of AI bias could have been mitigated. In the soap dispenser example, a company's eagerness to show off its new technology ultimately backfired; further testing could have revealed the bias before the product was released publicly. Any AI application needs to be rigorously vetted from the outset. Given the complexity and uncertain potential of AI, it must be deployed strategically and carefully.
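One concrete form that vetting can take is disaggregated evaluation: measuring a model's error rate separately for each demographic group before release, rather than relying on a single aggregate accuracy number. The sketch below is illustrative only; the group labels, toy data, and the 2% disparity tolerance are assumptions, not a standard threshold.

```python
# Hypothetical sketch: checking a model's error rate per demographic group
# before release. Group names, toy data, and tolerance are illustrative.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, tolerance=0.02):
    """Return groups whose error rate exceeds the best group's by > tolerance."""
    best = min(rates.values())
    return [g for g, r in rates.items() if r - best > tolerance]

# Toy evaluation data: (group, model_prediction, ground_truth)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rates_by_group(results)
print(rates)
print(flag_disparities(rates))
```

A check like this is cheap to run on held-out test data, and a flagged group is a signal to pause the release, not merely a metric to log.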

4. Consider artificial intelligence supervision capabilities

To protect customer privacy, financial institutions invest significant resources in managing access to sensitive files. Their records team carefully categorizes assets and builds infrastructure to ensure only the correct job role and department can see each one. This structure can serve as a template for building an organization’s AI governance function. A dedicated team can estimate the potential positive or negative impact of an AI application and determine how often its results need to be reviewed and by whom.
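The records-team analogy above suggests a simple artifact a governance function could maintain: a registry of AI applications, each with an estimated risk level, an accountable owner, and a review cadence. The sketch below is a minimal illustration of that idea; the field names, risk tiers, and cadences are assumptions, not an established framework.

```python
# Hypothetical sketch of an AI application registry, modeled on the
# records-management structure described above. Risk tiers and review
# cadences are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    risk: str          # "low", "medium", or "high" estimated impact
    owner: str         # role accountable for reviewing results
    review_days: int   # how often outputs must be re-reviewed

# Higher estimated impact means more frequent review.
REVIEW_CADENCE = {"low": 180, "medium": 90, "high": 30}

def register(name, risk, owner):
    """Create a registry entry with a cadence derived from estimated risk."""
    return AIApplication(name, risk, owner, REVIEW_CADENCE[risk])

registry = [
    register("invoice-ocr", "low", "Finance Ops"),
    register("readmission-risk-model", "high", "Chief Data Officer"),
]
# Most frequently reviewed (highest-risk) applications first.
overdue_first = sorted(registry, key=lambda a: a.review_days)
print([a.name for a in overdue_first])
```

Even a spreadsheet with these four columns forces the key governance questions to be answered per application: what could go wrong, who is accountable, and how often someone looks.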

For businesses seeking digital disruption, experimenting with artificial intelligence is an important next step. It frees human workers from mundane tasks and enables certain activities, such as image analysis, to scale in ways that were not financially feasible before. But this is not something to be taken lightly. AI applications must be developed carefully and with appropriate supervision to avoid bias, ethically questionable decisions, and poor business outcomes. Make sure the people working on AI efforts within your organization are properly trained. The most serious ethical lapses occur in the shadows.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.