
Build a better society with better artificial intelligence

WBOY
2023-04-08 19:31

Artificial intelligence (AI) has enormous potential to improve every aspect of society, from traditional engineering systems to healthcare to creative processes in the arts and entertainment. In Hollywood, for example, studios are using artificial intelligence to reveal and measure bias in scripts—tools producers and writers need to create fairer and more inclusive media.


However, AI is only as smart as the data it is trained on, and that data reflects real-life biases. To avoid perpetuating stereotypes and exclusion, technologists are addressing issues of equity and inclusion both in real life and in the technologies they build.

Innate Human Bias

As technologists look to AI for human-centered solutions that optimize industry practices and daily life, it is crucial to stay mindful of the innate biases that can carry unintended consequences.

“As humans, we are very biased,” said Beena Ammanath, global leader of the Deloitte AI Institute and head of technology and AI ethics at Deloitte. “As these biases get baked into systems, parts of society are likely to be left behind — underrepresented minorities, people who don’t have access to certain tools — and that could lead to more inequality in the world.”

If a system is trained on biased data, or researchers fail to consider how their own perspectives influence the direction of the research, then even projects that start with good intentions, such as creating equitable outcomes or mitigating past inequalities, may still end up biased.

Ammanath said that, so far, corrections for AI bias have typically come after the fact, in response to the discovery of a biased algorithm or of a demographic group that turned out to be underrepresented. Companies must now learn how to be proactive, mitigate these issues early, and take responsibility for missteps in their AI efforts.

Algorithmic bias in artificial intelligence

In artificial intelligence, bias appears in the form of algorithmic bias. “Algorithmic bias is a set of challenges in building AI models,” explained Kirk Bresniker, chief architect at Hewlett Packard Labs and vice president at Hewlett Packard Enterprise (HPE). “We may encounter challenges because our algorithms cannot handle diverse inputs, or because we have not collected broad enough data sets to incorporate into our model training. In either case, we do not have enough data.”

Algorithmic bias can also come from inaccurate processing, data being modified, or someone injecting false signals. Whether intentional or not, this bias can lead to unfair outcomes that may privilege one group or exclude another entirely.
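Bresniker's observation that "we do not have enough data" can often be made concrete before any model is trained, simply by measuring how well each expected category is represented. The sketch below is a minimal illustration of that idea, not a tool described by HPE or Deloitte; the column name, category list, and minimum-count threshold are all assumptions made for the example.

```python
import pandas as pd

# Illustrative threshold; a real project would set this per use case.
MIN_EXAMPLES_PER_CATEGORY = 500

def report_coverage(df: pd.DataFrame, expected_categories: list[str]) -> None:
    """Flag categories that are missing or underrepresented in the training data."""
    counts = df["category"].value_counts()
    for cat in expected_categories:
        n = int(counts.get(cat, 0))
        if n == 0:
            print(f"MISSING: no examples of '{cat}' at all")
        elif n < MIN_EXAMPLES_PER_CATEGORY:
            print(f"UNDERREPRESENTED: only {n} examples of '{cat}'")
        else:
            print(f"OK: {n} examples of '{cat}'")

# Hypothetical image-labeling data set with a 'category' column.
training_data = pd.DataFrame({"category": ["cat"] * 900 + ["dog"] * 850 + ["bird"] * 40})
report_coverage(training_data, ["cat", "dog", "bird", "rabbit"])
```

A check like this will not catch every form of bias, but it turns "not enough data" from an after-the-fact discovery into something visible before a model ships.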

For example, Ammanath described an algorithm designed to identify different types of shoes, such as flip-flops, sandals, dress shoes, and sneakers. When it was released, however, the algorithm could not recognize women's shoes with high heels. The development team was made up of recent college graduates, all of them men, who had never thought to train the model on high-heeled shoes.

“This is a trivial example, but you realize the data set is limited,” Ammanath said. “Now think about a similar algorithm that uses historical data to diagnose a disease or illness. What if it wasn’t trained for certain body types, certain genders, or certain races? The implications are huge.”

"To Crucially, she said, if you don’t have that diversity, you’re going to miss out on certain scenarios.”​​

Better AI means self-regulation and ethics

Simply accessing larger (and more diverse) data sets is a daunting challenge, especially as data becomes more concentrated. Data sharing raises many issues, the most important of which are security and privacy.

Nathan Schneider, assistant professor of media studies at the University of Colorado Boulder, said: "Currently, we are facing a situation where individual users have far less power than the large companies that collect and process their data."

Expanded laws and regulations will likely, in time, dictate when and how data can be shared and used. But innovation won't wait for lawmakers. For now, AI development organizations have a responsibility to be good stewards of data and to protect individual privacy while working to reduce algorithmic bias. Deloitte's Ammanath said that because the technology matures so quickly, it is impossible to rely on regulations to cover every possible scenario. “We will enter an era of balancing compliance with existing regulations and self-regulation.”

This self-regulation means raising standards across the entire technology supply chain that builds AI solutions, from data to training to the infrastructure needed to make these solutions possible. Additionally, organizations need to create avenues for individuals across departments to raise concerns about bias. While it’s impossible to completely eliminate bias, businesses must regularly audit the effectiveness of their AI solutions.
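What a recurring audit measures will depend on the solution, but one common and simple check is whether an automated decision favors one group over another. The sketch below computes the positive-decision rate per group and flags large gaps; the column names, the toy data, and the 0.8 ratio threshold (the informal "four-fifths rule") are illustrative assumptions, not a standard prescribed by Deloitte or HPE.

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, decision_col: str, threshold: float = 0.8) -> None:
    """Compare positive-decision rates across groups (a simple disparate-impact check)."""
    rates = df.groupby(group_col)[decision_col].mean()
    best = rates.max()
    for group, rate in rates.items():
        ratio = rate / best if best > 0 else 0.0
        flag = "REVIEW" if ratio < threshold else "ok"
        print(f"{group}: selection rate {rate:.0%} (ratio to highest group {ratio:.2f}) -> {flag}")

# Hypothetical audit snapshot: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
selection_rate_gap(decisions, "group", "approved")
```

Run on a schedule against production decisions, a check like this cannot prove a system is fair, but it gives the people raising concerns about bias a concrete number to question.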

Due to the highly situational nature of AI, self-regulation will look different for each business. For example, HPE has developed ethical AI guidelines. People from across the company spent nearly a year working together to develop the company's AI principles and then reviewed them with a wide range of employees to ensure they could be followed and that they fit the corporate culture.

HPE's Bresniker said: "We want to improve the general understanding of these issues and then collect best practices. It is everyone's job to make sure there is enough awareness in this area."

Artificial intelligence has matured across industries, moving from research and development into practical application and value creation. AI's growing penetration of society means businesses now have an ethical responsibility to provide solutions that are powerful, inclusive, and accessible. That responsibility is pushing organizations to examine, sometimes for the first time, the data they pull into their processes. “We want people to build that vision and have measurable confidence in the data that comes in,” Bresniker said. “They have the power to stop ongoing systemic inequities and create equitable outcomes for a better future.”


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.