Can artificial intelligence help eliminate bias?
“We don’t see things for what they are, we just see them the way we see them.” This aphorism succinctly captures the unfortunate biases that come built into our brains.
In business settings, affinity bias, confirmation bias, attribution bias, and the halo effect are among the better-known of these reasoning errors, and they only scratch the surface. Collectively, they leave behind a trail of offenses and mistakes.
Of course, the most harmful human biases are those that prejudice us for or against our fellow humans based on age, race, gender, religion, or appearance. Despite our efforts to purge these distortions from ourselves, our workplaces, and our societies, they still permeate our thoughts and behaviors, even seeping into modern technologies such as artificial intelligence.
Ever since AI was first deployed in recruiting, loan approval, insurance premium modeling, facial recognition, law enforcement, and a host of other applications, critics have pointed out (with considerable justification) the technology’s biased tendencies.
For example, Google’s new language model BERT (Bidirectional Encoder Representations from Transformers) is a leading natural language processing (NLP) model that developers can use to build their own AI. BERT was originally built using Wikipedia text as its primary source. Is there anything wrong with this? Wikipedia’s contributors are overwhelmingly white men from Europe and North America. As a result, one of the most important sources of language-based AI came with a biased perspective at its inception.
Similar problems have been found in computer vision, another key area of artificial intelligence development. Facial recognition datasets contain hundreds of thousands of annotated faces, which are critical for developing facial recognition applications for cybersecurity, law enforcement, and even customer service. However, it turns out that the developers (likely mostly white, middle-aged men) unknowingly built systems that were most accurate for people like themselves: women, children, older adults, and people of color had much higher error rates than middle-aged white men. As a result, IBM, Amazon, and Microsoft were forced to stop selling their facial recognition technology to law enforcement in 2020 over concerns that these biases could lead to misidentifications of suspects.
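The disparity those audits uncovered can be made concrete with a small sketch: given a set of labeled predictions, compute the error rate separately for each demographic group. Everything below (the group labels, the toy data, the function name) is an illustrative assumption, not taken from any real benchmark:

```python
# Hypothetical sketch: auditing a recognition model's error rate per
# demographic group. A real audit would use an annotated benchmark set.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data showing the kind of skew the audits reported.
sample = [
    ("middle-aged white men", "A", "A"), ("middle-aged white men", "B", "B"),
    ("middle-aged white men", "C", "C"), ("middle-aged white men", "D", "D"),
    ("women of color", "A", "A"), ("women of color", "X", "B"),
    ("women of color", "Y", "C"), ("women of color", "D", "D"),
]
rates = error_rates_by_group(sample)
print(rates)  # the underrepresented group shows a far higher error rate
```

Simple as it is, running exactly this kind of per-group breakdown (rather than reporting a single aggregate accuracy number) is what exposed the imbalances in commercial facial recognition systems.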
To learn more, watch the important and sometimes chilling documentary Coded Bias.
However, a better understanding of the phenomenon suggests that AI is simply exposing and amplifying implicit biases that already exist but are ignored or misunderstood. AI itself is blind to color, gender, age, and other such attributes, and it is far less susceptible to the logical fallacies and cognitive biases that plague humans. The only reason we see bias in AI is that humans sometimes train it with flawed heuristics and biased data.
Since these biases were discovered, all the major technology companies have worked hard to improve their datasets and eliminate bias. One way to eliminate bias in AI? By using artificial intelligence itself. If that seems unlikely, read on.
The classic example can be found in job opportunities. Women and people of color are notoriously underrepresented across the most coveted employment opportunities. This phenomenon is self-perpetuating as new hires become senior leaders who become responsible for recruiting. Affinity bias ensures that "people like me" continue to be hired, while attribution bias justifies those choices based on past employee performance.
But that may change when artificial intelligence plays a greater role in recruiting. Tools like Textio, Gender Decoder, and Ongig use artificial intelligence to scrutinize hidden biases about gender and other characteristics. Knockri, Ceridian and Gapjumpers use artificial intelligence to remove or ignore identifying characteristics such as gender, national origin, skin color and age so hiring managers can focus solely on a candidate’s qualifications and experience. Some of these solutions also reduce recency bias, affinity bias, and gender bias in the interview process by objectively assessing a candidate’s soft skills or changing a candidate’s phone voice to mask their gender.
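A minimal sketch of what the job-posting scanners in this category do: flag words that research associates with gendered connotations. The word lists below are short illustrative samples I made up for the example, not the actual (much longer) dictionaries those products use:

```python
# Illustrative sketch of a gendered-language scanner for job postings.
# MASCULINE_CODED / FEMININE_CODED are tiny sample lists, not the real
# dictionaries used by commercial tools.
import re

MASCULINE_CODED = {"aggressive", "competitive", "dominant", "ninja", "rockstar"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal"}

def scan_posting(text):
    """Return the coded words found in a posting, grouped by category."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

posting = "We want an aggressive, competitive engineer with collaborative spirit."
print(scan_posting(posting))
# flags 'aggressive' and 'competitive' as masculine-coded,
# 'collaborative' as feminine-coded
```

The commercial tools go much further (scoring overall tone, suggesting replacements), but the core idea is the same: surface loaded wording that a hiring manager would otherwise never notice.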
A similar approach can be taken in venture capital, where men make up 80% of partners and women receive only 2.2% of investments despite founding 40% of new startups. Founders Factory, a British startup accelerator, has written software to screen program candidates based on identifiable characteristics of startup success. Similarly, F4capital, a female-run nonprofit, developed a “FICO score for startups” to assess a startup’s maturity, opportunity, and risk, removing bias from the investment decision-making process. This approach should be widely adopted, not only because it’s the ethical thing to do, but also because it delivers better returns: 184% higher than investing without the help of AI.
Artificial intelligence can also help make better decisions in health care. For example, medical diagnostics company Flow Health is working on using artificial intelligence to overcome the cognitive biases that often skew doctors’ diagnoses. The “availability heuristic” nudges doctors toward common but sometimes incorrect diagnoses, while the “anchoring heuristic” causes them to stick to an incorrect initial diagnosis even when new information contradicts it. I believe artificial intelligence will be an important part of the rapidly evolving world of data-driven personalized medicine.
Artificial intelligence can even help reduce less malignant, but still powerful, biases that often cloud business judgment. Think of the bias (in English-speaking countries) toward information published in English; the bias in startups against older people, despite their greater knowledge and experience; and the tendency in manufacturing to stick with the same suppliers and methods rather than try new, possibly better approaches. And don’t forget that during tough economic times, supply chain executives and Wall Street investors make short-term decisions based on emotion.
Putting AI to work in all of these areas can effectively check for unrecognized bias in decision-making.
If making mistakes is human nature, artificial intelligence may be the corrective we need to avoid the costly and unethical consequences of our hidden biases. But what about the interference of those same biases with AI itself? How can AI be a useful solution if it learns from biased data and amplifies biased human heuristics?
There are now tools designed to eliminate the implicit human and data biases that creep into AI. The What-If Tool, developed by Google’s People + AI Research (PAIR) team, allows developers to explore model performance using an extensive library of “fairness metrics,” while PwC’s Bias Analyzer, IBM Research’s AI Fairness 360 toolkit, and O’Reilly’s LIME tool each help identify whether bias exists in our AI code.
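To give a flavor of what one of those “fairness metrics” checks, here is a sketch of demographic parity difference: the gap in positive-outcome rates between two groups. The decision data and the idea that a 0.50 gap warrants investigation are illustrative assumptions, not output from any of the named tools:

```python
# Sketch of one common fairness metric: demographic parity difference,
# the absolute gap in selection rates between two groups.
# Decisions are 1 (hired/approved) or 0 (rejected) per applicant.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Toy hiring decisions for two applicant groups (illustrative data).
group_a = [1, 1, 1, 0, 1, 0, 1, 1]   # 6 of 8 selected: rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 2 of 8 selected: rate 0.25
gap = demographic_parity_difference(group_a, group_b)
print(f"parity gap: {gap:.2f}")  # prints "parity gap: 0.50"
```

The real toolkits compute dozens of such metrics (equalized odds, disparate impact ratio, and so on) and, crucially, slice them across every protected attribute in the dataset at once.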
If you are a senior executive or board member considering how artificial intelligence might reduce bias in your organization, I urge you to treat AI as a promising new weapon in your arsenal rather than as a panacea that completely solves the problem. From a holistic and practical perspective, you still need to establish baselines for reducing bias, train your employees to recognize and avoid hidden bias, and collect external feedback from customers, suppliers, or consultants. Bias reviews are not just a good idea; in some cases, they’re the law.