
Is artificial intelligence (AI) trustworthy?

Artificial intelligence is more artificial than intelligent

In June 2022, Microsoft released its Responsible AI Standard, v2, with the stated purpose of "defining product development requirements for responsible AI". Perhaps surprisingly, the document mentions only one kind of bias around AI: the need for Microsoft's algorithm developers to be aware of problems that can arise when users rely too heavily on AI output (also known as "automation bias").

In short, Microsoft seems more concerned with what users think of its products than with how those products actually affect users. That is good business responsibility (say nothing negative about our products) but poor social responsibility (there are many documented examples of algorithmic bias harming individuals or groups).

There are three major unresolved issues with commercial artificial intelligence:

  • Hidden biases that produce false results;
  • The potential for misuse by users or abuse by attackers;
  • Algorithms that return so many false positives that they negate the value of automation.
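The third point is a base-rate problem: when the event being detected is rare, even an accurate detector buries its true alarms under false ones. A minimal sketch of the arithmetic (the numbers are illustrative, not drawn from any real product):

```python
# Why a flood of false positives can negate automation's value: if intrusions
# are rare (say 1 in 10,000 events), even a detector with 99% sensitivity and
# a 1% false-positive rate produces mostly false alarms.

def precision(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(real attack | alarm), by Bayes' rule."""
    true_alarms = prevalence * sensitivity
    false_alarms = (1 - prevalence) * false_positive_rate
    return true_alarms / (true_alarms + false_alarms)

p = precision(prevalence=0.0001, sensitivity=0.99, false_positive_rate=0.01)
print(f"{p:.1%} of alarms are real")  # roughly 1%, i.e. ~99% are false alarms
```

An analyst triaging this stream would discard about 99 of every 100 alerts, which is exactly the situation the bullet point describes.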

Concerns in academia

When artificial intelligence was first introduced into cybersecurity products, it was described as a protective silver bullet. AI undoubtedly has value, but between faulty algorithms, hidden bias, abuse by criminals, and privacy snooping by law enforcement and intelligence agencies, the voices raised against it are growing louder.

According to "Scientific American" on June 6, 2022, the problem is the commercialization of a science that is still developing:

The largest research teams in artificial intelligence are not found in academia but in business. In academia, peer review is king. Unlike universities, businesses have no incentive to compete fairly. Rather than submitting new work for academic review, they engage journalists through press releases and bypass the peer-review process. We know only what the companies want us to know.

--Gary Marcus, professor of psychology and neuroscience at New York University

The result is that we hear only the positive aspects of artificial intelligence, never the negative ones.

Emily Tucker, executive director of the Center on Privacy and Technology at Georgetown Law, came to a similar conclusion: "Starting today, the Center will stop using the terms 'artificial intelligence', 'AI', and 'machine learning' in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities... One of the reasons tech companies have been so successful in twisting the Turing test into a strategy for accessing capital is governments' appetite for the ubiquitous surveillance power the technology grants. That power is convenient and relatively cheap to exercise, and can be acquired through procurement processes that circumvent democratic decision-making or oversight."

In short, the pursuit of profit hinders the scientific development of artificial intelligence. Faced with these concerns, we need to ask whether the AI in our products can be trusted to produce accurate output and unbiased judgments, and whether it can be misused by its owners, by criminals, or even by governments.

The Failure of Artificial Intelligence

Case 1: A Tesla in self-driving mode drove directly toward a worker holding a stop sign, slowing only when the driver intervened. The AI had been trained to recognize humans and to recognize stop signs, but not to recognize a human carrying a stop sign.

Case 2: On March 18, 2018, an Uber self-driving car hit and killed a pedestrian pushing a bicycle. According to NBC at the time, the AI was unable to "classify an object as a pedestrian unless that object was near a crosswalk."

Case 3: During the UK's 2020 COVID-19 lockdown, students' exam grades were assigned by an algorithm. About 40% of students received grades significantly lower than expected, because the algorithm weighted each school's historical results too heavily. As a result, students at private schools and previously high-performing state schools gained a large advantage over everyone else.
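To see how weighting historical results distorts an individual's grade, consider a toy blending formula (a hypothetical illustration; the actual UK algorithm was considerably more complex):

```python
# Toy model (NOT the real UK grading algorithm): predict a student's grade as
# a weighted blend of their own mock-exam score and their school's historical
# average. The heavier the historical weight, the more a strong student at a
# historically low-scoring school is pulled down.

def predicted_grade(student_score: float, school_avg: float, w_history: float) -> float:
    """Blend individual performance with the school's past results."""
    return (1 - w_history) * student_score + w_history * school_avg

# A top student (mock score 90) at a school with a historical average of 55:
print(predicted_grade(90, 55, 0.25))  # light historical weighting -> 81.25
print(predicted_grade(90, 55, 0.75))  # heavy historical weighting -> 63.75
```

The same student loses over 17 points purely because of where the weighting dial sits, which is the "design failure" pattern discussed below.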

Case 4: Tay was an AI chatbot Microsoft launched on Twitter in 2016. By imitating real human language, it was meant to become an intelligent conversational system that understood slang. But after just 16 hours of interaction with real people, Tay was taken offline, having tweeted "Hitler was right to hate Jews."

Case 5: Candidate selection. Amazon wanted AI to automatically select candidates for job openings, but the algorithm's results were sexist and racist, favoring white male candidates.

Case 6: Mistaken identity. A Scottish football club streamed a match online during the coronavirus lockdown using an AI-powered camera trained to track the ball. But the system repeatedly mistook the linesman's bald head for the ball and kept the shot focused on the linesman rather than the game.

Case 7: Application rejected. In 2016, a mother applied for her son, who had spent half a year in a coma, to move into her apartment building, but the housing center rejected the application. Only a year later, after the son had been moved to a rehabilitation center, did a lawyer uncover the reason: the housing center's AI believed the son had a theft record and had blacklisted him from housing. In fact, he had been bedridden and incapable of committing the crime.

There are many similar examples, and the causes come down to two: design failures caused by unexpected bias, and learning failures. The self-driving car cases are learning failures. Errors can be corrected as training accumulates, but until they are, deployment can carry a heavy price; avoiding the risk entirely would mean never deploying at all.

Cases 3 and 5 are design failures: unexpected bias distorted the results. The question is whether developers can remove biases they do not know they have.

Misuse and Abuse of Artificial Intelligence

Misuse means an AI application produces effects its developer did not intend. Abuse means deliberately interfering with a system, such as poisoning the data fed to it. Generally, misuse stems from the actions of an AI product's owner, while abuse involves third parties (such as cybercriminals) manipulating the product in ways the owner never intended. Let's look at misuse first.

Misuse

Kazerounian, head of research at Vectra AI, believes that when human-developed algorithms try to make judgments about other people, hidden biases are inevitable. When it comes to credit applications and rental applications, for example, the United States has a long history of redlining and racism, and these discriminatory policies long predate AI-based automation.

Moreover, when biases are embedded deep in an AI algorithm, they are harder to detect and understand than human biases. "You may be able to see the classification results of the matrix operations in a deep-learning model, but you can only explain the mechanism of the computation, not the why. At a higher level, I think what we must ask is: are some decisions suitable to be left to artificial intelligence at all?"

On May 11, 2022, a study by MIT and Harvard researchers published in The Lancet confirmed that people cannot tell how deep learning reaches its conclusions. The study found that AI could identify a patient's race from medical images alone, such as X-rays and CT scans, yet no one knows how it does so. Taken further, AI medical systems may be able to infer far more than we imagine: a patient's race, ethnicity, gender, even whether they are incarcerated.

Anthony Celi, associate professor of medicine at Harvard Medical School and one of the authors, commented: "Just because you have representation of different groups in your algorithms (the quality and validity of the data), that doesn't guarantee the output won't perpetuate or amplify existing disparities and inequities. Feeding algorithms more data through representation learning is not a panacea. This paper should make us pause and genuinely reconsider whether we are ready to bring AI to clinical diagnosis."

This problem has also spread to cybersecurity. On April 22, 2022, Microsoft added a feature called the "leavers classifier" to its product roadmap, with availability expected in September 2022: "The leavers classifier can provide early detection of employees who intend to leave the organization, to reduce the risk of intentional or unintentional data leakage caused by their departure."

When journalists asked Microsoft about the feature's implications for AI and personal privacy, they got this answer: "Microsoft has nothing to share at the moment, but we will keep you posted if there is news."

Ethically, what must be considered is whether using AI to infer an intention to resign is a proper use of the technology. Few people would consider monitoring communications to determine whether someone is thinking of leaving their job to be right or appropriate, especially when the consequences for that person could be negative.

Moreover, unexpected bias in such an algorithm is difficult to avoid and even harder to detect. Since even humans struggle to judge personal motivation when predicting whether someone will quit, why would an AI system not make mistakes? People communicate at work in all sorts of ways: speculating, joking, venting, or gossiping about others. Even updating a resume on a recruitment site may be nothing more than a passing thought. Yet once machine learning flags employees as likely to leave, they may be the first laid off in a downturn and passed over for raises and promotions.

There is a broader possibility: if businesses can have this technology, so can law enforcement and intelligence agencies. The same errors of judgment can occur there, with consequences far more serious than a missed promotion or raise.

Abuse

Alex Polyakov, founder and CEO of Adversa.ai, is more worried about abuse: manipulating the machine-learning process itself. "Research by scientists and real-world assessments by our AI red teams [who act as attackers] have proven that modifying a very small set of inputs is enough to fool an AI's decisions, whether in computer vision, natural language processing, or anything else."

For example, the phrase "eats shoots and leaves" can describe a vegetarian or a terrorist depending solely on where the punctuation falls. For an AI, exhausting the meanings of every word in every context is a near-impossible task.

Polyakov has also twice demonstrated how easy it is to fool facial-recognition systems. In the first demonstration, an AI system was made to believe that everyone in front of it was Elon Musk. In the second, an image that obviously showed the same person was interpreted by the AI as multiple different people. The same principle, manipulating an AI's learning process, can be applied by cybercriminals to almost any AI tool.
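The mechanics behind such attacks can be sketched with a toy model (not Adversa.ai's actual technique; all weights and inputs below are hypothetical). A tiny nudge to each feature, chosen against the sign of its weight, the intuition behind "fast gradient sign" style attacks, flips a linear classifier's decision even though the perturbed input is nearly indistinguishable from the original:

```python
# Toy adversarial perturbation against a linear classifier.
# Hypothetical weights/inputs; positive score -> class "cat", negative -> "dog".

def score(w, b, x):
    """Linear decision score: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Nudge every feature by eps against the sign of its weight,
    the direction that lowers the score fastest per unit of change."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8, 0.2]   # learned weights (hypothetical)
b = -0.1
x = [0.6, 0.4, 0.3, 0.5]    # an input the model correctly scores positive

adv = fgsm_perturb(w, x, eps=0.3)

print(score(w, b, x))    # positive: classified as "cat"
print(score(w, b, adv))  # negative: a barely-changed input is now "dog"
```

No feature moved by more than 0.3, yet the decision flipped; in a high-dimensional image, the equivalent per-pixel change can be invisible to a human, which is why such attacks generalize to real vision systems.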

In the final analysis, today's artificial intelligence is just machine intelligence taught by humans. We are many years away from true artificial intelligence, setting aside whether it is achievable at all. For now, AI is best viewed as a tool for automating routine human tasks, with success and failure rates similar to humans', but much faster and far cheaper than a team of analysts.

Finally, whether the issue is algorithmic bias or the misuse and abuse of AI, every user of artificial intelligence should keep one thing in mind: at least at this stage, we cannot rely too heavily on its output.


Statement: This article is reproduced from 51CTO.COM. If there is any infringement, please contact admin@php.cn to have it deleted.