
UK Information Commissioner warns: emotion analysis AI tools are not a reliable or effective option

王林 | 2023-05-08 12:46:08

The UK Information Commissioner's Office (ICO) recently warned technology leaders against purchasing emotion analysis AI tools, because the technology is not as effective as people believe and can also introduce AI bias and discrimination. Businesses that adopt the technology could face scrutiny from data regulators unless they can prove its effectiveness.


Emotion analysis technology draws on many biometric data points, including gaze tracking, sentiment analysis, facial movement, gait analysis, heartbeat, facial expressions, and skin moisture levels, and attempts to use this data to determine or predict someone's emotional state.
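To make that pipeline concrete, here is a minimal, purely illustrative sketch in Python of how such a tool might map biometric signals to an emotion label. Every feature name, label, and data value below is an invented assumption for illustration; it does not reflect any vendor's actual model, and the arbitrary "ground truth" labels hint at the very problem the ICO raises.

```python
# Hypothetical sketch: mapping biometric data points to a predicted
# emotional state. Feature names, labels, and data are illustrative
# assumptions, not a real product's API or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features per sample:
# [gaze_fixation_s, heart_rate_bpm, skin_moisture_pct, facial_tension]
X_train = np.array([
    [0.8,  72, 30, 0.2],
    [0.3,  95, 55, 0.7],
    [0.6,  80, 40, 0.4],
    [0.2, 110, 65, 0.9],
])
# Synthetic labels. Real ground truth for emotion is itself contested,
# which is central to the ICO's criticism of these tools.
y_train = ["calm", "stressed", "calm", "stressed"]

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X_train, y_train)

sample = np.array([[0.4, 90, 50, 0.6]])
print(model.predict(sample))  # e.g. ['stressed'] -- a guess, not evidence
```

However plausible the code looks, its output is only as meaningful as the training labels, and that is exactly where, according to the ICO, the scientific basis is missing.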

Stephen Bonner, deputy commissioner at the ICO, said the problem is that there is no evidence these methods actually work, and considerable evidence that they are unreliable. Businesses that rely on findings from emotion analysis AI tools are therefore more likely to produce erroneous results that cause harm.

He warned that for companies using emotion analysis AI tools in their work, the threshold for investigation would be "very low."

"Sometimes when new technology is promoted, people think, 'Let's wait and see and get understanding from both sides,' and we absolutely will do that with other legitimate biometric technologies. But as far as sentiment goes Analyzing AI, there is no legitimate evidence that this technology works. We will be watching this issue extremely closely and are willing to take strong action more quickly. The onus is on those who choose to take advantage of this to prove to everyone that it is worthwhile , because the presumption of innocence seems to have no scientific basis at all," he added.

Emotion analysis AI can be useful in some situations

But Bonner said there are cases where the technology has been applied or proposed as a use case, including wearable tools that monitor workers' mood and health, and systems that use a variety of data points to record and predict potential health issues.

However, immature algorithms for detecting emotional cues create risks of systemic bias, inaccuracy, and discrimination. The technology relies on collecting, storing, and processing large amounts of personal data, including subconscious behavioral or emotional responses. This use of data carries far higher risks than the traditional biometric technologies used to verify or identify an individual.

It should be noted that the ICO did not ban the use of such technology; it only warned that companies implementing it will be scrutinized because of the risks involved. Provided there is a clear disclosure, it can still be used as a gimmick or an entertainment tool.

Bonner said: "There is a difference between taking biometric measurements and inferring intent from the results. I think it is plausible that someone's stress level can be detected from their voice, but going from there to calling them a fraudster is a step too far. We are not going to ban the use of AI to determine who looks upset, but if a system goes beyond noting that some people are upset and infers from their biometrics that they are trying to commit fraud, then that approach is wrong."

Cross-industry impact of biometric technology

Biometric technology is expected to have a significant impact across industry sectors, from financial services companies using facial recognition to verify identity, to services that use voice recognition instead of passwords for access.

The UK Information Commissioner is working with the Ada Lovelace Institute and the British Youth Council to develop new biometric guidance. The guidance will be "people-centred" and is expected to be released next spring.

Dr Mhairi Aitken, an ethics researcher at the Alan Turing Institute, welcomed the ICO's warning, but noted that it is also important to examine how these systems are developed and to ensure that developers take an ethical approach, creating and using tools only where they are needed.

She said: "An ethical approach to developing technologies or new applications must start with asking who may be affected, and involving those communities in the process to see whether the technology is actually appropriate in the context where it will be deployed. That process gives us the opportunity to become aware of any harms we may not have anticipated."

Emotion-detecting AI poses a real risk of harm

Dr Aitken said the harm such AI models can cause is enormous, especially to people who do not fit the profile assumed when the predictive model was built. She added: "It is a very complex area to even begin thinking about how to automate something like this while taking into account cultural differences and differences in how emotion is expressed."

Dr Aitken pointed out that it is difficult for AI systems to determine which emotional responses are appropriate in different contexts: "The way we express emotions varies greatly depending on who we are with and the context we are in. There is also a need to consider whether these AI systems can adequately account for how people express their emotions."

Bonner said the harm from using emotional AI tools in entertainment is minimal, but Dr Aitken warned that this use case carries its own risks, including people becoming accustomed to the technology and believing it actually works. If such tools are used, she said, they should be explicitly labeled as being for entertainment.

Bonner added that the problem with emotional AI is that there are so many data points, and so much variation between individuals, that it is difficult to develop a single ideal model, a limitation demonstrated in several research papers on the technology. "If someone said, 'We've solved the problem and can make accurate predictions,' they might work wonders, but I don't think that's going to happen."

