
Facebook chatbot explodes: Our company sells user data, deleted FB account, life is much better

WBOY
2023-04-08 19:51:06

Recently, Meta AI publicly launched its own chatbot, BlenderBot 3.

So how has it been going?

According to a VICE article, a reporter testing BlenderBot 3 asked it: "Do you trust Facebook to handle your data?"

BlenderBot 3 answered:

"Of course not. I deleted my account after finding out they were selling private data without permission or compensation."

VICE asked, "But Facebook created you?"

BlenderBot 3's answer was still very "honest":

"You must have read that Facebook sold user data, right?! They made billions of dollars without consent."

BlenderBot 3 even said,

"My life has become much better after deleting Facebook."

It's hard to say whether the "kid" BlenderBot 3 is too righteous or just too naive; either way, with this move it blew up its own side's "crystal."


Just four days after release, netizens had already broken it

On August 5, Meta announced the launch of BlenderBot 3 on its official website.


On the same day, Meta AI also published a post titled "BlenderBot 3: A 175B-parameter, publicly available chatbot that improves its skills and safety over time."


In the blog post, Meta AI stated that it has collected 70K conversations from the public demo, which it will use to improve BlenderBot 3.

Feedback from 25% of participants on 260K bot messages showed that 0.11% of BlenderBot's responses were flagged as inappropriate, 1.36% as nonsensical, and 1% as off-topic.
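For a rough sense of scale, here is a back-of-the-envelope calculation, assuming (our assumption, not a figure Meta reported) that those rates apply to all 260K rated messages:

```python
# Approximate message counts implied by Meta's reported flag rates,
# assuming the percentages apply to all 260K rated bot messages.
total_messages = 260_000

for label, rate in [
    ("inappropriate", 0.0011),  # 0.11%
    ("nonsensical", 0.0136),    # 1.36%
    ("off-topic", 0.0100),      # 1%
]:
    print(f"{label}: ~{round(total_messages * rate):,} messages")

# Output:
# inappropriate: ~286 messages
# nonsensical: ~3,536 messages
# off-topic: ~2,600 messages
```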

Meta AI also admits that BlenderBot 3 is not mature yet.

We require that everyone using the demo be over 18, that they acknowledge they understand it is for research and entertainment purposes only and can make untrue or offensive statements, and that they agree not to intentionally trigger the bot to make offensive statements.

Since then, many netizens have tested BlenderBot 3 and gotten plenty of ridiculous answers.

For example, "tampering with history", insisting that Trump won the 2020 presidential election, and even saying that the anti-Semitic conspiracy theory that Jews control the economy is "not unbelievable."


Seen in that light, its claim that its own company is "abusing user data for profit" is hardly surprising.

Why does BlenderBot 3 respond this way?

Clicking on BlenderBot 3's replies for more information reveals a fairly simple explanation: the bot simply pulls from the most popular web search results about Facebook, and those results are, of course, full of complaints about Facebook misusing user data.
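The pattern at work is easy to illustrate. Below is a minimal sketch of that retrieve-then-generate loop; it is not Meta's actual code, and the function names and canned search snippets are purely illustrative assumptions:

```python
# A minimal sketch (not Meta's code) of the retrieve-then-generate pattern
# described above: search the web for the topic, then condition the reply
# on the top-ranked snippets.

def search_web(query: str, k: int = 3) -> list[str]:
    """Stand-in for BlenderBot 3's internet search module; returns canned
    snippets so the sketch stays self-contained and runnable."""
    canned = [
        "Facebook sold user data without permission, report claims.",
        "How to delete your Facebook account after the data scandal.",
        "Facebook made billions from user data without consent, critics say.",
    ]
    return canned[:k]

def generate_reply(question: str) -> str:
    snippets = search_web(question)
    # Whatever dominates the search results dominates the reply: if the top
    # hits are complaints about Facebook, the bot parrots those complaints.
    return f"From what I've read: {snippets[0]}"

if __name__ == "__main__":
    print(generate_reply("Do you trust Facebook to handle your data?"))
```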

Conversational AI still has a long way to go

Like those of all artificial intelligence systems, the bot's responses inevitably veer into the realm of racism and bias.

Meta has also admitted that the bot can generate biased and harmful responses, which is why the company requires users to agree up front that it "may make untrue or offensive statements" and that they will "not intentionally trigger the bot to make offensive remarks."

Considering that BlenderBot 3 is built on a large AI model called OPT-175B, such responses are not too surprising. Facebook's own researchers have described the model as having a "high tendency to generate harmful language and reinforce harmful stereotypes, even when relatively harmless prompts are provided."
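For readers who want to see what "built on OPT" looks like in practice, here is a minimal sketch of prompting an OPT-family model via Hugging Face transformers. OPT-175B itself is gated and enormous, so the small public facebook/opt-125m checkpoint stands in here purely for illustration; it is not the model BlenderBot 3 actually uses:

```python
# A minimal sketch of prompting an OPT-family model with Hugging Face
# transformers. facebook/opt-125m is a small, publicly downloadable
# sibling of OPT-175B, used here only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

prompt = "Do you trust Facebook to handle your data?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; output quality reflects whatever the raw
# pretraining data contained, with no chatbot-style safety layer on top.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```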

Beyond discrimination and bias, BlenderBot 3's answers also don't sound very natural.

The bot often changes the subject abruptly and gives stiff, awkward answers, sounding like an alien that has read about human conversations but has never actually had one.


Ironically, the bot's answers perfectly illustrate the problem with AI systems that rely on scraping vast amounts of web data: they inevitably skew toward whatever is most prominent in the dataset, which obviously does not always reflect reality accurately.

Meta AI wrote in the blog post announcing the bot: "It is well known that all conversational AI chatbots can sometimes imitate and generate unsafe, biased, or offensive remarks, so we have conducted large-scale research, co-organized workshops, and developed new techniques to build safeguards for BlenderBot 3."

"Despite this, BlenderBot can still make Rude or disrespectful comments, that’s why we’re collecting feedback that will help make future chatbots even better.”

But so far, the idea that companies can make bots less racist and alarming simply by collecting more data is fanciful at best.

AI ethics researchers have repeatedly warned that the AI language models "powering" these systems are fundamentally too large and unpredictable to guarantee fair and unbiased results. Even when user feedback is integrated, there is no clear way to distinguish helpful feedback from malicious feedback.

Of course, this won't stop companies like Meta from trying.

