
Privacy vulnerability in ChatGPT may reveal conversation titles between users and chatbots

PHPz
2023-04-07 23:21:05


Users on Reddit and Twitter began reporting a vulnerability in ChatGPT on March 20, posting screenshots that showed their ChatGPT web history containing conversation titles unfamiliar to them.

While the content of the chats themselves did not appear to be accessible in this way, OpenAI took chat history offline entirely while it closed the vulnerability.

According to industry media reports, ChatGPT also suffered a major outage that day, and users who retained access experienced inconsistent service. OpenAI documented the outage on its status page and restored service within hours of the initial reports.

OpenAI CEO Sam Altman tweeted on March 22: "We had a significant issue in ChatGPT due to a bug in an open-source library, for which a fix has now been released and we have just finished validating. A small percentage of users were able to see the titles of other users' conversation history. We feel awful about this."

Altman said on Twitter that although the issue was serious, it had now been resolved. He did not reveal the name of the open-source library where the bug occurred, nor did he provide the exact percentage of users affected.
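Neither OpenAI nor Altman has explained the mechanism, but the following minimal sketch (purely hypothetical; the names, cache design, and data are invented for illustration, and this is not the unnamed library's code) shows one common class of bug that produces exactly this symptom: a response cache whose key omits the requesting user's identity, so one user's conversation titles are served to the next caller.

```python
# Purely illustrative sketch -- NOT OpenAI's code or the unnamed library.
# Shows how a cache keyed without the user ID can leak one user's
# conversation titles to another user making the "same" request.

cache: dict[str, list[str]] = {}

def fetch_titles_from_db(user_id: str) -> list[str]:
    # Stand-in for a real database query (hypothetical data).
    return {"alice": ["Trip planning", "Tax question"],
            "bob": ["Rust borrow checker"]}.get(user_id, [])

def get_history_titles(user_id: str, endpoint: str) -> list[str]:
    # BUG: the cache key omits user_id, so the first caller's result
    # is returned to every subsequent caller hitting this endpoint.
    key = endpoint  # should be f"{user_id}:{endpoint}"
    if key not in cache:
        cache[key] = fetch_titles_from_db(user_id)
    return cache[key]

print(get_history_titles("alice", "/history"))  # alice's own titles
print(get_history_titles("bob", "/history"))    # BUG: alice's titles again
```

Fixing the key to include the user ID gives each user a separate cache entry; race conditions in shared connection pools can produce the same cross-user mix-up by a different route.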

With millions of visitors every day, a privacy flaw that affects even a small percentage of users can expose data on a wide scale, and the "technical post-mortem" Altman promised should address those concerns.

Every chat a user has with ChatGPT is saved as an entry in the user's history, with a title generated from the content of the conversation.
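OpenAI has not published its storage schema, but conceptually each history entry pairs the stored messages with a short, automatically generated title. A minimal sketch in Python, with hypothetical field names and a naive stand-in for the model-generated summary:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    # Hypothetical schema -- field names are illustrative, not OpenAI's.
    user_id: str
    messages: list[str] = field(default_factory=list)
    title: str = ""

    def derive_title(self) -> str:
        # Naive stand-in for the model-generated summary ChatGPT uses:
        # take the first few words of the first user message.
        if self.messages:
            self.title = " ".join(self.messages[0].split()[:6])
        return self.title

chat = Conversation(user_id="alice",
                    messages=["How do I file taxes as a freelancer?"])
print(chat.derive_title())  # "How do I file taxes as"
```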

On March 27, some users discovered that their history contained titles on unfamiliar topics, including titles written in other languages, suggesting that the vulnerability was a global issue.

OpenAI may have to carefully outline its data protection policies and procedures, and reassure users that its open-source supply chain is secure and that similar issues will not arise again.

Some users who posted feedback on Reddit reported seeing other types of messages, but provided no verifiable evidence to support these claims.

One user said: "I saw someone else's phone number tied to my account. I'm worried about it, but not enough to quit the app yet."

Another user said they had signed up for ChatGPT Plus, the platform's $20-a-month subscription plan, through another email address linked to their account, but were not granted access to the service.

The flaw comes at a critical time for OpenAI, which has just released its GPT-4 model. The GPT-4 variant of the chatbot is already available to ChatGPT Plus subscribers, and OpenAI has promised human-level performance on a range of professional and academic benchmarks.

Google, OpenAI's competitor in artificial intelligence, recently launched its own chatbot, Bard, for users in the United Kingdom and the United States. Users can register for access through a waiting list.

According to reports, in order to compete with ChatGPT, Google has reorganized its internal teams and hopes that Bard, powered by a lightweight, optimized version of its LaMDA large language model (LLM), can be improved quickly through user feedback.

Industry insiders comparing Bard with its rivals point out that ChatGPT and Microsoft's GPT-4-based Bing chatbot will be formidable competitors in the new era of artificial intelligence.

Although Microsoft and OpenAI have a head start through their collaboration, Google can rely on its market dominance to create opportunities for Bard.

Google has also been outspoken about the shortcomings of generative AI. Alphabet Chairman John Hennessy warned in February that Google had been hesitant to release Bard because it was still in development, and Google's blog post announcing the release called Bard an "experiment."

In testing by industry observers, the chatbot behaved much like earlier versions of Bing Chat: it generated text responses quickly, but showed a tendency to fabricate output, an effect known as "hallucination."


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.