OpenAI expresses concern about users developing feelings for its chatbot
GPT-4o was introduced as a vastly improved model for ChatGPT. Since its debut, people have praised GPT-4o for its human-like interactions. While this sounds great, OpenAI has noticed a problem: users are starting to treat the chatbot like a human being and form emotional bonds with it.
OpenAI has observed people using language that "might indicate forming connections." The company notes it has found instances where users expressed "shared bonds" with the model. It describes this as bad news for two main reasons.
First, when GPT-4o appears human-like, users may take its hallucinations at face value instead of questioning them. For context, an AI hallucination is an incorrect or misleading output generated by the model, often caused by flawed or insufficient training data.
Second, human-like interactions with the chatbot could reduce users' real social interactions. OpenAI says chatbot interactions could be beneficial for "lonely individuals," but they could also undermine healthy relationships. The company further notes that people may even begin talking to other humans as if the other person were a chatbot.
That would be awkward, since OpenAI designed GPT-4o to stop talking whenever the user talks over it, a norm that does not carry over to human conversation. Given these concerns, the company says it will now monitor how users develop emotional bonds with GPT-4o and will adjust the model where necessary.