Why not ask an AI chatbot for health questions?
News, April 28: Since OpenAI, Microsoft, and Google launched their AI chatbots, many people have begun trying a new way of searching the internet: conversing with a model that has learned from content across the web.
Experts say that because people have long turned to search engines for health questions, they will inevitably put those same questions to tools such as ChatGPT, Bing, and Bard. However, these tools repeat some familiar privacy mistakes and introduce new ones. "Consumers should view these tools with at least some skepticism because, like many other popular technologies, they are shaped by the forces of advertising and marketing."
Before sharing sensitive health information, or any other secrets, with an AI chatbot, there are a few things you should know:
1. Will these bots save my chat history?
Yes, ChatGPT, Bing, and Bard all store user input. However, Google Bard, which is being tested with a small number of users, offers a setting that lets users ask the company to stop saving their queries and to dissociate past queries from their Google account: click the menu in the upper-left corner and turn off "Bard Activity."
Users can also turn off the "Chat History & Training" option in OpenAI's settings to limit how long their chats are stored. The company says that with this setting off, it retains chats for only 30 days and does not use them for AI training.
2. What do these companies do with our chat records?
These companies use users' queries and the chatbots' responses to train their AI models to give better answers. But the chats can be used for more than that. When Google and Microsoft rolled out their AI chatbots, both left room in their privacy policies to use chat logs for advertising. That means if you ask a question about orthopedic shoes, you may later see ads for that category.
Such ads may have little impact on users. But when health concerns and digital advertising intersect, harm can result. Previous reports have revealed that many symptom-checking sites, including WebMD and Drugs.com, shared users' potentially sensitive health concerns, such as depression or HIV, as well as user identifiers with outside advertising companies.
Additionally, data brokers sell extensive lists of people and their health concerns to buyers, which may include governments or insurance companies. Some people with chronic illnesses report seeing disturbing targeted ads on the internet.
Given this, users should consider how much they trust these companies to protect their data from intrusive ads and decide how much health information to share with Google or Microsoft.
OpenAI stated that the company will only save users’ search results to train and improve its models. An OpenAI spokesperson said the company does not use user interactions with chatbots to build user profiles or advertise, and has no plans to do so in the future.
Rory Mir, associate director of community organizing at the Electronic Frontier Foundation, a privacy nonprofit, said some people may not want their data used for AI training, regardless of the position the companies take. He added: "At some point, the data they hold may fall into the hands of another company that you don't trust as much."
3. Will anyone view my chat history?
Human reviewers are sometimes brought in to check the chatbot's answers, which means they also see users' questions. Google, for example, saves many chats for review and annotation for up to four years. Although these reviewers cannot view users' Google accounts, Google recommends that users not share any personally identifiable information in chats, including names, addresses, or other details that could identify them or others.
4. How long will my chat history be saved?
Mir said: "Companies collecting our data and keeping it for long periods of time can pose privacy and security risks: these companies could be hacked, or could share the data with untrustworthy business partners."
OpenAI's privacy policy states that the company retains user data "only as long as we need to provide services to our users or for other legitimate business purposes." That could mean indefinitely; a company spokesperson declined to specify.
Google and Microsoft likewise retain user data until the user requests its deletion.
5. Can I trust the health information provided by chatbots?
Dai Tinglong, a professor of operations management and business analytics at the Johns Hopkins University business school who studies the impact of artificial intelligence on health care, said the internet is a grab bag of health information: some of it useful, some of it not. Large language models like ChatGPT may do a better job than ordinary search engines at filtering out the junk.
For example, ChatGPT may be better than Google Scholar at helping someone find research relevant to their specific symptoms or condition. In Professor Dai’s research, he is examining whether chatbots can correctly diagnose rare diseases that doctors fail to detect.
But Professor Dai Tinglong pointed out that this does not mean that we should rely on chatbots to provide accurate health guidance. He said the models have been shown to make up information and present it as fact, and their wrong answers can appear quite believable.
In addition, artificial intelligence models also obtain information from unreliable sources. For example, when a user asks why they feel tired all the time, Bard might provide a number of possible answers, even citing a website about the temperament of little Shih Tzu dogs. Combined with the human tendency to put too much trust in chatbot recommendations, this could cause big problems.
Professor Dai Tinglong warned: "While the performance of this technology is impressive, right now it is like a baby, or maybe a teenager. People are just testing it, and when people start to rely on it, it becomes very dangerous."
6. How to query health information to ensure safety?
Not everyone can see a doctor when they feel unwell, owing to uneven access to health care or its high cost. If you don't want your health queries stored on a company's server or used as fodder for advertising, you can use a privacy-protecting browser.
Before signing up for any AI chat-based health service, such as a therapy bot, first understand the limitations of the technology, and check the company's privacy policy to see whether it uses data to "improve services" or shares data with unnamed "vendors" or "business partners." Both phrases are often euphemisms for advertisers. (Xiao Xiao)