ChatGPT security concerns mount; parent company OpenAI publishes post reaffirming its focus on safety
With the popularity of large language models such as ChatGPT, public anxiety about their safety has become increasingly apparent. After Italy announced a temporary ban on ChatGPT on March 31 over privacy and security concerns, Canada announced on April 4 that it was investigating OpenAI, ChatGPT's parent company, over data security issues. On April 5, OpenAI published a post on its official blog describing how it works to ensure AI safety, which can be read as an indirect response to these concerns.
The post notes that after completing training of the latest GPT-4 model, the team spent more than six months on internal testing to make it safer before releasing it to the public. OpenAI believes that powerful AI systems should undergo rigorous safety evaluations and be subject to oversight, and says it is actively cooperating with governments to develop the best regulatory approach.
The post also acknowledges that not every risk can be predicted through laboratory testing: AI must continue to learn from real-world use and improve so that safer versions can be iterated, and society will need time to adapt to increasingly capable AI.
The post states that one key focus of its safety work is protecting children. Users of its AI tools must be at least 18 years old, or between 13 and 18 with parental supervision.
OpenAI emphasizes that it does not allow its technology to be used to generate hateful, harassing, violent, or adult content. Compared with GPT-3.5, GPT-4 is 82% less likely to respond to requests for disallowed content. A monitoring system also detects possible abuse: for example, when users attempt to upload child sexual abuse material to its image tools, the system blocks the upload and reports it to the National Center for Missing and Exploited Children.
The post states that OpenAI's large language models are trained on a broad corpus of text, including publicly available content, licensed content, and content generated by human reviewers, and that this data is not used to sell services, serve advertising, or build profiles of users. OpenAI acknowledges that some personal information from the public internet enters the training process, but says it endeavors to remove personal information from training datasets where feasible, fine-tunes its models to refuse requests for individuals' personal information, and responds to requests to delete personal information from its systems.
Regarding accuracy, the post says that, aided by user feedback flagging false content, GPT-4 produces factual content 40% more often than GPT-3.5.
The post argues that the practical way to address AI safety is not only to invest more time and resources in researching effective mitigation techniques and testing for abuse in real-world scenarios, but more importantly to recognize that improving safety and advancing AI capability go hand in hand. OpenAI says it is able to pair its most capable models with the strongest safety protections, will create and deploy more powerful models with increasing caution, and will continue to strengthen safety measures as its AI systems evolve.