The ChatGPT craze is sweeping across the country. Can AI supervision surge at the same time?
Recently, ChatGPT, the new chatbot launched by the American artificial intelligence company OpenAI, has taken the Internet by storm. Its powerful capabilities have drawn widespread public attention, and it has been hailed as a "milestone" AI application. Major companies have rushed into developing ChatGPT-like applications, but ChatGPT's popularity has also triggered discussion of how artificial intelligence should be regulated.
Within two months of its release, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history. Internet technology companies such as Microsoft, Tencent, Baidu, and Alibaba have rushed to announce their own results and technical roadmaps in this field. Thanks to its quick responses and simple operation, many users now rely on ChatGPT not just as a chat tool but for looking up information and drafting papers.
However, as ChatGPT's range of applications widens, its hidden risks have begun to enter public view. ChatGPT is far from perfect, yet it is already reshaping our understanding of how artificial intelligence is developing. AI safety requires legislative oversight: ChatGPT demonstrates remarkable progress in artificial intelligence, but it also raises legal and ethical concerns.
In terms of intellectual property, ChatGPT may be used to generate papers and code, leading to infringement and ownership disputes. In terms of personal information protection, ChatGPT cannot verify the sources of its information and data, so risks such as personal data leakage and the spread of false information remain. In terms of data security, because ChatGPT has learned human language, the "phishing emails" it writes can be far more convincing in grammar and phrasing; if misused, the harm could be incalculable.
From a global perspective, governments have begun to pay attention to the issues raised by the development of artificial intelligence, and the drafting and implementation of laws and regulations in this field are now on the agenda. Generative AI systems create opportunities, but they also bring us to a historic crossroads: does AI control humans, or do humans control AI?
In December 2022, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the Provisions on the Administration of Deep Synthesis of Internet Information Services, which took effect on January 10, 2023. The Provisions set out a series of requirements for application service providers and technical supporters that use artificial intelligence to generate content, such as ChatGPT.
The European Union also plans to update its not-yet-enacted Artificial Intelligence Act in the near future, which is expected to take effect in 2025. The UK government's National AI Strategy likewise notes that its governance and regulatory framework must keep pace with the rapidly changing state of artificial intelligence.
Because this technology is dual-use, regulation must cover not only the algorithms but also the data behind them. How to ensure the accuracy and privacy of generative AI's data sources, how to protect intellectual property, and how to promptly correct unethical tendencies in AI output are all problems the field urgently needs to solve.
For now, AI-assisted security functions should still be reviewed and approved by the relevant authorities, and the scope in which ChatGPT is applied should be limited accordingly. The state should strengthen the ethical regulation of artificial intelligence, raise society-wide awareness of AI ethics, and ensure that AI systems comply with moral norms, public order, and good customs. On that basis, the permitted fields and depth of application can be gradually liberalized, charting a path for the healthy development of artificial intelligence.
As Intel CEO Pat Gelsinger has said, artificial intelligence is driving global change and giving us powerful tools. Technology itself is neutral; it is up to us to keep shaping it into a force for good.