How to regulate generative AI bots like ChatGPT and Bard?
Research shows that the widespread use of general-purpose artificial intelligence tools creates new challenges that regulators may struggle to meet. How to regulate generative AI tools such as OpenAI's chatbot ChatGPT has become a problem troubling policymakers around the world.
ChatGPT can generate many kinds of content from a simple prompt, drawing on the vast body of knowledge it was trained on. Any regulatory solution will involve assessing risks, and some uses in particular will need close monitoring.
Within two months of its launch, ChatGPT became the fastest-growing consumer product in history, reaching more than 100 million active users in January alone. This has prompted large technology companies around the world to take notice or accelerate the launch of their own AI systems, bringing new vitality to the field of conversational AI.
Microsoft is embedding conversational AI in its browser, search engine and broader product range; Google plans to do the same with its chatbot Bard and other integrations in Gmail and Google Cloud; Baidu and other tech giants are also launching their own chatbots; and startups such as Jasper and Quora are bringing generative and conversational AI into mainstream consumer and enterprise markets.
Generative AI accelerates demand for regulation
Widespread misinformation and hard-to-detect phishing emails pose real risks for AI applications, and a system used for medical information could contribute to misdiagnosis and medical errors. There is also a high risk of bias if the data used to train the model is not diverse.
While Microsoft has retrained its model for better accuracy, and providers like AI21 Inc. are working to validate generated content against live data, the risk of generative AI producing responses that "look real but are completely inaccurate" remains high.
European Union Internal Market Commissioner Thierry Breton recently stated that the upcoming EU AI Act will include provisions for generative AI systems such as ChatGPT and Bard.
"As ChatGPT shows, AI solutions can offer businesses and citizens huge opportunities, but they can also bring risks. That's why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data," he said.
AI development needs to be ethical
Analytics software provider SAS outlined some of the risks posed by AI in a report titled "AI and Responsible Innovation". Dr. Kirk Borne, author of the report, said: "AI has become so powerful and so pervasive that it is increasingly difficult to tell whether the content it generates is true or false, good or bad. The pace of adoption of this technology is significantly faster than the pace of regulation."
Dr Iain Brown, head of data science at SAS UK and Ireland, said both government and industry have a responsibility to ensure AI is used for good rather than harm. This includes using an ethical framework to guide the development of AI models and strict governance to ensure those models make fair, transparent and equitable decisions. "AI models can be tested against challenger models and optimized as new data becomes available," he said.
Other experts believe that software developers will be required to reduce the risks their software poses, and that only the riskiest activities will face stricter regulatory measures.
Edward Machin, an associate in the data, privacy and cybersecurity practice at law firm Ropes & Gray LLP, said it is inevitable that technologies like ChatGPT, which appear seemingly overnight, will be adopted faster than regulation can keep up, especially in an already hard-to-regulate area like AI.
He said: "Although regulatory policies for these technologies will be introduced, how and when regulation arrives remains to be seen. Suppliers of AI systems will bear the brunt, but importers and distributors (at least in the EU) will also bear potential obligations. This may put some open source software developers in a difficult position. How the responsibilities of open source developers and other downstream parties are handled could have a chilling effect on the willingness of these individuals to innovate and conduct research."
Copyright, Privacy and GDPR Regulations
Machin believes that beyond the overall supervision of AI, there are also copyright and privacy issues around the content it generates. For example, it is unclear how easily (if at all) developers can handle individuals' requests for removal or correction of their data, nor how developers can scrape large amounts of data from third-party websites in ways that may violate those sites' terms of service.
Lilian Edwards, a professor of law, innovation and society at Newcastle University who works on AI regulation at the Alan Turing Institute, said some of these models will be subject to GDPR rules, which could lead to orders to delete training data, or even the algorithms themselves. If website owners lose traffic to AI-powered search, it could also mean the end of the massive scraping of data from the internet that currently powers search engines like Google.
She pointed out that the biggest problem is the general-purpose nature of these AI models. This makes them difficult to regulate under the EU AI Act, which is drafted around the risks a system poses, because it is hard to tell what end users will do with the technology. The European Commission is trying to add rules to govern this type of technology.
Enhancing algorithmic transparency may be a solution. "Big Tech is going to start lobbying regulators, saying, 'You can't impose these obligations on us because we can't imagine every future risk or use,'" Edwards said. "There are ways of dealing with this problem that would more or less help Big Tech companies, including making the underlying algorithms more transparent. We are at a difficult moment and need incentives toward openness and transparency in order to better understand how AI makes decisions and generates content."
She also said: "This is the same problem people encounter with more mundane technologies: because technology is global and bad actors are everywhere, regulation is very difficult. The behavior of general-purpose AI is hard to map onto AI-specific regulations."
Adam Leon Smith, chief technology officer of AI consultancy DragonFly, said: "Global regulators are increasingly aware that it is difficult to regulate AI technology without considering how it is actually used. Accuracy and bias requirements can only be assessed in the context of use, and it is hard to weigh requirements around risk, rights and freedoms before large-scale adoption."
"Regulators can Mandating transparency and logging from AI technology providers. However, only users who operate and deploy large language model (LLM) systems for specific purposes can understand the risks and implement mitigation measures through manual supervision or continuous monitoring," he added .
AI regulation is imminent
There has been large-scale debate on AI regulation within the European Commission, and data regulators must take the issue seriously. Ultimately, Leon Smith believes that as regulators pay more attention, AI providers will start to list the purposes for which their technology "must not be used," including issuing legal disclaimers before users log in, placing themselves outside the scope of risk-based regulatory action.
Leon Smith said that current best practices for managing AI systems barely address large language models, an emerging field that is developing extremely rapidly. While there is a great deal of work to be done in this area, many companies offering these technologies are not yet helping to define it.
OpenAI Chief Technology Officer Mira Murati has also said that generative AI tools need to be regulated: "It is important for companies like ours to bring this to the public consciousness in a controlled and responsible way." But she added that beyond regulating AI suppliers, more investment in AI systems is needed, including from regulatory agencies and governments.