AI companies will be required to report security tests to US government
The Biden administration has announced new artificial intelligence regulations that will require developers to disclose the results of security testing of major AI systems.
Under the new rules, technology companies must notify the government when they use large amounts of computing power to train artificial intelligence models. The requirement is meant to give the US government visibility into sensitive information from companies such as Google, AWS (Amazon Web Services), and OpenAI, so it can better oversee their activities in the field of artificial intelligence.
The National Institute of Standards and Technology is working to ensure that artificial intelligence tools are reviewed against safety and reliability standards before release. Meanwhile, the Department of Commerce will issue guidance requiring watermarks on AI-generated content to clearly distinguish it from real content.
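The Commerce guidance has not specified a watermarking format, but the idea of labeling AI output so it can later be identified can be sketched with a minimal, hypothetical provenance tag. The function names, the JSON wrapper, and the `ai_generated` field below are illustrative assumptions, not part of any published standard:

```python
import json

def label_ai_content(text: str, model: str) -> str:
    # Hypothetical illustration: wrap generated text in a JSON envelope
    # that records its AI provenance. Real watermarking schemes embed
    # signals in the content itself; this only shows the labeling idea.
    return json.dumps({"ai_generated": True, "model": model, "content": text})

def is_ai_generated(payload: str) -> bool:
    # Check whether a payload carries the (hypothetical) provenance label.
    try:
        return bool(json.loads(payload).get("ai_generated", False))
    except (json.JSONDecodeError, AttributeError):
        # Plain, unlabeled text does not parse as a labeled envelope.
        return False
```

In practice, a detached metadata label like this is trivial to strip, which is why production watermarking research focuses on embedding signals in the generated text or pixels themselves; the sketch only conveys the distinguish-real-from-generated goal stated in the guidance.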
In an interview, White House special adviser on artificial intelligence Ben Buchanan said the government wants to ensure artificial intelligence systems are safe before they are released to the public, and that the president has made clear companies must meet this standard.
The U.S. government treats artificial intelligence as a major economic and national security issue. That is unsurprising given the hype surrounding generative AI and the uncertainty around investment in the market.
Three months ago, the President of the United States signed an executive order aimed at regulating the rapidly evolving technology. The order sets out guiding principles for the development of artificial intelligence, including established safety standards.
The White House Artificial Intelligence Committee recently met to review implementation of the executive order, with senior officials from multiple U.S. federal departments and agencies attending. The committee said in a statement that it has made substantial progress in protecting Americans from the potential harms of artificial intelligence systems. In addition, the Biden administration is actively working with international allies such as the European Union to develop cross-border rules and regulations governing the technology.
New regulations would also require U.S. cloud companies to determine whether foreign entities are using U.S. data centers to train artificial intelligence models. The U.S. government has proposed a "Know Your Customer" (KYC) rule requiring cloud computing companies to verify the identity information of foreign users.
The new rules could pose additional challenges for U.S. technology companies such as Amazon and Google, which would have to collect the names and IP addresses of foreign customers, report suspicious activity to the government, and provide regular compliance certifications.
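The record-keeping and flagging workflow described above can be sketched as a small data structure plus a reporting check. Everything here is an assumption for illustration: the proposed rule does not define field names, a compute-hours metric, or a numeric threshold for what counts as a reportable training run.

```python
from dataclasses import dataclass

@dataclass
class ForeignCustomerRecord:
    # Fields the article says providers would collect; names are illustrative.
    name: str
    ip_address: str
    compute_hours: float  # compute consumed, used here as a proxy for training scale

# Hypothetical threshold: the actual reporting trigger is not public.
TRAINING_RUN_THRESHOLD_HOURS = 10_000.0

def needs_report(record: ForeignCustomerRecord) -> bool:
    # Flag a verified foreign customer whose compute usage suggests a
    # large AI training run that would have to be reported.
    return bool(record.name) and record.compute_hours >= TRAINING_RUN_THRESHOLD_HOURS
```

A real compliance system would of course hinge on identity verification and the government's definition of a covered training run, neither of which this sketch attempts to capture.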
While self-reporting requirements could offer some protection for U.S. interests and encourage AI developers to be more cautious, it is unclear how the government will deal with companies that report inaccurately or not at all. There are also legal and ethical concerns about government access to sensitive data.