The UK carries out "Lightning Diplomacy" and plans to establish a global AI regulatory center in London
Artificial intelligence (AI) is evolving far faster than expected, and concerns have grown that the technology could pose catastrophic risks to humanity. At the same time, countries are scrambling to work out regulatory standards and their scope, hoping to rein in development without stifling innovation.
The British government has taken the lead with a round of "lightning diplomacy". According to a June 3 report in The Times, the government is considering setting up an international AI regulatory agency in London modeled on the International Atomic Energy Agency (IAEA). Prime Minister Rishi Sunak hopes the move will make the UK a global hub for AI, and has pledged to take a leadership role in developing "safe and reliable" rules.
On June 7 and 8, Sunak will lead a political and business delegation to the United States. There he will seek the support of US President Biden and propose convening a summit in London this fall, where governments and executives of multinational companies would discuss international rules for AI and the establishment of a global institution similar to the IAEA. Founded in 1957 and headquartered in Vienna, Austria, the IAEA currently has 176 member states and is dedicated to promoting the peaceful use of nuclear energy.
According to the Daily Telegraph, the British Secretary of State for Science, Innovation and Technology, Chloe Smith, will hold talks with foreign counterparts on AI issues at the OECD Technology Forum in Paris on June 6. The event is funded by the British government, and invited countries include the United States, Japan, South Korea, Israel, Australia, New Zealand, Brazil, Chile, Norway, Turkey, Ukraine, and Senegal.
According to a survey compiled by The Economist, experts' expectations of AI's impact on humanity have tended toward the negative over the past six years.
British government sources told the Guardian that the UK hopes to help coordinate the differing regulatory efforts of various countries. Compared with the EU's earlier position of banning certain consumer-facing AI products (such as facial recognition software), they argue, an approach based on basic principles is more likely to gain broad support.
The European Union put forward strict rules governing AI applications in 2021, including restrictions on the use of facial recognition software by police in public places. In March of this year, the Future of Life Institute, a non-profit organization focused on AI, released an open letter signed by more than 1,000 technical experts and researchers calling for a six-month moratorium on the development of large-scale AI models, arguing that development of powerful AI systems should continue only once their effects are assured to be positive and their risks controllable.
China is not lagging behind either. As early as April 11, the Cyberspace Administration of China publicly solicited opinions on the "Generative AI Service Management Measures", drawing a bottom line for the industry in several respects: clarifying conditions and requirements, delineating responsible entities, establishing a problem-handling mechanism, and defining legal responsibilities.
In May 2023, the European Union advanced its proposed Artificial Intelligence Act, which would require companies developing generative AI such as ChatGPT to disclose any copyrighted materials used to train their models. On May 24, the U.S. White House Office of Science and Technology Policy released a public questionnaire to inform the comprehensive national regulatory strategy now being formulated.
On May 30, the multinational non-profit Center for AI Safety issued a joint open letter stating that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as epidemics and nuclear war. More than 350 executives, researchers, and engineers working on artificial intelligence around the world signed the letter.
The next day, Sunak held talks with the heads of OpenAI, Google DeepMind, Anthropic, and other companies, saying they would work together to ensure that society benefits from AI technology, and discussed the risks posed by AI, including disinformation, national security, and existential threats.
Speaking about the experts' open letter, Sunak said people would be worried by reports that AI poses existential risks on the scale of epidemics or nuclear war. "I hope they can rest assured that the (UK) government is studying this topic very carefully."
As the capital of an established developed country, London offers political stability, a sound legal system, a deep talent pool, and the dominant academic language. It also benefits from industry agglomeration: the city government has been vigorously promoting the East London Tech City project since 2011, and London is the first choice for many high-tech giants, including TikTok, Snapchat, Meta, Google, and Amazon, to set up their European or even global business headquarters.
A senior official expressed full confidence in an interview with The Times, saying the UK is "a true technological superpower that should demonstrate global leadership, build an alliance rooted in shared values, and secure international funding to support this work."