


21st Century Business Herald reporters Cai Shuyue and Guo Meiting, with interns Tan Yanwen and Mai Zihao, reporting from Shanghai and Guangzhou
Editor’s note:
In the first months of 2023, major companies rushed to develop large models, explored the commercialization of GPT-style services, and bet heavily on computing infrastructure... Much like the Age of Discovery that opened in the 15th century, when human exchange, trade, and wealth grew explosively, a new revolution is sweeping the world. At the same time, change also challenges the existing order: data leaks, personal privacy risks, copyright infringement, false information... Beyond that, the post-humanist crisis brought by AI is already on the table. What attitude should people take toward the confusions created by the mingling of humans and machines?
At this moment, seeking consensus on AI governance and reshaping a new order have become issues facing every country. The Nancai Compliance Technology Research Institute will launch a series of reports on an AI "contract theory", analyzing Chinese and foreign regulatory models, the allocation of responsibility among actors, corpus-data compliance, AI ethics, industry development, and other dimensions, with a view to offering ideas for AI governance and safeguarding responsible innovation.
The rise of generative AI technology has set off a "battle of a hundred models", and the industry-chain map of the technology has begun to take shape.
(AIGC industrial chain map. Drawing/Nancai Compliance Technology Research Institute, 21st Century Business Herald reporter)
Before generative AI becomes a commonplace technology, every participant in the industry chain must consider how to make it a "controllable" tool.
In late March this year, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and more than a thousand other entrepreneurs and scholars signed an open letter calling for a pause on large-scale artificial intelligence experiments.
The letter argued that AI laboratories around the world have in recent months been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even the developers of the technology, "can truly understand, predict or fully control."
The Yuanshi Culture Laboratory of the School of Journalism and Communication at Tsinghua University likewise pointed out in its "AIGC Development Research" report that AIGC's deep involvement in the global industry chain will broadly replace work such as programming, graphic design, and customer service; if artificial intelligence puts a ceiling on labor costs, the industrial chains of the third world will suffer a huge impact.
This means that AIGC, backed by massive computing power, may become a blade that severs the global industry chains of multinational companies, and a dagger that punctures the illusion of the "global village."
With AIGC developing rapidly, therefore, putting the generative AI technology behind it into a regulatory cage and clarifying the responsibilities of each party in the industry chain has become an urgent task for countries around the world.
Regulatory policy review: drawing a clear bottom line for industrial R&D
China is already on the road to regulating generative AI technology. In April this year, the Cyberspace Administration of China issued the "Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment)" (hereinafter the "Measures"), China's first regulatory document aimed specifically at generative AI.
Generally speaking, the "Measures" are based on the existing deep synthesis regulatory framework, which regulates the "Internet Information Service Deep Synthesis Management Regulations", "Internet Information Service Algorithm Recommendation Management Regulations", "Network Audio and Video Service Management Regulations" and "Internet Information Service Deep Synthesis Management Regulations". The refinement of the "Regulations on Ecological Governance of Network Information Content" In addition to the general obligations of personal information protection, artificial intelligence service providers are also required to further perform obligations such as security assessment, algorithm filing, and content identification.
On the promulgation of these policy documents, Xiao Sa, senior partner at Beijing Dacheng Law Firm, told the 21st Century Business Herald that companies should align themselves with the requirements of existing regulations on algorithm recommendation services, deep synthesis services, and other AI rules, strive for internal compliance, combine the strengths of technology and law to propose creative compliance solutions, and win more institutional space for industrial development.
Most of the industry supports the "Measures" and the other successively introduced rules regulating the development of AI technology. In an interview with the 21st Century Business Herald, Wei Chaoqun, senior product director at Liangfengtai, said that with generative AI just getting started, the introduction of management measures is crucial to the healthy development of the entire industry and will do much to promote it.
"On the one hand, the promulgation of the "Measures" means that the entire industry has clear operating specifications, which can guide a complete set of R&D processes for enterprises. On the other hand, it also sets a R&D bottom line for the entire industry, including What can be done and what cannot be done." Wei Chaoqun pointed out.
For example, Article 17 of the "Measures" requires AI service providers to "provide necessary information that may affect users' trust and choices, including descriptions of the source, scale, type, and quality of pre-training and optimization training data; manual labeling rules; the scale and types of manually labeled data; and basic algorithms and technical systems," so as to govern an AI technology built on huge amounts of data and fast-changing rules.
However, some believe that China's current AI-related laws, regulations, and policy documents still need further improvement.
Xiao Sa noted in the interview that although the "Measures" respond to the risks and impacts of generative AI, a close reading shows that many of their provisions, on responsible parties, scope of application, compliance obligations, and other matters, remain relatively broad.
For example, Article 5 of the "Measures" stipulates that providers that use generative AI products to offer services should bear the responsibilities of content producers.
The article states that organizations and individuals that use generative AI products to provide chat and text, image, or sound generation services, including those that support others in generating text, images, and sounds by offering programmable interfaces or other means, bear producer responsibility for the content the product generates. The "Measures", however, do not yet elaborate on the specific legal liabilities that service providers should bear.
Development Difficulties: How to Balance Regulation and Technology
How to improve the AI regulatory system while preserving technological innovation, and how to strengthen its connection and coordination with data compliance and algorithm governance, are questions that urgently need answers.
Among them, clarifying the responsible parties at each link of the AIGC industry chain and building "responsible" AI technology is one of the key points regulators need to watch closely.
Beyond the allocation of responsibilities addressed in Article 5 of the "Measures", the EU's recently revised "Artificial Intelligence Act" also takes up the distribution of responsibility along the AI value chain: distributors, importers, deployers, or other third parties may be regarded as providers of high-risk AI systems and must perform the corresponding obligations, for example indicating their name and contact information on the high-risk AI system, providing data specifications or information about the data sets, and keeping logs.
Pei Yi, assistant professor at the Law School of the Beijing Institute of Technology, also told the 21st Century Business Herald that, as key entities providing AI services, enterprises need on the one hand to ensure transparent data collection and processing: clearly informing data subjects of the purposes of collection and processing, obtaining the necessary consent or authorization, and implementing appropriate security and privacy protection measures to guarantee the confidentiality and integrity of data. On the other hand, data sharing must also be compliant: when sharing data among multiple parties or trading data, enterprises must ensure compliant rights of use and authorization mechanisms, and abide by applicable data protection laws and regulations.
21st Century Business Herald reporters observed that some AI companies are already clarifying their obligations as responsible parties.
OpenAI, for example, has opened a "Security Portal" for users. On this page, users can browse the company's compliance documents, including information under "Data Security" on backups, deletion, and encryption of data at rest, as well as code analysis, credential management, and more under "App Security".
(OpenAI's "Secure Portal" page. Source/OpenAI official website)
The privacy policy published on the official website of the AI painting tool Midjourney likewise gives specific explanations of the scenarios in which user data is shared, retained, and transmitted and the purposes involved, and lists in detail the 11 categories of personal information, such as identifiers, commercial information, and biometric information, that the application needs to collect in order to provide its services.
Notably, the legal representative of an emerging technology company in Shanghai told the 21st Century Business Herald that the company is currently drafting terms of service for its internal AI-related business, with part of the responsibility-allocation rules modeled on OpenAI's approach.
At the same time, as providers of generative AI services, enterprises also need to attend to internal compliance. Xiao Sa pointed out that the business of AIGC-related companies depends on massive data and complex algorithms, and their application scenarios are complex and varied; such companies easily fall into all kinds of risk, and relying entirely on external supervision is very difficult, so they must strengthen AIGC internal compliance management.
On the one hand, regulatory agencies should seize the opportunity to implement corporate compliance reform across the board, actively explore compliance reform for companies in the network and digital fields, implement third-party supervision and evaluation mechanisms, establish and improve institutional mechanisms for compliance management, and effectively prevent cybercrime. On the other hand, they should explore regulatory paths that promote ex-ante compliance building through ex-post compliance rectification, and encourage network regulators and Internet companies to jointly study and formulate data compliance guidelines to ensure the healthy development of the digital economy.
"The most important task of the regulatory authorities is to draw the bottom line. Among them, 'tech ethics' and 'national security' are two inalienable bottom lines . Within the bottom line, the industry can be given as much tolerance as possible There is room for development, so as to prevent technology from being timid and restricted in its development for the sake of compliance." Pei Yi told 21 reporters.
Coordinator: Wang Jun
Reporters: Guo Meiting, Cai Shuyue, Tan Yanwen, Mai Zihao
Drawing: Cai Shuyue
For more content, please download 21 Finance APP
(The above is the full text of "AI Contract Theory ⑤: Generative AI races ahead under full sail. How can rules take the helm?")


