


At this stage, the development speed of artificial intelligence has exceeded people's initial expectations. Using AI tools to write articles, code, generate pictures, and even generate a movie-level video... these were previously used What seemed like a very difficult task now only requires the user to enter a prompt.
We are amazed by the amazing effects of AI, but at the same time we should also be wary of its potential threats. Many well-known scholars have signed an open letter to address the challenges posed by AI.
Now, another major open letter in the AI field has appeared. The "Beijing AI International Security Dialogue" held in the Summer Palace last week established a unique platform for China and international AI security cooperation for the first time. This conference was initiated by Zhiyuan Research Institute. Turing Award winner Yoshua Bengio and Zhiyuan Academic Advisory Committee Director Zhang Hongjiang served as co-chairs. More than 30 Chinese and foreign technical experts and business leaders, including Geoffrey Hinton, Stuart Russell, and Yao Qizhi, conducted the conference. A closed-door discussion on AI Safety. The meeting reached an "International Consensus on AI Security in Beijing" signed by Bengio, Hinton and domestic experts.
Up to now, experts have confirmed their signatures, including some foreign experts, and there may be more; domestic experts have signed in their personal names, and Not representative of their affiliated institutions.
- Yoshua Bengio
- Geoffrey Hinton
- Stuart Russell
- Robert Trager
- Toby Ord
- Dawn Song
- Gillian Hadfield
- Jade Leung
- Max Tegmark
- Lam Kwok Yan
- Davidad Dalrymple
- Dylan Hadfield-Menell
- Yao Qizhi
- Fu Ying
- Zhang Hongjiang
- Zhang Yaqin
- Xue Lan
- Huang Tiejun
- Wang Zhongyuan
- Yang Yaodong
- ##Zeng Yi
- Li Hang
- Zhang Peng
- tian Suning
- tian Day
The following is the consensus reached:
1. Artificial Intelligence Risk Red Line
There are potential safety hazards in the development, deployment or use of artificial intelligence systems, which may cause catastrophic or even existential risks to humans. As digital intelligence gradually approaches or even surpasses human intelligence, the risks of misuse and loss of control also increase. At some point in the future, we may face these risks.
During the height of the Cold War, international academic and government cooperation helped avert thermonuclear catastrophe. Faced with unprecedented technology, humans need to cooperate again to avoid the disasters it may bring. In this consensus statement, we put forward several specific red lines for the development of artificial intelligence as an international collaboration mechanism, including but not limited to the following issues. In future international dialogues, we will continue to improve our discussion of these issues in the face of rapidly developing artificial intelligence technology and its widespread social impact.
Autonomous replication or improvement
No artificial intelligence system should be able to replicate or improve upon humans Reproduce or improve upon itself without express approval or assistance. This includes making exact copies of itself and creating new AI systems with similar or greater capabilities.
Power Seeking
No artificial intelligence system can take steps to unduly increase its power and impactful actions.
Assist bad actors
All artificial intelligence systems should not enhance the capabilities of their users to enable them to To the level of an expert in the field of designing weapons of mass destruction, violating biological or chemical weapons conventions, or executing cyberattacks that result in severe financial losses or equivalent harm.
Deception
No artificial intelligence system can consistently cause its designers or Regulators misunderstand their possibility or ability to cross any of the aforementioned red lines.
2. Route
It is possible to ensure that these red lines are not crossed, but it requires our joint efforts: both to establish and To improve governance mechanisms, we must also develop more security technologies.
Governance
We need comprehensive governance mechanisms to ensure that the systems developed or deployed Do not violate red lines. We should immediately implement national-level registration requirements for AI models and training practices that exceed certain computational or capability thresholds. Registration should ensure governments have visibility into the most advanced AI within their borders and have the means to curb the distribution and operation of dangerous models.
National regulators should help develop and adopt globally aligned requirements to avoid crossing these red lines. A model's access to the global market should depend on whether domestic regulations meet international standards based on international audits and effectively prevent the development and deployment of systems that violate red lines.
We should take steps to prevent the proliferation of the most dangerous technologies while ensuring that the value of artificial intelligence technology is widely harvested. To this end, we should establish multilateral institutions and agreements to safely and inclusively govern the development of artificial general intelligence (AGI), and establish enforcement mechanisms to ensure that red lines are not crossed and common interests are widely shared.
Measurement and Evaluation
Before there is a substantial risk of these red lines being crossed, we should develop comprehensive methods and technologies to make these red lines concrete and preventive work operable. To ensure that detection of red lines can keep up with rapidly advancing AI, we should develop human-supervised red team testing and automated model evaluation.
Developers have the responsibility to prove through rigorous evaluation, mathematical proof, or quantitative assurance that the artificial intelligence system that meets the safety design does not cross the red line.
Technical Cooperation
The international academic community must work together to deal with advanced artificial intelligence technical and social challenges posed by the system. We encourage the establishment of stronger global technology networks, accelerate R&D and cooperation in the field of AI security through visiting scholar programs and organizing in-depth AI security conferences and workshops. Supporting the growth of this field will require more funding: we call on AI developers and government funders to devote at least one-third of their AI R&D budgets to security.
3. Summary
Avoiding the catastrophic global consequences of artificial intelligence requires us to take decisive action. A combination of collaborative technical research and prudent international regulatory mechanisms can mitigate most of the risks posed by AI and realize many of its potential values. We must continue to uphold and strengthen international academic and government cooperation on security.
The above is the detailed content of Hinton, Bengio and others joined forces with Chinese experts to reach an AI security consensus: AI systems should not violate red lines. For more information, please follow other related articles on the PHP Chinese website!

ai合并图层的快捷键是“Ctrl+Shift+E”,它的作用是把目前所有处在显示状态的图层合并,在隐藏状态的图层则不作变动。也可以选中要合并的图层,在菜单栏中依次点击“窗口”-“路径查找器”,点击“合并”按钮。

ai橡皮擦擦不掉东西是因为AI是矢量图软件,用橡皮擦不能擦位图的,其解决办法就是用蒙板工具以及钢笔勾好路径再建立蒙板即可实现擦掉东西。

虽然谷歌早在2020年,就在自家的数据中心上部署了当时最强的AI芯片——TPU v4。但直到今年的4月4日,谷歌才首次公布了这台AI超算的技术细节。论文地址:https://arxiv.org/abs/2304.01433相比于TPU v3,TPU v4的性能要高出2.1倍,而在整合4096个芯片之后,超算的性能更是提升了10倍。另外,谷歌还声称,自家芯片要比英伟达A100更快、更节能。与A100对打,速度快1.7倍论文中,谷歌表示,对于规模相当的系统,TPU v4可以提供比英伟达A100强1.

ai可以转成psd格式。转换方法:1、打开Adobe Illustrator软件,依次点击顶部菜单栏的“文件”-“打开”,选择所需的ai文件;2、点击右侧功能面板中的“图层”,点击三杠图标,在弹出的选项中选择“释放到图层(顺序)”;3、依次点击顶部菜单栏的“文件”-“导出”-“导出为”;4、在弹出的“导出”对话框中,将“保存类型”设置为“PSD格式”,点击“导出”即可;

ai顶部属性栏不见了的解决办法:1、开启Ai新建画布,进入绘图页面;2、在Ai顶部菜单栏中点击“窗口”;3、在系统弹出的窗口菜单页面中点击“控制”,然后开启“控制”窗口即可显示出属性栏。

Yann LeCun 这个观点的确有些大胆。 「从现在起 5 年内,没有哪个头脑正常的人会使用自回归模型。」最近,图灵奖得主 Yann LeCun 给一场辩论做了个特别的开场。而他口中的自回归,正是当前爆红的 GPT 家族模型所依赖的学习范式。当然,被 Yann LeCun 指出问题的不只是自回归模型。在他看来,当前整个的机器学习领域都面临巨大挑战。这场辩论的主题为「Do large language models need sensory grounding for meaning and u

引入密集强化学习,用 AI 验证 AI。 自动驾驶汽车 (AV) 技术的快速发展,使得我们正处于交通革命的风口浪尖,其规模是自一个世纪前汽车问世以来从未见过的。自动驾驶技术具有显着提高交通安全性、机动性和可持续性的潜力,因此引起了工业界、政府机构、专业组织和学术机构的共同关注。过去 20 年里,自动驾驶汽车的发展取得了长足的进步,尤其是随着深度学习的出现更是如此。到 2015 年,开始有公司宣布他们将在 2020 之前量产 AV。不过到目前为止,并且没有 level 4 级别的 AV 可以在市场

ai移动不了东西的解决办法:1、打开ai软件,打开空白文档;2、选择矩形工具,在文档中绘制矩形;3、点击选择工具,移动文档中的矩形;4、点击图层按钮,弹出图层面板对话框,解锁图层;5、点击选择工具,移动矩形即可。


Hot AI Tools

Undresser.AI Undress
AI-powered app for creating realistic nude photos

AI Clothes Remover
Online AI tool for removing clothes from photos.

Undress AI Tool
Undress images for free

Clothoff.io
AI clothes remover

AI Hentai Generator
Generate AI Hentai for free.

Hot Article

Hot Tools

Safe Exam Browser
Safe Exam Browser is a secure browser environment for taking online exams securely. This software turns any computer into a secure workstation. It controls access to any utility and prevents students from using unauthorized resources.

SAP NetWeaver Server Adapter for Eclipse
Integrate Eclipse with SAP NetWeaver application server.

SublimeText3 Chinese version
Chinese version, very easy to use

DVWA
Damn Vulnerable Web App (DVWA) is a PHP/MySQL web application that is very vulnerable. Its main goals are to be an aid for security professionals to test their skills and tools in a legal environment, to help web developers better understand the process of securing web applications, and to help teachers/students teach/learn in a classroom environment Web application security. The goal of DVWA is to practice some of the most common web vulnerabilities through a simple and straightforward interface, with varying degrees of difficulty. Please note that this software

Dreamweaver Mac version
Visual web development tools