After three days of marathon negotiations, the Council presidency and the European Parliament's negotiators reached a provisional agreement on harmonised rules for artificial intelligence, expected to become the Artificial Intelligence Act. The draft regulation aims to ensure that AI systems placed on the EU market and used in the EU are safe and respect fundamental rights. Beyond that, the regulation seeks to stimulate investment and innovation in AI across Europe.
The Artificial Intelligence Act is landmark legislation intended to create an environment in which the use of AI becomes a tool for greater safety and trust, with the engagement of public and private stakeholders across the EU. The central idea is a "risk-based" approach to regulating AI according to its capacity to cause harm to society: the higher the risk, the stricter the rules. The Act could set a global precedent for AI regulation in other jurisdictions, much as the GDPR did for the protection of personal data, thereby promoting the EU's approach to technology regulation worldwide.
Main elements of the provisional agreement
Compared with the European Commission's initial proposal, the main new elements of the provisional agreement on the regulation of artificial intelligence can be summarized as follows:
- Rules on high-impact general-purpose AI models that could cause systemic risk in the future, as well as on high-risk AI systems.
- A revised system of governance with some enforcement powers at EU level.
- An extended list of prohibitions, though law enforcement may use remote biometric identification in public spaces, subject to safeguards against abuse.
- Better protection of rights through an obligation on deployers of high-risk AI systems to assess the fundamental rights implications of those systems before putting them into use.
More specifically, the provisional agreement includes the following aspects:
- Definition and scope
Following the provisional agreement, the definition of an AI system is aligned with the approach proposed by the OECD. This way, the criteria for what counts as an AI system distinguish it more clearly from simpler software systems.
Furthermore, the provisional agreement clarifies that the Regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states' competences in the field of national security, or of any entity entrusted with tasks in that field. Moreover, the Artificial Intelligence Act will not extend to AI systems used exclusively for military or defence purposes. The agreement also provides that the Regulation will not apply to AI systems used solely for research and innovation purposes, nor to people using AI for non-professional reasons.
- Classification of AI systems as high-risk and prohibited AI practices
The agreement creates a horizontal layer of protection, including a high-risk classification, to ensure that AI systems unlikely to cause serious violations of fundamental rights or other significant harms are not captured by the strictest rules. AI systems posing only limited risk will be subject to light transparency obligations: for example, users should be informed that content was generated by artificial intelligence so they can decide whether to use it or take further action.
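The risk-based logic described above can be sketched as a simple tier lookup. The tier names below follow the commonly described four-level approach of the Act; the obligation strings are illustrative paraphrases for the sketch, not legal text:

```python
# Hedged sketch of the Act's risk-based approach: obligations scale
# with the risk tier. Tier names follow the widely described
# four-level pyramid; obligation summaries are illustrative only.
RISK_TIERS = {
    "unacceptable": "prohibited from the EU market",
    "high": "conformity assessment and registration required",
    "limited": "transparency obligations, e.g. disclose AI-generated content",
    "minimal": "no specific obligations",
}

def obligations_for(tier: str) -> str:
    """Return the illustrative obligation summary for a risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier}")

print(obligations_for("limited"))
```

The point of the structure is that a system is regulated by where it falls in the hierarchy, not by the technology it uses.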
A wide range of high-risk AI systems will be authorised to operate on EU territory, subject to requirements and obligations for entering the EU market. The co-legislators added and revised some provisions to make them technically feasible and clearer, easing the burden on stakeholders, for example regarding the quality of data and the technical documentation that SMEs must prepare to demonstrate that their high-risk AI systems were built safely and comply with the applicable requirements.
Since AI systems are developed and distributed through complex value chains, the compromise agreement includes, among other things, changes that clarify the allocation of responsibilities among the various actors in those chains, in particular providers and users of AI systems. It also clarifies how the obligations under the AI Act interact with obligations laid down in other legislation, such as the EU's data legislation and sectoral legislation.
For some uses of artificial intelligence, the risk is deemed unacceptable, and such systems will therefore be banned from the EU. According to the provisional agreement, prohibited practices include cognitive behavioural manipulation, the untargeted scraping of facial images from the internet, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing targeting individuals.
- Law Enforcement Exception
Given the special nature of law enforcement authorities and the need for them to be able to use AI in their vital work, several important changes were made to the Commission's proposed rules on the use of AI for law enforcement purposes. Subject to appropriate safeguards, these changes reflect the need for operators to protect the confidentiality of sensitive operational data. For example, an emergency procedure was introduced allowing law enforcement agencies, in urgent cases, to deploy a high-risk AI tool that has not yet passed the conformity assessment. A specific mechanism was also introduced to ensure that fundamental rights are protected and that AI applications are not misused.
Furthermore, the text of the provisional agreement sets out the purposes for which real-time remote biometric identification systems may be used in publicly accessible spaces, strictly for law enforcement reasons, and authorities will be allowed to use them only in exceptional circumstances. The compromise agreement provides additional safeguards and limits these exceptions to searches for victims of certain crimes, the prevention of genuine and present threats such as terrorist attacks, and searches for persons suspected of the most serious crimes.
- General-purpose AI systems and foundation models
New rules were formulated for situations where an AI system is used for many different purposes, that is, general-purpose AI, and for cases where general-purpose AI technology is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific case of general-purpose AI (GPAI) systems, whose supervision is a core part of the agreement.
Rules have also been concluded for foundation models, described as systems capable of performing a wide range of complex activities such as generating text and video, processing natural language, and producing computer code. The provisional agreement requires foundation models to meet transparency obligations before being placed on the market. A much stricter regime applies to "high-impact" foundation models: models trained on large amounts of data, with complexity, capabilities, and performance well above average, which can spread systemic risks along the value chain that are shared by all participating businesses.
- New governance structure
In light of the new rules on GPAI models and the need for their standardised supervision at EU level, an AI Office is established within the Commission to oversee these most advanced AI models, promote the development of standards and testing procedures, and enforce the common rules in all member states. An independent scientific panel of technical experts will advise the AI Office on GPAI models by developing methodologies for evaluating the capabilities of foundation models, advising on the designation of high-impact foundation models ahead of market launch, and monitoring possible material safety risks related to foundation models.
In addition, the AI Board, composed of member states' representatives and serving as the Commission's coordination platform and advisory body, will give member states a prominent role in the implementation of the Regulation, including the design of codes of practice for foundation models. Last but not least, an advisory forum will be established for stakeholders such as industry players, SMEs, start-ups, civil society, and academia, to provide technical expertise that the AI Board can draw on.
- Penalties
Companies that violate the Artificial Intelligence Act face fines set either as a fixed amount or as a percentage of global annual turnover in the previous financial year, whichever is higher: €35 million or 7% for violations involving the banned AI applications, €15 million or 3% for violations of the Act's obligations, and €7.5 million or 1.5% for supplying incorrect information. Nonetheless, the provisional agreement provides for more proportionate caps on fines for SMEs and start-ups when they commit to complying with the provisions of the Act.
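The "fixed amount or share of turnover, whichever is higher" structure can be sketched as a small calculation. The tier figures are taken from the paragraph above; the function name and the simplified "whichever is higher" rule (the more proportionate SME caps are omitted) are assumptions of this sketch, not legal advice:

```python
# Illustrative sketch of the AI Act's fine caps: each violation tier
# pairs a fixed amount in euros with a share of global annual
# turnover, and the cap is whichever of the two is higher.
# Simplified: the proportionate SME/start-up caps are not modelled.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # €35M or 7%
    "other_obligation": (15_000_000, 0.03),       # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine cap for a violation type."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_turnover_eur)

# For a company with €1bn turnover, 7% (€70M) exceeds the €35M floor.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

The turnover-based leg ensures the cap scales with company size, while the fixed leg keeps a meaningful floor for smaller firms.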
The provisional agreement also provides that natural or legal persons have the right to lodge a complaint with the relevant market surveillance authority, which must handle such complaints in line with its dedicated procedures.
- Transparency and Protection of Fundamental Rights
Notably, the provisional agreement requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting such systems into use. The agreement also provides for increased transparency about the use of high-risk AI systems and clarifies the scope of certain obligations. Notably, some provisions of the Commission's proposal were amended so that certain public-sector users of high-risk AI systems must also register those systems in the EU database. In addition, newly added provisions oblige users of an emotion recognition system to inform natural persons when they are being exposed to such a system.
- Measures to support innovation
This part of the Regulation has been substantially revised compared with the Commission's proposal in order to encourage innovation, an important aspect of establishing a more innovation-friendly framework that continuously adapts to ensure a sustainable regulatory environment across the EU.
AI regulatory sandboxes are designed to provide a controlled environment for the development, testing, and validation of innovative AI systems, and they also allow such systems to be tested under real-world conditions. In addition, new provisions allow AI systems to be tested in real-world environments under specific conditions and safeguards. To reduce the administrative burden on smaller companies, the provisional agreement sets out support measures for SMEs and start-ups and allows derogations where they are limited and strictly specified.
- Effective Date
The provisional agreement stipulates that, subject to certain exceptions, the Artificial Intelligence Act shall apply two years after its entry into force.
The above is the detailed content of Explore EU agreement on artificial intelligence regulation. For more information, please follow other related articles on the PHP Chinese website!
