Explainable AI (XAI) is an emerging branch of artificial intelligence that analyzes the logic behind the decisions AI systems make, and it is a core concern for the sustainable development of AI. In the era of large models, as models grow ever more complex, attention to interpretability is essential for improving the transparency, security, and reliability of AI systems.
Explainable AI international standard IEEE P2894 released, opening the AI "black box"
Recently, the IEEE Standards Association officially released IEEE P2894 (Guide for an Architectural Framework for Explainable Artificial Intelligence), its standard on explainable AI architecture. IEEE is the world's largest non-profit professional technical society, a recognized authority in academia and international standardization, and has developed more than 900 current industrial standards.
Standard original text link: https://www.php.cn/link/b252e54edce965ac4408effd7ce41fb7
The explainable AI architecture standard released here provides the industry with a technical blueprint for building, deploying, and managing machine learning models while meeting the requirements of transparent and trustworthy AI through the adoption of various explainable AI methods. The standard defines the architectural framework and application guidelines for explainable AI, including: the description and definition of explainable AI; the classification of explainable AI methods and the application scenarios suited to each class; and methods for evaluating the accuracy, privacy, security, and performance of explainable AI systems.
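The standard classifies explainable AI methods rather than prescribing code. Purely as an illustrative sketch of one common post-hoc explainability technique (occlusion-based feature attribution), and assuming a toy linear model and made-up data that are not part of IEEE P2894:

```python
# Minimal sketch of occlusion-based feature attribution, one common
# post-hoc explainability technique. The model, weights, and input are
# illustrative assumptions, not taken from the IEEE P2894 standard.

def predict(weights, x):
    """Toy linear model: weighted sum of features."""
    return sum(w * xi for w, xi in zip(weights, x))

def occlusion_attribution(weights, x):
    """Attribute the prediction to each feature by measuring how much
    the output changes when that feature is zeroed out ("occluded")."""
    base = predict(weights, x)
    return [base - predict(weights, x[:i] + [0.0] + x[i + 1:])
            for i in range(len(x))]

weights = [0.5, -2.0, 1.0]
x = [4.0, 1.0, 3.0]
attributions = occlusion_attribution(weights, x)
print(attributions)  # for a linear model each entry equals w_i * x_i
```

For a linear model the attributions coincide exactly with each term's contribution, which is what makes this a useful sanity check before applying the same idea to black-box models.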
As early as June 2020, more than 20 companies and institutions, including WeBank, Huawei, JD.com, Baidu, Yitu, Hisense, the CETC Big Data Research Institute, the Institute of Computing Technology of the Chinese Academy of Sciences, China Telecom, China Mobile, China Unicom, the Shanghai Computer Software Technology Development Center, ENN Group, China Asset Management, and Sinovation Ventures, drew on business scenarios in finance, retail, smart cities, and other fields to build a deep understanding of AI security specifications and explainability. Together with the IEEE Standards Association, they established the Explainability Working Group and organized its first meeting that same month. Dr. Fan Lixin, chief AI scientist at WeBank, serves as chair of the standards working group, and Dr. Chen Yixin, a professor at the University of Washington in the United States, serves as vice chair. The working group has since held multiple meetings, and the final standard was officially released by the IEEE Standards Association in February 2024.
Dr. Fan Lixin, chair of the standards working group, said: "Explainability is an important issue that cannot be ignored at the current stage of AI development, yet the relevant industry standards and normative documents remain incomplete. The drafting of this standard has absorbed cutting-edge practical experience from leading companies and research institutions in finance, communications, retail, the Internet, and other fields, and we believe it will provide a valuable reference for the wider implementation of AI."
Standards on trusted federated learning and trustworthy AI to be released in succession, focusing on AI data security and privacy protection
Dr. Fan Lixin explained that the explainable AI system architecture standard released here is also an important milestone in the research and implementation of the new paradigm of "trusted federated learning". Trusted federated learning is a distributed machine learning paradigm designed to satisfy the needs of both users and regulators. In this paradigm, privacy protection, model performance, and algorithm efficiency form the core triangular cornerstone; together with the two pillars of interpretable model decision-making and model supervisability, they constitute a more secure and trustworthy form of federated learning.
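At the heart of any federated learning system, trusted or otherwise, is an aggregation step in which client model updates are combined without the raw data ever leaving the clients. The following is a minimal sketch of federated averaging (FedAvg); the function name, clients, and data are illustrative assumptions, and real trusted-federated-learning systems layer privacy, robustness, and auditability on top of this:

```python
# Minimal sketch of federated averaging (FedAvg), the aggregation step at
# the core of federated learning. Only model parameters are shared; raw
# data stays on each client. All names and values here are illustrative.

def federated_average(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three clients with local parameter vectors and local dataset sizes.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_model = federated_average(clients, sizes)
print(global_model)
```

Weighting by dataset size means clients with more local data pull the global model further toward their parameters, which is the standard FedAvg design choice.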
Trusted federated learning plays a key role in promoting the safe circulation of data elements. The "Data Elements x" Three-Year Action Plan (2024-2026) issued by the National Data Administration proposes to "create a safe and trusted circulation environment, deepen the application of technologies such as privacy computing and federated learning, enhance the credibility, controllability, and measurability of data utilization, and promote the compliant, efficient circulation and use of data." As a method of compliant data circulation built on privacy computing, federated learning, and related technologies, trusted federated learning can enhance the credibility, controllability, and measurability of data utilization, promoting the compliant, efficient circulation and use of data and thereby maximizing its value.
As industry and academia turn their attention to federated learning and trustworthy artificial intelligence, several trusted federated learning and trustworthy AI standards approved by the IEEE Standards Association will also be released in succession. Among them, the draft of IEEE P2986 (Recommended Practice for Privacy and Security for Federated Machine Learning), a standard on the privacy and security architecture of federated learning, has been completed and is expected to be officially released soon. This standard is the industry's first to propose assessment methods for the privacy and security risk levels of federated learning. Specifically, it covers common faults and countermeasures in federated machine learning, privacy and security requirements for federated machine learning, and privacy and security assessment guidelines for federated machine learning.
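One building block commonly used to meet such privacy requirements is secure aggregation: clients mask their updates so the server can recover only the sum, never any individual contribution. As a hedged sketch of the pairwise-masking idea only (the helper name and toy updates are assumptions for illustration, and production protocols use cryptographic key agreement rather than a shared seed):

```python
# Minimal sketch of pairwise additive masking, a building block behind
# secure aggregation in federated learning. Each client pair shares a
# random mask; one adds it, the other subtracts it, so masks cancel in
# the aggregate while individual updates stay hidden from the server.
# Names, the shared seed, and data are illustrative assumptions.

import random

def mask_updates(updates, seed=0):
    rng = random.Random(seed)
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Shared pairwise mask: client i adds it, client j subtracts it.
            mask = [rng.uniform(-1.0, 1.0) for _ in updates[0]]
            for k, m in enumerate(mask):
                masked[i][k] += m
                masked[j][k] -= m
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masked = mask_updates(updates)
# Individual masked vectors look random, but the sums agree:
true_sum = [sum(u[k] for u in updates) for k in range(2)]
masked_sum = [round(sum(m[k] for m in masked), 9) for k in range(2)]
print(true_sum, masked_sum)
```

Real secure-aggregation protocols additionally handle client dropout and derive the pairwise masks from key exchange, which is where standards like IEEE P2986 set requirements.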
In addition, building on IEEE P2986, the trusted federated learning standard IEEE P3187 (Guide for Framework for Trustworthy Federated Machine Learning), which focuses further on the trustworthiness, explainability, optimization, and supervision of federated learning, has also completed its initial review. The standard proposes the framework and characteristics of trusted federated learning, sets specific constraints on how these characteristics are realized, and introduces solutions for implementing trusted federated learning.
Large models, AI agents, and federated learning: building trustworthy artificial intelligence in the era of large models
Recently, China Telecom and WeBank also jointly initiated the establishment of the working group for IEEE P3427 (Standard for Federated Machine Learning of Semantic Information Agents), a federated learning standard for semantic information agents. Topics the standard plans to address include the role definitions, incentive mechanisms, and semantic communication of different semantic agents in a semantic cognitive network based on federated machine learning; the human-understandable representation of semantic information on semantic agents; and secure, efficient information interaction among semantic agents. The working group plans to begin standard development at the end of March 2024 and is currently recruiting experts from various industries to help refine the standard and advance the industry.
The successive release of these industry standards will further promote cross-industry, cross-field technical cooperation and innovation, open the "black box" of AI, and promote the safe and efficient circulation of data elements. Highly accurate, highly interpretable artificial intelligence will help technology achieve widespread, responsible, and effective application for the benefit of humanity.
