
WAIC Observation: Privacy computing is accelerating its implementation in the industry, and a new technical standard system is about to emerge.

王林 · Original · 2024-07-15 10:08:07

In recent years, everyone has been talking about large models. Guided in particular by scaling laws, people hope that training on ever larger datasets can keep raising models' level of intelligence. In China, data is recognized as a production factor alongside land, labor, capital, and technology, and its value is receiving growing attention; the market-oriented development of data elements has accelerated markedly in recent years. The key to realizing the value of data lies in its circulation and reuse across different entities and different scenarios. Yet data differs fundamentally from traditional production factors: its value cuts both ways, and the greater the business value, the higher the risk cost. Building a trusted environment for data circulation is the underlying support for fully releasing the value of data elements.



Against this backdrop, the value of privacy computing technology has become increasingly prominent, drawing attention from both academia and industry. Since the concept was first proposed, privacy computing has spent nearly 40 years moving from cutting-edge theory toward industrial application. Whether it can become the "cornerstone technology" of the data element circulation market, however, still depends on clearing a series of obstacles.
Circulating data in encrypted (cipher-state) form will become the trend, and traditional privacy computing technology can no longer meet the requirements of this new situation. On the one hand, traditional privacy computing focuses mainly on computational security in multi-party collaboration scenarios; it lacks a holistic security perspective and cannot address the additional risks introduced by new scenarios and roles in large-scale data circulation (such as operator risk and processing risk). On the other hand, data at different security levels calls for correspondingly graded security solutions so that the cost of deploying privacy computing is minimized. Promoting industry standardization is therefore especially important.
At the 2024 World Artificial Intelligence Conference, industry, academia, and research institutions presented new explorations and practices. On July 5, a number of domestic industry-university-research institutions jointly released two white papers, focusing on "General Security Classification of Privacy Computing Products" and the "Personal Information Anonymization System", offering the latest technical thinking and industry practice for the challenges currently facing the data element circulation industry.
What kind of privacy computing technical standard system do we need?
Privacy computing is a comprehensive interdisciplinary technology that integrates cryptography, artificial intelligence, computer hardware, and other fields. It has so far developed into several technical routes, including secure multi-party computation, federated learning, and trusted execution environments.
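To make the "usable but not visible" idea behind routes such as secure multi-party computation concrete, here is a minimal Python sketch of additive secret sharing, one of the basic building blocks of such protocols. It is an illustration only and is not drawn from the white papers discussed in this article; the field modulus, party count, and example values are arbitrary assumptions, and real deployments rely on hardened protocol implementations.

```python
# Minimal sketch of additive secret sharing over a prime field -- a basic
# building block of secure multi-party computation. Illustrative only; the
# modulus, party count, and inputs below are arbitrary assumptions.
import secrets

PRIME = 2**61 - 1  # modulus of the finite field used for sharing


def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recover a secret (or a sum of secrets) from all of its shares."""
    return sum(shares) % PRIME


if __name__ == "__main__":
    # Two organizations each split a private number into shares, so neither
    # side ever sees the other's raw input.
    a_shares = share(1200, 2)
    b_shares = share(3400, 2)
    # Each party locally adds the shares it holds ...
    local_sums = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
    # ... and only the combined result is revealed.
    print(reconstruct(local_sums))  # 4600
```

The point is that each party computes on shares that look random in isolation, which is the intuition behind "computable but not identifiable".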
Promoting the trusted circulation of data elements requires the cooperation of technology research and development and standard setting. In various directions of privacy computing technology, there is still a lot of standard-setting work to be completed. Industry, academia and research circles generally believe that the three directions of "cross-domain data control", "controlled anonymization" and "universal security classification" deserve attention.
The purpose of cross-domain data control is to ensure that data is not accessed or tampered with by unauthorized entities during sharing and circulation, while complying with relevant laws, regulations, and privacy policies. As a new production factor, data realizes its value through circulation and reuse across different entities and scenarios. This process, however, often carries the risks of unclear responsible entities, inconsistent interests, uneven capabilities, and responsibility chains that are difficult to trace.
Controlled anonymization is typically used to ensure that personal information is not disclosed when data is used and analyzed, while preserving the data's usefulness and accuracy. China's Cybersecurity Law and Personal Information Protection Law include "personal information anonymization clauses" that exclude anonymized personal data from personal information protection, but their legal meaning and implementation standards have yet to be clarified. In practice, the ambiguity of these anonymization clauses has become one of the biggest bottlenecks in data transactions and in building the data element market.
In addition, universal security classification in privacy computing helps determine the most suitable protection measures for each product, so that security resources can be allocated rationally and sensitive data is properly protected. Although security classification standards exist for some individual technical routes, the standards for different routes are entirely inconsistent: users cannot compare products horizontally, and the existing standards do not apply to emerging technical routes.
With the in-depth cooperation between industry, academia and research circles, we have already seen some progress.
Many domestic industry, academia and research institutions reached a consensus at this conference
Regarding the issue of "cross-domain data management and control", we can find the answer in a white paper released at the end of 2023.
At the end of 2023, the Data Law Research Center of East China University of Political Science and Law and Ant Group took the lead in releasing the "White Paper on Cross-Domain Data Management and Control", which for the first time systematically set out practical guidelines and strategies for cross-domain data control and proposed using technical means such as cipher-state (encrypted) computing to effectively manage the risks of data circulation and utilization.
The white paper forms a trinity of cross-domain data control solutions spanning technology, law, and management: beforehand, data governance mechanisms such as desensitization and encryption; during circulation, process control mechanisms such as defining the scope of use by scenario and security level; and afterwards, audit and supervision mechanisms.
The white paper also proposes five control requirements matched to the risks of data circulation: data sources that can be verified, data that is usable but not visible, data that is computable but not identifiable, data usage that can be scoped, and data circulation that can be traced. It offers a feasible path for clarifying the responsibilities of each party in data circulation and helps build a reference architecture for trusted data circulation in China.
Regarding the two propositions of "controlled anonymization" and "universal security classification", we also saw the latest consensus between academia and industry at the recent WAIC conference:
At the 2024 World Artificial Intelligence Conference, a number of domestic industry-university-research institutions jointly released two white papers: "General Security Classification of Privacy Computing Products" and "Personal Information Anonymization System: Technology and Law".
Security classification in privacy computing has always been difficult. Industrial practice shows that privacy computing products built on different technical routes, in different product forms, and for different application scenarios face very different risks of private data leakage and very different security requirements. Without a unified security classification standard, it is hard for both product developers and users to evaluate and weigh the balance between security and performance.
Luo Feng, technical director of the Shenzhen National Fintech Evaluation Center, has noted that the financial industry's application of privacy computing is relatively advanced, yet large-scale deployment still faces technical and business challenges. Privacy computing follows diverse technical routes, and different application scenarios require different balances between security and performance. Under existing evaluations and standards, it is difficult to compare the overall security and performance of products before they are classified. "Technology islands" objectively exist: when technologies cannot interconnect and interoperate, different financial institutions may diverge in product selection. In addition, expected benefits are hard to estimate and investment costs are high, so many small and medium-sized financial institutions are reluctant to adopt privacy computing.
A universal security classification scheme that applies to more technical routes and offers practical guidance is indispensable for the large-scale implementation of privacy computing.
In response, 16 domestic institutions, including Ant Group, the Big Data Technology Standards Promotion Committee of the China Communications Standards Association, the Shenzhen National Financial Technology Evaluation Center, and Tsinghua University, jointly wrote the white paper "General Security Classification of Privacy Computing Products". Notably, the writing steering group includes Wang Xiaoyun, academician of the Chinese Academy of Sciences and Fellow of the International Association for Cryptologic Research, as well as other authoritative scholars such as Ren Kui, dean of the College of Computer Science and Technology at Zhejiang University and deputy director of the National Key Laboratory of Blockchain and Data Security.


In this white paper, the industry, academic, and research contributors address the difficulties of privacy computing security classification one by one and offer design ideas for a universal scheme. For example, differences between technical routes can be abstracted away by classifying according to attack-and-defense effects; a grade of "resistant to known attacks" can be added between "provably secure" and "insecure"; additional dimensions such as software trustworthiness can be introduced to quantify "implementation security"; and the correspondence between each type of technical feature and its security grade can be clarified.
Shi Xinlei, an algorithm engineer on the Bank of China's privacy computing team, has said that, because participants' data differ, different business scenarios carry different security requirements. Grading makes it possible to provide an appropriate level of security for each business, balance performance against security, and allocate computing resources rationally to control costs. Security classification also allows risk levels to be identified quickly, so that regulatory controls proportional to each level can be applied to reduce security risk. Reasonable security assessment standards and a rating system for privacy computing products help users understand and evaluate product security, build a trust mechanism for data circulation, and promote industry standards.
How to develop the value of data while protecting personal privacy is another thorny challenge for the industry. Personal data has the highest utilization value, the most diverse use scenarios, and the most mature processing measures. Realizing its value on the basis of privacy protection, and promoting trusted and secure data sharing, openness, and trading across different industries and institutions, is a direction that industry, academia, and research are exploring together.
Anonymization technology is an important and effective means of protecting personal data privacy. In the planning and construction of China's data infrastructure, the processing technologies and institutional norms related to personal information anonymization have also been given an important position. From the perspective of industrial implementation, the key to solving this problem collaboratively lies in building and extending an infrastructure that integrates law and technology.
To this end, guided by the "personal information anonymization clauses" in the Cybersecurity Law and the Personal Information Protection Law, the University of International Business and Economics, the Big Data Technology Standards Promotion Committee, and Ant Group jointly wrote the white paper "Personal Information Anonymization System: Technology and Law (2024)".


Personal information anonymization system

1. Dilemmas and challenges

  • Enterprises worry that anonymization measures will either fail to meet legal requirements and be deemed invalid, or destroy the data's use value.
  • Regulators are worried that anonymization will become a tool to circumvent regulation.
  • Users worry that anonymity is a false promise.

2. Data Infrastructure Path

  • Shift to the composite "data infrastructure" path.
  • Data infrastructure is the infrastructure of the data element market.
  • Anonymization clauses are expanded into an infrastructure that integrates law and technology.

3. "presumed anonymity beforehand" and "anonymity determined afterward"

  • Prior "presumed anonymity" is accomplished through anonymization technology solutions.
  • After-the-fact "judgment anonymity" is accomplished by explaining the law and perfecting responsibilities.

4. Controlled anonymization measures

  • Pseudonymization is used only within controlled spaces (a minimal sketch follows this list).
  • Attribute information is used only within controlled spaces and is not linked with open-space data.
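As referenced above, the following is a minimal Python sketch of keyed pseudonymization, one common way to realize the controlled-space idea. It is an illustration under stated assumptions, not the scheme defined in the white paper: the key handling, field names, and example record are hypothetical.

```python
# Minimal sketch of keyed pseudonymization for a controlled space. Direct
# identifiers are replaced with HMAC-derived pseudonyms; the key never leaves
# the controlled space, so data released outside it cannot be re-linked to
# individuals without that key. Illustrative only -- key handling, field
# names, and the example record are hypothetical.
import hashlib
import hmac
import secrets

# In practice the key would be generated and held inside the controlled
# space, e.g. in a KMS or HSM.
PSEUDONYM_KEY = secrets.token_bytes(32)


def pseudonymize(identifier: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Map a direct identifier (e.g. a phone number) to a stable pseudonym."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


if __name__ == "__main__":
    record = {"phone": "13800000000", "city": "Hangzhou", "spend": 268}
    released = {
        "user_id": pseudonymize(record["phone"]),  # stable join key, no raw PII
        "city": record["city"],                    # attribute data stays coarse
        "spend": record["spend"],
    }
    print(released)
```

Because the pseudonym is deterministic under the same key, records can still be joined inside the controlled space, while outside it the pseudonyms carry no re-identification value on their own.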

From technical standards to large-scale implementation

1. Build standards

  • Reduce the difficulty and enterprise costs of large-scale implementation of new technologies.

2. Build a system of technical requirement standards and technical methods

  • The circulation of data elements urgently requires the construction of a new technical standard system.

3. Social cooperation

  • Society-wide cooperation is needed to jointly build the new technical standard system.
