01 The background and current situation of privacy computing

1. The background of privacy computing

Privacy computing has become a necessity. On the one hand, individual users' demands for personal privacy and information security have grown stronger. On the other hand, a large number of privacy and security laws and regulations have been issued, such as the European Union's GDPR, the United States' CCPA, and China's Personal Information Protection Law. Regulations and policies have gradually shifted from loose to strict, mainly in terms of the rights they grant, their scope of application, and the strength of enforcement. Taking GDPR as an example, since it came into effect in 2018 more than 1,000 cases have emerged, with total fines exceeding 11 billion and the largest single fine exceeding 5 billion (Amazon).

2. The current situation of privacy computing

In this context, data security has changed from optional to mandatory. This has led a large number of enterprises, investors, start-ups and practitioners to invest in the security and privacy technology ecosystem, and academia has conducted many forward-looking explorations in response to the needs of industry. These factors have driven the vigorous development of security and privacy technologies and their ecosystems in recent years, with differential privacy, trusted execution environments, homomorphic encryption, secure multi-party computation, and federated learning all making great progress. Gartner is also optimistic about this field, believing it will become a market worth tens or even hundreds of billions in the future.

02 Big Data AI Privacy Computing

1. Big Data AI Background

Turning to the background of big data AI: from a macro, industry-wide perspective, big data frameworks and technologies have been commercialized and popularized on a large scale. We may be using big data technology all the time without noticing that our programs and model training are running on clusters of thousands or even tens of thousands of nodes and over large-scale data. In recent years there have been two new trends in this field: improved ease of use and more specialized application directions. The former has greatly lowered the threshold for using big data technology, while the latter continues to provide new solutions, such as data lakes, to emerging needs and problems.

From the perspective of integration with AI frameworks, big data and the AI ecosystem are now closely intertwined. For AI models, the larger the amount of data and the higher its quality, the better the training results, so the two fields naturally come together.

Even so, integrating big data frameworks with AI frameworks is not easy. Application development, data acquisition, cleaning, analysis, and deployment involve many big data and AI frameworks. Ensuring data security and privacy in the key stages then touches many links and frameworks, including different security, encryption, and key-management technologies, which greatly increases the cost of transformation and migration.

2. Big Data AI Privacy Computing

Two years ago, while communicating with customers working on big data and AI applications in the industry, we collected a number of user pain points. Besides general performance issues, the first concern of most customers is compatibility. For example, some customers already have clusters with thousands or even tens of thousands of nodes; if they need to harden certain modules or stages and apply privacy computing technology to provide privacy protection, they may have to change existing applications or even introduce entirely new frameworks or infrastructure. These impacts are the primary issues customers need to consider. Secondly, customers consider the effect of data scale on the security technology and hope that any newly introduced frameworks and technologies can handle large-scale data with high computing efficiency. Finally, customers consider whether federated learning can solve the problem of data silos.

Based on the customer needs gathered in this survey, we launched the BigDL PPML solution. Its primary goal is to enable conventional, standard big data and AI workloads to run in a secure environment so as to ensure end-to-end security. To this end, the computation is protected by SGX (a hardware-level TEE), storage and network traffic are encrypted, and the entire pipeline is remotely attested to guarantee the confidentiality and integrity of the computation.

Next we use Apache Spark, a commonly used big data framework, as an example to explain why this solution is necessary. Apache Spark is a widely used distributed computing framework in the big data AI field, and it already has many security-related features: the network can be encrypted and authenticated, with communication and RPC protected by TLS and AES, and local shuffle storage is likewise protected by AES. However, there is a major gap on the computation side, because even the latest version of Spark can only compute on plaintext. If the computing environment or a node is compromised, a large amount of sensitive data can be obtained.
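
As a concrete illustration of those built-in protections, the following sketch (PySpark, using standard Spark configuration properties; the secret value is a placeholder) enables RPC authentication, AES-based network encryption, and encrypted local shuffle storage. Note that the computation itself still operates on plaintext in executor memory, which is exactly the gap described above.

```python
from pyspark.sql import SparkSession

# Sketch of Spark's built-in (non-SGX) protections mentioned above.
# These are standard Spark properties; exact behavior depends on the
# Spark version and cluster manager. The secret below is a placeholder.
spark = (
    SparkSession.builder
    .appName("spark-builtin-security-demo")
    # RPC authentication between driver and executors
    .config("spark.authenticate", "true")
    .config("spark.authenticate.secret", "replace-with-a-real-secret")
    # AES-based encryption of network traffic (RPC / block transfers)
    .config("spark.network.crypto.enabled", "true")
    # AES encryption of local shuffle and spill files
    .config("spark.io.encryption.enabled", "true")
    # TLS for the UI/RPC can additionally be enabled via spark.ssl.* (needs a keystore)
    .getOrCreate()
)

# The job below still computes on plaintext inside executor memory;
# protecting that stage is what SGX/TEE adds on top.
spark.range(10).selectExpr("sum(id)").show()
spark.stop()
```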

SGX is a trusted execution environment technology that combines software and hardware, with the Intel CPU as the underlying facility. Its main characteristics are:

  • A hardware-level trusted execution environment
  • A relatively small attack surface: even if part of the system has been compromised, as long as the CPU is secure, the security of the entire program can be ensured
  • Little performance impact
  • Sufficiently large enclaves (up to 1 TB)

Back to the Apache Spark application scenario mentioned earlier:

The left side shows the situation where the computing environment is not protected: even with encrypted storage, data is still at risk of leaking if an attack occurs during the plaintext computation stage. The right side shows some attempts by the Spark community: by extracting the key steps of SparkSQL and rewriting that logic with the SGX SDK, performance can be kept high while the attack surface stays small. However, the shortcomings of this method are also obvious: the development cost is very high, rebuilding the core logic of SparkSQL requires a deep understanding of Spark, and the resulting code cannot be reused in other projects.

In order to address the shortcomings mentioned above, we adopted a LibOS solution. In short, LibOS acts as a middle layer that reduces the difficulty of development and migration by converting system API calls into a form the SGX SDK can recognize, thereby enabling seamless migration of conventional applications. Common LibOS solutions include Ant Group's Occlum, Intel's Gramine, and Imperial College's sgx-lkl. Each of these has its own features and advantages, and they address SGX's usability and portability problems in different ways.

With LibOS, there is no need to rewrite Spark. Instead, the entire Spark runtime can be put into SGX through LibOS, without modifying Spark or existing applications.

In Spark's distributed computing, each module can be protected by LibOS and SGX. The storage side can be configured with key management and encrypted storage; the executor obtains the ciphertext data and decrypts and computes on it inside SGX. The whole process is largely transparent to developers and has little impact on existing applications.
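
To make this pattern concrete, below is a minimal, purely illustrative sketch of what conceptually happens inside an executor: ciphertext is read from storage, the data key is obtained (in the real solution only after attestation succeeds), and decryption takes place only inside the enclave. The helper names, environment variable, and on-disk layout are assumptions for illustration, not the actual BigDL PPML API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: BigDL PPML wires this up automatically. The helpers,
# the PPML_DATA_KEY_HEX variable, and the nonce-prefix layout are hypothetical.

def fetch_data_key() -> bytes:
    """Placeholder for fetching the AES data key from a key-management
    service, which would only release it to an attested enclave."""
    return bytes.fromhex(os.environ["PPML_DATA_KEY_HEX"])  # 32 bytes -> AES-256

def read_ciphertext(path: str) -> bytes:
    """Placeholder for reading one encrypted input split from shared storage."""
    with open(path, "rb") as f:
        return f.read()

def decrypt_inside_enclave(path: str) -> bytes:
    """Runs inside SGX (via LibOS): the plaintext never leaves the enclave."""
    blob = read_ciphertext(path)
    nonce, ciphertext = blob[:12], blob[12:]  # assume a 96-bit nonce is prepended
    return AESGCM(fetch_data_key()).decrypt(nonce, ciphertext, None)
```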

However, compared with stand-alone applications, the security issues in distributed applications are more complex. Attackers may compromise some worker nodes, or collude with the resource-management node to replace the SGX environment with a malicious one. In this way, keys and encrypted data can be obtained illegally, and private data is ultimately leaked.

In order to solve this problem, remote attestation technology needs to be applied. Simply put, an application running in SGX can produce evidence (a quote or certificate) that cannot be tampered with. This evidence can be used to verify whether the application is running in SGX, whether the application has been tampered with, and whether the platform meets security standards.

There are two ways to implement remote attestation for distributed applications. On the left is a relatively complete but more intrusive solution: to perform remote attestation on both the driver and executor sides, Spark needs to be modified to a certain extent. The other solution is to implement centralized remote attestation through a third-party attestation server and use tamper-proof evidence to block modules controlled by attackers from obtaining data. The second option does not require modifying the application; only a small part of the startup script needs to change.
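
As a rough sketch of the second approach, the snippet below shows a worker submitting its SGX quote to a central attestation service and receiving the data key only if the enclave measurement matches the expected build. The endpoint, JSON fields, header name, and measurement value are hypothetical placeholders, not a real attestation-service API.

```python
import base64
import requests

# Hypothetical centralized attestation flow: all names below are placeholders.
ATTESTATION_URL = "https://attestation.internal.example/verify"
EXPECTED_MRENCLAVE = "..."  # known-good measurement of the application enclave

def request_key_after_attestation(quote: bytes, kms_url: str) -> bytes:
    # 1. Send the SGX quote to the attestation server for verification.
    resp = requests.post(
        ATTESTATION_URL,
        json={"quote": base64.b64encode(quote).decode()},
        timeout=10,
    )
    resp.raise_for_status()
    report = resp.json()

    # 2. Reject workers whose enclave measurement does not match the expected
    #    build; this blocks nodes an attacker replaced with a fake environment.
    if report.get("mr_enclave") != EXPECTED_MRENCLAVE:
        raise RuntimeError("attestation failed: enclave measurement mismatch")

    # 3. Only attested workers may fetch the data key from key management.
    key_resp = requests.get(
        kms_url, headers={"X-Attestation-Token": report["token"]}, timeout=10
    )
    key_resp.raise_for_status()
    return base64.b64decode(key_resp.json()["key"])
```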

Although LibOS allows Spark to run in SGX, adapting Spark to LibOS and SGX still incurs a certain amount of labor and time.

To this end, we launched the one-stop PPML solution, in which many steps are automated and seamless migration is achieved, greatly reducing migration costs.

From a workflow perspective, this solution has another advantage: data scientists do not perceive the underlying changes. Only cluster administrators need to take part in SGX deployment and preparation; data scientists can carry out modeling and query work as usual, unaware that the underlying environment has changed. This solves the compatibility and migration problems of existing applications without hindering the daily work of data scientists and developers.

The following is an overview of the entire PPML solution. To meet customers' different needs, the functions supported by PPML have been continuously expanded over the past two years. For example, in the middle Library and Framework layer, commonly used computing frameworks such as Spark, Flink, and Ray are all supported. At the same time, PPML supports machine learning, deep learning, and federated learning, and comes with support for encrypted storage and homomorphic encryption, ensuring end-to-end, full-link security.

03 Application Practice

The following are some customers' application cases. The best known is last year's Tianchi competition. In one of its tracks, participants wanted the training and model-inference process to be fully protected by SGX. Using the Flink support provided by PPML, combined with Ant Group's LibOS project Occlum, training and inference could be protected transparently at the application level. In the end, more than 4,000 teams took part in the competition and hundreds of servers were used, demonstrating that PPML can support large-scale commercial use; overall, operators did not perceive significant changes.

In September and October of the same year, Korea Telecom wanted to build an end-to-end secure, real-time model-inference environment based on BigDL and Flink, with more stringent performance requirements. After the Tianchi experience, BigDL's real-time inference solution based on Flink and SGX had become more mature: the end-to-end performance loss is less than 5%, and throughput also met Korea Telecom's basic needs.

We also conducted Spark performance testing. In short, even with test data reaching hundreds of GB, the PPML solution ran Spark without scalability or performance problems. Based on customer needs, we deliberately chose TPC-DS, an IO-intensive workload that is unfriendly to SGX. TPC-DS is a commonly used SQL benchmark with relatively high IO and compute requirements; with large data volumes it generates large-scale disk, memory, and network IO. Because SGX is a hardware-level TEE, data entering and leaving the enclave must be encrypted and decrypted, so reads and writes cost more than without SGX. Over a complete TPC-DS run, the end-to-end overhead stayed within 2x (about 1.8x), meeting customer expectations and showing that even in this worst case the end-to-end loss can be kept within an acceptable range.

After achieving seamless migration of big data applications, we also tried federated learning with some customers. Because SGX provides a secure environment, it can address the most critical security issues in federated learning: protecting the server and the local data. A major difference between the federated learning solution provided by BigDL and typical solutions is that it is essentially federated learning for large-scale data: the workload and data size of each worker are relatively large, and each worker is effectively a small cluster. We have verified the feasibility and effectiveness of this solution with several customers.
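
For intuition, the sketch below shows plain federated averaging with two workers on synthetic data, which is the general pattern behind such a solution; in the setup described here, each "worker" would itself be a small cluster, and the aggregation step would run inside SGX. This is an illustrative example, not the BigDL federated learning API.

```python
import numpy as np

# Minimal federated-averaging sketch: workers train locally on their own data
# and only model weights are shared with the aggregator.

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One worker's local update: a few epochs of linear-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Aggregator: average worker updates weighted by local data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Two workers holding disjoint partitions of a simple regression problem.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
workers = []
for n in (1000, 4000):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    workers.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # a few federated rounds
    updates = [local_train(global_w, X, y) for X, y in workers]
    global_w = federated_average(updates, [len(y) for _, y in workers])

print(global_w)  # converges toward [2.0, -1.0] without sharing any raw data
```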

04 Summary and Outlook

As mentioned above, in more than two years of communication and cooperation with customers, we identified several pain points related to privacy computing and big data AI. These pain points can be addressed with security technologies such as SGX: LibOS solves compatibility issues, SGX provides a secure environment with acceptable performance, Spark and Flink support addresses big data and migration issues, and federated learning addresses the data-silo problem. BigDL PPML is a one-stop privacy computing solution integrating all of the above.

The SGX and TEE ecosystem is developing rapidly. In the foreseeable future, TEEs will improve greatly in ease of use, security, and performance. For example, Intel's next-generation TDX can provide support directly at the OS level, which can fundamentally solve application-compatibility issues; the open-source community is also improving support for confidential containers to secure containers and greatly reduce the cost of application migration. On the security side, work such as microkernels will further strengthen the TEE ecosystem. On the scalability side, Intel and the community are promoting support for accelerators and IO devices, bringing them into the trusted domain to reduce the performance overhead of data movement.
