
A ten-thousand-word interpretation of the first "Face Restoration" review! Jointly released by Nanjing University, Sun Yat-sen University, Australian National University, Imperial College London and others


In recent years, with the development of deep learning and the emergence of large-scale datasets, remarkable progress has been made in many vision tasks. The face restoration task, however, has long lacked a systematic review.

Recently, researchers from Nanjing University, Australian National University, Sun Yat-sen University, Imperial College London and Tencent comprehensively reviewed and summarized the research progress of deep learning-based face restoration: they classified face restoration methods, discussed network architectures, loss functions and benchmark datasets, and conducted a systematic performance evaluation of existing SOTA methods.


Paper link: https://arxiv.org/abs/2211.02831

Repository link: https://github.com/TaoWangzj/Awesome-Face-Restoration

This article is also the first review in the field of face restoration. Its main contributions are:

1. Reviewed the main degradation models and commonly used evaluation metrics in face restoration tasks, and summarized the distinctive characteristics of face images;

2. Summarized the current challenges in face restoration and classified and reviewed existing approaches, which mainly fall into two categories: prior-based deep learning restoration methods and non-prior-based deep learning restoration methods;

3. Sorted out the basic network architectures, network modules, loss functions and standard datasets used by these methods;

4. Conducted a systematic experimental evaluation of existing SOTA methods on public benchmark datasets;

5. Analyzed the future development directions of the face restoration task.

(Figure: the overall structure of the article)

Research background

Face Restoration (FR) is a specific image restoration problem in low-level vision, which aims to restore high-quality face images from low-quality input face images. Generally, the degradation model can be described as:

I_lq = D(I_hq) + n

where I_lq is the low-quality face image, I_hq is the underlying high-quality face image, D is a degradation function that is independent of the noise, and n is additive Gaussian noise. Different degradation functions D correspond to different degradation models. The FR task can therefore be regarded as the inverse process of the above degradation model, which can be expressed as:

Î_hq = F(I_lq; θ)

where F denotes the restoration model with parameters θ. Depending on the degradation function, face restoration tasks can be divided into the following five categories, each corresponding to a different degradation model (a minimal sketch of such a synthetic degradation pipeline follows the list):

1. Face Denoising (FDN): remove noise from face images and restore high-quality faces;


2. Face Deblurring (FDB): remove blur from face images and restore high-quality faces;


3. Face Super-Resolution (FSR): recover high-resolution, high-quality faces from low-resolution, low-quality faces;

4. Face Artifact Removal (FAR): remove artifacts introduced during face image compression and restore high-quality faces;


5. Blind Face Restoration (BFR): restore low-quality faces with unknown degradations into high-quality faces;

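For illustration, below is a minimal sketch (assuming NumPy and OpenCV are available) of the kind of synthetic degradation pipeline commonly used to build training pairs: Gaussian blur (FDB), downsampling (FSR), additive Gaussian noise (FDN) and JPEG compression (FAR); combining them loosely corresponds to the blind setting (BFR). The scale factor, blur sigma, noise level and JPEG quality are illustrative choices, not values from the survey.

import cv2
import numpy as np

def degrade(hq, scale=4, blur_sigma=2.0, noise_sigma=10.0, jpeg_q=60):
    """Synthesize a low-quality face from a high-quality uint8 image (illustrative pipeline)."""
    # 1) blur with a Gaussian kernel (kernel size derived from sigma)
    lq = cv2.GaussianBlur(hq, (0, 0), blur_sigma)
    # 2) downsample by the scale factor
    h, w = lq.shape[:2]
    lq = cv2.resize(lq, (w // scale, h // scale), interpolation=cv2.INTER_LINEAR)
    # 3) add Gaussian noise n
    lq = lq.astype(np.float32) + np.random.normal(0, noise_sigma, lq.shape)
    lq = np.clip(lq, 0, 255).astype(np.uint8)
    # 4) introduce JPEG compression artifacts
    _, buf = cv2.imencode(".jpg", lq, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_q])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)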

Facial characteristics

Different from general natural image restoration tasks, face images carry strong structural information, so the face restoration task can exploit face priors to assist the restoration process. This prior information can be mainly divided into the following three parts:

Person attribute information: such as gender, age, and whether glasses are worn, as shown in the figure below;

(Figure: examples of face attribute information)

Person identity information;

Other prior information: as shown in the figure below, representative priors include face landmarks, face heat maps, face parsing maps and 3D face priors;

(Figure: representative face priors, including landmarks, heat maps, parsing maps and 3D priors)
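As a concrete illustration of how such geometric priors are typically fed to a network, here is a minimal NumPy sketch that renders one Gaussian heatmap per landmark coordinate; the landmark positions and sigma below are hypothetical placeholders.

import numpy as np

def landmarks_to_heatmaps(landmarks, height, width, sigma=3.0):
    """Render one Gaussian heatmap per (x, y) landmark; output shape (N, H, W)."""
    ys, xs = np.mgrid[0:height, 0:width]
    heatmaps = np.zeros((len(landmarks), height, width), dtype=np.float32)
    for i, (x, y) in enumerate(landmarks):
        heatmaps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heatmaps

# Hypothetical 5-point landmarks for a 128x128 face crop
pts = [(44, 56), (84, 56), (64, 78), (50, 98), (78, 98)]
maps = landmarks_to_heatmaps(pts, 128, 128)  # (5, 128, 128), typically concatenated with the image as extra input channels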

Main challenges facing face restoration

1. Face restoration itself is an ill-posed problem.

Because the degradation type and degradation parameters of low-quality face images are unknown in advance, estimating high-quality face images from degraded images is an ill-posed problem.

On the other hand, in actual scenes, the degradation of face images is complex and diverse. Therefore, how to design an effective and robust face restoration model to solve this ill-posed problem is challenging.

2. Exploring unknown face priors is difficult.

Existing face restoration algorithms struggle to fully utilize face prior knowledge, because face priors (such as facial components and facial landmarks) are usually estimated from the low-quality face images themselves. Low-quality inputs can make the prior estimation inaccurate, which directly affects the performance of the face restoration algorithm.

On the other hand, face images captured in real scenes often contain complex and diverse degradation types, and it is very difficult to find a suitable face prior to assist the face restoration process. Therefore, how to mine reasonable face priors is challenging.

3. Lack of large public benchmark datasets.

With the development of deep learning technology, deep learning-based methods have shown impressive performance in face restoration. Most deep learning-based face restoration methods strongly rely on large-scale datasets to train the network.

However, most current face restoration methods are usually trained or tested on non-public data sets. Therefore, it is currently difficult to make a direct and fair comparison of existing face restoration methods.

Furthermore, the lack of high-quality, large-scale benchmarks limits the potential of models. At the same time, it is still difficult to obtain large-scale face data, so building a reasonable public benchmark dataset for face restoration tasks remains challenging.

4. Face restoration algorithms have limited generalization ability in real-world scenarios.

Although deep learning-based methods have achieved good performance in face restoration, most methods rely on supervised strategies for training.

That is, these methods require paired datasets (low-quality and high-quality image pairs); if this condition is not met, their performance drops sharply.

On the other hand, it is difficult to collect large-scale paired datasets in real-life scenarios, so algorithms trained on synthetic datasets generalize poorly to real scenes, which limits their applicability. How to improve the generalization ability of face restoration algorithms in real-world scenarios is therefore challenging.

Summary and classification of face restoration methods

So far, researchers have proposed many face restoration algorithms to address the above challenges. The figure below shows a concise timeline of milestones in deep learning-based face restoration.

As shown in the figure, the number of face restoration methods based on deep learning has increased year by year since 2015.

These face restoration methods are divided into two categories: prior-based deep learning restoration methods and non-prior-based deep learning restoration methods.

The prior-based deep learning restoration methods are further divided into three categories: methods based on geometric priors, methods based on reference priors, and methods based on generative priors.

The following is a brief introduction to representative face restoration algorithms.

Geometric Prior Based Deep Restoration Methods

These methods mainly use the unique geometric shape and spatial distribution information of faces to help the model gradually recover high-quality faces. Typical geometric priors include face landmarks, face heat maps, face parsing maps and facial components. Representative works include:

SuperFAN: the first end-to-end method to simultaneously perform face super-resolution and face landmark localization.

The core idea is to use a joint training strategy to guide the network to learn more facial geometric information, helping the model achieve efficient face super-resolution and landmark localization.

MTUN: a face restoration method with two branch networks. The first branch performs super-resolution of the face image, and the second branch estimates heatmaps of facial components.

This method shows that exploiting facial component information in low-quality face images can further improve restoration performance.

PSFR-GAN: a blind face restoration method based on a multi-scale progressive network. Its core idea is to take multi-scale low-quality face images and face parsing maps as input and progressively restore facial details through semantic-aware style transfer.
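To make the joint-training idea behind methods like SuperFAN and MTUN concrete, here is a minimal PyTorch sketch that optimizes a shared backbone with a reconstruction loss plus a landmark-heatmap loss. All module names, tensor sizes and loss weights are illustrative, not taken from any of these papers.

import torch
import torch.nn as nn

class JointFSRNet(nn.Module):
    """Toy shared backbone with a super-resolution head and a landmark-heatmap head."""
    def __init__(self, n_landmarks=68):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.sr_head = nn.Conv2d(64, 3, 3, padding=1)
        self.heatmap_head = nn.Conv2d(64, n_landmarks, 3, padding=1)

    def forward(self, lq_up):
        feat = self.backbone(lq_up)
        return self.sr_head(feat), self.heatmap_head(feat)

model = JointFSRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1, mse = nn.L1Loss(), nn.MSELoss()

lq_up = torch.rand(2, 3, 128, 128)          # upsampled low-quality input (dummy data)
hq = torch.rand(2, 3, 128, 128)             # ground-truth high-quality face (dummy data)
gt_heatmaps = torch.rand(2, 68, 128, 128)   # ground-truth landmark heatmaps (dummy data)

sr, heatmaps = model(lq_up)
loss = l1(sr, hq) + 0.1 * mse(heatmaps, gt_heatmaps)  # joint loss; 0.1 is an illustrative weight
opt.zero_grad(); loss.backward(); opt.step()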

Reference Prior Based Deep Restoration Methods

Earlier face restoration methods estimated face priors only from the degraded image itself; however, the degradation process is usually highly ill-posed, so these methods cannot obtain accurate face priors from degraded images alone.

Therefore, another category of methods uses the facial structure or facial component dictionaries obtained from additional high-quality face images as reference priors to guide the model toward efficient face restoration. Representative works include:

GFRNet: this network consists of a warping network (WarpNet) and a reconstruction network (RecNet). WarpNet provides warping guidance by generating a flow field that warps the reference image to correct differences in facial pose and expression. RecNet takes the low-quality image and the warped guidance as input to generate a high-quality face image.

GWAInet: this work builds on GFRNet and is trained in an adversarial manner to generate high-quality face images. Compared with GFRNet, GWAInet does not rely on facial landmarks during training; the model pays more attention to the whole face region, which improves its robustness.

DFDNet: this method first uses the K-means algorithm to build deep dictionaries for perceptually significant facial components (i.e., left/right eyes, nose, and mouth) from high-quality images; it then selects the most similar component features from the generated dictionaries and transfers their details to the low-quality face image to guide restoration.
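A rough sketch of the dictionary idea (not DFDNet's actual code), assuming component feature vectors have already been extracted by some network: cluster high-quality component features with K-means, then look up the nearest dictionary atom for a degraded component feature.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pre-extracted features for one component (e.g. "left_eye"): (num_samples, dim)
hq_eye_feats = np.random.randn(5000, 256).astype(np.float32)

# Build a deep component dictionary using the K-means cluster centers as atoms
dictionary = KMeans(n_clusters=64, n_init=10, random_state=0).fit(hq_eye_feats).cluster_centers_

def lookup(degraded_feat, dictionary):
    """Return the dictionary atom most similar to the degraded component feature."""
    dists = np.linalg.norm(dictionary - degraded_feat, axis=1)
    return dictionary[np.argmin(dists)]

lq_eye_feat = np.random.randn(256).astype(np.float32)
matched = lookup(lq_eye_feat, dictionary)  # its details would then guide restoration of the eye region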

Generative Prior Based Deep Restoration Methods

With the rapid development of generative adversarial networks (GANs), researchers have found that pre-trained face GAN models such as StyleGAN and StyleGAN2 can provide rich face priors (such as geometry and facial texture).

Therefore, researchers began to use such GAN-based generative priors to assist face restoration models. Representative works include:

PULSE: the core of this work is to iteratively optimize the latent code of a pre-trained StyleGAN until the distance between the downscaled output and the low-resolution input falls below a threshold, thereby achieving face super-resolution.
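A minimal PyTorch sketch of this latent-optimization idea follows; the generator here is a tiny stand-in rather than StyleGAN, and the learning rate, step count and stopping threshold are illustrative.

import torch
import torch.nn.functional as F

class DummyGenerator(torch.nn.Module):
    """Stand-in for a pre-trained face generator G: latent (B, 512) -> image (B, 3, 128, 128)."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(512, 3 * 128 * 128)
    def forward(self, z):
        return torch.tanh(self.fc(z)).view(-1, 3, 128, 128)

G = DummyGenerator().eval()
for p in G.parameters():
    p.requires_grad_(False)              # the generator stays frozen; only the latent code is optimized

lr_face = torch.rand(1, 3, 16, 16)       # observed low-resolution face (dummy data)
z = torch.randn(1, 512, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)

for step in range(200):
    sr = G(z)                                                            # candidate high-resolution face
    downscaled = F.interpolate(sr, size=16, mode='bilinear', align_corners=False)
    loss = F.mse_loss(downscaled, lr_face)                               # downscaled output should match the LR input
    opt.zero_grad(); loss.backward(); opt.step()
    if loss.item() < 1e-3:                                               # stop once the distance falls below a threshold
        break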

GFP-GAN: this work uses the rich and diverse priors encapsulated in a pre-trained GAN as a generative prior to guide blind face restoration. The method mainly consists of a degradation removal module and a prior module based on the pre-trained GAN; the two modules exchange information through a latent code connection and several channel-split spatial feature transform (CS-SFT) layers.

GPEN: the core idea is to combine the complementary advantages of GANs and DNNs for efficient face restoration. GPEN first learns a GAN for generating high-quality face images, then embeds this pre-trained GAN into a deep convolutional network as a prior decoder, and finally fine-tunes the whole network to achieve face restoration.

Non-prior Based Deep Restoration Methods

Although most deep learning-based face restoration methods can restore satisfactory faces with the help of face priors, relying on face priors increases the cost of generating face images to some extent.

To address this, another class of methods aims to design an end-to-end network that directly learns the mapping from low-quality to high-quality face images without introducing any additional face priors. Representative works include:

BCCNN: a bi-channel convolutional neural network for face super-resolution. It consists of a feature extractor and an image generator: the feature extractor extracts robust face representations from low-resolution face images, and the image generator adaptively fuses the extracted representations with the input face image to generate a high-resolution image.

HiFaceGAN: this method reformulates face restoration as a semantic-guided generation problem and designs the HiFaceGAN model to solve it. The network is a multi-stage framework composed of several collaborative suppression and replenishment modules. This structural design reduces the model's dependence on degradation priors.

RestoreFormer: an end-to-end Transformer-based face restoration method that explores fully-spatial attention to model contextual information.

The method has two core ideas: first, a multi-head cross-attention layer is proposed to learn fully-spatial interactions between corrupted queries and high-quality key-value pairs; second, the key-value pairs in the attention mechanism are sampled from a high-quality dictionary containing high-quality facial features.
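For contrast with the prior-based families above, here is a minimal sketch of the non-prior recipe: a plain encoder-decoder CNN trained end-to-end with a pixel loss to map low-quality faces directly to high-quality ones. The architecture and hyper-parameters are illustrative only.

import torch
import torch.nn as nn

class PlainRestorer(nn.Module):
    """Tiny encoder-decoder that maps a degraded face directly to a restored face."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x)) + x   # global residual connection

model = PlainRestorer()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
lq, hq = torch.rand(4, 3, 128, 128), torch.rand(4, 3, 128, 128)  # dummy paired data
loss = nn.L1Loss()(model(lq), hq)
opt.zero_grad(); loss.backward(); opt.step()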

The following figure comprehensively summarizes the characteristics of face restoration methods based on deep learning in recent years.

(Table: characteristics of deep learning-based face restoration methods in recent years)

Here, Plain denotes non-prior based deep restoration methods; Facial component and Geometric prior denote the two types of geometric prior based methods; Reference prior denotes reference prior based methods; Generative prior denotes generative prior based methods. Deep CNN, GAN and ViT indicate models built on deep convolutional neural networks, generative adversarial networks and Vision Transformer architectures, respectively.

Technical Development Review

This section reviews the technical development of deep learning-based face restoration methods from the following aspects: the basic architectures of the network models, the basic modules used, the loss functions employed, and the face-related benchmark datasets.

Network architecture

The existing network architectures of deep learning-based face restoration methods fall into three main categories: methods based on prior guidance, methods based on GAN structures, and methods based on ViT structures. We discuss these developments below.

Methods based on prior guidance

This type of method can be divided into four kinds: pre-prior face restoration methods, joint prior estimation and face restoration methods, face restoration methods based on intermediate priors, and face restoration methods based on reference priors.

The paper provides concise structure diagrams of these four types of methods; each is described below.

Pre-prior face restoration methods usually first use a prior estimation network (such as a face prior estimation network or a pre-trained face GAN) to estimate the face prior from the low-quality input image, and then use a restoration network that takes both the face prior and the face image to generate a high-quality face.

A typical method is shown in the figure below: the researchers designed a face parsing network that first extracts face semantic labels from the input blurred face image; the blurred image and the semantic labels are then fed together into a deblurring network to generate a clear face image.

(Figure: a pre-prior face restoration method guided by face parsing maps)

Joint prior estimation and face restoration methods mainly exploit the complementary relationship between the face prior estimation task and the face restoration task. They usually train the restoration network and the prior estimation network jointly, taking advantage of both subtasks and thereby directly improving restoration performance.

A typical method is shown in the figure below: the researchers proposed a network that combines face alignment and face super-resolution, jointly estimating facial landmark positions and super-resolving the face image.

(Figure: a joint face alignment and face super-resolution network)

The core idea of face restoration methods based on intermediate priors is to first use a restoration network to generate a coarse face image and then estimate the face prior from that coarse result, yielding more accurate prior information than estimating it directly from the low-quality input.

The typical method is shown in the figure below. The researchers proposed the FSRNet network model, which performs face prior estimation in the middle of the network.

Specifically, FSRNet first uses a coarse SR network to roughly restore the image; a fine SR encoder and a prior estimation network then extract refined features and estimate priors from the coarse result; finally, the refined features and the prior information are fed together into a fine SR decoder to produce the final result.

(Figure: the FSRNet architecture with intermediate prior estimation)

Methods based on GAN network structures

This type of method is mainly divided into two kinds: methods based on a plain GAN architecture (plain GAN methods) and methods based on a pre-trained GAN embedding structure (pre-trained GAN embedding methods).

The concise structure diagram of these two methods is as follows:

(Figure: structure diagrams of plain GAN and pre-trained GAN embedding methods)


Methods based on a plain GAN architecture usually introduce an adversarial loss into the network and use an adversarial learning strategy to jointly optimize the discriminator and the generator (the face restoration network), producing more realistic face images.

A typical method is shown in the figure below: the researchers proposed the HLGAN model, which consists of two generative adversarial networks.

The first is the High-to-Low GAN network, which uses unpaired images for training to learn the degradation process of high-resolution images. The output of the first network (i.e., low-resolution face images) is used to train the second Low-to-High GAN network to achieve face super-resolution.

(Figure: the HLGAN High-to-Low and Low-to-High framework)

The core idea of methods based on a pre-trained GAN embedding is to embed a pre-trained face GAN (such as StyleGAN) and integrate its latent prior into the restoration process, achieving efficient face restoration with the help of the latent prior and adversarial learning.

A typical method is shown in the figure below: the researchers designed the GFP-GAN model, which mainly consists of a degradation removal module and a prior module based on a pre-trained GAN; the two modules exchange information through a latent code connection and several channel-split spatial feature transform layers.

(Figure: the GFP-GAN architecture with a pre-trained GAN prior)
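A rough sketch of the modulation idea behind such channel-split spatial feature transform layers (one plausible form, not GFP-GAN's exact implementation): conditions from the degradation-removal branch produce per-pixel scale and shift maps that modulate half of the prior feature channels, while the other half passes through unchanged.

import torch
import torch.nn as nn

class ChannelSplitSFT(nn.Module):
    """Modulate half of the GAN-prior feature channels with spatial scale/shift maps."""
    def __init__(self, channels, cond_channels):
        super().__init__()
        self.half = channels // 2
        self.to_scale = nn.Conv2d(cond_channels, self.half, 3, padding=1)
        self.to_shift = nn.Conv2d(cond_channels, self.half, 3, padding=1)

    def forward(self, prior_feat, cond):
        identity, modulated = prior_feat[:, :self.half], prior_feat[:, self.half:]
        scale = self.to_scale(cond)
        shift = self.to_shift(cond)
        modulated = modulated * (1 + scale) + shift        # spatial feature transform
        return torch.cat([identity, modulated], dim=1)     # re-join the two halves

layer = ChannelSplitSFT(channels=64, cond_channels=32)
out = layer(torch.rand(1, 64, 32, 32), torch.rand(1, 32, 32, 32))  # output shape (1, 64, 32, 32)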

Methods based on ViT network structures

Recently, the Vision Transformer (ViT) architecture has shown excellent performance in fields such as natural language processing and computer vision, which has also inspired the application of Transformer architectures to face restoration tasks.

The typical method is shown in the figure below. Based on Swin Transformer, researchers proposed an end-to-end Swin Transformer U-Net (STUNet) network for face restoration.

In STUNet, the Transformer module uses the self-attention mechanism and the shifted-window strategy to help the model focus on the features most beneficial to face restoration. This method achieves good performance.

(Figure: the STUNet architecture based on Swin Transformer)
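To illustrate the shifted-window mechanism that STUNet borrows from Swin Transformer, here is a minimal PyTorch sketch of window partitioning with a cyclic shift; the projections, relative position bias and attention masking of a real Swin block are omitted, and all sizes are illustrative.

import torch

def window_partition(x, ws):
    """Split (B, H, W, C) feature maps into non-overlapping ws x ws windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)  # (num_windows*B, ws*ws, C)

feat = torch.rand(1, 32, 32, 64)           # (B, H, W, C) feature map
ws, shift = 8, 4

# Regular windows, then shifted windows via a cyclic roll of the feature map
regular = window_partition(feat, ws)
shifted = window_partition(torch.roll(feat, shifts=(-shift, -shift), dims=(1, 2)), ws)

# Self-attention is then computed independently inside each window, e.g. (single head, no projections):
q = k = v = regular
attn = torch.softmax(q @ k.transpose(1, 2) / (q.shape[-1] ** 0.5), dim=-1)
out = attn @ v                             # (num_windows, ws*ws, C)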

Commonly used modules in network models

In the field of face restoration, researchers have designed various basic modules to build powerful restoration networks. Commonly used modules are shown in the figure below; they mainly include the residual block, the dense block, attention blocks (channel attention block, residual channel attention block, spatial attention block) and the Transformer block.

(Figure: commonly used basic modules in face restoration networks)
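As one concrete example of these building blocks, here is a minimal PyTorch sketch of a residual channel attention block; it is a generic form, not tied to any specific paper's hyper-parameters.

import torch
import torch.nn as nn

class ResidualChannelAttentionBlock(nn.Module):
    """Residual block whose output is re-weighted per channel by a squeeze-and-excitation style gate."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                      # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())  # excitation: per-channel weights

    def forward(self, x):
        res = self.body(x)
        return x + res * self.attention(res)               # channel re-weighting plus residual connection

block = ResidualChannelAttentionBlock()
y = block(torch.rand(1, 64, 32, 32))                       # same shape as the input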

Loss function

Common loss functions in face restoration tasks fall into the following categories: pixel-wise losses (mainly L1 and L2 losses), perceptual loss, adversarial loss and face-specific losses. The various face restoration methods and the loss functions they use are summarized in the following table:

(Table: face restoration methods and the loss functions they use)
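A hedged sketch of how such losses are typically combined in practice follows; the weights are illustrative, pretrained VGG weights would normally be loaded for the perceptual term, and a face-specific (e.g. identity) term is omitted for brevity.

import torch
import torch.nn as nn
import torchvision

class CombinedLoss(nn.Module):
    """Weighted sum of pixel, perceptual and adversarial terms (illustrative weights)."""
    def __init__(self, w_pix=1.0, w_per=0.1, w_adv=0.01):
        super().__init__()
        # In practice a pretrained VGG checkpoint would be loaded; random weights keep the sketch self-contained.
        self.vgg = torchvision.models.vgg19().features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.w = (w_pix, w_per, w_adv)

    def forward(self, restored, target, disc_logits_on_restored):
        pix = nn.functional.l1_loss(restored, target)                     # pixel-wise L1 term
        per = nn.functional.l1_loss(self.vgg(restored), self.vgg(target)) # perceptual term in VGG feature space
        adv = nn.functional.softplus(-disc_logits_on_restored).mean()     # non-saturating adversarial term
        return self.w[0] * pix + self.w[1] * per + self.w[2] * adv

criterion = CombinedLoss()
loss = criterion(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128), torch.randn(2, 1))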

Dataset

Public datasets related to the face restoration task and their statistics are summarized below:

(Table: public face restoration datasets and their statistics)

Performance comparison

This article summarizes and evaluates representative face restoration methods in terms of PSNR, SSIM, MS-SSIM, LPIPS, NIQE and other metrics.
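For reference, here is a minimal sketch of computing the two basic full-reference metrics, PSNR in plain NumPy and SSIM via scikit-image; LPIPS, FID and NIQE require learned or no-reference models and are omitted here.

import numpy as np
from skimage.metrics import structural_similarity

def psnr(img1, img2, data_range=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

restored = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)   # dummy restored face
gt = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)         # dummy ground truth

print(psnr(restored, gt))
# channel_axis is the argument name in recent scikit-image releases (older ones used multichannel=True)
print(structural_similarity(restored, gt, channel_axis=-1, data_range=255))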

Quantitative results comparison

(Tables: quantitative comparison of representative methods on benchmark datasets)

Comparison of qualitative results

(Figures: qualitative comparison of representative methods)

Comparison of method complexity

(Table: model complexity comparison in terms of parameters and MACs)

Future Development Directions

Although face restoration methods based on deep learning have made certain progress, there are still many challenges and unsolved problems.

Network structure design

For deep learning-based face restoration methods, the network structure can have a significant impact on performance.

For example, recent Transformer-based methods often achieve better performance thanks to the modeling capacity of the Transformer architecture, while GAN-based methods can generate more visually pleasing face images.

Therefore, when designing a network, it is worth drawing on and studying different structures such as CNNs, GANs and ViTs.

On the other hand, recent Transformer-based models often have more parameters and higher computational costs, making them difficult to deploy on edge devices.

Therefore, how to design a lightweight network with powerful performance is another potential research direction for future work.

Integration of facial priors and networks

As a domain-specific image restoration task, face restoration can exploit facial characteristics, and when designing models many methods aim to utilize face priors to recover realistic face details.

Although some methods try to introduce geometric priors, facial components, generative priors or 3D priors into the process of face restoration, how to integrate prior information into the network more reasonably remains a promising direction for this task.

In addition, further mining new face-related priors, such as priors from pre-trained GANs or data statistics inside the network, is another promising direction for this task.

Loss functions and evaluation metrics

For face restoration tasks, the widely used loss functions include L1 loss, L2 loss, perceptual loss, adversarial loss and face-specific loss, as shown in Table 3.

Existing methods usually do not use a single loss function, but combine multiple loss functions with corresponding weights to train the model. However, it is unclear how to design a more reasonable loss function to guide model training.

Therefore, more future work is expected to seek more accurate loss functions (e.g., universal or face-task-driven loss functions) to advance face restoration. In addition, the loss function directly affects the evaluation results of the model: as shown in Tables 5, 6 and 7, L1 and L2 losses tend to obtain better results in terms of PSNR, SSIM and MS-SSIM.

Perceptual loss and adversarial loss tend to produce more visually pleasing results (i.e., better LPIPS, FID and NIQE scores). Therefore, how to develop metrics that account for both human perception and machine evaluation, and thereby assess model performance more reasonably, is also a very important future direction.

Computational overhead

Existing face restoration methods usually significantly increase the depth or width of the network to improve the recovery performance, while ignoring the computational cost of the model.

The heavy computational cost prevents these methods from being used in resource-limited environments, such as mobile or embedded devices.

For example, as shown in Table 8, the state-of-the-art method RestoreFormer has 72.37M parameters and 340.80G MACs, which makes it very difficult to deploy in real-world applications. Therefore, developing models with lower computational cost is an important future direction.

Benchmark dataset

Unlike other low-level vision tasks such as image deblurring, image denoising and image dehazing, face restoration has few standard evaluation benchmarks.

For example, most face restoration methods conduct experiments on private datasets (e.g., training sets synthesized from FFHQ).

Researchers may be tempted to use data that is biased toward their proposed method. Moreover, to make fair comparisons, follow-up work must spend considerable time re-synthesizing these private datasets and retraining other methods for comparison. Furthermore, the widely used datasets are often small and thus ill-suited to deep learning methods.

Therefore, developing standard benchmark data sets is one direction for the face restoration task. In the future, we expect researchers in the community to build more standard and high-quality benchmark datasets.

Video face restoration

With the popularity of devices such as mobile phones and cameras, the video face restoration task is becoming more and more important. However, existing work mainly focuses on image-based face restoration, while video-related face restoration work is relatively scarce.

On the other hand, other low-level visual tasks such as video deblurring, video super-resolution and video denoising have developed rapidly in recent years.

Therefore, video face restoration is a potential direction for the community. The video face restoration task can be considered from the following two aspects.

First, regarding benchmark datasets, a high-quality video dataset could be built for this task, which would accelerate the design and evaluation of video-related algorithms and benefit the face restoration community;

Second, video-based face restoration methods should be developed that fully exploit the spatial and temporal information across consecutive video frames.

Real-world face restoration and applications

Existing methods rely on synthetic data to train network models. However, the trained networks do not necessarily generalize well to real-world scenarios.

As shown in Figure 19, most face restoration methods do not perform well on real-world face images, because there is a large domain gap between synthetic data and real-world data.

Although some methods introduce solutions to this problem, such as unsupervised techniques or learning real image degradations, they still rely on the specific assumption that all images suffer from similar degradation.

Therefore, real-world applications remain a challenging direction for face restoration tasks.

Additionally, some methods have shown that face restoration can improve the performance of subsequent tasks such as face verification and face recognition. However, how to combine the face restoration task with these tasks in a framework is also a future research direction.

Other related tasks

In addition to the face restoration tasks discussed above, there are many related tasks, including face retouching, photo-sketch synthesis, face-to-face translation, face inpainting, color enhancement and old photo restoration.

For example, face inpainting aims to restore missing regions of a face image through matching or learning. It must not only semantically generate new pixels for missing facial components but also maintain the consistency of facial structure and appearance. Old photo restoration is the task of restoring old photos whose degradations are diverse and complex (e.g., noise, blur and fading).

Additionally, some tasks focus on facial style transfer, such as face-to-face translation and facial expression analysis, which are different from face restoration tasks.

Therefore, applying existing face restoration methods to these related tasks is also a promising direction, which can trigger more applications.

Reference: https://arxiv.org/abs/2211.02831

