
NVIDIA 64 A100 training StyleGAN-T; review of nine types of generative AI models

PHPz | 2023-04-11 12:13:03

Directory:

  1. Quantum machine learning beyond kernel methods
  2. Wearable in-sensor reservoir computing using optoelectronic polymers with through-space charge-transport characteristics for multi-task learning
  3. Dash: Semi-Supervised Learning with Dynamic Thresholding
  4. StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
  5. Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer
  6. ChatGPT is not all you need. A State of the Art Review of large Generative AI models
  7. ClimaX: A foundation model for weather and climate
  8. ArXiv Weekly Radiostation: selected papers in NLP, CV, and ML (with audio)

Paper 1: Quantum machine learning beyond kernel methods

  • Author: Sofiene Jerbi et al.
  • Paper address: https://www.nature.com/articles/s41467-023-36159-y

Abstract: In this article, a research team from the University of Innsbruck, Austria, identifies a constructive framework that captures all standard models based on parameterized quantum circuits: the linear quantum model.

The researchers show how tools from quantum information theory can be used to efficiently map data re-uploading circuits into the simpler picture of a linear model in quantum Hilbert space. Furthermore, the experimentally relevant resource requirements of these models are analyzed in terms of the number of qubits and the amount of training data needed. Results grounded in classical machine learning demonstrate that linear quantum models must use many more qubits than data re-uploading models to solve certain learning tasks, while kernel methods additionally require many more data points.
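For concreteness, the linear quantum model at the heart of this framework can be written in its standard form (the notation here is generic, not copied verbatim from the paper):

```latex
f_\theta(x) \;=\; \operatorname{Tr}\!\left[\rho(x)\, O_\theta\right],
```

where \(\rho(x)\) is the quantum state encoding the input \(x\) (the feature map) and \(O_\theta\) is a parameterized observable. Quantum kernel methods arise as the special case in which \(O_\theta\) is restricted to the span of the training states, with kernel \(k(x, x') = \operatorname{Tr}[\rho(x)\,\rho(x')]\); the paper shows that data re-uploading circuits can also be mapped into this linear picture, at the cost of extra qubits.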

The results provide a more comprehensive understanding of quantum machine learning models, as well as insights into the compatibility of different models with NISQ constraints.



The quantum machine learning models studied in this work.

Recommended: Quantum machine learning beyond kernel methods, a unified framework for quantum learning models.

Paper 2: Wearable in-sensor reservoir computing using optoelectronic polymers with through-space charge-transport characteristics for multi-task learning

  • Author: Xiaosong Wu et al.
  • Paper address: https://www.nature.com/articles/s41467-023-36205-9
Abstract: In-sensor multi-task learning is not only a key advantage of biological vision but also a major target of artificial intelligence. However, traditional silicon vision chips carry large time and energy overheads, and training conventional deep learning models is neither scalable nor affordable on edge devices.

In this article, a research team from the Chinese Academy of Sciences and the University of Hong Kong proposes a materials-algorithm co-design that emulates the learning paradigm of the human retina with low overhead. Based on a bottlebrush-shaped semiconductor (p-NDI) with efficient exciton dissociation and through-space charge-transport properties, they develop a wearable, transistor-based dynamic in-sensor reservoir computing (RC) system that exhibits excellent separability, fading memory, and echo-state properties across different tasks. Combined with a "readout function" implemented on memristive organic diodes, the RC system recognizes handwritten letters and digits and classifies various items of clothing, with accuracies of 98.04%, 88.18%, and 91.76% respectively (higher than all reported organic semiconductors).
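The RC pipeline described above, fixed nonlinear dynamics plus a trained linear "readout function", can be sketched in software with a conventional echo-state reservoir. This is a simplified stand-in for the optoelectronic sensor dynamics; all sizes, parameters, and the toy task below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (stands in for the untrained sensor dynamics).
n_in, n_res = 4, 100
W_in = rng.normal(0, 0.5, (n_res, n_in))
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 gives fading memory

def reservoir_state(inputs):
    """Drive the reservoir with an input sequence; return the final state."""
    x = np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
    return x

def train_readout(states, targets, reg=1e-3):
    """Only the linear readout is trained (ridge regression)."""
    S = np.asarray(states)
    return np.linalg.solve(S.T @ S + reg * np.eye(n_res), S.T @ targets)

# Toy task: classify which input channel carried the strongest signal.
states, labels = [], []
for _ in range(200):
    cls = rng.integers(n_in)
    seq = rng.normal(0, 0.1, (10, n_in))
    seq[:, cls] += 1.0
    states.append(reservoir_state(seq))
    labels.append(np.eye(n_in)[cls])
W_out = train_readout(states, np.asarray(labels))
preds = np.argmax(np.asarray(states) @ W_out, axis=1)
acc = (preds == np.argmax(np.asarray(labels), axis=1)).mean()
```

The division of labor mirrors the paper's design: the reservoir is never trained, so all learning cost sits in the cheap linear readout.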


Comparison of the photocurrent responses of conventional semiconductors and p-NDI, and the detailed semiconductor design principles of the in-sensor RC system.

Recommendation: With low energy and time costs, the Chinese Academy of Sciences and University of Hong Kong team uses a new approach to perform multi-task learning with in-sensor reservoir computing in wearable sensors.

Paper 3: Dash: Semi-Supervised Learning with Dynamic Thresholding

  • Author: Yi Xu et al
  • Paper address: https://proceedings.mlr.press/v139/xu21e/xu21e.pdf

Abstract: This paper proposes using a dynamic threshold to filter unlabeled samples for semi-supervised learning (SSL). The method reworks the semi-supervised training framework by improving the selection strategy for unlabeled samples during training: a dynamically shrinking threshold selects more effective unlabeled samples. Dash is a general strategy that can be easily integrated with existing semi-supervised learning methods.

Experimentally, its effectiveness is fully verified on standard datasets such as CIFAR-10, CIFAR-100, STL-10, and SVHN. Theoretically, the paper proves convergence properties of the Dash algorithm from the perspective of non-convex optimization.
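The selection rule can be sketched as follows: an unlabeled sample passes the filter only if its pseudo-label loss falls below a threshold that shrinks as training proceeds. The exact schedule and constants below are illustrative, not the paper's:

```python
import numpy as np

def dash_select(losses, step, rho_init=1.0, gamma=1.27, c=1.0001):
    """Dash-style dynamic threshold: keep unlabeled samples whose
    pseudo-label loss falls below a threshold that decays with the
    training step (gamma > 1 controls the decay rate)."""
    rho_t = c * gamma ** (-step) * rho_init
    return losses < rho_t

# As training progresses, fewer high-loss unlabeled samples pass the filter.
losses = np.array([0.05, 0.2, 0.6, 1.5])
early = dash_select(losses, step=0)  # loose threshold: most samples pass
late = dash_select(losses, step=8)   # tight threshold: only confident samples pass
```

Early in training the pseudo-labels are noisy, so the loose threshold keeps the batch large; later, the shrinking threshold discards samples whose pseudo-labels the model still cannot fit.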



FixMatch training framework.

Recommendation: DAMO Academy's open-source semi-supervised learning framework Dash sets numerous new SOTA results.

Paper 4: StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis

  • Author: Axel Sauer et al
  • Paper address: https://arxiv.org/pdf/2301.09515.pdf

Abstract: Are diffusion models the best at text-to-image generation? Not necessarily: results from StyleGAN-T, newly released by NVIDIA and others, show that GANs are still competitive. StyleGAN-T takes only 0.1 seconds to generate a 512×512 image.


Recommendation: Are GANs back? NVIDIA trained StyleGAN-T on 64 A100s, and it outperforms diffusion models at fast synthesis.

Paper 5: Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer

  • Author: Sunan He et al.
  • Paper address: https://arxiv.org/abs/2207.01887

Abstract: In multi-label classification systems, we often encounter a large number of labels that never appeared in the training set. Accurately recognizing these labels is an important and challenging problem.

To this end, Tencent Youtu Lab, together with Tsinghua University and Shenzhen University, proposes MKT, a framework based on multi-modal knowledge transfer. It exploits the powerful image-text matching ability of image-text pre-trained models to retain key visual-consistency information in image classification, enabling open-vocabulary classification in multi-label settings. The work was accepted as an Oral at AAAI 2023.
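The open-vocabulary mechanism, scoring arbitrary labels by image-text embedding similarity rather than through a fixed classifier head, can be sketched as below. The embeddings are random stand-ins, and MKT itself adds knowledge distillation and prompt tuning on top of this basic idea:

```python
import numpy as np

def cosine_scores(image_emb, label_embs):
    """Score every label (seen or unseen at training time) by cosine
    similarity between the image embedding and the label's text
    embedding, in the style of image-text pre-trained models."""
    img = image_emb / np.linalg.norm(image_emb)
    lab = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    return lab @ img

# Hypothetical embeddings: any label with a text embedding can be
# scored, so labels need not appear in the training set.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=64)
label_embs = np.stack([
    image_emb + rng.normal(scale=0.3, size=64),  # a relevant label
    rng.normal(size=64),                         # an irrelevant label
    rng.normal(size=64),                         # another irrelevant label
])
scores = cosine_scores(image_emb, label_embs)
predicted = scores > 0.5  # multi-label setting: threshold, not argmax
```

Thresholding rather than taking an argmax is what makes this multi-label: any number of labels can fire for one image.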


Comparison of ML-ZSL and MKT methods.

Recommended: AAAI 2023 Oral | How can unknown labels be recognized? A multi-modal knowledge transfer framework achieves a new SOTA.

Paper 6: ChatGPT is not all you need. A State of the Art Review of large Generative AI models

  • Author: Roberto Gozalo-Brizuela et al.
  • Paper address: https://arxiv.org/abs/2301.04655

Abstract: In the past two years, a large number of large generative models such as ChatGPT and Stable Diffusion have appeared in the AI field. These models can perform tasks such as general question answering and automatic creation of artistic images, and they are revolutionizing many fields.

In a recent review paper, researchers from Comillas Pontifical University in Spain concisely describe the impact of generative AI across many current models and classify the major recently released generative AI models.



Classification diagram of the reviewed models.

Recommendation: ChatGPT is not all you need, a review of nine types of generative AI models from six major companies.

Paper 7: ClimaX: A foundation model for weather and climate

  • Author: Tung Nguyen et al.
  • Paper address: https://arxiv.org/abs/2301.10343

Abstract: The Microsoft Autonomous Systems and Robotics research group and the Microsoft Research Center for Scientific Intelligence have developed ClimaX, a flexible and scalable deep learning model for weather and climate science that can be trained on heterogeneous datasets spanning different variables, spatio-temporal coverage, and physical groundings. ClimaX extends the Transformer architecture with novel encoding and aggregation blocks that make efficient use of the available compute while preserving generality. ClimaX is pretrained with a self-supervised learning objective on climate datasets derived from CMIP6, and the pretrained model can then be fine-tuned to solve a wide range of climate and weather tasks, including ones involving atmospheric variables and spatio-temporal scales not seen during pretraining.
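A minimal sketch of the variable-aggregation idea follows (greatly simplified relative to ClimaX's actual encoding blocks; all names and shapes are illustrative): each physical variable gets its own embedding, and attention pooling merges whatever variables a dataset provides into one fixed-size token, so datasets need not share the same variable set.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # token dimension

# One embedding per physical variable lets the model accept
# heterogeneous variable sets across datasets.
var_tokens = {
    "temperature": rng.normal(size=d),
    "humidity": rng.normal(size=d),
    "wind_u": rng.normal(size=d),
}

def aggregate(variable_values, query):
    """Attention pooling: merge however many variables are present
    into a single token of fixed size d."""
    names = list(variable_values)
    keys = np.stack([var_tokens[n] for n in names])
    vals = np.stack([var_tokens[n] * variable_values[n] for n in names])
    logits = keys @ query / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()  # softmax attention weights over the present variables
    return w @ vals

query = rng.normal(size=d)
# Works with two variables or three: the pooled token always has size d.
t2 = aggregate({"temperature": 1.5, "humidity": 0.2}, query)
t3 = aggregate({"temperature": 1.5, "humidity": 0.2, "wind_u": -0.7}, query)
```

Because the pooled token has a fixed size regardless of how many variables a dataset provides, the downstream Transformer never needs to change shape between pretraining corpora and fine-tuning tasks.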


ClimaX architecture used during pre-training.

Recommended: The Microsoft team releases ClimaX, the first AI foundation model for weather and climate.


Statement: This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for removal.