


To solve the problem of VAE representation learning, Hokkaido University proposes a new generative model: GWAE
Learning low-dimensional representations of high-dimensional data is a fundamental task in unsupervised learning, because such representations succinctly capture the essence of the data and make it possible to execute downstream tasks on low-dimensional inputs. The variational autoencoder (VAE) is an important representation learning method, yet controlling representation learning with it remains challenging because of its objective. Although the evidence lower bound (ELBO) objective of the VAE is formulated for generative modeling, learning representations is not directly targeted by this objective, so representation learning tasks such as disentanglement require specific modifications to it. These modifications sometimes lead to implicit and undesirable changes in the model, making controlled representation learning a challenging task.
To solve this representation learning problem in variational autoencoders, the paper proposes a new generative model called the Gromov-Wasserstein Autoencoder (GWAE). GWAE provides a new framework for representation learning built on the VAE model architecture. Unlike conventional VAE-based representation learning methods, which perform generative modeling of the data variables, GWAE obtains beneficial representations through optimal transport between the data and latent variables. The Gromov-Wasserstein (GW) metric makes such optimal transport possible even between incomparable variables (e.g., variables of different dimensionality), because it focuses on the distance structure within each of the variables under consideration. By replacing the ELBO objective with the GW metric, GWAE performs a comparison between the data space and the latent space, directly targeting representation learning in variational autoencoders (Figure 1). This formulation allows the learned representations to possess specific properties considered beneficial (e.g., disentanglement), which are called meta-priors.
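As a rough illustration of what the GW metric compares, the sketch below evaluates the GW objective for a fixed coupling between two point sets of different dimensionality, using only their intra-space distance matrices. This is a hypothetical, brute-force illustration of the objective itself, not the paper's actual estimator (which optimizes over couplings during training):

```python
import math

def pairwise_dist(points):
    # Euclidean distance matrix within one space; each point is a tuple,
    # and the two spaces may use tuples of different lengths.
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def gw_cost(Dx, Dz, T):
    # GW objective for a fixed coupling T between the two sets:
    #   sum_{i,k,j,l} (Dx[i][k] - Dz[j][l])^2 * T[i][j] * T[k][l]
    # It compares distances *within* each space, so the spaces themselves
    # never need to be directly comparable.
    n, m = len(Dx), len(Dz)
    cost = 0.0
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    cost += (Dx[i][k] - Dz[j][l]) ** 2 * T[i][j] * T[k][l]
    return cost

# 2-D data points vs. 1-D latent points with matching distance structure:
Dx = pairwise_dist([(0.0, 0.0), (0.0, 1.0)])
Dz = pairwise_dist([(0.0,), (1.0,)])
T = [[0.5, 0.0], [0.0, 0.5]]  # coupling that matches point 0 to 0, 1 to 1
print(gw_cost(Dx, Dz, T))  # 0.0: the distance structures agree perfectly
```

When the latent distances are distorted (e.g., replacing the 1-D points with `(0.0,)` and `(2.0,)`), the cost becomes positive, which is exactly the signal GWAE minimizes.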
## Figure 1: The difference between VAE and GWAE
This study has been accepted by ICLR 2023.
- Paper link: https://arxiv.org/abs/2209.07007
- Code link: https://github.com/ganmodokix/gwae

## Method introduction
This scheme can also flexibly customize the prior distribution to introduce beneficial features into the low-dimensional representation. Specifically, the paper introduces three families of prior distributions:
Neural Prior (NP): In GWAEs with an NP, a fully connected neural network is used to construct the prior sampler. This family of prior distributions makes few assumptions about the latent variables and is suitable for general situations.
Factorized Neural Prior (FNP): In GWAEs with an FNP, a locally connected neural network builds the sampler, in which the entries of each latent variable are generated independently. This sampler produces a factorized prior and an entry-wise independent representation, a prominent approach toward a representative meta-prior: disentanglement.

Gaussian Mixture Prior (GMP): A GMP is defined as a mixture of several Gaussian distributions, and its sampler can be implemented using the reparameterization trick and the Gumbel-Max trick. A GMP allows clusters to be hypothesized in the representation, where each Gaussian component of the prior is expected to capture one cluster.

## Experiments and results

This study conducted empirical evaluations of GWAE on two main meta-priors: disentanglement and clustering.

Disentanglement: The study used the 3D Shapes dataset and the DCI metric to measure the disentanglement ability of GWAE. The results show that GWAE with an FNP is able to learn the object-hue factor on a single axis, which demonstrates its disentanglement capability. Quantitative evaluation also confirms the disentanglement performance of GWAE.

Clustering: To evaluate the representations obtained with the clustering meta-prior, the study conducted an out-of-distribution (OoD) detection experiment. The MNIST dataset is used as in-distribution (ID) data and the Omniglot dataset as OoD data. While MNIST contains handwritten digits, Omniglot contains handwritten letters from different alphabets; the ID and OoD datasets thus share the domain of handwritten images but contain different characters. Models are trained on ID data and then use their learned representations to detect whether data are ID or OoD. In VAE and DAGMM, the variable used for OoD detection is the prior log-likelihood, while in GWAE it is the Kantorovich potential. The prior for GWAE was constructed using a GMP to capture the clusters of MNIST.
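The GMP sampler mentioned above can be sketched in a few lines: the Gumbel-Max trick selects a mixture component, and the reparameterization trick draws the Gaussian sample. This is a minimal stand-alone illustration under assumed scalar components, not the paper's actual (neural, vector-valued) implementation:

```python
import math
import random

def sample_gmp(weights, mus, sigmas):
    """Draw one sample from a 1-D Gaussian mixture prior.

    weights: mixture weights (positive, summing to 1)
    mus, sigmas: per-component Gaussian parameters
    """
    # Gumbel-Max trick: add Gumbel noise -log(-log(U)) to the log-weights
    # and take the argmax; this is equivalent to sampling k ~ Categorical(weights).
    gumbels = [math.log(w) - math.log(-math.log(random.random())) for w in weights]
    k = max(range(len(weights)), key=gumbels.__getitem__)
    # Reparameterization trick: z = mu_k + sigma_k * eps with eps ~ N(0, 1),
    # so gradients can flow through mu_k and sigma_k in the real model.
    eps = random.gauss(0.0, 1.0)
    return mus[k] + sigmas[k] * eps

# Two well-separated components, each expected to capture one cluster:
z = sample_gmp([0.5, 0.5], [-10.0, 10.0], [0.1, 0.1])
print(z)  # lands near -10 or +10 depending on the sampled component
```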
The ROC curves show the OoD detection performance of the models; all three models achieve near-perfect performance, but the GWAE built with a GMP performs best in terms of area under the curve (AUC).

Performance as an autoencoder-based generative model: In addition, this study evaluated the generative ability of GWAE. To evaluate its ability to handle the general case without specific meta-priors, generation performance was measured on the CelebA dataset. The experiment uses FID to evaluate the model's generative performance and PSNR to evaluate its autoencoding performance. GWAE with an NP achieved the second-best generative performance and the best autoencoding performance, demonstrating its ability to capture the data distribution in its model and the data information in its representation.
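The AUC criterion used in the OoD comparison above can be computed directly from detection scores via its rank-based (Mann-Whitney) equivalence. The helper below is a hypothetical illustration, assuming higher scores indicate OoD, not the evaluation code used in the paper:

```python
def roc_auc(id_scores, ood_scores):
    """ROC AUC as the probability that a random OoD score exceeds
    a random ID score (ties count half) -- the Mann-Whitney statistic."""
    n_pairs = len(id_scores) * len(ood_scores)
    wins = 0.0
    for o in ood_scores:
        for i in id_scores:
            if o > i:
                wins += 1.0
            elif o == i:
                wins += 0.5
    return wins / n_pairs

# Perfectly separated scores give AUC = 1.0 (the "near-perfect" regime above):
print(roc_auc([0.1, 0.2, 0.3], [0.8, 0.9]))  # 1.0
```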
## Summary

This paper proposed GWAE, a generative model that replaces the ELBO objective of the VAE with the Gromov-Wasserstein metric between the data space and the latent space, directly targeting representation learning. Combined with customizable prior families (NP, FNP, and GMP), GWAE accommodates different meta-priors such as disentanglement and clustering, and the experiments demonstrate competitive disentanglement, OoD detection, and generative performance.
