
Variational Inference


Variational inference is a probability inference method used to approximate the posterior distribution of complex probability models. It reduces computational complexity by transforming the original problem into an optimization problem. Variational inference is widely used in fields such as machine learning, statistics, and information theory.

Why is it called "variational"?

The word "variational" comes from the calculus of variations, the branch of analysis concerned with finding the extrema of functionals. In variational inference, we find an approximate posterior distribution by minimizing a divergence over a family of candidate distributions; because this is an optimization over functions (distributions), the method is called variational inference.

The basic idea of variational inference is to approximate the true posterior distribution as closely as possible with a simpler, tractable distribution. To this end, we introduce a parameterized family of distributions q(z;\lambda), where z denotes the latent (hidden) variables and \lambda the parameters to be learned. The goal is to find the member q(z;\lambda) that differs as little as possible from the true posterior p(z|x). To measure this difference we use the KL divergence, a measure of the discrepancy between two probability distributions:

D_{KL}(q(z;\lambda)||p(z|x))=\int q(z;\lambda)\log\frac{q(z;\lambda)}{p(z|x)}dz
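For a discrete latent variable the integral becomes a sum, and the KL divergence can be computed directly. The following sketch (with made-up numbers for both the target posterior and the candidate approximation) illustrates the definition and shows that the divergence vanishes when the two distributions coincide:

```python
import numpy as np

# Toy discrete example: the "true" posterior p(z|x) over a latent z with
# three states, and a candidate approximation q(z; lambda). The numbers
# are made up purely for illustration.
p_posterior = np.array([0.7, 0.2, 0.1])   # p(z|x)
q_approx    = np.array([0.6, 0.3, 0.1])   # q(z; lambda)

def kl_divergence(q, p):
    """D_KL(q || p) = sum_z q(z) * log(q(z) / p(z)) for discrete distributions."""
    return float(np.sum(q * np.log(q / p)))

print(kl_divergence(q_approx, p_posterior))     # small positive value
print(kl_divergence(p_posterior, p_posterior))  # 0.0: the divergence vanishes when q = p
```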

Note that the KL divergence is non-negative, and it attains its minimum value of 0 if and only if q(z;\lambda) equals p(z|x). Our goal can therefore be restated as minimizing the KL divergence, that is:

\lambda^*=\arg\min_{\lambda}D_{KL}(q(z;\lambda)||p(z|x))

However, the KL divergence cannot be minimized directly, because it depends on the true posterior p(z|x), which is exactly the intractable quantity we are trying to approximate. We therefore need an indirect strategy.

In variational inference, we employ a technique called the variational lower bound to get around this difficulty. Specifically, we first decompose the KL divergence as:

D_{KL}(q(z;\lambda)||p(z|x))=E_{q(z;\lambda)}[\log q(z;\lambda)-\log p(z,x)]+\log p(x)

Writing the approximate posterior as q(z|x) to make its dependence on the data explicit, and using the non-negativity of the KL divergence (equivalently, Jensen's inequality), we obtain a lower bound on the log marginal likelihood:

\log p(x)\ge E_{q(z|x)}[\log p(x,z)-\log q(z|x)]

where \log p(x) is the log marginal probability of the data (the evidence), p(x,z) is the joint distribution of data and latent variables, and q(z|x) is the approximate posterior distribution.
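The bound can be checked numerically on a tiny discrete model where the evidence and the exact posterior are computable. The sketch below (all probabilities are made-up, illustrative values) verifies the identity \log p(x) = \text{ELBO} + D_{KL}(q||p(z|x)), which is why the ELBO is a lower bound on the log evidence:

```python
import numpy as np

# Tiny discrete model (illustrative numbers): latent z in {0, 1},
# one observed data point x, joint p(x, z) = p(z) * p(x|z).
prior      = np.array([0.5, 0.5])   # p(z)
likelihood = np.array([0.8, 0.3])   # p(x|z) evaluated at the observed x
joint      = prior * likelihood     # p(x, z)

evidence  = joint.sum()             # p(x), the marginal probability of the data
posterior = joint / evidence        # exact p(z|x)

q = np.array([0.9, 0.1])            # an arbitrary approximate posterior q(z|x)

elbo = np.sum(q * (np.log(joint) - np.log(q)))      # E_q[log p(x,z) - log q(z|x)]
kl   = np.sum(q * (np.log(q) - np.log(posterior)))  # D_KL(q(z|x) || p(z|x))

print(np.log(evidence), elbo + kl)  # equal: log p(x) = ELBO + KL
print(elbo <= np.log(evidence))     # True: the ELBO lower-bounds the log evidence
```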

This lower bound is called the variational lower bound, or ELBO (Evidence Lower Bound). Since \log p(x) does not depend on \lambda, maximizing the ELBO is equivalent to minimizing the KL divergence, so the parameters \lambda of the approximate posterior can be found by maximizing the ELBO:

\lambda^*=\arg\max_{\lambda}E_{q(z|x;\lambda)}[\log p(x,z)-\log q(z|x;\lambda)]

This optimization problem can be solved with standard algorithms such as gradient ascent on the ELBO (often in a stochastic form). The resulting approximate posterior q(z|x) can then be used to compute the expectations needed for downstream tasks such as prediction and model selection.
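As a minimal sketch of such an optimization, the following example maximizes a Monte Carlo estimate of the ELBO by gradient ascent on a toy conjugate model: prior z ~ N(0,1), likelihood x|z ~ N(z,1), and a Gaussian variational family q(z;\lambda)=N(\mu,\sigma^2) with the reparameterization z = \mu + \sigma\epsilon. The model, learning rate, and step counts are all illustrative assumptions; because the model is conjugate, the result can be compared with the exact posterior N(x/2, 1/2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (illustrative): prior z ~ N(0, 1), likelihood x | z ~ N(z, 1).
# For one observation x the exact posterior is N(x / 2, 1 / 2), so we can check the result.
x = 1.5

# Variational family q(z; lambda) = N(mu, sigma^2); lambda = (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
lr, n_steps, n_samples = 0.01, 5000, 10

for _ in range(n_steps):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)   # reparameterization: z = mu + sigma * eps
    z = mu + sigma * eps

    # Monte Carlo gradients of the ELBO(mu, sigma) = E_q[log p(x, z)] + H[q],
    # with log p(x, z) = -0.5*z^2 - 0.5*(x - z)^2 + const and H[q] = log(sigma) + const.
    grad_mu = np.mean(x - 2.0 * z)
    grad_log_sigma = np.mean((x - 2.0 * z) * eps) * sigma + 1.0  # chain rule through sigma = exp(log_sigma)

    mu += lr * grad_mu                  # gradient ascent on the ELBO
    log_sigma += lr * grad_log_sigma

print(mu, np.exp(log_sigma) ** 2)       # approaches the exact posterior mean 0.75 and variance 0.5
```

The reparameterization of z lets the gradient flow through the sampling step; the same idea underlies stochastic variational inference and variational autoencoders.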

In short, variational inference is a probabilistic inference method based on minimizing the KL divergence: by introducing the variational lower bound, it turns the intractable problem of computing the posterior distribution of a complex probabilistic model into an optimization problem that can be solved approximately with standard algorithms.

