3 Variational inference


This chapter covers

  • Introducing KL variational inference
  • Mean-field approximation
  • Image denoising in the Ising model
  • Mutual information maximization

In the previous chapter, we covered one of the two main camps of Bayesian inference: Markov chain Monte Carlo. We examined different sampling algorithms and approximated the posterior distribution using samples. In this chapter, we will discuss the second camp of Bayesian inference: variational inference. Variational inference (VI) is an important class of approximate inference algorithms. Its basic idea is to choose an approximate distribution q(x) from a family of tractable, easy-to-compute distributions with trainable parameters, and then to make this approximation as close as possible to the true posterior distribution p(x), typically by minimizing the Kullback-Leibler (KL) divergence between them.
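As a toy illustration of this idea, the following sketch fits a Gaussian variational family q(x) = N(mu, sigma^2) to an unnormalized target density by grid search over the variational parameters, minimizing a numerical estimate of KL(q || p) up to an additive constant. The specific target (an unnormalized Gaussian with mean 3 and standard deviation 2) and the grid ranges are illustrative assumptions chosen so the answer is easy to verify; they are not taken from this chapter.

```python
import numpy as np

# Illustrative unnormalized target "posterior": log of an unnormalized
# N(3, 2^2), chosen so that the best Gaussian approximation is known.
def log_p_tilde(x):
    return -0.5 * (x - 3.0) ** 2 / 4.0

# Log-density of the variational family member q(x) = N(mu, sigma^2)
def log_q(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-15.0, 20.0, 4001)
dx = xs[1] - xs[0]

def kl_up_to_const(mu, sigma):
    # KL(q || p) = E_q[log q(x) - log p~(x)] + log Z, where log Z is a
    # constant that does not depend on (mu, sigma), so it can be dropped.
    q = np.exp(log_q(xs, mu, sigma))
    return np.sum(q * (log_q(xs, mu, sigma) - log_p_tilde(xs))) * dx

# Search the variational family for the member closest to the target
best_mu, best_sigma = min(
    ((mu, s) for mu in np.linspace(0.0, 6.0, 61)
             for s in np.linspace(0.5, 4.0, 71)),
    key=lambda ps: kl_up_to_const(*ps))
print(best_mu, best_sigma)  # close to 3.0 and 2.0
```

Because the target here lies inside the variational family, the minimizer recovers it almost exactly; with a non-Gaussian target, the same procedure would return the closest Gaussian in the KL sense, which is the typical situation in practice.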

3.1 KL variational inference

3.2 Mean-field approximation

3.3 Image denoising in an Ising model

3.4 Mutual information maximization

3.5 Exercises

Summary