Approximate Inference via Weighted Rademacher Complexity

Highlights from our AAAI 2018 paper, "Approximate Inference via Weighted Rademacher Complexity." In this work we consider the challenging problem of computing the sum of more numbers than can be explicitly enumerated. This sum arises in various contexts, such as the partition function of a graphical model, the permanent of a matrix, or the number of satisfying assignments of a propositional formula. By establishing a novel connection with Rademacher complexity, we show how this sum can be estimated and bounded by solving an optimization problem: finding the largest number in the sum after random perturbations have been applied.
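To make the perturb-and-maximize idea concrete, here is a toy NumPy sketch (not the paper's exact estimator or bounds): it enumerates a small weighted space, applies random Rademacher sign perturbations, and compares the average of the perturbed maxima against the true log-sum. The weight function, problem size, and sample count are all illustrative choices.

```python
import itertools
import numpy as np

# Toy illustration of perturb-and-maximize: estimate log(sum_x w(x))
# over x in {-1, +1}^n via randomly perturbed maximization problems.
# The paper relates quantities like this to log Z through upper and
# lower bounds; here we only compare the two numbers empirically.

rng = np.random.default_rng(0)
n = 10  # small enough to enumerate all 2^n assignments exactly

# An arbitrary nonnegative weight function on {-1, +1}^n
# (a simple Ising-style chain, chosen only for illustration).
def log_w(x):
    return 0.5 * np.sum(x[:-1] * x[1:])

states = np.array(list(itertools.product([-1, 1], repeat=n)))
log_weights = np.array([log_w(x) for x in states])

# Ground truth: log of the sum of all 2^n weights.
true_log_z = np.logaddexp.reduce(log_weights)

# For each Rademacher draw sigma, solve max_x [log w(x) + sigma . x]
# (by brute force here) and average the optima over draws.
num_samples = 200
vals = []
for _ in range(num_samples):
    sigma = rng.choice([-1.0, 1.0], size=n)
    vals.append(np.max(log_weights + states @ sigma))
estimate = np.mean(vals)

print(f"true log Z        = {true_log_z:.3f}")
print(f"perturbed-max avg = {estimate:.3f}")
```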

Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models

An overview of our upcoming AAAI 2018 paper, Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models. We propose a new approach to evaluate, compare, and interpolate between objectives based on maximum likelihood estimation and adversarial training for learning generative models. We find that even though adversarial training generates visually appealing samples, it obtains log-likelihoods that are orders of magnitude worse than maximum likelihood -- even a trivial Gaussian mixture model baseline memorizing the data can obtain better likelihoods (and beautiful samples)! But why?
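For intuition on how the two objectives can be combined in a single model, here is a minimal PyTorch sketch in the spirit of Flow-GAN: the generator is an invertible map (a trivial 1-D affine transform), so exact log-likelihood is available through the change of variables formula, and the generator loss mixes an adversarial term with a weighted likelihood term. The architectures, toy data, and the weight `lam` are illustrative choices, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
prior = torch.distributions.Normal(0.0, 1.0)

# Invertible "flow" generator: x = a * z + b, with a > 0 via exp(log_a).
log_a = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

disc = nn.Sequential(nn.Linear(1, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))

g_opt = torch.optim.Adam([log_a, b], lr=1e-2)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 0.1  # weight on the maximum-likelihood term (illustrative)

def log_likelihood(x):
    # Change of variables: z = (x - b) / a, so log p(x) = log p(z) - log a.
    z = (x - b) * torch.exp(-log_a)
    return prior.log_prob(z) - log_a

for step in range(2000):
    real = 2.0 + 0.5 * torch.randn(128, 1)  # toy data: N(2, 0.5^2)
    z = torch.randn(128, 1)
    fake = torch.exp(log_a) * z + b         # generator samples

    # Discriminator update: real vs. generated samples.
    d_loss = bce(disc(real), torch.ones(128, 1)) + \
             bce(disc(fake.detach()), torch.zeros(128, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: adversarial loss minus weighted log-likelihood
    # of the real data under the flow (the hybrid objective).
    g_loss = bce(disc(fake), torch.ones(128, 1)) - lam * log_likelihood(real).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(f"learned a = {torch.exp(log_a).item():.3f}, b = {b.item():.3f}")
```

Because the generator is invertible, the same model can be trained with `lam = 0` (pure adversarial), a large `lam` (essentially maximum likelihood), or anything in between, which is what makes the two objectives directly comparable.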

A Tutorial on Information Maximizing Variational Autoencoders (InfoVAE)

This tutorial discusses MMD variational autoencoders, a member of the InfoVAE family. It is an alternative to traditional variational autoencoders that is fast to train, stable, and easy to implement.
Shengjia Zhao
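As a taste of the tutorial's core ingredient, here is a minimal PyTorch sketch of the maximum mean discrepancy (MMD) penalty that MMD variational autoencoders use in place of the usual KL divergence term: it compares samples from the aggregate posterior q(z) against samples from the prior p(z) under a Gaussian kernel. The kernel bandwidth, latent dimension, and stand-in encoder samples are illustrative.

```python
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    # Gaussian kernel on pairwise squared distances between rows.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd(x, y, bandwidth=1.0):
    # A (biased) V-statistic estimate of MMD^2 between the sample sets.
    return (gaussian_kernel(x, x, bandwidth).mean()
            - 2.0 * gaussian_kernel(x, y, bandwidth).mean()
            + gaussian_kernel(y, y, bandwidth).mean())

# Usage: z ~ q(z|x) from the encoder, z_prior ~ N(0, I).
z = torch.randn(256, 8) * 1.5 + 0.5   # stand-in for encoder samples
z_prior = torch.randn(256, 8)
penalty = mmd(z, z_prior)
print(f"MMD penalty: {penalty.item():.4f}")
# The full MMD-VAE loss would be: reconstruction_loss + lambda * penalty.
```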

Learning Hierarchical Features from Generative Models

Current ways of stacking variational autoencoders may not always provide meaningful structured features. In fact, we showed in a recent ICML paper that while existing approaches have shortcomings, a new ladder architecture can often learn disentangled features.
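As a rough sketch of the ladder idea (illustrative, not the paper's exact architecture): rather than stacking one VAE on top of another, latent codes are injected at different depths of a single generative network, so that each code can specialize to a different level of abstraction.

```python
import torch
import torch.nn as nn

class LadderGenerator(nn.Module):
    # Two-rung ladder: z2 enters at the top of the network (high-level
    # structure), z1 enters lower down (finer details). Layer sizes and
    # the two-rung depth are illustrative choices.
    def __init__(self, z1_dim=4, z2_dim=4, x_dim=784):
        super().__init__()
        self.top = nn.Sequential(nn.Linear(z2_dim, 128), nn.ReLU())
        self.bottom = nn.Sequential(nn.Linear(128 + z1_dim, 256), nn.ReLU(),
                                    nn.Linear(256, x_dim))

    def forward(self, z1, z2):
        h = self.top(z2)                                 # high-level features from z2
        return self.bottom(torch.cat([h, z1], dim=-1))   # details filled in by z1

gen = LadderGenerator()
x = gen(torch.randn(16, 4), torch.randn(16, 4))
print(x.shape)  # torch.Size([16, 784])
```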