Welcome to a seminar in the AI Talks at Chalmers series. The speaker is Arthur Gretton, Professor at the Gatsby Computational Neuroscience Unit and Director of the Centre for Computational Statistics and Machine Learning (CSML) at UCL.
GANs with integral probability metrics: some results and conjectures
This seminar will explore issues of critic design for generative adversarial networks. The talk will focus on integral probability metric (IPM) losses, specifically the Wasserstein loss as implemented in WGAN-GP, and the MMD GAN. We will begin with an introduction to IPM losses, their relation to moment matching in the case of the Maximum Mean Discrepancy (MMD), and how IPMs relate to f-divergences (such as the KL and Jensen-Shannon divergences, as used in the original GAN). Next, we will look at GAN design using these IPM losses, and compare them with the f-divergence losses. We'll end with some conjectures on the results that would be needed to establish a "theory of GANs": in particular, that a problem-specific critic is essential, and that the critic needs to be deliberately weakened to ensure good GAN performance.
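To make the MMD mentioned above concrete, here is a minimal sketch (not from the talk) of the standard unbiased estimator of the squared MMD with a Gaussian kernel, illustrating the moment-matching idea: the statistic is near zero when two samples come from the same distribution and grows as the distributions separate. The kernel bandwidth `sigma` and the sample sizes are illustrative choices, not values from the seminar.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimate of squared MMD between samples x and y:
    # mean within-sample kernel values (diagonal excluded)
    # minus twice the mean cross-sample kernel value.
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
# Same distribution: estimate should be close to zero.
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
# Shifted distribution: estimate should be clearly positive.
diff = mmd2_unbiased(rng.normal(size=(200, 2)),
                     rng.normal(3.0, 1.0, size=(200, 2)))
print(same, diff)
```

In a GAN context, this estimator (with a learned or adapted kernel acting as the critic) is what the MMD GAN minimizes between real and generated samples.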