Welcome to the Chalmers AI Research Center (CHAIR) and to an AI Talks seminar with Arthur Gretton: Critics for generative adversarial networks: results and conjectures
Generative adversarial networks (GANs) use neural networks as generative models, creating realistic images that mimic real-life reference samples (for instance, images of faces, bedrooms, and more). These networks require an adaptive critic function during training to teach the generator network how to improve its performance. To achieve this, the critic needs to measure how close the generated samples are to the true samples, and to provide a useful gradient signal to the generator network.
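To make the generator-critic interaction concrete, here is a minimal, illustrative training step (PyTorch assumed; the small Generator and Critic networks and the hyperparameters are hypothetical, not taken from the talk). The critic learns to score real versus generated samples, and its score provides the gradient signal the generator follows.

```python
import torch
import torch.nn as nn

# Hypothetical toy networks on 2-D data; real GANs use much larger models.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    noise = torch.randn(real_batch.size(0), 16)
    fake_batch = generator(noise)
    ones = torch.ones(real_batch.size(0), 1)
    zeros = torch.zeros(real_batch.size(0), 1)

    # Critic update: learn to separate real from generated samples.
    c_loss = bce(critic(real_batch), ones) + bce(critic(fake_batch.detach()), zeros)
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()

    # Generator update: the critic's score supplies the gradient signal.
    g_loss = bce(critic(fake_batch), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```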
I will explore the design of critic functions for GANs, including f-divergences as used in the original GAN design, and integral probability metrics (such as the Maximum Mean Discrepancy) as used in later GANs. I will provide some observations and conjectures on critic design: in particular, a problem-specific critic seems to be helpful, and the critic needs to be deliberately weakened to ensure good GAN performance.
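As a rough sketch of the integral probability metric critics mentioned above, the Maximum Mean Discrepancy between real and generated samples can be estimated from kernel evaluations. The snippet below (PyTorch assumed; the Gaussian kernel bandwidth and function names are illustrative choices, not the talk's) shows a biased MMD² estimate that a generator could be trained to minimise.

```python
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2(real, fake, bandwidth=1.0):
    # Biased estimator: E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    k_xx = gaussian_kernel(real, real, bandwidth).mean()
    k_yy = gaussian_kernel(fake, fake, bandwidth).mean()
    k_xy = gaussian_kernel(real, fake, bandwidth).mean()
    return k_xx + k_yy - 2 * k_xy

# Usage sketch: loss = mmd2(real_batch, generator(noise)); loss.backward()
```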
Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit, and director of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics, and at the Machine Learning Department, Carnegie Mellon University.
Arthur's recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high/infinite dimensional exponential family models), nonparametric hypothesis testing, and kernel methods.
About Chalmers AI Talks