
Deep Advances in Generative Modeling

In recent years, deep learning approaches have come to dominate discriminative problems in many sub-areas of machine learning. Alongside this, they have also powered exciting improvements in generative and conditional modeling of richly structured data such as text, images, and audio. This talk, led by indico's Head of Research, Alec Radford, will serve as an introduction to several emerging application areas of generative modeling and provide a survey of recent techniques in the field.

Boston ML Forum 2016


Deep Advances in Generative Modeling

  1. Deep Advances in Generative Modeling. Alec Radford (@AlecRad), March 5th 2016
  2. Generative modeling. Modeling complex, high-dimensional data is an open problem; deep generative models are currently making progress here. Areas of study/application: unsupervised/representation/manifold learning; generative counterparts of discriminative models; density/likelihood estimation; conditional generation
  3. Examples of Generative Modeling
  4. CNNs and RNNs
  5. Useful Generative Model - Skip-Thought Vectors [1506.06726]
  6. Two promising approaches: Variational Autoencoders (VAE), Kingma and Welling [1312.6114], and Generative Adversarial Networks (GAN), Goodfellow et al. [1406.2661]. [Diagrams: VAE as X → encoder → z → decoder → x̂; GAN as z → generator → x̂ → discriminator, which also sees real X]
  7. Variational Autoencoder, from Kingma and Welling [1312.6114]: theoretically elegant autoencoder; straightforward to implement; imposes a prior on the code space (regularization, allows for sampling); optimizes a variational lower bound on the likelihood. [Diagram: X → encoder → z → decoder → x̂]
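
A minimal sketch of the slide's bullets in code. This is an illustrative PyTorch example, not the talk's model: the layer sizes are arbitrary, the encoder outputs a mean and log-variance, the reparameterization trick keeps sampling differentiable, and the loss is the negative variational lower bound (reconstruction term plus KL divergence to a unit-Gaussian prior).

    # Illustrative VAE sketch (arbitrary sizes, not the talk's exact model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, x_dim=784, h_dim=256, z_dim=32):
            super().__init__()
            self.enc = nn.Linear(x_dim, h_dim)
            self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
            self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
            return self.dec(z), mu, logvar

    def vae_loss(x_logits, x, mu, logvar):
        # Negative variational lower bound: reconstruction + KL(q(z|x) || N(0, I)).
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl
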
  8. Generative Adversarial Networks. [Diagram: noise z → generator → x̂ → discriminator, which also sees real data X]
  9. Generative Adversarial Networks. [Diagram repeated: the discriminator scores both generated x̂ and real X]
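
As a companion to the two diagrams above, a hedged sketch of one GAN training step in PyTorch. The names netG, netD, optG, optD are placeholders; it assumes the discriminator returns one real/fake logit per example, shaped (batch, 1), and uses the heuristic non-saturating generator loss rather than the minimax cost (a point slide 23 returns to).

    # One adversarial training step (illustrative; netG maps noise to samples,
    # netD outputs a single real/fake logit per example).
    import torch
    import torch.nn.functional as F

    def gan_step(netG, netD, optG, optD, x_real, z_dim=100):
        bs = x_real.size(0)
        ones, zeros = torch.ones(bs, 1), torch.zeros(bs, 1)

        # Discriminator: push D(x_real) toward "real" and D(G(z)) toward "fake".
        x_fake = netG(torch.randn(bs, z_dim)).detach()
        loss_d = (F.binary_cross_entropy_with_logits(netD(x_real), ones) +
                  F.binary_cross_entropy_with_logits(netD(x_fake), zeros))
        optD.zero_grad(); loss_d.backward(); optD.step()

        # Generator: heuristic non-saturating loss, i.e. make D call G(z) "real".
        loss_g = F.binary_cross_entropy_with_logits(netD(netG(torch.randn(bs, z_dim))), ones)
        optG.zero_grad(); loss_g.backward(); optG.step()
        return loss_d.item(), loss_g.item()
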
  10. VAE Extensions: Semi-Supervised Learning, from Kingma et al. [1406.5298]; DRAW, from Gregor et al. [1502.04623]
  11. GAN Extensions - LAPGAN
  12. Deep convolutional GANs (DCGAN) [1511.06434]. Alec Radford (indico), Luke Metz (indico), Soumith Chintala (FAIR). tl;dr: add more layers
  13. Deep convolutional GANs (DCGAN) [1511.06434]
  14. DCGAN architecture tricks: no fully connected layers; Batch Normalization, Ioffe and Szegedy [1502.03167]; leaky rectifier in the discriminator; use Adam, Kingma and Ba [1412.6980]; tweak the Adam hyperparameters a bit (lr=0.0002, b1=0.5)
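
The tricks on slide 14 translate fairly directly into code. Below is an illustrative PyTorch sketch of a DCGAN-style 64x64 generator and discriminator plus the tweaked Adam settings; the filter counts and exact layer stacks are my own simplification, not the released code.

    # Illustrative DCGAN-style networks: all-convolutional (no fully connected layers),
    # batchnorm, leaky rectifier in the discriminator, Adam with lr=0.0002, beta1=0.5.
    import torch.nn as nn
    import torch.optim as optim

    def dcgan_generator(z_dim=100, ngf=64):
        return nn.Sequential(                      # input: z reshaped to (N, z_dim, 1, 1)
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1), nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1), nn.Tanh())        # output: 3 x 64 x 64

    def dcgan_discriminator(ndf=64):
        return nn.Sequential(                      # input: 3 x 64 x 64 image
            nn.Conv2d(3, ndf, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0), nn.Flatten())          # one real/fake logit per image

    netG, netD = dcgan_generator(), dcgan_discriminator()
    optG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
    optD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
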
  15. Really really really ridiculously good looking samples on constrained image distributions :(
  16. Interpolation suggests non-overfitting behavior
  17. Vector arithmetic properties of generator
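
The two checks above are easy to reproduce once a generator is trained. A hedged sketch, assuming a generator netG that takes flat z vectors (a convolutional generator like the one sketched earlier would need z reshaped to (N, z_dim, 1, 1)); the "average several z vectors per concept, then add and subtract" recipe follows the DCGAN paper's face-arithmetic examples.

    # Latent-space probes: interpolation between two codes, and vector arithmetic
    # on averaged codes (e.g. smiling woman - neutral woman + neutral man).
    import torch

    def interpolate(netG, z0, z1, steps=8):
        # Smooth image transitions along the line from z0 to z1 (each shape (1, z_dim))
        # suggest the generator is not simply memorizing training images.
        alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
        return netG((1 - alphas) * z0 + alphas * z1)

    def vector_arithmetic(netG, z_a, z_b, z_c):
        # Each argument holds several z vectors for one concept; average first,
        # then combine as mean(a) - mean(b) + mean(c), and decode the result.
        z = z_a.mean(0, keepdim=True) - z_b.mean(0, keepdim=True) + z_c.mean(0, keepdim=True)
        return netG(z)
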
  18. Generator disentangles objects from scene?
  19. Discriminator learns generalizing object detectors. These are responses on validation examples!
  20. Results on standard supervised tasks
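
Behind the last two slides, the recipe in the DCGAN paper is to freeze the trained discriminator, pool its intermediate convolutional activations into a feature vector, and fit a simple linear classifier on top. A hedged sketch, assuming the Sequential-style discriminator from the earlier DCGAN sketch; the pooling size and the choice of linear model are illustrative.

    # Reuse a trained discriminator as a fixed feature extractor for supervised tasks.
    import torch
    import torch.nn.functional as F

    def disc_features(netD, x):
        feats, h = [], x
        for layer in netD:                         # assumes netD is an nn.Sequential
            h = layer(h)
            if isinstance(layer, torch.nn.LeakyReLU):
                # Max-pool each activation map to a small grid and flatten it.
                feats.append(F.adaptive_max_pool2d(h, 4).flatten(1))
        return torch.cat(feats, dim=1)

    # features = disc_features(netD, images)
    # ...then fit a linear model (e.g. an L2-SVM or logistic regression) on these features.
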
  21. Conditional DCGAN
  22. Conditional DCGAN (unpublished). Text conditions for the samples shown: "Sunrise over the ocean", "Beautiful falls and stream", "sahara desert sand dunes", "Tropical rainforest brazil", "Stars of the milkyway at night"
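
The conditioning mechanism for these unpublished samples is not spelled out on the slides, so the sketch below is only one common recipe: embed the condition (a class label, or a sentence vector such as a skip-thought embedding for the captions above) and concatenate it with z before the first layer of a DCGAN-style generator. Everything here, including the 32x32 output size and the name ConditionalGenerator, is illustrative.

    # Hypothetical conditional generator: concatenate a condition embedding with z.
    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        def __init__(self, z_dim=100, c_dim=128, ngf=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(z_dim + c_dim, ngf * 4, 4, 1, 0), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
                nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
                nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1), nn.BatchNorm2d(ngf), nn.ReLU(True),
                nn.ConvTranspose2d(ngf, 3, 4, 2, 1), nn.Tanh())    # 3 x 32 x 32 output

        def forward(self, z, c):
            # c: condition embedding, e.g. a class one-hot or a caption embedding.
            zc = torch.cat([z, c], dim=1).unsqueeze(-1).unsqueeze(-1)
            return self.net(zc)
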
  23. Issues: still not completely stable, especially for deep and higher-res models; unconstrained natural images: even the biggest models underfit; hard to evaluate: no reliable/straightforward metrics; no inference model: limits the kinds of analysis; little work on conv VAE equivalents makes comparison difficult; some funky stuff going on: separate data/sample batchnorm statistics, training with the heuristic cost rather than the GAN-theory cost
  24. Hybridizing VAEs and GANs (best of both worlds?), from Larsen et al. [1512.09300]
  25. Hybridizing VAEs and GANs (best of both worlds?), from Larsen et al. [1512.09300]
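
A rough reading of the Larsen et al. hybrid, sketched as loss terms: the decoder doubles as the GAN generator, the VAE's pixel-wise reconstruction is replaced by a reconstruction in discriminator feature space ("learned similarity"), and a standard GAN loss is kept. The callables enc, dec, disc_feats, and disc_logit are placeholders, and the weighting and training schedule of the terms are omitted.

    # Sketch of the VAE/GAN terms from Larsen et al. [1512.09300] (simplified paraphrase).
    import torch
    import torch.nn.functional as F

    def vaegan_losses(enc, dec, disc_feats, disc_logit, x):
        mu, logvar = enc(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized posterior sample
        x_rec, x_gen = dec(z), dec(torch.randn_like(z))        # reconstruction and prior sample

        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # "Learned similarity": match discriminator features of x and its reconstruction.
        rec = F.mse_loss(disc_feats(x_rec), disc_feats(x))
        # GAN terms: real data vs. reconstructions and prior samples.
        logit_real = disc_logit(x)
        ones, zeros = torch.ones_like(logit_real), torch.zeros_like(logit_real)
        gan_d = (F.binary_cross_entropy_with_logits(logit_real, ones) +
                 F.binary_cross_entropy_with_logits(disc_logit(x_rec), zeros) +
                 F.binary_cross_entropy_with_logits(disc_logit(x_gen), zeros))
        return kl, rec, gan_d
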
  26. Thanks! Questions?
  27. indico.io
