Detailed Syllabus and Lectures
Lecture 12: Self-supervised Learning (slides)
what is self-supervised learning, self-supervised learning in NLP, self-supervised learning in vision
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
- Distributed Representations of Words and Phrases and their Compositionality, Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean, NIPS 2013.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, NAACL 2019.
- Context Encoders: Feature Learning by Inpainting, Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, CVPR 2016.
- Unsupervised Visual Representation Learning by Context Prediction, Carl Doersch, Abhinav Gupta, Alexei A. Efros, ICCV 2015.
- Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction, Richard Zhang, Phillip Isola, Alexei A. Efros, CVPR 2017.
- Unsupervised Representation Learning by Predicting Image Rotations, Spyros Gidaris, Praveer Singh, Nikos Komodakis, ICLR 2018.
- Representation Learning with Contrastive Predictive Coding, Aaron van den Oord, Yazhe Li, Oriol Vinyals, arXiv preprint arXiv:1807.03748, 2018.
- A Simple Framework for Contrastive Learning of Visual Representations, Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton, arXiv preprint arXiv:2002.05709, 2020.
- Revisiting Self-Supervised Visual Representation Learning, Alexander Kolesnikov, Xiaohua Zhai, Lucas Beyer, CVPR 2019.
- Data-Efficient Image Recognition with Contrastive Predictive Coding, Olivier Henaff et al., ICML 2020.
- Momentum Contrast for Unsupervised Visual Representation Learning, Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick, CVPR 2020.
- Improved Baselines with Momentum Contrastive Learning, Xinlei Chen, Haoqi Fan, Ross Girshick, Kaiming He, arXiv preprint arXiv:2003.04297, 2020.
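The contrastive papers above (CPC, SimCLR, MoCo) share an InfoNCE-style objective. As a rough illustration only, here is a minimal NumPy sketch of SimCLR's NT-Xent loss; the embeddings, batch size, and temperature are made up for the demo, and a real implementation would add an encoder and projection head:

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each row's positive is the other view of the same image; the remaining
    2N - 2 embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit vectors -> cosine similarity
    sim = z @ z.T / tau                                # (2N, 2N) scaled similarities
    np.fill_diagonal(sim, -np.inf)                     # a sample is never its own positive

    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # index of each positive

    # Cross-entropy: -log softmax(sim)[i, pos[i]], averaged over the batch.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])

# Toy usage: random "embeddings" standing in for encoder outputs.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(nt_xent_loss(z1, z2))
```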
Lecture 11: Variational Autoencoders (slides)
motivation for variational autoencoders (VAEs), mechanics of VAEs, separability of VAEs, training of VAEs, evaluating representations, vector quantized variational autoencoders (VQ-VAEs)
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
- [Blog post] Intuitively Understanding Variational Autoencoders, Irhum Shafkat.
- [Blog post] A Beginner's Guide to Variational Methods: Mean-Field Approximation, Eric Jang.
- [Blog post] Tutorial - What is a variational autoencoder?, Jaan Altosaar.
- beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner, ICLR 2017.
- Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem, ICML 2019.
- Generating Diverse High-Fidelity Images with VQ-VAE-2, Ali Razavi, Aaron van den Oord, Oriol Vinyals, NeurIPS 2019.
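The mechanics of VAEs center on one computational trick: sampling z = mu + sigma * eps so that gradients flow through the sampler. A minimal NumPy sketch of the reparameterized negative ELBO, with toy numbers standing in for a learned encoder and decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend encoder outputs for one datapoint: mean and log-variance of q(z|x).
mu, logvar = np.array([0.5, -1.0]), np.array([-0.2, 0.3])

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
# The randomness is isolated in eps, so z is differentiable w.r.t. mu and logvar.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# KL(q(z|x) || N(0, I)) has a closed form for diagonal Gaussians.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Reconstruction term: a stand-in squared error between x and a fake decoder
# output; a real model would use the decoder's log-likelihood of x given z.
x, x_hat = np.array([1.0, 0.0, 1.0]), np.array([0.9, 0.1, 0.8])
recon = np.sum((x - x_hat) ** 2)

neg_elbo = recon + kl   # the loss a VAE minimizes (up to constants)
print(z, kl, neg_elbo)
```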
Lecture 10: Generative Adversarial Networks, Flow-Based Generative Models (slides)
generative adversarial networks (GANs), conditional GANs, applications of GANs, normalizing flows
Please study the following material in preparation for the class:
Required Reading:
- NIPS 2016 Tutorial: Generative Adversarial Networks, Ian Goodfellow.
- Generative Adversarial Networks: An Overview, Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, Anil A. Bharath.
- How to Train a GAN? Tips and tricks to make GANs work, Soumith Chintala, Emily Denton, Martin Arjovsky, Michael Mathieu.
- [Blog post] Normalizing Flows Tutorial, Part 1: Distributions and Determinants, Eric Jang.
- [Blog post] Normalizing Flows Tutorial, Part 2: Modern Normalizing Flows, Eric Jang.
- [Blog post] Flow-based Deep Generative Models, Lilian Weng.
Suggested Video Material:
Additional Resources:
- [Blog post] The GAN Zoo, Avinash Hindupur.
- [Blog post] GAN Playground, Reiichiro Nakano.
- [Blog post] GANs comparison without cherry-picking, Junbum Cha.
- [Twitter thread] Thread on how to review papers about generic improvements to GANs, Ian Goodfellow.
- Normalizing Flows: An Introduction and Review of Current Methods, Ivan Kobyzev, Simon J.D. Prince, and Marcus A. Brubaker, arXiv preprint, arXiv:1908.09257, 2020.
- Normalizing Flows for Probabilistic Modeling and Inference, George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan, arXiv preprint, arXiv:1912.02762, 2019.
- [Blog post] Glow: Better Reversible Generative Models, OpenAI.
- Density estimation using Real NVP, Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio, ICLR 2017.
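All the flow readings above rest on the change-of-variables rule: if z = f(x) with f invertible and z has a simple density, then log p(x) = log p(z) + log|det ∂f/∂x|. A minimal sketch assuming a single elementwise affine flow (real models such as Real NVP stack many coupling layers):

```python
import numpy as np

def affine_flow_logprob(x, scale, shift):
    """log p(x) under z = scale * x + shift with z ~ N(0, I).

    For an elementwise affine map the Jacobian is diagonal, so
    log|det J| = sum(log|scale|).
    """
    z = scale * x + shift
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi))  # standard normal log-density
    log_det = np.sum(np.log(np.abs(scale)))
    return log_base + log_det

x = np.array([0.3, -1.2])
print(affine_flow_logprob(x, scale=np.array([2.0, 0.5]), shift=np.array([0.1, 0.0])))
```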
Lecture 9: Autoencoders and Autoregressive Models (slides)
unsupervised representation learning, sparse coding, autoencoders, autoregressive models
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
- Pixel Recurrent Neural Networks, Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu, ICML 2016.
- Conditional Image Generation with PixelCNN Decoders, Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu, NIPS 2016.
- Unsupervised Feature Learning and Deep Learning, Andrew Ng.
- [Blog post] Unsupervised Sentiment Neuron, Alec Radford, Ilya Sutskever, Rafal Jozefowicz, Jack Clark and Greg Brockman.
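PixelRNN/PixelCNN factorize an image likelihood autoregressively, p(x) = ∏ᵢ p(xᵢ | x₁, …, xᵢ₋₁), and PixelCNN enforces the ordering with a masked convolution kernel. A small sketch of building such a mask for a raster-scan ordering (the function name and layout are illustrative, not the papers' code):

```python
import numpy as np

def causal_conv_mask(k, mask_type="A"):
    """k x k mask for a PixelCNN-style masked convolution.

    In raster-scan order, pixel (i, j) may only depend on pixels above it and
    pixels to its left in the same row. Type 'A' (first layer) also masks the
    center pixel; type 'B' (later layers) keeps it.
    """
    mask = np.zeros((k, k))
    c = k // 2
    mask[:c, :] = 1.0          # all rows above the center
    mask[c, :c] = 1.0          # same row, strictly to the left
    if mask_type == "B":
        mask[c, c] = 1.0       # later layers may see the current pixel's features
    return mask

print(causal_conv_mask(5, "A"))
# Applying it: masked_kernel = kernel * causal_conv_mask(kernel.shape[0], "B")
```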
Lecture 8: Attention and Memory (slides)
content-based attention, location-based attention, soft vs. hard attention, self-attention, attention for image captioning, transformer networks
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
- Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, ICLR 2015.
- Sequence Modeling with CTC, Awni Hannun, Distill, 2017.
- Recurrent Models of Visual Attention, Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu, NIPS 2014.
- DRAW: A Recurrent Neural Network for Image Generation, Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra, ICML 2015.
- Attention Is All You Need, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, NIPS 2017.
- [Blog post] What is DRAW (Deep Recurrent Attentive Writer)?, Kevin Frans.
- [Blog post] The Transformer Family, Lilian Weng.
- [Blog post] Transformers for Image Recognition at Scale, Neil Houlsby and Dirk Weissenborn.
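The core operation behind soft attention and the Transformer readings above is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d)V. A minimal single-head NumPy sketch with toy shapes and no learned projections:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```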
Lecture 7: Recurrent Neural Networks (slides)
sequence modeling, recurrent neural networks (RNNs), RNN applications, vanilla RNN, training RNNs, long short-term memory (LSTM), LSTM variants, gated recurrent unit (GRU)
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
- Lecture 8 of Efstratios Gavves and Max Welling's course.
Additional Resources:
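As a concrete companion to the LSTM material, here is a hedged single-step NumPy sketch of the standard LSTM cell equations; the stacked-gate weight layout and dimensions are assumptions for the demo, not code from the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step. x: (d_in,), h, c: (d_h,), W: (4*d_h, d_in + d_h), b: (4*d_h,)."""
    d_h = h.shape[0]
    gates = W @ np.concatenate([x, h]) + b
    f = sigmoid(gates[0*d_h:1*d_h])        # forget gate: what to erase from c
    i = sigmoid(gates[1*d_h:2*d_h])        # input gate: what to write
    g = np.tanh(gates[2*d_h:3*d_h])        # candidate cell state
    o = sigmoid(gates[3*d_h:4*d_h])        # output gate: what to expose as h
    c_new = f * c + i * g                  # additive update: the gradient "highway"
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W, b = rng.normal(size=(4*d_h, d_in + d_h)), np.zeros(4*d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):       # unroll over a length-5 toy sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```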
Lecture 6: Understanding and Visualizing Convolutional Neural Networks (slides)
transfer learning, interpretability, visualizing neuron activations, visualizing class activations, pre-images, adversarial examples, adversarial training
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
- [Blog post] Understanding Neural Networks Through Deep Visualization, Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson.
- [Blog post] The Building Blocks of Interpretability, Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye and Alexander Mordvintsev.
- [Blog post] Feature Visualization, Chris Olah, Alexander Mordvintsev and Ludwig Schubert.
- [Blog post] An Overview of Early Vision in InceptionV1, Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, Shan Carter.
- [Blog post] OpenAI Microscope.
- [Blog post] Breaking Linear Classifiers on ImageNet, Andrej Karpathy.
- [Blog post] Attacking machine learning with adversarial examples, OpenAI.
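The adversarial-examples topic has a one-line core, the fast gradient sign method: x_adv = x + ε · sign(∇ₓL). For logistic regression the input gradient is available in closed form, so a tiny sketch fits in a few lines; the weights and input below are made up, not from any of the posts above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0          # a toy "trained" linear classifier
x, y = rng.normal(size=16), 1.0          # an input of class 1

# For logistic loss L = -log p(y|x), the gradient w.r.t. x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a small step that maximally increases the loss per coordinate.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean  p(y=1|x):", sigmoid(w @ x + b))
print("attack p(y=1|x):", sigmoid(w @ x_adv + b))   # confidence drops
```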
Lecture 5: Convolutional Neural Networks (slides)
convolution layer, pooling layer, evolution of depth, design guidelines, residual connections, semantic segmentation networks, object detection networks, backpropagation in CNNs
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
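To make the convolution layer concrete, here is a deliberately naive NumPy sketch of a single-channel "valid" convolution; note that deep learning frameworks actually compute cross-correlation, as below, and use far faster algorithms:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation (what DL frameworks call convolution)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the patch under it.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])  # edge detector
print(conv2d_valid(image, sobel_x).shape)   # (4, 4): no padding, stride 1
```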
Lecture 4: Training Deep Neural Networks (slides)
data preprocessing, weight initialization, normalization, regularization, model ensembles, dropout, optimization methods
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
- Stochastic Gradient Descent Tricks, Leon Bottou.
- Section 3 of Practical Recommendations for Gradient-Based Training of Deep Architectures, Yoshua Bengio.
- Troubleshooting Deep Neural Networks: A Field Guide to Fixing Your Model, Josh Tobin.
- [Blog post] Initializing neural networks, Katanforoosh & Kunin, deeplearning.ai.
- [Blog post] Parameter optimization in neural networks, Katanforoosh et al., deeplearning.ai.
- [Blog post] The Black Magic of Deep Learning - Tips and Tricks for the practitioner, Nikolas Markou.
- [Blog post] An overview of gradient descent optimization algorithms, Sebastian Ruder.
- [Blog post] Why Momentum Really Works, Gabriel Goh.
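Among the optimization methods in this lecture, momentum is simple enough to state in two lines: accumulate a velocity from past gradients and step along it (see Goh's "Why Momentum Really Works" above). A minimal sketch on a toy ill-conditioned quadratic; the hyperparameters are illustrative:

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.1, beta=0.9, steps=100):
    """Heavy-ball momentum: v <- beta * v + grad; w <- w - lr * v."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)   # velocity accumulates consistent gradient directions
        w = w - lr * v
    return w

# Toy ill-conditioned quadratic: f(w) = 0.5 * (10 * w0^2 + w1^2).
grad_fn = lambda w: np.array([10.0, 1.0]) * w
print(sgd_momentum(grad_fn, np.array([1.0, 1.0])))   # approaches the minimum at (0, 0)
```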
Lecture 3: Multi-layer Perceptrons (slides)
feed-forward neural networks, activation functions, chain rule, backpropagation, computational graph, automatic differentiation, distributed word representations
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
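Backpropagation is the chain rule applied layer by layer. A minimal NumPy sketch for a two-layer network with tanh hidden units and squared error, trained on a single example; the architecture and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), 1.0                     # one training example
W1, b1 = rng.normal(size=(4, 3)) * 0.5, np.zeros(4)
W2, b2 = rng.normal(size=4) * 0.5, 0.0

for step in range(200):
    # Forward pass, saving intermediates for the backward pass.
    h = np.tanh(W1 @ x + b1)
    y_hat = W2 @ h + b2
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass: chain rule, from the loss back to each parameter.
    d_yhat = y_hat - y                             # dL/dy_hat
    dW2, db2 = d_yhat * h, d_yhat                  # gradients of the output layer
    d_h = d_yhat * W2                              # backprop through the linear output
    d_pre = d_h * (1.0 - h**2)                     # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = np.outer(d_pre, x), d_pre

    lr = 0.1                                       # plain gradient descent step
    W1, b1, W2, b2 = W1 - lr*dW1, b1 - lr*db1, W2 - lr*dW2, b2 - lr*db2

print(loss)   # near zero after fitting the single example
```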
Lecture 2: Machine Learning Overview (slides)
types of machine learning problems, linear models, loss functions, linear regression, gradient descent, overfitting and generalization, regularization, cross-validation, bias-variance tradeoff, maximum likelihood estimation
Please study the following material in preparation for the class:
Required Reading:
Suggested Video Material:
Additional Resources:
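Linear regression fitted by gradient descent, the running example of this lecture, fits in a few lines. A sketch on synthetic data; the true weights and noise level below are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true + noise.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2))
y = X @ w_true + 0.1 * rng.normal(size=100)

# Gradient descent on mean squared error L(w) = (1/N) * ||X w - y||^2.
w, lr = np.zeros(2), 0.1
for _ in range(500):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)   # dL/dw
    w -= lr * grad

print(w)   # close to w_true; held-out error would reveal any overfitting
```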
Lecture 1: Introduction to Deep Learning (slides)
course information, what is deep learning, a brief history of deep learning, compositionality, end-to-end learning, distributed representations
Please study the following material in preparation for the class:
Required Reading:
Additional Resources:
- The unreasonable effectiveness of deep learning in artificial intelligence, Terrence J. Sejnowski, PNAS, 2020.
- Deep Learning, Yann LeCun, Yoshua Bengio, Geoffrey Hinton. Nature, Vol. 521, 2015.
- Deep Learning in Neural Networks: An Overview, Juergen Schmidhuber. Neural Networks, Vol. 61, pp. 85–117, 2015.
- On the Origin of Deep Learning, Haohan Wang and Bhiksha Raj, arXiv preprint arXiv:1702.07800v4, 2017.