*what is reinforcement learning, markov decision processes, deep q-networks (DQNs), deep policy networks, model based reinforcement learning, alphago*

Please study the following material in preparation for the class:

- Deep Reinforcement Learning: An Overview, Yuxi Li.
- Human-level control through deep reinforcement learning, V. Mnih et al. Nature 518:529-533, 2015.
- A Brief Survey of Deep Reinforcement Learning, Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath.

- David Silver's tutorial on Deep Reinforcement Learning

- World Models: Can agents learn inside of their own dreams?, David Ha and Jürgen Schmidhuber
- [Blog post] Deep Reinforcement Learning: Pong from Pixels, Andrej Karpathy
- [Blog post] Demystifying Deep Reinforcement Learning, Tambet Matiisen.
- [Blog post] Deep Reinforcement Learning Doesn't Work Yet, Alexander Irpan
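Before diving into DQNs, it helps to have the tabular update they approximate in mind. Below is a minimal sketch of tabular Q-learning on a toy five-state chain (the environment and all hyperparameters are invented here purely for illustration); DQNs replace the table with a neural network and add experience replay and a target network.

```python
import random

# Toy deterministic chain MDP (invented for illustration): states 0..4,
# actions 0 = left, 1 = right; reaching state 4 gives reward 1 and ends.
N_STATES, N_ACTIONS = 5, 2

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1
random.seed(0)

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The greedy policy should walk right from every non-terminal state.
policy = [max(range(N_ACTIONS), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The inner update line is exactly the Bellman-backup step that the Mnih et al. paper turns into a regression loss on minibatches of replayed transitions.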

*generative adversarial networks (GANs), conditional GANs, tips and tricks, applications of GANs*

Please study the following material in preparation for the class:

- NIPS 2016 Tutorial: Generative Adversarial Networks, Ian Goodfellow
- Generative Adversarial Networks: An Overview, Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, Anil A Bharath

- [Blog post] How to Train a GAN? Tips and tricks to make GANs work, Soumith Chintala, Emily Denton, Martin Arjovsky and Michael Mathieu.
- [Blog post] New Progress on GAN Theory and Practice, Liping Liu
- [Blog post] The GAN Zoo, Avinash Hindupur
- [Blog post] GAN Playground, Reiichiro Nakano
- [Blog post] GANs comparison without cherry-picking, Junbum Cha
- [Twitter thread] Thread on how to review papers about generic improvements to GANs, Ian Goodfellow
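All of the readings above revolve around the same two-player minimax game; as a quick reference, the value function optimized by the generator $G$ and discriminator $D$ (as presented in Goodfellow's tutorial) is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

In practice, as the tips-and-tricks readings stress, the generator is usually trained to maximize $\log D(G(z))$ rather than minimize $\log(1 - D(G(z)))$, since the latter saturates early in training when the discriminator easily rejects samples.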

*applications of deep generative models, fully-observed models, transformation models, latent variable models, variational auto-encoders*

Please study the following material in preparation for the class:

- Chapter #20 of the Deep Learning text book.

- Foundations of Unsupervised Deep Learning, Ruslan Salakhutdinov

- Pixel Recurrent Neural Networks, Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu. ICML 2016.
- Conditional Image Generation with PixelCNN Decoders, Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu. NIPS 2016.
- Tutorial on Variational Autoencoders, Carl Doersch.
- [Blog post] Tutorial - What is a variational autoencoder?, Jaan Altosaar
- [Blog post] What is DRAW (Deep Recurrent Attentive Writer)?, Kevin Frans
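Two details worth having at your fingertips before the Doersch tutorial are the reparameterization trick and the closed-form Gaussian KL term of the ELBO. A minimal NumPy sketch (latent sizes and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization trick at the core of VAEs: instead of sampling
# z ~ N(mu, sigma^2) directly, sample eps ~ N(0, I) and set
# z = mu + sigma * eps, so gradients can flow through mu and log_var.
def sample_latent(mu, log_var):
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between the encoder's N(mu, sigma^2) and the
# standard normal prior N(0, I), summed over latent dimensions:
#   KL = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)
def kl_to_standard_normal(mu, log_var):
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu = np.array([0.5, -0.3])
log_var = np.array([0.0, 0.0])  # sigma = 1 in both dimensions
z = sample_latent(mu, log_var)
print(z.shape, kl_to_standard_normal(mu, log_var))
```

The negative ELBO a VAE minimizes is this KL term plus the expected reconstruction loss of the decoder evaluated at the sampled `z`.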

*supervised representation learning, unsupervised representation learning, sparse coding, autoencoders, restricted Boltzmann machines, deep belief networks*

Please study the following material in preparation for the class:

- Chapter #13 of the Deep Learning text book.
- Chapter #14 of the Deep Learning text book.
- Chapter #15 of the Deep Learning text book.

- Hugo Larochelle’s video lectures, 5.1-5.8, 6.1-6.7, 7.3, 7.6-7.9, 8.1-8.9

- Unsupervised Feature Learning and Deep Learning, Andrew Ng.
- [Blog post] Unsupervised Sentiment Neuron, Alec Radford, Ilya Sutskever, Rafal Jozefowicz, Jack Clark and Greg Brockman.
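Since autoencoders anchor this lecture, here is a minimal linear autoencoder trained with plain gradient descent on synthetic data that lies on a low-dimensional subspace (dimensions, learning rate, and step count are all illustrative; real models use nonlinearities and better optimizers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in R^4 that really live on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them almost perfectly.
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 4))
X = latent @ mixing

# Linear autoencoder: encoder W_e (4 -> 2), decoder W_d (2 -> 4),
# trained on mean squared reconstruction error.
W_e = 0.1 * rng.standard_normal((4, 2))
W_d = 0.1 * rng.standard_normal((2, 4))
lr = 0.05

def loss(X, W_e, W_d):
    return np.mean((X @ W_e @ W_d - X) ** 2)

initial = loss(X, W_e, W_d)
n = X.size
for _ in range(2000):
    H = X @ W_e          # codes (bottleneck activations)
    R = H @ W_d - X      # reconstruction residual
    grad_Wd = 2.0 * H.T @ R / n
    grad_We = 2.0 * X.T @ (R @ W_d.T) / n
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

final = loss(X, W_e, W_d)
print(initial, final)  # reconstruction error should drop sharply
```

With a linear decoder and squared error this recovers the same subspace as PCA; the readings show what changes once nonlinearities, sparsity penalties, or stochastic units (as in RBMs) enter the picture.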

*attention mechanism for deep learning, attention for image captioning, memory networks, end-to-end memory networks, dynamic memory networks*

Please study the following material in preparation for the class:

- Attention and Augmented Recurrent Neural Networks, Chris Olah and Shan Carter

- Chris Dyer's Oxford Deep NLP course Lecture 8
- Sumit Chopra's tutorial on Reasoning, Attention and Memory, Deep Learning Summer School, Montreal 2016

- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, Y. Bengio. ICML 2015
- Memory Networks, Jason Weston, Sumit Chopra, Antoine Bordes. ICLR 2016
- End-to-end Memory Networks, S. Sukhbaatar, A. Szlam, J. Weston, R. Fergus. NIPS 2015
- Ask Me Anything: Dynamic Memory Networks for Natural Language Processing, Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, Richard Socher. ICML 2016

*sequence modeling, recurrent neural networks (RNNs), RNN applications, vanilla RNN, training RNNs, long short-term memory (LSTM), LSTM variants, gated recurrent unit (GRU)*

Please study the following material in preparation for the class:

- Chapter #10 of the Deep Learning text book.

- Efstratios Gavves and Max Welling's Lecture 8

- [Blog post] Understanding LSTM Networks, Chris Olah.
- [Blog post] The Unreasonable Effectiveness of Recurrent Neural Networks, Andrej Karpathy.
- Learning Long-Term Dependencies with Gradient Descent is Difficult, Yoshua Bengio, Patrice Simard, and Paolo Frasconi.
- Long Short-Term Memory, Sepp Hochreiter and Jürgen Schmidhuber.

*transfer learning, interpretability, visualizing neuron activations, visualizing class activations, pre-images, adversarial examples, adversarial training*

Please study the following material in preparation for the class:

- Visualizing and Understanding Convolutional Networks, Matthew D. Zeiler and Rob Fergus, ECCV 2014.
- Intriguing properties of neural networks, Christian Szegedy et al., arXiv preprint arXiv:1312.6199v4.

- Andrej Karpathy's Stanford CS231n Lecture 9

- [Blog post] Understanding Neural Networks Through Deep Visualization, Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson.
- [Blog post] The Building Blocks of Interpretability, Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye and Alexander Mordvintsev.
- [Blog post] Feature Visualization, Chris Olah, Alexander Mordvintsev and Ludwig Schubert.
- [Blog post] Breaking Linear Classifiers on ImageNet, Andrej Karpathy
- [Blog post] Attacking machine learning with adversarial examples, OpenAI.

*convolution layer, pooling layer, evolution of depth, design guidelines, residual connections, semantic segmentation networks, object detection networks, backpropagation in CNNs*

Please study the following material in preparation for the class:

- Chapter #9 of the Deep Learning text book.

- Andrej Karpathy's Stanford CS231n Lecture 7
- Justin Johnson's Stanford CS231n Lecture 8
- Kaiming He's tutorial on Deep Residual Networks

- Andrej Karpathy's CS231n notes on Convolutional Networks.
- Hiroshi Kuwajima’s Memo on Backpropagation in Convolutional Neural Networks.
- A guide to convolution arithmetic for deep learning, Vincent Dumoulin and Francesco Visin.
- Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review, Waseem Rawat and Zenghui Wang.
- [Blog post] Understanding Convolutions, Christopher Olah.
- [Blog post] Deconvolution and Checkerboard Artifacts, Augustus Odena, Vincent Dumoulin, and Chris Olah.
- [Blog post] Deep Learning for Object Detection: A Comprehensive Review, Joyce Xu.
- [Blog post] A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN, Dhruv Parthasarathy

*data preprocessing, weight initialization, normalization, regularization, model ensembles, dropout, optimization methods*

Please study the following material in preparation for the class:

- Chapter #7 and Chapter #8 of the Deep Learning text book.

- Efstratios Gavves' Lecture 3.

- Stochastic Gradient Descent Tricks, Leon Bottou
- Section 3 of Practical Recommendations for Gradient-Based Training of Deep Architectures, Yoshua Bengio.
- [Blog post] The Black Magic of Deep Learning - Tips and Tricks for the practitioner, Nikolas Markou.
- [Blog post] An overview of gradient descent optimization algorithms, Sebastian Ruder.
- [Blog post] Why Momentum Really Works, Gabriel Goh
- [Blog post] Mathematics Behind Neural Network Weights Initialization [Part One] [Part Two] [Part Three], Jefkine Kafunah.

*feed-forward neural networks, activation functions, chain rule, backpropagation, computational graph, automatic differentiation, distributed word representations*

Please study the following material in preparation for the class:

- Chapter 6 of the Deep Learning text book.
- Chapter 16 of Jurafsky and Martin's Speech and Language Processing book (3rd Edition draft)

- Hugo Larochelle’s video lectures, 1.1 to 1.6, 2.1 to 2.7
- Hinton's Coursera class on Neural Networks, Lecture 1 to 3.

- [Blog post] Neural Networks, Manifolds, and Topology, Christopher Olah.
- [Blog post] Calculus on Computational Graphs: Backpropagation, Christopher Olah.

*types of machine learning problems, linear models, loss functions, linear regression, gradient descent, overfitting and generalization, regularization, cross-validation, bias-variance tradeoff, maximum likelihood estimation*

Please study the following material in preparation for the class:

- Chapter 5 of the Deep Learning text book.

- Machine Learning, Doina Precup (Deep Learning Summer School, Montreal 2016)

- A few useful things to know about machine learning, P. Domingos. Communications of the ACM, 55 (10), 78-87, 2012.

*course information, what is deep learning, a brief history of deep learning, compositionality, end-to-end learning, distributed representations*

Please study the following material in preparation for the class:

- Chapter 1 of the Deep Learning text book.

- [Blog post] AI Winter. How Canadians contributed to end it?, Pavan Mirla.
- The Bandwagon, Claude E. Shannon. IRE Transactions on Information Theory, Vol. 2, Issue 3, 1956
- Deep Learning, Yann LeCun, Yoshua Bengio, Geoffrey Hinton. Nature, Vol. 521, 2015.
- Deep Learning in Neural Networks: An Overview, Juergen Schmidhuber. Neural Networks, Vol. 61, pp. 85–117, 2015.
- On the Origin of Deep Learning, Haohan Wang and Bhiksha Raj, arXiv preprint arXiv:1702.07800v4, 2017