Welcome to S&DS 567 (CBB 567, MBB 567)


Course Description

This course provides an introduction to recent developments in deep learning, covering topics ranging from basic backpropagation and optimization to the latest developments in deep generative models and network robustness. Applications in natural language processing and computer vision are used as running examples. Several case studies in biomedical applications are covered in detail. No prior knowledge of natural language processing, computer vision, or biology is assumed.

Prerequisite: S&DS 565 or permission of the instructor. Enrollment limited.

Time and Location

Lectures: Monday 9:00am - 11:15am, WTS A30 (Watson Center)
Discussion: Friday 9:00am - 10:00am, DL 120 (Dunham Laboratory)

Grading:

3 written and programming assignments (15% each)
required readings (10%)
1 final project with a video presentation (45%): a project paper, preferably in the NeurIPS conference paper submission format, and a video presentation of less than 5 minutes by the team

Lectures:

Lec 1: Introduction to Neural Networks, Backpropagation, and Deep Learning: introduction to basic concepts of supervised learning, unsupervised learning, reinforcement learning, neural networks, backpropagation, and deep learning
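
To give a flavor of the material in Lec 1, here is a minimal sketch of a forward and backward pass for a one-hidden-layer network trained with plain gradient descent. It is purely illustrative and assumes NumPy, which the syllabus does not prescribe.

```python
import numpy as np

# Toy data: learn y = 2x on scalar inputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 1))
y = 2.0 * x

# One hidden layer with a tanh activation.
W1 = rng.normal(scale=0.5, size=(1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.1

for step in range(200):
    # Forward pass.
    h = np.tanh(x @ W1)                      # (64, 8)
    y_hat = h @ W2                           # (64, 1)
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule layer by layer.
    d_yhat = 2.0 * (y_hat - y) / len(x)      # dL/dy_hat
    dW2 = h.T @ d_yhat                       # dL/dW2
    d_h = d_yhat @ W2.T                      # dL/dh
    dW1 = x.T @ (d_h * (1.0 - h ** 2))       # tanh'(z) = 1 - tanh(z)^2

    # Gradient descent update.
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")
```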

Lec 2: Deep Neural Networks for Supervised Learning: Activation functions, deep convolutional neural networks, deep network architectures for image classification (AlexNet, VGG, ResNet), data augmentation with mixup, training with stochastic gradient descent, network visualization, and style transfer
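
The training topics of Lec 2 fit in a few lines of code. The sketch below shows one stochastic gradient descent step on a mixup-augmented batch; it is illustrative only and assumes PyTorch (not prescribed by the syllabus), with a toy linear model standing in for the architectures covered in class.

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    """Mixup: train on convex combinations of pairs of examples."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam

# Toy stand-ins for a real model and a batch of labeled images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

# One SGD step: the loss is the same convex combination as the inputs.
x_mix, y_a, y_b, lam = mixup_batch(x, y)
logits = model(x_mix)
loss = lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)

opt.zero_grad()
loss.backward()
opt.step()
```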

Lec 3: Optimization, Regularization, and Robustness of Deep Neural Networks: weight initialization, momentum, weight decay, RMSProp, Adam, Hessian-free optimization, dropout, robust optimization, and attack/defense methods
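
To see how the optimizers in Lec 3 relate to one another, here is a hedged NumPy sketch of a single Adam step, which combines a momentum estimate of the gradient with RMSProp-style scaling. The quadratic objective is a toy example, not course material.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum on the gradient plus RMSProp-style scaling."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (RMSProp) estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta||^2 as a toy example.
theta = np.array([3.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)  # approaches [0, 0]
```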

Lec 4: Recurrent Neural Networks (LSTM & GRU): RNN language model, sequence classification
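
A minimal sketch of an LSTM language model trained for next-token prediction, assuming PyTorch and a toy vocabulary (illustrative only):

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Next-token prediction with an LSTM over a toy vocabulary."""
    def __init__(self, vocab_size=50, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))   # (batch, seq, hidden)
        return self.out(h)                     # next-token logits at each position

model = TinyLM()
tokens = torch.randint(0, 50, (8, 20))         # stand-in for tokenized text
logits = model(tokens[:, :-1])                 # predict token t+1 from tokens <= t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 50), tokens[:, 1:].reshape(-1))
loss.backward()
```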

Lec 5: Deep Autoencoder: applications in dimensionality reduction, data visualization, data denoising, anomaly detection, and data compression
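
A minimal autoencoder sketch, assuming PyTorch and flattened 784-dimensional inputs (illustrative only; the low-dimensional code is what gets reused for visualization, denoising, or compression):

```python
import torch
import torch.nn as nn

# Encoder compresses 784-dim inputs to a 2-dim code; decoder reconstructs them.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(64, 784)                  # stand-in for flattened images
x_hat = decoder(encoder(x))
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction error

opt.zero_grad()
loss.backward()
opt.step()

codes = encoder(x).detach()              # 2-D codes usable for visualization
```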

Lec 6: Deep Encoder-Decoder Networks: introduction to neural networks in NLP, word embedding, encoder-decoder models for machine translation, image captioning, and video captioning

Lec 7: Attention Mechanisms and Applications: introduction to attention and its applications in state-of-the-art deep learning models for neural machine translation, question answering, and image captioning; self-attention (Transformer, BERT)
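
The core formula behind Lec 7 is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A minimal self-attention sketch, assuming PyTorch (illustrative only):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # query-key similarities
    weights = F.softmax(scores, dim=-1)             # each row sums to 1
    return weights @ v                              # weighted average of values

# Self-attention: queries, keys, and values all come from the same sequence.
x = torch.randn(2, 10, 64)                 # (batch, sequence length, model dim)
Wq, Wk, Wv = (torch.nn.Linear(64, 64) for _ in range(3))
out = scaled_dot_product_attention(Wq(x), Wk(x), Wv(x))
print(out.shape)                           # torch.Size([2, 10, 64])
```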

Lec 8: Deep Generative Models: Variational Autoencoder and Deep Autoregressive Models
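
A minimal VAE training-step sketch showing the reparameterization trick and the negative ELBO (reconstruction plus KL) loss, assuming PyTorch and flattened inputs (illustrative only):

```python
import torch
import torch.nn as nn

# Encoder outputs mean and log-variance of q(z|x); decoder maps z back to x.
enc = nn.Linear(784, 2 * 20)                     # 20-dim latent: [mu, log_var]
dec = nn.Linear(20, 784)

x = torch.rand(32, 784)                          # stand-in for flattened images
mu, log_var = enc(x).chunk(2, dim=-1)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
x_hat = torch.sigmoid(dec(z))

# Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
kl = -0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp())
loss = (recon + kl) / x.size(0)
loss.backward()
```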

Lec 9: Deep Generative Models: Generative Adversarial Networks
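
A minimal GAN training-step sketch with alternating discriminator and generator updates, assuming PyTorch; the generator, discriminator, and data below are toy stand-ins, not course material:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784) * 2 - 1       # stand-in for real images scaled to [-1, 1]
fake = G(torch.randn(32, 16))            # samples from the generator

# Discriminator step: real -> 1, fake -> 0 (fake is detached so G is untouched).
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```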

Lec 10: Deep Reinforcement Learning: Deep Q-Learning and Policy Gradient Methods
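
A minimal deep Q-learning sketch showing the TD target and the regression of Q(s, a) toward it, assuming PyTorch; the transitions below are random stand-ins for samples from a replay buffer:

```python
import torch
import torch.nn as nn

# Q-network maps a state to one value per action; here: 4-dim state, 2 actions.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# A batch of (s, a, r, s', done) transitions (random stand-ins).
s = torch.randn(32, 4)
a = torch.randint(0, 2, (32, 1))
r = torch.randn(32)
s_next = torch.randn(32, 4)
done = torch.zeros(32)

# TD target: r + gamma * max_a' Q_target(s', a'), cut off at terminal states.
with torch.no_grad():
    target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values

# Regress Q(s, a) toward the TD target.
q_sa = q_net(s).gather(1, a).squeeze(1)
loss = nn.functional.mse_loss(q_sa, target)
opt.zero_grad()
loss.backward()
opt.step()
```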

Lec 11: Biomedical Application Case Study I: Deep Learning for Vaccine Design and Drug Discovery

Lec 12: Biomedical Application Case Study II

Lec 13: Biomedical Application Case Study III and Ethics in AI