CS 462: Introduction to Deep Learning (Spring 2022)
Course Information
Instructor: | Hao Wang |
Email: | hogue.wang@rutgers.edu |
Office: | CBIM 008 |
TA: | Xi Chen |
TA Email: | xi.chen15@rutgers.edu |
TA Office: | CBIM 5 |
Time: | Monday and Wednesday, 3:50-5:10 pm |
Location: | BE 252 |
Recitation Time: | Tuesday, 10:20-11:15 am |
Recitation Location: | BE 253 |
Office Hours: | Thursday, 3:00-4:00 pm or by appointment |
TA Office Hours: | Tuesday, 1:00-2:00 pm or by appointment |
Office Hour Link: | Please use this Zoom link. |
Mask Requirement in Class
In order to protect the health and well-being of all members of the University community, masks must be worn by all persons on campus when in the presence of others (within six feet) and in buildings in non-private enclosed settings (e.g., common workspaces, workstations, meeting rooms, classrooms, etc.). Masks must be worn during class meetings; any student not wearing a mask will be asked to leave.
Masks should conform to CDC guidelines and should completely cover the nose and mouth:
https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/about-face-coverings.html
Each day before you arrive on campus or leave your residence hall, you must complete the brief survey on the My Campus Pass symptom checker self-screening app.
Announcements
Course Description
A comprehensive artificial intelligence system needs to not only perceive the environment with different “senses” (e.g., seeing and hearing) but also infer the world’s conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks, such as visual object recognition and speech recognition, using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian deep learning (BDL) has emerged as a unified probabilistic framework to tightly integrate deep learning and Bayesian models. In this general framework, the perception of text or images using deep learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process is able to enhance the perception of text or images.
In this course, we will study the state-of-the-art methodology and theory in this emerging area, as well as applications including recommender systems, computer vision, natural language processing, graph learning, forecasting, healthcare, domain adaptation, speech recognition, etc.
Prerequisites
Expected Work
For attendance: (1) One absence is allowed and will not affect your score. Starting from the second absence, a reasonable explanation with supporting evidence is required to avoid losing attendance points. (2) Each absence without a reasonable explanation will cost 3% of the overall score.
For homework submission, each student has at most 2 late days in the whole semester without penalty. After your late days are used up, each additional late day costs 2% of the overall score.
Infrastructure Requirements
Computing Resources
Textbooks and Materials
There is no required textbook for this course. However, the following books can be useful as references on relevant topics. You may also find the tutorial Deep Learning with PyTorch: A 60 Minute Blitz helpful.
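To give a flavor of what the PyTorch tutorial covers, below is a minimal sketch (not part of the official course materials) of training a tiny multi-layer perceptron on made-up random data; the network sizes and hyperparameters are illustrative assumptions, not course requirements.

```python
# Minimal PyTorch sketch: train a tiny MLP on random toy data.
# Assumes PyTorch is installed as described in the 60 Minute Blitz tutorial.
import torch
import torch.nn as nn

# Toy data: 64 samples, 10 features, 3 classes (shapes chosen arbitrarily).
x = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass and loss computation
    loss.backward()                # backpropagation
    optimizer.step()               # gradient descent update
    print(epoch, loss.item())
```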
For the topic of reinforcement learning, some useful materials are:
Tips and More Details on Final Projects
Below are some tips and details.
Team Forming: Students can form a team of at most three members.
Paper Reading (Optional): Final projects can be based on published papers or on something else you are interested in. If you would like to base your project on a published paper, you could: read the paper multiple times to really understand its content in depth; find any available resources (e.g., online talks, animations, demos) to help understand the paper; read the key references recursively to gain better background knowledge; and discuss the paper with your fellow students and friends.
Tips for Presentation: Here are some tips that can make the project presentations smoother and more effective:
Tentative Schedule
Note that this is a tentative syllabus to give you an idea of what topics this course will cover. This syllabus is subject to change as the course progresses.
| Week | Date | Topic | Assignment |
|---|---|---|---|
| Machine Learning Basics | | | |
| 1 | Jan 19 | Course Introduction and Machine Learning Basics (1) | |
| 2 | Jan 24 | Machine Learning Basics (2) | |
| 2 | Jan 26 | Machine Learning Basics (3) | HW1 Release |
| 3 | Jan 31 | Machine Learning Basics (4) and Linear Models (1) | |
| 3 | Feb 2 | Linear Models (2) | |
| Deep Learning Architectures | | | |
| 4 | Feb 7 | Multi-Layer Perceptrons (MLP) | |
| 4 | Feb 9 | MLP, Activation Functions, Backpropagation, Vanishing/Exploding Gradients | |
| 5 | Feb 14 | Convolutional Neural Networks (CNN) (1) | |
| 5 | Feb 16 | CNN (2) | HW2 Release |
| 6 | Feb 21 | Modern CNN (1) | |
| 6 | Feb 23 | Modern CNN (2) | |
| 7 | Feb 28 | RNN Basics | |
| 7 | Mar 2 | RNN (Gradient Explosion/Vanishing, LSTM, GRU) | |
| 8 | Mar 7 | Attention Operations and Transformer | |
| 8 | Mar 9 | Attention Operations and Transformer | |
| 9 | Mar 14 | No Class (Spring Recess) | |
| 9 | Mar 16 | No Class (Spring Recess) | |
| 10 | Mar 21 | BERT and GPT | |
| Advanced Topics on Deep Learning | | | |
| 10 | Mar 23 | Optimization for Deep Learning | HW3 Release |
| 11 | Mar 28 | Deep Reinforcement Learning (MDP, Dynamic Programming) | |
| 11 | Mar 30 | Deep Reinforcement Learning (More on Dynamic Programming) | |
| 12 | Apr 4 | Deep Reinforcement Learning (Monte-Carlo, Temporal Difference, DQN) | |
| 12 | Apr 6 | Deep Reinforcement Learning (Policy Gradient, MB-RL) | HW4 Release |
| 13 | Apr 11 | Deep Reinforcement Learning (More on Policy Gradient, MB-RL) | |
| 13 | Apr 13 | Deep Generative Models - VAE (1) | |
| 14 | Apr 18 | Deep Generative Models - VAE (2) | |
| 14 | Apr 20 | Deep Generative Models - VAE (3) | |
| 15 | Apr 25 | Deep Generative Models - GAN | |
| Mini Conference | | | |
| 15 | Apr 27 | Final Project Presentation (Mini Conference) (1) | |
| 16 | May 2 | Final Project Presentation (Mini Conference) (2) | |
Rutgers CS Diversity and Inclusion Statement
Rutgers Computer Science Department is committed to creating a consciously anti-racist, inclusive community that welcomes diversity in various dimensions (e.g., race, national origin, gender, sexuality, disability status, class, or religious beliefs). We will not tolerate micro-aggressions and discrimination that create a hostile atmosphere in the class and/or threaten the well-being of our students. We will continuously strive to create a safe learning environment that allows for the open exchange of ideas while also ensuring equitable opportunities and respect for all of us. Our goal is to maintain an environment where students, staff, and faculty can contribute without fear of ridicule or intolerant or offensive language. If you witness or experience racism, discrimination, micro-aggressions, or other offensive behavior, you are encouraged to bring it to the attention of the undergraduate program director, the graduate program director, or the department chair. You can also report it to the Bias Incident Reporting System.