CS 462: Introduction to Deep Learning (Spring 2022)

Course Information

Instructor: Hao Wang
Email: hogue.wang@rutgers.edu
Office: CBIM 008

TA: Xi Chen
TA Email: xi.chen15@rutgers.edu
TA Office: CBIM 5

Time: Monday, 3:50-5:10 pm
Wednesday, 3:50-5:10 pm
Location: BE 252

Recitation Time: Tuesday, 10:20-11:15 am
Location: BE 253

Office Hours: Thursday, 3:00-4:00 pm or by appointment
TA Office Hours: Tuesday, 1:00-2:00 pm or by appointment
Office Hour Link: Please use this Zoom link.

Mask Requirement in Class

In order to protect the health and well-being of all members of the University community, masks must be worn by all persons on campus when in the presence of others (within six feet) and in buildings in non-private enclosed settings (e.g., common workspaces, workstations, meeting rooms, classrooms, etc.). Masks must be worn during class meetings; any student not wearing a mask will be asked to leave.

Masks should conform to CDC guidelines and should completely cover the nose and mouth: https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/about-face-coverings.html

Each day before you arrive on campus or leave your residence hall, you must complete the brief survey on the My Campus Pass symptom checker self-screening app.


  • 1/19/2022: Due to the COVID situation, classes will be moved online via Zoom until Jan 31. Please check Canvas for the Zoom link.

  • Course Description

    A comprehensive artificial intelligence system needs to not only perceive the environment with different “senses” (e.g., seeing and hearing) but also infer the world’s conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks, such as visual object recognition and speech recognition, using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian deep learning (BDL) has emerged as a unified probabilistic framework to tightly integrate deep learning and Bayesian models. In this general framework, the perception of text or images using deep learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process is able to enhance the perception of text or images.

    In this course, we will study the state-of-the-art methodology and theory in this emerging area, as well as applications including recommender systems, computer vision, natural language processing, graph learning, forecasting, healthcare, domain adaptation, speech recognition, etc.


  • Prerequisites

  • M 250 (Introductory Linear Algebra), CS 112 (Data Structures), and CS 206 (Introduction to Discrete Structures II) OR M 477 (Mathematical Theory of Probability) OR S 379 (Basic Probability Theory). CS 440 or CS 439 is not required but highly encouraged.
  • Students must be familiar with Python programming. The homework will be based on PyTorch, a Python framework for deep learning. Some experience with PyTorch will therefore be helpful, although the course will provide a short introductory lecture on using it. Students should also set up a PyTorch programming environment; you can use your own computer, the Rutgers iLab machines, or Google Colab to run the code.
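Once your environment is set up, a quick sanity check like the following (a minimal sketch, assuming PyTorch is already installed, e.g., via `pip install torch`) confirms that tensors and layers work and whether a GPU is visible:

```python
import torch

# Create a random 3x4 input tensor and pass it through a small linear layer.
x = torch.randn(3, 4)
layer = torch.nn.Linear(4, 2)
y = layer(x)

print(torch.__version__)          # the installed PyTorch version
print(y.shape)                    # torch.Size([3, 2])
print(torch.cuda.is_available())  # True only if a CUDA GPU is usable
```

If this runs without errors, your environment is ready for the homework; `cuda.is_available()` returning False is fine on machines without a GPU (e.g., many laptops), and Colab offers free GPU runtimes if you need one.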

  • Expected Work

  • 15%: Attendance (Including Asking Questions)
  • 60%: Homework Assignments (15% * 4)
  • 25%: Final Project (Proposal, Presentation, and Report)

  • For attendance: (1) One absence is allowed and will not affect your score. (2) From the second absence on, a reasonable explanation with supporting evidence is required to avoid losing attendance points; each absence without a reasonable explanation will cost 3% of the score.

    For homework submission, each student may use at most 2 late days over the whole semester without penalty. After your late days are used up, each additional late day will cost 2% of the overall score.

  • Infrastructure Requirements

  • (At least until Jan 31:) A computer, mic/audio system, and access to the Internet to participate in the lectures via Zoom
  • Please visit the Rutgers Student Tech Guide page for resources available to all students. If you do not have the appropriate technology for financial reasons, please email Dean of Students deanofstudents@echo.rutgers.edu for assistance. If you are facing other financial hardships, please visit the Office of Financial Aid at https://financialaid.rutgers.edu/.

  • Computing Resources

  • Rutgers iLab Servers: https://report.cs.rutgers.edu/nagiosnotes/iLab-machines.html
  • Google Colab: https://colab.research.google.com

  • Textbooks and Materials

    There is no required textbook for this course. However, the following books can be useful as references on relevant topics:

  • Deep Learning (DL), Goodfellow, Ian and Bengio, Yoshua and Courville, Aaron, MIT Press, 2016, ISBN: 9780262035613
  • Dive into Deep Learning (https://d2l.ai/)
  • Pattern Recognition and Machine Learning (PRML), Christopher C. Bishop, Springer, 2006, ISBN: 9780387310732
  • Natural Language Processing with Distributed Representations (NLP), Kyunghyun Cho, https://arxiv.org/abs/1511.07916

  • You may also find the tutorial Deep Learning with PyTorch: A 60 Minute Blitz helpful.

    For the topic of reinforcement learning, some useful materials are:

  • Reinforcement Learning: An Introduction (2nd edition), Sutton and Barto
  • Lectures by David Silver

  • Tips and More Details on Final Projects

    Below are some tips and details.

    Team Forming: Students can form a team of at most three students.

    Paper Reading (Optional): Final projects can be based on published papers or on something else you are interested in. If you would like to base your project on a published paper, you could:

  • Read the paper multiple times to really understand the content in depth.
  • Find any available resources (e.g., online talks, animations, demos) that help explain the paper.
  • Read the key references recursively to gain better background knowledge.
  • Discuss the paper with fellow students and friends.

    Tips for Presentation: Here are some tips that can make the project presentations smoother and more effective:

  • If the project is based on published papers, you could introduce some background and details on the paper at the start of the presentation. To do this more effectively, you could look for demo videos on the Internet; showing a demo as part of the presentation can tremendously help others understand the paper. You could also look for talks/slides on the paper available online (from YouTube or the authors’ websites) and re-use some of the content if you feel it is helpful.
  • Pause for questions. At some points of your presentation, you could pause and ask if there are any questions.

  • Tentative Schedule

    Note that this is a tentative syllabus to give you an idea of what topics this course will cover. This syllabus is subject to change as the course progresses.

    Week Date Topic Assignment
    Machine Learning Basics
    1 Jan 19 Course Introduction and Machine Learning Basics (1)
    2 Jan 24 Machine Learning Basics (2)
    2 Jan 26 Machine Learning Basics (3) HW1 Release
    3 Jan 31 Machine Learning Basics (4) and Linear Models (1)
    3 Feb 2 Linear Models (2)
    Deep Learning Architectures
    4 Feb 7 Multi-Layer Perceptrons (MLP)
    4 Feb 9 MLP, Activation Functions, Backpropagation, Vanishing/Exploding Gradient
    5 Feb 14 Convolutional Neural Networks (CNN) (1)
    5 Feb 16 CNN (2) HW2 Release
    6 Feb 21 Modern CNN (1)
    6 Feb 23 Modern CNN (2)
    7 Feb 28 RNN Basics
    7 Mar 2 RNN (Gradient Explosion/Vanishing, LSTM, GRU)
    8 Mar 7 Attention Operations and Transformer
    8 Mar 9 Attention Operations and Transformer
    9 Mar 14 No Class (Spring Recess)
    9 Mar 16 No Class (Spring Recess)
    10 Mar 21 BERT and GPT
    Advanced Topics on Deep Learning
    10 Mar 23 Optimization for Deep Learning HW3 Release
    11 Mar 28 Deep Reinforcement Learning (MDP, Dynamic Programming)
    11 Mar 30 Deep Reinforcement Learning (More on Dynamic Programming)
    12 Apr 4 Deep Reinforcement Learning (Monte-Carlo, Temporal Difference, DQN)
    12 Apr 6 Deep Reinforcement Learning (Policy Gradient, MB-RL) HW4 Release
    13 Apr 11 Deep Reinforcement Learning (More on Policy Gradient, MB-RL)
    13 Apr 13 Deep Generative Models - VAE (1)
    14 Apr 18 Deep Generative Models - VAE (2)
    14 Apr 20 Deep Generative Models - VAE (3)
    15 Apr 25 Deep Generative Models - GAN
    Mini Conference
    15 Apr 27 Final Project Presentation (Mini Conference) (1)
    16 May 2 Final Project Presentation (Mini Conference) (2)

    Rutgers CS Diversity and Inclusion Statement

    The Rutgers Computer Science Department is committed to creating a consciously anti-racist, inclusive community that welcomes diversity in various dimensions (e.g., race, national origin, gender, sexuality, disability status, class, or religious beliefs). We will not tolerate micro-aggressions and discrimination that create a hostile atmosphere in the class and/or threaten the well-being of our students. We will continuously strive to create a safe learning environment that allows for the open exchange of ideas while also ensuring equitable opportunities and respect for all of us. Our goal is to maintain an environment where students, staff, and faculty can contribute without fear of ridicule or intolerant or offensive language. If you witness or experience racism, discrimination, micro-aggressions, or other offensive behavior, you are encouraged to bring it to the attention of the undergraduate program director, the graduate program director, or the department chair. You can also report it to the Bias Incident Reporting System.