Deep learning, exemplified by large language models, is transforming how we live and work. However, trustworthiness threats are pervasive in deep learning, posing serious challenges to AI safety, security, and reliability. This course introduces state-of-the-art research on a wide range of trustworthiness issues, including threat discovery, mitigation, and certification methods, through seminar-style presentations and hands-on projects.
This is a seminar-style course on trustworthy deep learning. The first half of the course provides an overview of deep learning and the preliminaries for trustworthy AI methods, including training of neural networks, common neural network architectures, large language models, and the definitions of AI attacks, defenses, certification, and verification. The second half visits representative and recent research papers in the field through student presentations, covering topics such as evasion attacks and defenses, robustness certification, differential privacy, membership inference attacks, watermarks, detection of AI-generated content, machine unlearning, prompt injection attacks, model stealing, and finetuning attacks.
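To give a concrete flavor of one topic above, the snippet below sketches the classic fast gradient sign method (FGSM), a simple evasion attack. It is purely illustrative and not part of any assignment; `model`, `x`, and `y` are placeholder names for a differentiable classifier, an input batch, and its labels.

```python
# Illustrative sketch: untargeted FGSM evasion attack (placeholder names).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Perturb x within an L-infinity ball of radius eps so that
    model(x_adv) is more likely to misclassify the true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one signed-gradient step that increases the loss,
    # then clamp back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Pixel values are assumed to lie in [0, 1], and eps = 8/255 is a common perturbation budget in the robustness literature.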
Prerequisites #
There are no formal prerequisites. Background in algorithms, calculus, and linear algebra (e.g., MATH 151, MATH 152, MATH 232, CMPT 225) and in machine learning (CMPT 410/726) is strongly recommended. Background in computer vision (CMPT 412/762) and NLP (CMPT 713) is also helpful.
Textbook and Reading Materials #
There is no primary reference material. We will read an assortment of research papers during lectures.
- Deep Learning Book
- By Ian Goodfellow, Yoshua Bengio, and Aaron Courville
- Recommended for students to gain background in deep learning before taking the course.
- Online course Intro to ML Safety
- By Dan Hendrycks at the Center for AI Safety
- Optional, advanced reading for interested students.
- A well-developed course recommended for those who want to learn general machine learning safety from a systematic and interdisciplinary perspective.
Grading #
(Tentative)
40% course project + 10% Homework 0 + 30% paper presentation + 20% questions and summaries (5 questions and 5 summaries, 2% each)
Schedule and Syllabus #
Slides will be updated as the term progresses. All slides are available in this OneDrive folder.
| Week | Date | Topics (Tentative) | Assignment & Due | Reading |
| --- | --- | --- | --- | --- |
| Week 1 (1/5 - 1/11) | Tue (1/7) 2h | (Lecture) Syllabus, Introduction to Deep Learning I | Homework 0 Release | TBA |
| | Thur (1/9) 1h | (Lecture) Introduction to Deep Learning II | | TBA |
| Week 2 (1/12 - 1/18) | Tue (1/14) 2h | (Lecture) Vision Models; Language Models; Trustworthy Deep Learning Overview | | TBA |
| | Thur (1/16) 1h | (Lecture) Robustness Threats in Deep Learning - Attacks | Presentation Signing-up Sheet Release; Homework 0 Due | TBA |
| Week 3 (1/19 - 1/25) | Tue (1/21) 2h | (Lecture) Robustness Threats in Deep Learning - Attacks | | TBA |
| | Thur (1/23) 1h | (Lecture) Robustness Threats in Deep Learning - Defenses | | TBA |
| Week 4 (1/26 - 2/1) | Tue (1/28) 2h | (Lecture) Robustness Threats in Deep Learning - Defenses | | TBA |
| | Thur (1/30) 1h | (Lecture) Robustness Threats in Deep Learning - Certification | | TBA |
| Week 5 (2/2 - 2/8) | Tue (2/4) 2h | (Lecture) Robustness Threats in Deep Learning - Certification | | TBA |
| | Thur (2/6) 1h | (Lecture) Course Project Release | Course Project Release; Due: 2/11 Questions & Summary | TBA |
| Week 6 (2/9 - 2/15) | Tue (2/11) 2h | (Presentation) Deep Learning Privacy I - Differential Privacy | Due: 2/13 Questions & Summary | TBA |
| | Thur (2/13) 1h | (Presentation) Deep Learning Privacy II - Membership Inference Attacks | | TBA |
| Week 7 (2/16 - 2/22) | | Reading Break | Due: 2/25 Questions & Summary | TBA |
| Week 8 (2/23 - 3/1) | Tue (2/25) 2h | (Presentation) Deep Learning Privacy III - Machine Unlearning | Due: 2/27 Questions & Summary | TBA |
| | Thur (2/27) 1h | (Presentation) Deep Learning Privacy IV - Watermarking | Due: 3/4 Questions & Summary | TBA |
| Week 9 (3/2 - 3/8) | Tue (3/4) 2h | (Presentation) Deep Learning Privacy V - Model Stealing | Due: 3/6 Questions & Summary | TBA |
| | Thur (3/6) 1h | (Presentation) Deep Learning Privacy VI - Data Stealing | Due: 3/11 Questions & Summary | TBA |
| Week 10 (3/9 - 3/15) | Tue (3/11) 2h | (Lecture) Large Language Models - Recap | Due: 3/13 Questions & Summary | TBA |
| | Thur (3/13) 1h | (Lecture) LLM Trustworthiness Overview | Due: 3/18 Questions & Summary | TBA |
| Week 11 (3/16 - 3/22) | Tue (3/18) 2h | (Presentation) LLM Alignment Tuning - I | Due: 3/20 Questions & Summary | TBA |
| | Thur (3/20) 1h | (Presentation) LLM Alignment Tuning - II | Due: 3/25 Questions & Summary | TBA |
| Week 12 (3/23 - 3/29) | Tue (3/25) 2h | (Presentation) LLM Prompt Injection - I | Due: 3/27 Questions & Summary | TBA |
| | Thur (3/27) 1h | (Presentation) LLM Prompt Injection - II | Due: 4/1 Questions & Summary | TBA |
| Week 13 (3/30 - 4/5) | Tue (4/1) 2h | (Presentation) LLM Finetuning Attacks - I | Due: 4/3 Questions & Summary; Course Project | TBA |
| | Thur (4/3) 1h | (Presentation) LLM Finetuning Attacks - II | | TBA |
| Week 14 (4/6 - 4/12) | Tue (4/8) 2h | (Interactive Lecture) Course Project Discussion, Closing Remarks | Due: All Presentation Feedback | TBA |
| Week 15 (4/13 - 4/19) | Fri (4/19) | Grades Released | | TBA |
Extended Topics #
Trustworthy deep learning is a broad area, and some important topics are not covered in lectures or presentations due to the limited time frame. Some of them are listed below.
- Data Poisoning Attacks and Defenses
- LLM Hallucination
- Socio-economic Impact of Generative AI
- …
Assignments and Project #
- Homework 0:
TBA
- Course Project:
TBA
- Presentation Signing-up Sheet:
TBA
- Google Form for Question Submission:
TBA
- Google Form for Summary Submission:
TBA
Ethics Statement #
This course will include topics related to computer security and privacy. As part of this investigation, we may cover technologies whose abuse could infringe on the rights of others. As computer scientists, we rely on the ethical use of these technologies. Unethical use includes circumventing existing security or privacy mechanisms for any purpose, or disseminating, promoting, or exploiting vulnerabilities in these services. Any activity outside the letter or spirit of these guidelines will be reported to the proper authorities and may result in dismissal from the class and possibly more severe academic and legal sanctions.
Academic Integrity Policy #
- Some examples of unacceptable behaviour in homework and course projects:
- Handing in assignments that are not 100% your own work (in design, implementation, wording, etc.), without proper citation. There must be a README file in your submission with citations to any external code used.
- Sharing code fragments with others in the class (for the group project, with others who are not in the same group) is not allowed.
- Keep discussions to high-level information rather than specific code hints.
- Copying and then obfuscating code is a serious academic honesty violation.
- Submitting work that has been submitted before, for any course at any institution.
- If you are unclear on what academic honesty is, see Simon Fraser University’s Policy S10-01.
- All instances of academic dishonesty will be dealt with very severely.
- In general, minimum requested penalties will be as follows:
- For assignments and course projects: a mark of -50% on the assignment. So, academic dishonesty on an assignment worth 10% of your final mark will result in a zero on the assignment and a penalty of 5% from your final grade.
- Please note that these are minimum penalties. At the instructor’s option, more severe penalties may be given/requested. All instances of academic dishonesty will be noted on your University record.
- The instructor may use an automated service that will check for plagiarism.
Acknowledgement #
The course is developed from CS562 and CS598GS at UIUC. Part of the content is adapted from Intro to ML Safety. Some course policies are developed from CMPT 413 Natural Language Processing.