Please send email about any class-related issues to cs774.ethics.uilab@gmail.com rather than to the professor's personal email.
Join the Slack channel: Invitation Link
- Lecturer: Alice Oh
- TA: Jaimeen Ahn
- Contact: cs774.ethics.uilab@gmail.com
Please send email to cs774.ethics.uilab@gmail.com. We will not consider any class-related email sent to our personal accounts. When you send email, please include "[CS774]" in the subject line. (e.g., [CS774] Do we have a class on MM/DD?)
- Mon/Wed 13:00 - 14:30
- #2111, E3-1 (Information Science and Electronics Bldg.) / Zoom
- If there is a guest lecture, the lecture time may change (e.g., 4:00pm ~ 5:30pm)
- Knowledge of machine learning and deep learning (CS570)
Week | Day | Type | Topic | Notes | Project |
---|---|---|---|---|---|
1 | 08/31, 09/02 | Lecture 1 | Introduction | Google Survey; Reading Material (~13:00 09/07, chapter 1 only) | |
2 | 09/07, 09/09 | Discussion / Discussion | Bias of AI/ML Systems | | Team matching |
3 | 09/14, 09/16 | Discussion 1 / Lecture 2 | Bias of AI/ML Systems / Societal Impact | | |
4 | 09/21, 09/23 | Discussion 2 / Lecture 3 | Societal Impact / AI for Social Good | | |
5 | 09/28, 09/30 | Project / - | Societal Impact / Project Description | 09/30 Holiday | Introduction |
6 | 10/05, 10/07 | Discussion 3 / Lecture 4 | AI for Social Good | 10/07 Guest Lecture 4:00pm, Joanna Bryson | |
7 | 10/12, 10/14 | Discussion 4 / Lecture 5 | AI for Social Good | 10/14 Guest Lecture 9:00am, Kyunghyun Cho | |
8 | 10/19, 10/22 | Presentation / - | Proposal / Mid-term | | Proposal, Peer-review |
9 | 10/26, 10/28 | Lecture 6 / Discussion 5 | NLP for Detecting Bias | | |
10 | 11/02, 11/04 | Discussion 6 / Lecture 7 | NLP for Detecting Bias | 11/04 Guest Lecture 4:00pm, Dirk Hovy | |
11 | 11/09, 11/11 | Lecture 8 / Discussion 7 | AI as Big Brother | | |
12 | 11/16, 11/18 | Presentation / Discussion 8 | Progress Update / AI as Big Brother | | Progress Update, Peer-review |
13 | 11/23, 11/25 | Lecture 9 / Discussion 9 | Interpretability and Fairness | | |
14 | 11/30, 12/02 | Discussion 10 / Lecture 10 | Interpretability and Fairness | | |
15 | 12/07, 12/09 | - | No Class | | |
16 | 12/14, 12/16 | - | Project Presentation | | Final Presentation, Peer-review |
The course consists of lectures and discussions.
Experts in AI and ethics from around the world will give special virtual lectures.
Most of these guest lectures will be moderated by the main lecturer (Alice Oh) in a question-and-answer format centered on the guest lecturers' publications.
Because of the time difference, some lectures will be pre-recorded.
Possible lecturers include Joanna Bryson (Hertie School) on general AI ethics, Shakir Mohamed (DeepMind) on diversity and inclusion in AI, Dirk Hovy (Bocconi University) on predictive bias in NLP, and Kyunghyun Cho (New York University); additional guests will be added.
Students will read, present, and think about the latest research from the reading list, drawn from papers on ethical considerations published at AI and ML conferences (e.g., NeurIPS, ICLR, ACL, CVPR, FAccT).
Readings may also include blog posts, articles in the media, online forum discussions, and publications from global governing bodies.
- Choose a paper from the reading list related to the subject of the previous lecture
- Read the paper before the discussion and prepare some questions to be discussed
Students will lead their peers in discussing the readings with thought-provoking questions.
You will challenge the findings in the articles, questioning whether they are reported and interpreted accurately, and you will discuss their relevance to the present day and to locales with different cultural backgrounds.
You will present and discuss ideas for future research directions in AI and ethics.
- 12 in-class discussions (see schedule)
- Organize a group of 5~6 people and take time to present and discuss what you read (you may use Korean if everyone is comfortable with it)
- All groups should submit their results at the end of class.
- See the details on this page
- The team project will be a major part of the class, especially during the second half
- Projects will primarily be replications or modifications of recent research on bias in AI/ML
- More details are described in the document below
- https://uilab-kaist.github.io/cs774-ethics-fall-2020/project
If you actively and honestly participate in every discussion and complete the project, you will receive at least a B- (in this case, the project determines how grades are divided within the A+ to B- range).
- 10 In-Class Discussions: 40%
- Project: 50%
  - Note that a team's project score may be reduced by up to 25 percentage points if there is a serious problem with teamwork.
- Peer grading: 10%
- Fairness and Machine Learning by Solon Barocas, Moritz Hardt, and Arvind Narayanan