<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="cs357.css" type="text/css">
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<title>
CSCI 357
</title>
</head>
<body>
<p class="title2">
<font size="5">CSCI 357 - Spring 2022</font>
</p>
<p class="title">
Algorithmic Game Theory
</p>
<p class="box">
<a href="./index.html">Home</a> |
<a href="./lectures.html">Lectures</a> |
<a href="./assignments.html">Assignments</a> |
<a href="./project.html">Project</a> |
<a href="./resources.html">Resources</a> |
<a href="https://csci.williams.edu/">CS@Williams</a>
</p>
<p class="heading">
Home
</p>
<table class="info">
<tr><td>Instructor:</td><td>
<a href="http://www.cs.williams.edu/~shikha">Shikha Singh</a></td></tr>
<tr><td>Email:</td><td><a href="mailto:shikha.singh@williams.edu">shikha.singh@williams.edu</a></td></tr>
<tr><td>GLOW page:</td><td><a href="https://glow.williams.edu/courses/3378903">CSCI 357 GLOW</a></td></tr>
<tr><td>Course Slack:</td><td> <a href="https://join.slack.com/t/cs357-s22/shared_invite/zt-12f9x5l7m-sFSNqVWVjTn1rXLS~Fjtpw">CS357-S22</a> </td></tr>
<tr><td>Office Hours:</td><td> Check the <a href="#cal">calendar</a> below. </td></tr>
<tr><td></td></tr>
<tr><td>Lectures:</td><td> MR 2:35-3:50 pm, Schow 30A.</td></tr>
<tr><td> </td><td><i>Assignments are typically due Thursdays @ 11 pm EST.</i></td></tr>
</table>
<p class="heading">
Course Description
</p>
<p class="text">
This course focuses on topics in game theory and mechanism design from a computational perspective. We will explore questions such as: How do we design algorithms that incentivize truthful behavior, that is, where participants have no incentive to cheat? Should we let drivers selfishly minimize their commute times, or let a central algorithm direct traffic? Does Arrow’s impossibility result mean that all voting protocols are doomed? The overarching goal of these questions is to understand and analyze selfish behavior and whether it can or should influence system design. Students will learn how to model and reason about incentives in computational systems, both theoretically and empirically.<br><br>
<b>Objectives:</b> By the end of the course, students should be able to:
<ul>
<li> model strategic interactions in games and reason about them using appropriate solution concepts </li>
<li> design tractable, yet effective, agent strategies for participants in a mechanism </li>
<li> analyze properties of a mechanism, such as strategyproofness and Pareto efficiency </li>
<li> understand the design behind online markets such as ad markets, labor markets, and dating markets </li>
</ul>
<br>
</p>
<p class="heading">
Syllabus & Textbook
</p>
<p class="text">
<a href="handouts/syllabus.pdf">Course Syllabus</a>
<br><br>
Readings will be assigned from several textbooks, and the relevant chapters will be provided via GLOW;
see the <a href="lectures.html">Lectures</a> page for more details.
</p>
<p class="heading">
Course Calendar <a name="cal"></a> (Office Hours)
</p>
<table class="info">
<tr><td>
<iframe src="https://calendar.google.com/calendar/embed?src=c_uvdu23s94gl5pgk7756q19tg44%40group.calendar.google.com&ctz=America%2FNew_York" style="border: 0" width="800" height="600" frameborder="0" scrolling="no"></iframe>
</td></tr>
</table>
<p class="bottom">
</p>
</body>
</html>