<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
<link rel="shortcut icon" href="1.jpg">
<link rel="stylesheet" type="text/css" href="static/css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="static/css/main.css" media="screen,projection">
<link rel="stylesheet" type="text/css" href="static/css/custom.css" media="screen,projection">
<link rel="stylesheet" type="text/css" href="style2.css" />
<title>Liang Zheng at ANU</title>
</head>
<body >
<style>
a{ text-decoration: none}
</style>
<div id="header">
<table style="text-align: left; width: 80%;" cellspacing="2"
cellpadding="2" border="0">
<tbody>
<tr>
<td colspan="3" rowspan="1" style="vertical-align: top;"><span
style="font-family: Arial; font-size: 22px">
<a style="text-decoration:underline;"
href="index.html"><font color=black>Home</font></a>
<a style="text-decoration:underline;"
href="people.html"><font color=black>People</font></a>
<a style="text-decoration:underline;"
href="Publication.html"><font color=black>Publications</font></a>
<a style="text-decoration:underline;"
href="Code.html"><font color=black>Code</font></a>
<a style="text-decoration:underline;"
href="Datasets.html"><font color=black>Datasets</font></a>
<a style="text-decoration:underline;"
href="Slides.html"><font color=black>Slides</font></a>
</span></td>
</tr>
</tbody>
</table>
<br><br>
</div>
<table style="text-align: left;" width="700" height="200"
cellspacing="0" cellpadding="0" border="0">
<tbody>
<tr>
<td style="vertical-align: top;" align="left"><img
alt="Liang Zheng" src="1.jpg" width="200"><br>
</td>
<td style="vertical-align: top;">
<p style="font-family: Arial;"><b><span
style="font-size: 14pt;">Liang Zheng</span></b></p>
<p><font face="Arial">Associate Professor</font>
<br><br>
<a style="font-family: Arial;"
href="https://cs.anu.edu.au/"><font color=black>School of Computing</font></a><br>
<span style="font-family: Arial;"><a
href="http://www.anu.edu.au/"><font color=black>Australian National University</font></a></span><br>
Room N214, CSIT Building, ANU Campus, Australia 2601<br><br>
Email: <a href="mailto:liang.zheng@anu.edu.au"><font color=black>liang.zheng@anu.edu.au</font></a><br><br>
<a href="CV.pdf"><font color=blue>CV</font></a> <a href="https://scholar.google.com/citations?user=vNHqr3oAAAAJ&hl=en"><font color=blue>Google Scholar</font></a>
</p>
</td>
</tr>
</tbody>
</table>
<br>
<span style="font-family: Arial; line-height:22px"><strong>Short Bio:</strong> I am an Associate Professor (with tenure) in the School of Computing, Australian National University (ANU). I joined ANU in 2018 and have held the CS Futures Fellowship and an ARC DECRA Fellowship. I received my Ph.D. (EE) from Tsinghua University in 2015 and my B.S. (Biology) from Tsinghua University in 2010. I was named among the Top 40 Early Achievers by The Australian.</span>
<br>
<br>
<span style="font-family: Arial; line-height:22px"><strong>Research interests:</strong> I have broad interests in computer vision. I work with a group of talented students on designing protocols and architectures to discover the underlying patterns and laws of data and algorithms.</span>
<div id="content">
<h2 id="news">Position Openings:</h2>
<span style="font-family: Arial; line-height:22px">[Research Fellows] My group does not currently have openings for research fellows.</span>
<br/>
<br/>
<span style="font-family: Arial; line-height:22px">[PhD students] I am looking for highly motivated PhD students with strong English, coding ability, and research experience. A prospective student should have a high GPA and a solid research track record.</span>
<br/>
<br/>
<span style="font-family: Arial; line-height:22px">[Visiting students] If you are interested in a visiting scholar position, please note that I do not provide funding for it. If you can secure external funding and are highly motivated and experienced in research, please drop me an email.</span>
<br/>
<br/>
<span style="font-family: Arial; line-height:22px">[ANU students] If you are an ANU undergraduate looking for an Honours project, please note that I usually take 2-3 Honours students each year and prefer 24-unit projects. If you are an ANU master's student looking for an individual project, you should have some research background, a good GPA, and good coding ability, as well as sufficient time for research.</span>
<h2 id="news">News</h2>
<span style="line-height:22px;">
<li>
I serve as a Program Co-Chair for <a href="https://2024.acmmm.org/"><font color="blue">ACM Multimedia 2024, Melbourne, Australia</font></a>.
<li>
I serve as a Program Co-Chair for <a href="http://www.avss2024.org/"><font color="blue">the IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2024), Niagara Falls, Canada</font></a>.
<li>
I co-organize the <a href="https://sites.google.com/view/vdu-cvpr22"><font color="blue">1st Workshop on Vision Datasets Understanding</font></a> in conjunction with CVPR 2022.
<li>
I am giving the tutorial <a href="https://sites.google.com/view/evalmodel"><font color="blue">Evaluating models beyond the textbook: out-of-distribution and without labels</font></a> at CVPR 2022.
<!--<li>
The <a href="http://45.32.72.229/"><font color="blue">Alice benchmark suite</font></a> is online! We are accepting results on domain adaptive pedestrian recognition. More interesting tasks are coming.<br />
<li>-->
<li>
Random Erasing (AAAI 2020) has been included in the official <a href="https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.RandomErasing"><font color="blue">PyTorch</font></a> package! We report <a href="https://github.com/rwightman/pytorch-image-models"><font color="blue">improvements on the ImageNet dataset</font></a>.
<li>
We have released the <a href="https://github.com/sxzrt/Instructions-of-the-PersonX-dataset"><font color="blue">PersonX engine</font></a>! It includes several subsets for evaluating standard re-id and domain adaptive re-id. Most importantly, it allows us to generate datasets freely!<br />
<li>
I am co-organizing the <a href="https://www.aicitychallenge.org/"><font color="blue">2020 AI City Challenge</font></a> at CVPR 2020. <br />
<li>
<font color="black">I am co-organizing the <a href="https://reid-mct.github.io/2019"><font color="blue">2nd Workshop and Challenge on
Target Re-identification and Multi-Target Multi-Camera Tracking</font></a> at CVPR 2019.</font> <br/>
<li>
<font color="black">[Call for papers] IEEE International Conference on Multimedia and Expo (ICME) 2019 Special Session on "Multimedia Technologies Empowering Retail Experiences" [Deadline: 17 December 2018] <a href="http://www.icme2019.org/conf_sessions"><font color="blue">[URL]</font></a></font>
<br/>
<li>
<font color="black">We have released the code for our paper "Generalizing A Person Retrieval Model Hetero- and Homogeneously". <a href="https://github.com/zhunzhong07/HHL"><font color="blue">Link</font></a></font><br/>
<li>
<font color="black">We have released the code for our paper "Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline)". <a href="https://github.com/syfafterzy/PCB_RPP"><font color="blue">Link</font></a></font><br/>
<li>
<font color="black">
SIFT Meets CNN: A Decade Survey of Instance Retrieval. Accepted to TPAMI. <a href="http://ieeexplore.ieee.org/document/7935507/"><font color="blue">[PDF]</font></a>
<a href="Project/surveybib.html"><font color="blue">[Bibtex]</font></a>
</font><br />
<!--<li>
<font color="black"> We have released the code for our paper "Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification". <a href="https://github.com/Simon4Yan/Learning-via-Translation"><font color="blue">Link</font></a></font><br/>
<li>
<font color="black"> We have released the code for our paper "Random Erasing Data Augmentation". <a href="https://github.com/zhunzhong07/Random-Erasing"><font color="blue">Link</font></a></font><br/>
<li>
<font color="black"> We have released the code for our paper "Dual-Path Convolutional Image-Text Embedding". <a href="https://github.com/layumi/Image-Text-Embedding"><font color="blue">Link</font></a></font><br/>
<li>
The state-of-the-art methods for Market-1501+500k <a href="Project/state_of_the_art_500k.html"><font color="blue">[link]</font></a> and MARS <a href="Project/state_of_the_art_mars.html"><font color="blue">[link]</font></a> are summarized.<br/>
<li>
We have released ID-level attribute labels for the Market-1501 and Duke datasets. <a href="https://vana77.github.io/"><font color="blue">Link</font></a><br/>
<li>
Code for our CVPR17 paper "Re-ranking person re-identification with k-reciprocal encoding" is released. We have also released an evaluation protocol for CUHK03 that has one train/test split. <a href="https://github.com/zhunzhong07/person-re-ranking"><font color="blue">Link</font></a></font><br/>
<li>
A large-scale image-based re-ID dataset, Duke, is released based on <a href="http://vision.cs.duke.edu/DukeMTMC/"><font color="blue">DukeMTMC</font></a>. It has a similar scale and evaluation procedure to Market-1501. We have release the train/test split, evaluation code, and benchmarking results. <a href="https://github.com/layumi/Duke_evaluation"><font color="blue">Link</font></a><br/>
<li>
A person re-identification survey is available. Person Re-identification: Past, Present and Future. <a href="https://arxiv.org/abs/1610.02984"><font color="blue">Link</font></a> <a href="https://github.com/zhunzhong07/IDE-baseline-Market-1501"><font color="blue">Baseline Code</font></a>.
<br />
<li>
An image retrieval survey is available. SIFT Meets CNN: A Decade Survey of Instance Retrieval. <a href="https://arxiv.org/abs/1608.01807"><font color="blue">Link</font></a>.
<br />
<li>
We have released the MARS dataset for large scale video based person re-identification. <a href="Project/project_mars.html"><font color="blue">Project Page</font></a>.
<br />
<li>
A summary of <a href="./Project/state_of_the_art_market1501.html"><font color="blue">state of the arts</font></a> on Market-1501 is presented.
<br />
<li>
We have released the PRW (Person Re-identification in the Wild) dataset. See the <a href="Project/project_prw.html"><font color="blue">project page</font></a>.
It supports pedestrian detection and person re-identification.
<br />
<li>
A new person re-identification dataset, the "Market-1501" dataset, is released. See the <a href="Project/project_reid.html"><font color="blue">project page</font></a>.
It has 1501 identities, 32k bounding boxes.
<br />
<li>Our ArXiv paper "Person Re-identification Meets Image Search" is covered in "MIT Technology Review".</li>
<li>We have provided the <a href="Project/project_baseline.html"><font color="blue">code</font></a> for constructing the baseline used in our CVPR'14 papers.</li>
<li>We have provided the <a href="Project/project_fusion.html"><font color="blue">code</font></a> for our CVPR'15 paper.</li>-->
</span>
</div>
<div id="content">
<h2 id="news">Professional Service</h2>
<span style="line-height:22px;">
<li>Associate Editor: ACM Computing Surveys</li>
<li>Area Chair: CVPR 2021, 2023, 2024</li>
<li>Area Chair: ACM Multimedia 2020, 2021</li>
<li>Program Chair: IEEE Machine Learning for Signal Processing Workshop, 2021</li>
<li>Area Chair: ECCV 2020</li>
<li>Associate Editor: IEEE Transactions on Circuits and Systems for Video Technology</li>
<li>Senior PC: IJCAI 2019, 2020, AAAI 2020, 2022</li>
<li>Organizer: CVPR 2020-2023 workshops on the "AI City Challenge"</li>
<li>CVPR 2019 tutorial: "Textures, Objects, Scenes: From Handcrafted Features to CNNs and Beyond"</li>
<li>Organizer: CVPR 2019 workshop on "Target Re-Identification and Multi-Target Multi-Camera Tracking"</li>
<li>Area Chair, ICMR 2019</li>
<li>Associate Editor, Visual Computer Journal</li>
<li>Area Chair, ICPR 2018</li>
<li>ECCV 2018 Tutorial: "Representation Learning in Pedestrian Re-identification".</li>
<li>ICPR 2018 Tutorial: "Person Re-identification: State of the Art and Future Trend".</li>
<li>Program Committee / Reviewer: CVPR 2017, 2018, 2019; ICCV 2017; ECCV 2016, 2018; ACM Multimedia 2015, 2016, 2017, 2018; TPAMI, IJCV, TIP, TMM, TCSVT</li>
</span>
</div>
<h2><strong>Funding</strong></h2>
<span style="line-height:22px;">
I have received research funding from the following sources.
<li>
Government: ARC DECRA Fellowship (2020-2022), ARC Discovery Project (2021-2023), ARC Linkage Project (ANU-Seeing Machines, 2022-2025).
</li>
<li>
University: CS Futures Fellowship, ANU Alliances Development Fund.
</li>
<li>
Industry: Seeing Machines, Data61 Collaborative Research Project, ANU-Medicago Research Collaboration.
</li>
</span>
<br><br>
<br class="clearfix" />
<div id="clustrmaps-widget"></div>
<script type="text/javascript"> var _clustrmaps = { 'url': 'http://www.liangzheng.com.cn', 'user': 1134134, 'server': '2', 'id': 'clustrmaps-widget', 'version': 1, 'date': '2014-03-01', 'lang': 'zh', 'corners': 'square' }; (function () { var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = 'http://www2.clustrmaps.com/counter/map.js'; var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); })();</script><noscript><a href="http://www2.clustrmaps.com/user/acf114e36"><img src="http://www2.clustrmaps.com/stats/maps-no_clusters/www.liangzheng.com.cn-thumb.jpg" alt="Locations of visitors to this page" /></a></noscript>
</body>
</html>