<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta name="keywords" content="Weihua Chen, Victor Chen, 陈威华, CRIPAC, NLPR, CASIA, BJTU, Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation Chinese Academy of Sciences, Beijing Jiaotong University" />
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<link rel="stylesheet" href="style.css" type="text/css" />
<link rel="shortcut icon" href="fig/cripac.png">
<title>Weihua Chen's Homepage</title>
</head>
<body>
<div id="layout-content">
<script type="text/javascript">
<!--
// Toggle Display of BibTeX
function toggleBibtex(articleid) {
var bib = document.getElementById(articleid);
// Toggle
if(bib.style.display == "none") {
bib.style.display = "";
}
else {
bib.style.display = "none";
}
}
-->
</script>
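<!-- Usage sketch for toggleBibtex (a hedged example; no BibTeX blocks are wired up on this page).
     It shows the intended pattern: a link passes the id of a hidden container holding a BibTeX entry,
     and the function toggles that container's display. The id "chen2017quadruplet_bib" and the entry
     body below are illustrative assumptions, not content from this site.

       <a href="javascript:toggleBibtex('chen2017quadruplet_bib');">[BibTeX]</a>
       <div id="chen2017quadruplet_bib" style="display:none">
         <pre>@inproceedings{chen2017quadruplet, ...}</pre>
       </div>
-->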
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-40926388-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
<table class="imgtable"><tr><td>
<img src="fig/cwh.jpg" alt="alt text" width="260px" height="281px" /> </td>
<td align="left">
<div id="toptitle">
<h1>
<a href="http://cwhgn.github.io./">Weihua Chen</a> 陈威华
</h1>
</div>
<p>
Senior Algorithm Engineer at Alibaba DAMO Academy.<br /><br />
Email: <a href="mailto:kugang.cwh@alibaba-inc.com">kugang.cwh at alibaba-inc.com</a><br /><br />
[<a href= "https://scholar.google.com/citations?user=KWVlYaMAAAAJ&hl=zh-CN">Google Scholar</a>] [<a href= "https://github.com/cwhgn/cwhgn.github.io/blob/master/assets/resume.pdf">Resume</a>]<br />
</p>
</td></tr></table>
<h2>News</h2>
<ul>
<li>
<p>[Paper Accepted] SOLIDER is accepted by <b>CVPR 2023</b>. It achieves state-of-the-art results on six downstream human-centric visual tasks, and its project website has been released [<a href="https://github.com/tinyvision/SOLIDER">link</a>], attracting <b>1.8K+ stars</b>.</p>
</li>
<li>
<p>[Code Released] The project website of <b>DAMO-YOLO</b> is released [<a href="https://github.com/tinyvision/DAMO-YOLO">link</a>]. It outperforms the state-of-the-art YOLO series and has attracted <b>3.8K+ stars</b> on GitHub.</p>
</li>
<li>
<p>[Paper Accepted] One paper is accepted by <b>ECCV 2022</b>, and the code has been released [<a href="https://github.com/dcp15/UAL/tree/master">link</a>].</p>
</li>
<li>
<p>[Paper Accepted] TagPerson is accepted by <b>ACM MM 2022</b>, and the code has been released [<a href="https://github.com/tagperson/tagperson-blender">link</a>].</p>
</li>
<li>
<p>[Paper Accepted] One paper is accepted by <b>IEEE TIFS</b>.</p>
</li>
<li>
<p>[Paper Accepted] CDTrans is accepted by <b>ICLR 2022</b>, and the code has been released [<a href="https://github.com/CDTrans/CDTrans">link</a>].</p>
</li>
</ul>
<h2>
Biography
</h2>
<ul>
<li>
<p>
He joined Alibaba in 2018 and is now a senior algorithm engineer at Alibaba DAMO Academy.
His current research focuses on self-supervised, semi-supervised and unsupervised learning, as well as domain adaptation.
He has published more than 30 papers in international journals and top conferences such as CVPR, ICCV, ECCV, ICLR and ACM MM.
He has won awards in multiple challenges at top conferences, including 5 first places, 2 second places and 1 third place.
</p>
</li>
<li>
<p>
Weihua Chen obtained his Ph.D. degree at the <a href="http://www.nlpr.ia.ac.cn/">National Laboratory of Pattern Recognition (NLPR)</a>,
<a href="http://english.ia.cas.cn/">Institute of Automation, Chinese Academy of Sciences (CASIA)</a>,
as a member of the <a href="http://www.cripac.ia.ac.cn/">Center for Research on Intelligent Perception and Computing (CRIPAC)</a>.
He received his Bachelor's and Master's degrees from <a href="http://www.bjtu.edu.cn">Beijing Jiaotong University (BJTU)</a>.
</p>
</li>
</ul>
<h2>Selected Publications (# corresponding author, * equal contribution)</h2>
<ul>
<li>
<a href="https://arxiv.org/abs/2303.17602">Beyond Appearance: a Semantic Controllable Self-Supervised Learning Framework for Human-Centric Visual Tasks</a><br />
<b>Weihua Chen</b>, Xianzhe Xu, Jian Jia, Hao Luo, Yaohua Wang, Fan Wang, Rong Jin, Xiuyu Sun<br />
<i>The Conference on Computer Vision and Pattern Recognition</i> (<b>CVPR</b>), 2023. <br />
[<a href="https://arxiv.org/abs/2303.17602">PDF</a>]
[<a href="https://github.com/tinyvision/SOLIDER">Project Website</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/tinyvision/SOLIDER?style=social" />
<br /><br />
</li>
<li>
<a href="https://arxiv.org/pdf/2305.02722.pdf">Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty</a><br />
Yuan Zhang*, <b>Weihua Chen*</b>, Yichen Lu*, Tao Huang, Xiuyu Sun, Jian Cao<br />
<i>The 31st ACM International Conference on Multimedia</i> (<b>ACM MM</b>), 2023. <br />
[<a href="https://arxiv.org/pdf/2305.02722.pdf">PDF</a>]
<br /><br />
</li>
<li>
<a href="https://arxiv.org/abs/2306.08789">Efficient Token-Guided Image-Text Retrieval with Consistent Multimodal Contrastive Training</a><br />
Chong Liu, Yuqi Zhang, Hongsong Wang#, <b>Weihua Chen#</b>, Fan Wang, Yan Huang, Yi-Dong Shen, Liang Wang<br />
<i>IEEE Transactions on Image Processing</i> (<b>TIP</b>), 2023. <br />
[<a href="https://arxiv.org/abs/2306.08789">PDF</a>]
[<a href="https://github.com/LCFractal/TGDT">Code</a>]
<br /><br />
</li>
<li>
<a href="https://arxiv.org/pdf/2306.08792.pdf">Graph Convolution Based Efficient Re-Ranking for Visual Retrieval
</a> <br />
Yuqi Zhang, Qi Qian, Hongsong Wang, Chong Liu, <b>Weihua Chen</b>, Fan Wang<br />
<i>IEEE Transactions on Multimedia</i> (<b>TMM</b>), 2023. <br />
[<a href="https://arxiv.org/pdf/2306.08792.pdf">PDF</a>]
[<a href="https://github.com/WesleyZhang1991/GCN_rerank">Code</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/WesleyZhang1991/GCN_rerank?style=social" />
<br /><br />
</li>
<li>
<a href="https://dl.acm.org/doi/pdf/10.1145/3503161.3548013">TAGPerson: A Target-Aware Generation Pipeline for Person Re-identification</a> <br />
Kai Chen, <b>Weihua Chen#</b>, Tao He, Rong Du, Fan Wang, Xiuyu Sun, Yuchen Guo, Guiguang Ding<br />
<i>The 30th ACM International Conference on Multimedia</i> (<b>ACM MM</b>), 2022. <br />
[<a href="https://dl.acm.org/doi/pdf/10.1145/3503161.3548013">PDF</a>]
[<a href="https://github.com/tagperson/tagperson-blender">Code</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/tagperson/tagperson-blender?style=social" />
<br /><br />
</li>
<li>
<a href="https://openreview.net/pdf?id=XGzk5OKWFFc">CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation</a> <br />
Tongkun Xu*, <b>Weihua Chen*</b>, Pichao Wang, Fan Wang, Hao Li, Rong Jin<br />
<i>The International Conference on Learning Representations</i> (<b>ICLR</b>), 2022. <br />
[<a href="https://openreview.net/pdf?id=XGzk5OKWFFc">PDF</a>]
[<a href="https://github.com/CDTrans/CDTrans">Code</a>]
[<a href="https://www.youtube.com/watch?v=d1WLO61s2Js">Slides</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/CDTrans/CDTrans?style=social" />
<br /><br />
</li>
<li>
<a href="https://arxiv.org/pdf/2211.15444v2.pdf">DAMO-YOLO: A Report on Real-Time Object Detection Design</a> <br />
Xianzhe Xu*, Yiqi Jiang*, <b>Weihua Chen*</b>, Yilun Huang*, Yuan Zhang* and Xiuyu Sun<br />
<i>arXiv preprint arXiv:2211.15444</i>.<br />
[<a href="https://arxiv.org/pdf/2211.15444v2.pdf">PDF</a>]
[<a href="https://github.com/tinyvision/DAMO-YOLO">Project Website</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/tinyvision/DAMO-YOLO?style=social" />
<br /><br />
</li>
<li>
<a href="https://arxiv.org/pdf/2210.13440.pdf">Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval</a> <br />
Zhaopeng Dou, Zhongdao Wang, <b>Weihua Chen</b>, Yali Li, and Shengjin Wang<br />
<i>The European Conference on Computer Vision</i> (<b>ECCV</b>), 2022. <br />
[<a href="https://arxiv.org/pdf/2210.13440.pdf">PDF</a>]
[<a href="https://github.com/dcp15/UAL/tree/master">Code</a>]
<br /><br />
</li>
<li>
<a href="https://ieeexplore.ieee.org/abstract/document/9672094">Multi-view Evolutionary Training for Unsupervised Domain Adaptive Person Re-Identification</a> <br />
Jianyang Gu, <b>Weihua Chen</b>, Hao Luo, Fan Wang, Hao Li, Wei Jiang, Weijie Mao<br />
<i>IEEE Transactions on Information Forensics and Security</i> (<b>TIFS</b>), 2022. <br />
[<a href="https://ieeexplore.ieee.org/abstract/document/9672094">PDF</a>]
<br /><br />
</li>
<li>
<a href="https://arxiv.org/pdf/2108.09977.pdf">Exploring the Quality of GAN Generated Images for Person Re-Identification</a> <br />
Yiqi Jiang*, <b>Weihua Chen*</b>, Xiuyu Sun, Xiaoyu Shi, Fan Wang, Hao Li<br />
<i>The 29th ACM International Conference on Multimedia</i> (<b>ACM MM</b>), 2021. <br />
[<a href="https://arxiv.org/pdf/2108.09977.pdf">PDF</a>]
<br /><br />
</li>
<li>
<a href="https://openaccess.thecvf.com/content/ICCV2021/papers/Isobe_Towards_Discriminative_Representation_Learning_for_Unsupervised_Person_Re-Identification_ICCV_2021_paper.pdf">Towards discriminative representation learning for unsupervised person re-identification</a> <br />
Takashi Isobe, Dong Li, Lu Tian, <b>Weihua Chen</b>, Yi Shan, Shengjin Wang<br />
<i>The IEEE/CVF International Conference on Computer Vision</i> (<b>ICCV</b>), 2021. <br />
[<a href="https://openaccess.thecvf.com/content/ICCV2021/papers/Isobe_Towards_Discriminative_Representation_Learning_for_Unsupervised_Person_Re-Identification_ICCV_2021_paper.pdf">PDF</a>]
<br /><br />
</li>
<li>
<a href="https://arxiv.org/abs/1704.01719"> Beyond triplet loss: a deep quadruplet network for person re-identification</a> <br />
<b>Weihua Chen</b>, Xiaotang Chen, Jianguo Zhang, Kaiqi Huang <br />
<i>The Conference on Computer Vision and Pattern Recognition</i> (<b>CVPR Spotlight</b>), 2017. <br />
[<a href="https://arxiv.org/pdf/1704.01719.pdf">PDF</a>]
[<a href="https://www.youtube.com/watch?v=_o2SLgjejAE">Slides</a>]
<br /><br />
</li>
<li>
<a href="http://arxiv.org/abs/1607.05369"> A Multi-task Deep Network for Person Re-identification</a> <br />
<b>Weihua Chen</b>, Xiaotang Chen, Jianguo Zhang, Kaiqi Huang <br />
<i>The Thirty-First AAAI Conference on Artificial Intelligence</i> (<b>AAAI Oral</b>), 2017. <br />
[<a href="https://arxiv.org/pdf/1607.05369v3.pdf">PDF</a>]
[<a href="https://github.com/cwhgn/MTDnet">Code</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/cwhgn/MTDnet?style=social" />
<br /><br />
</li>
<li>
<a href="http://arxiv.org/abs/1502.03532"> An Equalised Global Graphical Model-Based Approach for Multi-Camera Object Tracking</a> <br />
<b>Weihua Chen</b>, Lijun Cao, Xiaotang Chen, Kaiqi Huang <br />
<i>IEEE Transactions on Circuits and Systems for Video Technology</i> (<b>TCSVT</b>), 2016. <br />
[<a href="http://arxiv.org/pdf/1502.03532v1.pdf">PDF</a>]
[<a href="https://github.com/cwhgn/EGTracker">Code</a>]
[<a href="https://www.youtube.com/watch?v=GZ2u2tvzgi4">Demo</a>]
<img alt="GitHub stars" style="vertical-align:middle" src="https://img.shields.io/github/stars/cwhgn/EGTracker?style=social" />
<br /><br />
</li>
</ul>
<h2>Honors and Awards</h2>
<ul>
<li>
Won the 1st place in <a href="https://www.aicitychallenge.org/">AICITY Challenge</a> Track 3 Multi-Camera Vehicle Tracking at CVPR 2021.
[<a href="https://openaccess.thecvf.com/content/CVPR2021W/AICity/papers/Liu_City-Scale_Multi-Camera_Vehicle_Tracking_Guided_by_Crossroad_Zones_CVPRW_2021_paper.pdf">PDF</a>]
[<a href="https://github.com/LCFractal/AIC21-MTMC">Code</a>]
</li><br />
<li>
Won the 1st place in <a href="https://www.aicitychallenge.org/">AICITY Challenge</a> Track 2 Vehicle Re-Identification at CVPR 2021.
[<a href="https://openaccess.thecvf.com/content/CVPR2021W/AICity/papers/Luo_An_Empirical_Study_of_Vehicle_Re-Identification_on_the_AI_City_CVPRW_2021_paper.pdf">PDF</a>]
[<a href="https://github.com/michuanhaohao/AICITY2021_Track2_DMT">Code</a>]
</li><br />
<li>
Win the 1st place in <a href="https://iccv2021-mmp.github.io/">Multi-camera Multi-Person tracking</a> on ICCV 2021.
</li><br />
<li>
Won the 1st place in the <a href="https://motchallenge.net/results/TAO_Challenge/">Tracking Any Objects (TAO) Challenge</a> at ECCV 2020.
[<a href="https://arxiv.org/abs/2101.08040">PDF</a>]
[<a href="https://github.com/feiaxyt/Winner_ECCV20_TAO">Code</a>]
</li><br />
<li>
Win the 1st place <a href="http://ai.bu.edu/visda-2020/">Visual Domain Adaptation (VisDA) Challenge</a> on ECCV 2020.
[<a href="https://arxiv.org/abs/2012.13498">PDF</a>]
[<a href="https://github.com/vimar-gu/Bias-Eliminate-DA-ReID">Code</a>]
</li><br />
<li>
Won the 2nd place (2/263, top 1%) in the Google Landmark Retrieval Competition at ICCV 2021 and a silver medal in the Google Landmark Recognition Competition.
[<a href="https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution/blob/master/ILR2021_2nd_solution.pdf">PDF</a>]
[<a href="https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution">Code</a>]
[<a href="https://www.kaggle.com/c/landmark-retrieval-2021/discussion/277273">Kaggle Poster</a>]
[<a href="https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution/blob/master/ILR21_RET_2nd-slides.pdf">Slides</a>]
[<a href="https://www.youtube.com/watch?v=bkT2Judxf_s">Video</a>]
</li><br />
<li>
Won the 2nd place in <a href="https://github.com/JonathonLuiten/TrackEval/blob/master/docs/RobMOTS-Official/Readme.md">RobMOTS: The Ultimate Tracking Challenge</a> at CVPR 2021.
[<a href="https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/workshop/challenge/2nd/SBT_RobMOTS.pdf">PDF</a>]
</li><br />
<li>
Won the 3rd place in <a href="https://www.aicitychallenge.org/2020-ai-city-challenge/">AICITY Challenge</a> Track 2 Vehicle Re-Identification at CVPR 2020.
[<a href="https://openaccess.thecvf.com/content_CVPRW_2020/papers/w35/He_Multi-Domain_Learning_and_Identity_Mining_for_Vehicle_Re-Identification_CVPRW_2020_paper.pdf">PDF</a>]
[<a href="https://github.com/heshuting555/AICITY2020_DMT_VehicleReID">Code</a>]
</li><br />
<li>
Organized <a href="http://mct.idealtest.org/">the Multi-Camera Object Tracking (<b>MCT</b>) Challenge</a>
in the <a href="http://www.vs-re-id-2014.org/">Visual Surveillance and Re-identification Workshop</a> at ECCV 2014.
</li><br />
<li>
Serve as Area Chair for VALSE 2023 and PRCV 2023.
</li><br />
<li>
Serve as Reviewer for top conferences and journals, such as PAMI/TIP/TIFS/TCSVT/CVPR/ICCV/ECCV/NIPS.
</li><br />
<li>
Serve as a member of the Technical Committee on Machine Vision of the China Society of Image and Graphics (CSIG).
</li><br />
<li>
Serve as a Youth Committee member of the Beijing Society of Image and Graphics (BSIG).
</li><br />
<li>
Gave an invited tutorial talk at IJCB 2021 on <a href="https://ijcb2021.iapr-tc4.org/tutorials/#Tutorial_3_Human-centric_Visual_Understanding_From_Research_to_Applications">Human-centric Visual Understanding: From Research to Applications</a>.
</li><br />
</ul>
<p>
<a href="http://english.ia.cas.cn/"><img src="fig/casia.jpg" alt="alt text" width="50px" height="50px"/></a>
<a href="http://www.nlpr.ia.ac.cn/"><img src="fig/nlpr.jpg" alt="alt text" width="50px" height="50px"/></a>
<a href="http://www.cripac.ia.ac.cn/"><img src="fig/cripac.png" alt="alt text" width="50px" height="50px"/></a>
<a href="http://www.bjtu.edu.cn/"><img src="fig/bjtu.jpg" alt="alt text" width="50px" height="50px"/></a>
</p>
<div id="footer">
<div id="footer-text">
<br />Last updated on 2022-07-01 by Weihua Chen.
</div>
</div>
</div>
</body>
</html>