CVPR22

Workshop & Tutorial

ML

  • Hyperbolic:
    • Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers, Yunhui Guo, Xudong Wang, Yubei Chen, Stella X. Yu
    • CO-SNE: Dimensionality Reduction and Visualization for Hyperbolic Data
    • Nested Hyperbolic Spaces for Dimensionality Reduction and Hyperbolic NN Design, Xiran Fan, Chun-Hao Yang, Baba C. Vemuri
  • Condensing CNNs With Partial Differential Equations, Anil Kag, Venkatesh Saligrama
  • Deep Equilibrium Optical Flow Estimation, Shaojie Bai, Zhengyang Geng, Yash Savani, J. Zico Kolter
  • Optimization:
    • A Unified Framework for Implicit Sinkhorn Differentiation, Marvin Eisenberger, Aysim Toker, Laura Leal-Taixé, Florian Bernard, Daniel Cremers
    • Total Variation Optimization Layers for Computer Vision, Raymond A. Yeh, Yuan-Ting Hu, Zhongzheng Ren, Alexander G. Schwing
  • OpenSet:
    • Active Learning for Open-Set Annotation, Kun-Peng Ning, Xun Zhao, Yu Li, Sheng-Jun Huang
  • Metric learning:
    • Hyperbolic Vision Transformers: Combining Improvements in Metric Learning, Aleksandr Ermolov, Leyla Mirvakhabova, Valentin Khrulkov, Nicu Sebe, Ivan Oseledets
    • Non-Isotropy Regularization for Proxy-Based Deep Metric Learning, Karsten Roth, Oriol Vinyals, Zeynep Akata
    • Attributable Visual Similarity Learning, Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
  • Reflection and Rotation Symmetry Detection via Equivariant Learning, Ahyun Seo, Byungjin Kim, Suha Kwak, Minsu Cho
  • Explainable:
    • Cycle-Consistent Counterfactuals by Latent Transformations, Saeed Khorram, Li Fuxin
  • Dataset Distillation by Matching Training Trajectories, George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
  • Demystifying the Neural Tangent Kernel From a Practical Perspective: Can It Be Trusted for Neural Architecture Search Without Training? Jisoo Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon
  • Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent From the Decision Boundary Perspective, Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein

Recognition (Detection/Segmentation/...)

  • CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation, Qihang Yu, Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
  • Video:
    • Temporal Alignment Networks for Long-Term Video, Tengda Han, Weidi Xie, Andrew Zisserman
    • P3IV: Probabilistic Procedure Planning From Instructional Videos With Weak Supervision, He Zhao, Isma Hadji, Nikita Dvornik, Konstantinos G. Derpanis, Richard P. Wildes, Allan D. Jepson
    • Video Swin Transformer, Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, Han Hu
  • Fine-grained:
    • Dual Cross-Attention Learning for Fine-Grained Visual Categorization and Object Re-Identification, Haowei Zhu, Wenjing Ke, Dong Li, Ji Liu, Lu Tian, Yi Shan
    • Fine-Grained Object Classification via Self-Supervised Pose Alignment, Xuhui Yang, Yaowei Wang, Ke Chen, Yong Xu, Yonghong Tian
  • 3D:
    • Focal Sparse Convolutional Networks for 3D Object Detection, Yukang Chen, Yanwei Li, Xiangyu Zhang, Jian Sun, Jiaya Jia
  • A ConvNet for the 2020s, Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie

Image + Text

  • Learning To Prompt for Continual Learning, Zifeng Wang,
  • Generative:
    • Zero-Shot Text-Guided Object Generation With Dream Fields, Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole
    • DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation, Gwanghyun Kim, Taesung Kwon, Jong Chul Ye
    • CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields, Can Wang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao
  • Recognition:
    • Learning Transferable Human-Object Interaction Detector With Natural Language Supervision, Suchen Wang, Yueqi Duan, Henghui Ding, Yap-Peng Tan, Kim-Hui Yap, Junsong Yuan
  • Align and Prompt: Video-and-Language Pre-Training With Entity Prompts, Dongxu Li, Junnan Li, Hongdong Li, Juan Carlos Niebles, Steven C.H. Hoi
  • Prompt Distribution Learning, Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian
  • Grounded Language-Image Pre-Training, Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao
  • Vision-Language Pre-Training With Triple Contrastive Learning, Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang
  • Vision-Language Pre-Training for Boosting Scene Text Detectors, Sibo Song, Jianqiang Wan, Zhibo Yang, Jun Tang, Wenqing Cheng, Xiang Bai, Cong Yao
  • CLIP-Event: Connecting Text and Images With Event Structures, Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang
  • Unsupervised Vision-and-Language Pre-Training via Retrieval-Based Multi-Granular Alignment, Mingyang Zhou, Licheng Yu, Amanpreet Singh, Mengjiao Wang, Zhou Yu, Ning Zhang
  • DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis, Ming Tao, Hao Tang, Fei Wu, Xiao-Yuan Jing, Bing-Kun Bao, Changsheng Xu

Transformer

  • Multi-Frame Self-Supervised Depth With Transformers, Vitor
  • Continual Learning With Lifelong Vision Transformer, Zhen
  • TransGeo: Transformer Is All You Need for Cross-View Image Geo-Localization, Sijie Zhu, Mubarak Shah, Chen Chen
  • Edge:
  • From Scratch:
    • Training Object Detectors From Scratch: An Empirical Study in the Era of Vision Transformer, Weixiang Hong, Jiangwei Lao, Wang Ren, Jian Wang, Jingdong Chen, Wei Chu
      • A key finding: both architectural changes and longer training schedules (more epochs) are critical for training vision-transformer-based detectors from scratch.
    • Bootstrapping ViTs: Towards Liberating Vision Transformers From Pre-Training, Haofei Zhang, Jiarui Duan, Mengqi Xue, Jie Song, Li Sun, Mingli Song
  • Vision Transformer With Deformable Attention, Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang
  • Multi-scale:
    • MViTv2: Improved Multiscale Vision Transformers for Classification and Detection, Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer
  • GAN:
    • Styleformer: Transformer Based Generative Adversarial Networks With Style Vector, Jeeseung Park, Younggeun Kim
  • Detection:
    • Omni-DETR: Omni-Supervised Object Detection With Transformers, Pei Wang, Zhaowei Cai, Hao Yang, Gurumurthy Swaminathan, Nuno Vasconcelos, Bernt Schiele, Stefano Soatto
    • DESTR: Object Detection With Split Transformer, Liqiang He, Sinisa Todorovic
    • Few-Shot Object Detection With Fully Cross-Transformer, Guangxing Han, Jiawei Ma, Shiyuan Huang, Long Chen, Shih-Fu Chang
  • Model:
    • Reversible Vision Transformers, Karttikeya Mangalam, Haoqi Fan, Yanghao Li, Chao-Yuan Wu, Bo Xiong, Christoph Feichtenhofer, Jitendra Malik
  • Local (CNN):
    • Swin Transformer V2: Scaling Up Capacity and Resolution, Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo
    • CSWin Transformer: A General Vision Transformer Backbone With Cross-Shaped Windows, Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, Baining Guo
  • Segmentation:
    • TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation, Wenqiang Zhang, Zilong Huang, Guozhong Luo, Tao Chen, Xinggang Wang, Wenyu Liu, Gang Yu, Chunhua Shen
  • 3D:
    • Bridged Transformer for Vision and Point Cloud 3D Object Detection, Yikai Wang, TengQi Ye, Lele Cao, Wenbing Huang, Fuchun Sun, Fengxiang He, Dacheng Tao
  • Study:
    • Scaling Vision Transformers, Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, Lucas Beyer
      • ImageNet Low-shot:
        • 1-example: 69.52%;
        • 10-example: 84.86%;
      • ImageNet:
        • 2 billion parameters, 90.45% top-1 accuracy;
      • Core results:
        • Error decreases as compute (TPU-days), model size, and data all scale up;
        • The compute-error frontier follows a double-saturating power law;
        • Bigger models are more sample-efficient;
        • The scaling laws still hold with fewer images;
        • Memory can be saved by replacing the [class] token with global average pooling (GAP) or multihead attention pooling (MAP);
  • Mobile-Former: Bridging MobileNet and Transformer,
  • Delving Deep Into the Generalization of Vision Transformers Under Distribution Shifts, Chongzhi Zhang, Mingyuan Zhang, Shanghang Zhang, Daisheng Jin, Qiang Zhou, Zhongang Cai, Haiyu Zhao, Xianglong Liu, Ziwei Liu
  • [0830] A-ViT: Adaptive Tokens for Efficient Vision Transformer, Hongxu Yin, Arash Vahdat, Jose M. Alvarez, Arun Mallya, Jan Kautz, Pavlo Molchanov
  • [0835] MetaFormer Is Actually What You Need for Vision, Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan
  • CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings, Zhiwen Fan, Tianlong Chen, Peihao Wang, Zhangyang Wang
  • GAT-CADNet: Graph Attention Network for Panoptic Symbol Spotting in CAD Drawings, Zhaohua Zheng, Jianfang Li, Lingjie Zhu, Honghua Li, Frank Petzold, Ping Tan
  • CMT: Convolutional Neural Networks Meet Vision Transformers, Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, Chang Xu
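The last point in the scaling-study notes above (saving memory by dropping the [class] token in favor of GAP or MAP) is easy to sketch. The snippet below is a minimal illustration using NumPy with made-up shapes (196 patches, width 768), not the paper's code:

```python
import numpy as np

def gap_head(tokens: np.ndarray) -> np.ndarray:
    """Global average pooling (GAP): pool patch tokens of shape
    (batch, num_patches, dim) into a single (batch, dim) feature,
    so no extra [class] token needs to be carried through the network."""
    return tokens.mean(axis=1)

# Patch tokens only, no prepended [class] token.
batch = np.random.randn(2, 196, 768)
pooled = gap_head(batch)  # shape (2, 768), fed to the classifier head
```

MAP replaces the plain mean with a small attention layer that pools the same tokens with learned weights; either way the sequence is one token shorter at every layer, which is where the memory saving comes from.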

Generative

  • Self-Supervised Deep Image Restoration via Adaptive Stochastic Gradient Langevin Dynamics, Weixi Wang, Ji Li, Hui Ji
  • SphericGAN: Semi-Supervised Hyper-Spherical Generative Adversarial Networks for Fine-Grained Image Synthesis, Tianyi Chen, Yunfei Zhang, Xiaoyang Huo, Si Wu, Yong Xu, Hau San Wong
  • CoordGAN: Self-Supervised Dense Correspondences Emerge From GANs, Jiteng Mu, Shalini De Mello, Zhiding Yu, Nuno Vasconcelos, Xiaolong Wang, Jan Kautz, Sifei Liu
  • DDPM:
    • Diffusion Autoencoders: Toward a Meaningful and Decodable Representation, Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, Supasorn Suwajanakorn
    • Vector Quantized Diffusion Model for Text-to-Image Synthesis, Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo
    • RePaint: Inpainting Using Denoising Diffusion Probabilistic Models, Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool
    • Generating High Fidelity Data From Low-Density Regions Using Diffusion Models, Vikash Sehwag, Caner Hazirbas, Albert Gordo, Firat Ozgenel, Cristian Canton
    • Global Context With Discrete Diffusion in Vector Quantised Modelling for Image Generation, Minghui Hu, Yujie Wang, Tat-Jen Cham, Jianfei Yang, P.N. Suganthan
  • Polymorphic-GAN: Generating Aligned Samples Across Multiple Domains With Learned Morph Maps, Seung Wook Kim, Karsten Kreis, Daiqing Li, Antonio Torralba, Sanja Fidler
  • Ensembling Off-the-Shelf Models for GAN Training, Nupur Kumari, Richard Zhang, Eli Shechtman, Jun-Yan Zhu
  • StyleSwin: Transformer-Based GAN for High-Resolution Image Generation, Bowen Zhang, Shuyang Gu, Bo Zhang, Jianmin Bao, Dong Chen, Fang Wen, Yong Wang, Baining Guo
  • Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing, Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh

3D

  • PnP, pose-estimation:
    • Homography Loss for Monocular 3D Object Detection, Jiaqi Gu, Bojian Wu, Lubin Fan, Jianqiang Huang, Shen Cao, Zhiyu Xiang, Xian-Sheng Hua
    • EPro-PnP: Generalized End-to-End Probabilistic Perspective-N-Points for Monocular Object Pose Estimation, Hansheng Chen, Pichao Wang, Fan Wang, Wei Tian, Lu Xiong, Hao Li
    • Projective Manifold Gradient Layer for Deep Rotation Regression, Jiayi Chen, Yingda Yin, Tolga Birdal, Baoquan Chen, Leonidas J. Guibas, He Wang
  • NERF:
    • AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation, Paritosh Mittal, Yen-Chi Cheng, Maneesh Singh, Shubham Tulsiani
    • LOLNeRF: Learn From One Look, Daniel Rebain, Mark Matthews, Kwang Moo Yi, Dmitry Lagun, Andrea Tagliasacchi
    • AutoRF: Learning 3D Object Radiance Fields From Single View Observations, Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder
    • Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation, Shengqu Cai, Anton Obukhov, Dengxin Dai, Luc Van Gool
    • Point-NeRF: Point-Based Neural Radiance Fields, Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, Ulrich Neumann
    • NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction, Xiaoshuai Zhang, Sai Bi, Kalyan Sunkavalli, Hao Su, Zexiang Xu
    • Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields, Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
    • RegNeRF: Regularizing Neural Radiance Fields for View Synthesis From Sparse Inputs, Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan
    • [0858] Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields, Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan
    • Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction, Christiane Sommer, Lu Sang, David Schubert, Daniel Cremers
    • Block-NeRF: Scalable Large Scene Neural View Synthesis, Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T. Barron, Henrik Kretzschmar
    • GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation, Yu Deng, Jiaolong Yang, Jianfeng Xiang, Xin Tong
    • Deblur-NeRF: Neural Radiance Fields From Blurry Images, Li Ma, Xiaoyu Li, Jing Liao, Qi Zhang, Xuan Wang, Jue Wang, Pedro V. Sander
    • Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation, Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J. Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser
    • EfficientNeRF - Efficient Neural Radiance Fields, Tao Hu, Shu Liu, Yilun Chen, Tiancheng Shen, Jiaya Jia
    • Surface-Aligned Neural Radiance Fields for Controllable 3D
    • Structured Local Radiance Fields for Human Avatar Modeling,
    • NeRF in the Dark: High Dynamic Range View Synthesis From Noisy Raw Images, Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan, Jonathan T. Barron
    • DIVeR: Real-Time and Accurate Neural Radiance Fields With Deterministic Integration for Volume Rendering, Liwen Wu, Jae Yong Lee, Anand Bhattad, Yu-Xiong Wang, David Forsyth
    • HumanNeRF: Free-Viewpoint Rendering of Moving People From Monocular Video, Chung-Yi Weng, Brian Curless, Pratul P. Srinivasan, Jonathan T. Barron, Ira Kemelmacher-Shlizerman
    • Neural Reflectance for Shape Recovery With Shadow Handling, Junxuan Li, Hongdong Li
    • BokehMe: When Neural Rendering Meets Classical Rendering, Juewen Peng, Zhiguo Cao, Xianrui Luo, Hao Lu, Ke Xian, Jianming Zhang
  • Fusion:
    • TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection With Transformers, Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, Chiew-Lan Tai
  • Rotationally Equivariant 3D Object Detection, Hong-Xing Yu, Jiajun Wu, Li Yi
  • Human Mesh Recovery From Multiple Shots, Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa
  • Disentangled3D: Learning a 3D Generative Model With Disentangled Geometry and Appearance From Monocular Images, Ayush Tewari, Mallikarjun B R, Xingang Pan, Ohad Fried, Maneesh Agrawala, Christian Theobalt
  • Learning 3D Object Shape and Layout Without 3D Supervision, Georgia Gkioxari, Nikhila Ravi, Justin Johnson
  • φ-SfT: Shape-From-Template With a Physics-Based Deformation Model, Navami Kairanda, Edith Tretschk, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik
  • ROCA: Robust CAD Model Retrieval and Alignment From a Single Image, Can Gümeli, Angela Dai, Matthias Nießner
  • Neural 3D Scene Reconstruction With the Manhattan-World Assumption, Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, Xiaowei Zhou
  • Input-Level Inductive Biases for 3D Reconstruction, Wang Yifan, Carl Doersch, Relja Arandjelović, João Carreira, Andrew Zisserman
  • RGB-Depth Fusion GAN for Indoor Depth Completion, Haowen Wang, Mingyuan Wang, Zhengping Che, Zhiyuan Xu, Xiuquan Qiao, Mengshi Qi, Feifei Feng, Jian Tang
  • PlanarRecon: Real-Time 3D Plane Detection and Reconstruction From Posed Monocular Videos, Yiming Xie, Matheus Gadelha, Fengting Yang, Xiaowei Zhou, Huaizu Jiang
  • Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations, Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lučić, Daniel Duckworth, Alexey Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, Andrea Tagliasacchi
  • NeurMiPs: Neural Mixture of Planar Experts for View Synthesis, Zhi-Hao Lin, Wei-Chiu Ma, Hao-Yu Hsu, Yu-Chiang Frank Wang, Shenlong Wang
  • JoinABLe: Learning Bottom-Up Assembly of Parametric CAD
  • ImplicitAtlas: Learning Deformable Shape Templates in Medical
  • Primitive3D: 3D Object Dataset Synthesis From Randomly Assembled Primitives, Xinke Li, Henghui Ding, Zekun Tong, Yuwei Wu, Yeow Meng Chee

Pre-Training, SSL

  • Revisiting Weakly Supervised Pre-Training of Visual Perception Models, Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, Laurens van der Maaten
  • Self-Supervised Models Are Continual Learners, Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, Julien Mairal
  • The Two Dimensions of Worst-Case Training and Their Integrated Effect for Out-of-Domain Generalization, Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing
  • SimMIM: A Simple Framework for Masked Image Modeling, Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu
  • Contrastive Conditional Neural Processes, Zesheng Ye, Lina Yao
  • MAT: Mask-Aware Transformer for Large Hole Image Inpainting, Wenbo Li, Zhe Lin, Kun Zhou, Lu Qi, Yi Wang, Jiaya Jia
  • Exploring the Equivalence of Siamese Self-Supervised Learning via a Unified Gradient Framework, Chenxin Tao, Honghui Wang, Xizhou Zhu, Jiahua Dong, Shiji Song, Gao Huang, Jifeng Dai
  • BEVT: BERT Pretraining of Video Transformers, Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Yu-Gang Jiang, Luowei Zhou, Lu Yuan
  • Masked Autoencoders Are Scalable Vision Learners, Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick
  • Point-Level Region Contrast for Object Detection Pre-Training, Yutong Bai, Xinlei Chen, Alexander Kirillov, Alan Yuille, Alexander C. Berg
  • Efficient Geometry-Aware 3D Generative Adversarial Networks,