Happy graduation! Last updated July 5, 2021: +104 papers, +9 open-source projects
@Author: Yanmin Wu
@E-mail: wuyanminmax[AT]gmail.com
@GitHub: wuxiaolang
The papers, code, and other resources collected here mainly relate to my master's research directions: visual SLAM and augmented reality. From 2019 to 2021 I focused on VO, object-level SLAM, and semantic data association, with some attention to sensor fusion and dense mapping, so the scope of this collection follows my own interests. It cannot cover all of visual SLAM research, so please use it with discretion; parts of it have been organized and published on Zhihu. The main contents are:
1. Open-source code: classic, high-quality open-source projects
2. Notable authors and labs: teams and individuals worth following in my areas of interest
3. SLAM learning resources: SLAM learning materials, videos, datasets, WeChat accounts, and annotated code
4. Recent papers: the latest papers in my areas of interest, updated roughly once a month. Detailed/skim reading notes for some papers are posted on my blog/List.
🌚 This repository was started (privately) in March 2019, in the first year of my master's;
🌝 It was made public in March 2020, in my second year — exactly one year later;
🎓 In July 2021 I graduated with my master's degree, so this repository may not be updated very frequently from now on. Best wishes for your study and work; for academic exchange, feel free to e-mail wuyanminmax[AT]gmail.com.
The GayHub browser extension is recommended for automatically expanding the table of contents in the sidebar.
- 1. Open-source code
- 2. Notable authors and labs
- 3. SLAM learning resources
- 4. Recent paper updates
- June 2021 paper updates (20 papers)
- May 2021 paper updates (20 papers)
- April 2021 paper updates (20 papers)
- March 2021 paper updates (23 papers)
- February 2021 paper updates (21 papers)
- January 2021 paper updates (20 papers)
- -- ↑ 2021 ↑ === ↓ 2020 ↓ --
- December 2020 paper updates (18 papers)
- November 2020 paper updates (20 papers)
- October 2020 paper updates (22 papers)
- September 2020 paper updates (20 papers)
- August 2020 paper updates (30 papers)
- July 2020 paper updates (20 papers)
- June 2020 paper updates (20 papers)
- May 2020 paper updates (20 papers)
- April 2020 paper updates (22 papers)
- March 2020 paper updates (23 papers)
- February 2020 paper updates (17 papers)
- January 2020 paper updates (26 papers)
- -- ↑ 2020 ↑ === ↓ 2019 ↓ --
- December 2019 paper updates (23 papers)
- November 2019 paper updates (17 papers)
- October 2019 paper updates (22 papers)
- September 2019 paper updates (24 papers)
- August 2019 paper updates (26 papers)
- July 2019 paper updates (36 papers)
- June 2019 paper updates (21 papers)
- May 2019 paper updates (51 papers)
- April 2019 paper updates (17 papers)
- March 2019 paper updates (13 papers)
This part was organized and published on Zhihu (March 31, 2020): https://zhuanlan.zhihu.com/p/115599978/
- Paper: Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on. IEEE, 2007: 225-234.
  - Code: https://github.com/Oxford-PTAM/PTAM-GPL
  - Project page: http://www.robots.ox.ac.uk/~gk/PTAM/
  - Author's other research: http://www.robots.ox.ac.uk/~gk/publications.html
- Paper: Taihú Pire, Thomas Fischer, Gastón Castro, Pablo De Cristóforis, Javier Civera and Julio Jacobo Berlles. S-PTAM: Stereo Parallel Tracking and Mapping. Robotics and Autonomous Systems, 2017.
  - Code: https://github.com/lrse/sptam
  - Author's other papers: Castro G, Nitsche M A, Pire T, et al. Efficient on-board Stereo SLAM through constrained-covisibility strategies[J]. Robotics and Autonomous Systems, 2019.
- Paper: Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE transactions on pattern analysis and machine intelligence, 2007, 29(6): 1052-1067.
  - Code: https://github.com/hanmekim/SceneLib2
- Paper: Mur-Artal R, Tardós J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
  - Code: https://github.com/raulmur/ORB_SLAM2
  - Author's other papers:
    - Monocular semi-dense mapping: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems. 2015.
    - VIORB: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
    - Multi-map: Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585, 2019.
Items 5, 6, 7, and 8 below are all from the TUM Computer Vision Group; official page: https://vision.in.tum.de/research/vslam/dso
- Paper: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(3): 611-625.
  - Code: https://github.com/JakobEngel/dso
  - Stereo DSO: Wang R, Schworer M, Cremers D. Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3903-3911.
  - VI-DSO: Von Stumberg L, Usenko V, Cremers D. Direct sparse visual-inertial odometry using dynamic marginalization[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 2510-2517.
- Xiang Gao's work adding loop closure to DSO
  - Paper: Gao X, Wang R, Demmel N, et al. LDSO: Direct sparse odometry with loop closure[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 2198-2204.
  - Code: https://github.com/tum-vision/LDSO
- Paper: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European conference on computer vision. Springer, Cham, 2014: 834-849.
  - Code: https://github.com/tum-vision/lsd_slam
- Paper: Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 2100-2106.
  - Code 1: https://github.com/tum-vision/dvo_slam
  - Code 2: https://github.com/tum-vision/dvo
  - Other papers:
    - Kerl C, Sturm J, Cremers D. Robust odometry estimation for RGB-D cameras[C]//2013 IEEE international conference on robotics and automation. IEEE, 2013: 3748-3754.
    - Steinbrücker F, Sturm J, Cremers D. Real-time visual odometry from dense RGB-D images[C]//2011 IEEE international conference on computer vision workshops (ICCV Workshops). IEEE, 2011: 719-722.
- Robotics and Perception Group, University of Zurich
- Paper: Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE international conference on robotics and automation (ICRA). IEEE, 2014: 15-22.
  - Code: https://github.com/uzh-rpg/rpg_svo
  - Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2016, 33(2): 249-265.
- Paper: Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. arXiv preprint arXiv:1904.06577, 2019.
  - Code: https://github.com/jzubizarreta/dsm ; Video
- Paper: Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework[C]//Proceedings of the 27th ACM International Conference on Multimedia. 2019: 2292-2295.
  - Code: https://github.com/xdspacelab/openvslam ; documentation
- Paper: Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE(2)-XYZ Constraints[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3556-3562.
  - Code: https://github.com/izhengfan/se2lam
  - Another work by the same author
    - Paper: Zheng F, Tang H, Liu Y H. Odometry-vision-based ground vehicle motion estimation with SE(2)-constrained SE(3) poses[J]. IEEE transactions on cybernetics, 2018, 49(7): 2652-2663.
    - Code: https://github.com/izhengfan/se2clam
- Paper: Chen Y, Shen S, Chen Y, et al. Graph-Based Parallel Large Scale Structure from Motion[J]. arXiv preprint arXiv:1912.10659, 2019.
  - Code: https://github.com/AIBluefisher/GraphSfM
- Paper: Lee S H, Civera J. Loosely-Coupled semi-direct monocular SLAM[J]. IEEE Robotics and Automation Letters, 2018, 4(2): 399-406.
  - Code: https://github.com/sunghoon031/LCSD_SLAM ; Google Scholar ; demo video
  - Another paper by the author on monocular scale (code available): Lee S H, de Croon G. Stability-based scale estimation for monocular SLAM[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 780-787.
- Paper: Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 154-160.
  - Code: https://github.com/fabianschenk/RESLAM ; project page
- Paper: Mo J, Sattar J. Extending Monocular Visual Odometry to Stereo Camera System by Scale Optimization[C]. International Conference on Intelligent Robots and Systems (IROS), 2019.
  - Code: https://github.com/jiawei-mo/scale_optimization
- Paper: Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 134-144.
  - Code: https://github.com/ETH3D/badslam
- Paper: Zhao Y, Xu S, Bu S, et al. GSLAM: A general SLAM framework and benchmark[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 1110-1120.
  - Code: https://github.com/zdzhaoyong/GSLAM
- Paper: Nejad Z Z, Ahmadabadian A H. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs[J]. Machine Vision and Applications, 2019: 1-10.
  - Code: https://github.com/zanazakaryaie/ARM-VO
- Paper: Ghaffari M, Clark W, Bloch A, et al. Continuous Direct Sparse Visual Odometry from RGB-D Images[J]. arXiv preprint arXiv:1904.02266, 2019.
  - Code: https://github.com/MaaniGhaffari/cvo-rgbd
- Paper: Bu S, Zhao Y, Wan G, et al. Map2DFusion: Real-time incremental UAV image mosaicing based on monocular SLAM[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4564-4571.
  - Code: https://github.com/zdzhaoyong/Map2DFusion
- Paper: Schmuck P, Chli M. CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams[J]. Journal of Field Robotics, 2019, 36(4): 763-781.
  - Code: https://github.com/VIS4ROB-lab/ccm_slam ; Video
- Paper: Carlos Campos, Richard Elvira, et al. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM[J]. arXiv preprint arXiv:2007.11898, 2020.
  - Code: https://github.com/UZ-SLAMLab/ORB_SLAM3 | Video
- Paper: Ferrera M, Eudes A, Moras J, et al. OV²SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1399-1406.
  - Code: https://github.com/ov2slam/ov2slam
- Paper: Zhou Y, Gallego G, Shen S. Event-based stereo visual odometry[J]. IEEE Transactions on Robotics, 2021.
  - Code: https://github.com/HKUST-Aerial-Robotics/ESVO
- Paper: Min Z, Dunn E. VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough[J]. arXiv preprint arXiv:2104.06800, 2021.
  - Code: https://github.com/htkseason/VOLDOR
- Paper: Runz M, Buffier M, Agapito L. MaskFusion: Real-time recognition, tracking and reconstruction of multiple moving objects[C]//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2018: 10-20.
  - Code: https://github.com/martinruenz/maskfusion
- Paper: McCormac J, Handa A, Davison A, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4628-4635.
  - Code: https://github.com/seaun163/semanticfusion
- Paper: Yang S, Huang Y, Scherer S. Semantic 3D occupancy mapping through efficient high order CRFs[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 590-597.
  - Code: https://github.com/shichaoy/semantic_3d_mapping
- Paper: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.
  - Code: https://github.com/MIT-SPARK/Kimera ; demo video
- Paper: Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics, 2019: 1-31.
  - Code: https://github.com/cognav/NeuroSLAM
  - The fourth author is the author of RatSLAM; the paper also compares more than ten brain-inspired SLAM approaches.
- Paper: Jatavallabhula K M, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019.
  - Code (expected release April 2020): https://github.com/montrealrobotics/gradSLAM ; project page, demo video
- https://github.com/floatlazer/semantic_slam
- https://github.com/qixuxiang/orb-slam2_with_semantic_labelling
- https://github.com/Ewenwan/ORB_SLAM2_SSD_Semantic
- Paper: Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM[C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019: 121-128.
  - Code: https://github.com/navganti/SIVO
- Paper: Shan An, Guangfu Che, Fangru Zhou, Xianglong Liu, Xin Ma, Yu Chen. Fast and Incremental Loop Closure Detection using Proximity Graphs. pp. 378-385, The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)
  - Code: https://github.com/AnshanTJU/FILD
- Paper: Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems, 2019: 1-10.
  - Code: https://github.com/CIFASIS/object-detection-sptam
- Paper: Torres-Camara J M, Escalona F, Gomez-Donoso F, et al. Map Slammer: Densifying Scattered KSLAM 3D Maps with Estimated Depth[C]//Iberian Robotics conference. Springer, Cham, 2019: 563-574.
  - Code: https://github.com/jmtc7/mapSlammer
- Paper: Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM[J]. arXiv preprint arXiv:1907.09760, 2019.
  - Code: https://github.com/bogus2000/NOLBO
- Paper: Tang J, Ericson L, Folkesson J, et al. GCNv2: Efficient correspondence prediction for real-time SLAM[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3505-3512.
  - Code: https://github.com/jiexiong2016/GCNv2_SLAM ; Video
- Paper: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537.
  - Code: https://github.com/PRBonn/semantic_suma/ ; Video
- Paper: Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural SLAM[C]. ICLR 2020.
  - Code: https://github.com/devendrachaplot/Neural-SLAM
- Paper: Wang W, Hu Y, Scherer S. TartanVO: A Generalizable Learning-based VO[J]. arXiv preprint arXiv:2011.00359, 2020.
  - Code: https://github.com/castacks/tartanvo
  - Dataset: TartanAir: A Dataset to Push the Limits of Visual SLAM (IROS 2020); dataset site
- Paper: Zhan H, Weerasekera C S, Bian J W, et al. DF-VO: What Should Be Learnt for Visual Odometry?[J]. arXiv preprint arXiv:2103.00933, 2021.
  - Zhan H, Weerasekera C S, Bian J W, et al. Visual odometry revisited: What should be learnt?[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 4203-4210.
  - Code: https://github.com/Huangying-Zhan/DF-VO
- Paper: Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 4211-4216.
  - Code: https://github.com/rubengooj/pl-svo
- Paper: Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526.
  - Code: https://github.com/rubengooj/stvo-pl
- Paper: Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479, 2017.
  - Code: https://github.com/rubengooj/pl-slam
  - Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.
- Paper: He Y, Zhao J, Guo Y, et al. PL-VIO: Tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159.
  - Code: https://github.com/HeYijia/PL-VIO
  - VINS + line segments: https://github.com/Jichao-Peng/VINS-Mono-Optimization
- Paper: Vakhitov A, Lempitsky V. Learnable line segment descriptor for visual SLAM[J]. IEEE Access, 2019, 7: 39923-39934.
  - Code: https://github.com/alexandervakhitov/lld-slam ; Video
There is plenty of other work combining points and lines; from China, for example:
- StructVIO from Prof. Danping Zou at Shanghai Jiao Tong University: Zou D, Wu Y, Pei L, et al. StructVIO: visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.
- From Zhejiang University: Zuo X, Xie X, Liu Y, et al. Robust visual SLAM with point and line features[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 1775-1782.
- Paper: Wietrzykowski J. On the representation of planes for efficient graph-based SLAM with high-level features[J]. Journal of Automation Mobile Robotics and Intelligent Systems, 2016, 10.
  - Code: https://github.com/LRMPUT/PlaneSLAM
  - Another open-source project by the author, with no corresponding paper found: https://github.com/LRMPUT/PUTSLAM
- Paper: Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 1278-1284.
  - Code: https://gitlab.com/gferrer/eigen-factors-iros2019 ; demo video
- Paper: Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 2019, 113: 160-173.
  - Code: https://github.com/LRMPUT/PlaneLoc
- Paper: Yang S, Song Y, Kaess M, et al. Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 1222-1229.
  - Code: https://github.com/shichaoy/pop_up_slam
- Paper: Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609.
  - Code: https://github.com/BeipengMu/objectSLAM ; Video
- Paper: Grinvald M, Furrer F, Novkovic T, et al. Volumetric instance-aware semantic mapping and 3D object discovery[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 3037-3044.
  - Code: https://github.com/ethz-asl/voxblox-plusplus
- Paper: Yang S, Scherer S. CubeSLAM: Monocular 3-D object SLAM[J]. IEEE Transactions on Robotics, 2019, 35(4): 925-938.
  - Code: https://github.com/shichaoy/cube_slam
  - Yes, this is the work that drew me into the field: after reading this paper (then a preprint) in November 2018 I started studying object-level SLAM. My notes and summary on CubeSLAM: link.
  - There is also a lot of interesting object-level SLAM work that is not open source:
    - Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.
    - Li J, Meger D, Dudek G. Semantic Mapping for View-Invariant Relocalization[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 7108-7115.
    - Nicholson L, Milford M, Sünderhauf N. QuadricSLAM: Dual quadrics from object detections as landmarks in object-oriented SLAM[J]. IEEE Robotics and Automation Letters, 2018, 4(1): 1-8.
- Paper: Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems[J]. IEEE Access, 2020.
  - Code: https://bitbucket.org/hridaybavle/semantic_slam/src/master/
- Paper: Li Y, Brasch N, Wang Y, et al. Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 6583-6590.
  - Code: https://github.com/yanyan-li/Structure-SLAM-PointLine
- Paper: Fu Q, Wang J, Yu H, et al. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line[J]. arXiv preprint arXiv:2009.07462, 2020.
  - Code: https://github.com/cnqiangfu/PL-VINS
- Paper: Sun K, Mohta K, Pfrommer B, et al. Robust stereo visual inertial odometry for fast autonomous flight[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 965-972.
  - Code: https://github.com/KumarRobotics/msckf_vio ; Video
- Paper: Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//2015 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2015: 298-304.
  - Code: https://github.com/ethz-asl/rovio ; Video
- Paper: Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326.
  - Code: https://github.com/rpng/R-VIO ; Video
- VI_ORB_SLAM2: https://github.com/YoujieXia/VI_ORB_SLAM2
- Paper: Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.
  - Code: https://github.com/ethz-asl/okvis
- Paper: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
  - Code: https://github.com/jingpang/LearnVIORB (VIORB itself was never open-sourced; this is a re-implementation by Jing Wang)
- Paper: Qin T, Li P, Shen S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
  - Code: https://github.com/HKUST-Aerial-Robotics/VINS-Mono
  - Stereo version, VINS-Fusion: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion
  - Mobile version, VINS-Mobile: https://github.com/HKUST-Aerial-Robotics/VINS-Mobile
- Paper: Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots[J]. Sensors, 2019, 19(10): 2251.
  - Code: https://github.com/STAR-Center/VINS-RGBD ; Video
- Paper: Geneva P, Eckenhoff K, Lee W, et al. OpenVINS: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019.
  - Code: https://github.com/rpng/open_vins
- Paper: Tschopp F, Riner M, Fehr M, et al. VersaVIS—An Open Versatile Multi-Camera Visual-Inertial Sensor Suite[J]. Sensors, 2020, 20(5): 1439.
  - Code: https://github.com/ethz-asl/versavis
- Paper: Eckenhoff K, Geneva P, Huang G. Closed-form preintegration methods for graph-based visual-inertial navigation[J]. The International Journal of Robotics Research, 2018.
  - Code: https://github.com/rpng/cpi ; Video
- Paper: Usenko V, Demmel N, Schubert D, et al. Visual-inertial mapping with non-linear factor recovery[J]. IEEE Robotics and Automation Letters, 2019.
  - Code: https://github.com/VladyslavUsenko/basalt-mirror ; Video ; project page
- Paper: Graeter J, Wilczynski A, Lauer M. LIMO: Lidar-monocular visual odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 7872-7879.
  - Code: https://github.com/johannes-graeter/limo ; Video
- Paper: Qiu X, Zhang H, Fu W, et al. Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End[J]. Sensors, 2019, 19(8): 1941.
  - Code: https://github.com/PetWorm/LARVIO
  - Work by Dr. Xiaochen Qiu of Beihang University
- Paper: Li J, Bao H, Zhang G. Rapid and Robust Monocular Visual-Inertial Initialization with Gravity Estimation via Vertical Edges[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 6230-6236.
  - Code: https://github.com/zju3dv/vig-init
  - Work from Prof. Guofeng Zhang's group at Zhejiang University
- Paper: Nagy B, Foehn P, Scaramuzza D. Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO[J]. arXiv preprint arXiv:2003.13493, 2020.
  - Code: https://github.com/uzh-rpg/vilib
- Paper: A. Rosinol, M. Abate, Y. Chang, L. Carlone. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping. IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.
  - Code: https://github.com/MIT-SPARK/Kimera-VIO
- Paper: Schneider T, Dymczyk M, Fehr M, et al. maplab: An open framework for research in visual-inertial mapping and localization[J]. IEEE Robotics and Automation Letters, 2018, 3(3): 1418-1425.
  - Code: https://github.com/ethz-asl/maplab
  - Multi-session mapping, map merging, visual-inertial batch optimization, and loop closure
- Paper: Li K, Li M, Hanebeck U D. Towards high-performance solid-state-lidar-inertial odometry and mapping[J]. arXiv preprint arXiv:2010.13150, 2020.
  - Code: https://github.com/KIT-ISAS/lili-om
- Paper: Zhu Y, et al. CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System. arXiv preprint arXiv:2011.11357, 2020.
  - Code: https://github.com/ISEE-Technology/CamVox
- Paper: Wang H, Wang C, Xie L. Lightweight 3-D Localization and Mapping for Solid-State LiDAR[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1801-1807.
  - Code: https://github.com/wh200720041/SSL_SLAM
- Paper: Lin J, Zheng C, Xu W, et al. R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping[J]. arXiv preprint arXiv:2102.12400, 2021.
  - Code: https://github.com/hku-mars/r2live
- Paper: Cao S, Lu X, Shen S. GVINS: Tightly Coupled GNSS-Visual-Inertial for Smooth and Consistent State Estimation[J]. arXiv preprint arXiv:2103.07899, 2021.
  - Code: https://github.com/HKUST-Aerial-Robotics/GVINS
- Paper: Shan T, Englot B, Ratti C, et al. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping[J]. arXiv preprint arXiv:2104.10831, 2021. (ICRA 2021)
  - Code: https://github.com/TixiaoShan/LVI-SAM
- Paper: Kochanov D, Ošep A, Stückler J, et al. Scene flow propagation for semantic mapping and object discovery in dynamic street scenes[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 1785-1792.
  - Code: https://github.com/ganlumomo/DynamicSemanticMapping ; wiki
- Paper: Yu C, Liu Z, Liu X J, et al. DS-SLAM: A semantic visual SLAM towards dynamic environments[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1168-1174.
  - Code: https://github.com/ivipsourcecode/DS-SLAM
- Paper: Rünz M, Agapito L. Co-Fusion: Real-time segmentation, tracking and fusion of multiple objects[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4471-4478.
  - Code: https://github.com/martinruenz/co-fusion ; Video
- Paper: Newcombe R A, Fox D, Seitz S M. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 343-352.
  - Code: https://github.com/mihaibujanca/dynamicfusion
- Paper: Palazzolo E, Behley J, Lottes P, et al. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals[J]. arXiv preprint arXiv:1905.02082, 2019.
  - Code: https://github.com/PRBonn/refusion ; Video
- Paper: Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517.
  - Code: https://github.com/AndreiBarsan/DynSLAM
  - Author's thesis: Barsan I A. Simultaneous localization and mapping in dynamic scenes[D]. ETH Zurich, Department of Computer Science, 2017.
- Paper: Zhang J, Henein M, Mahony R, et al. VDO-SLAM: A Visual Dynamic Object-aware SLAM System[J]. arXiv preprint arXiv:2005.11052, 2020. (under review at IJRR)
  - Related papers
  - Code: https://github.com/halajun/VDO_SLAM | video
- Paper: Prisacariu V A, Kähler O, Golodetz S, et al. InfiniTAM v3: A framework for large-scale 3D reconstruction with loop closure[J]. arXiv preprint arXiv:1708.00783, 2017.
  - Code: https://github.com/victorprad/InfiniTAM ; project page
- Paper: Dai A, Nießner M, Zollhöfer M, et al. BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 76a.
  - Code: https://github.com/niessner/BundleFusion ; project page
- Paper: Newcombe R A, Izadi S, Hilliges O, et al. KinectFusion: Real-time dense surface mapping and tracking[C]//2011 10th IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011: 127-136.
  - Code: https://github.com/chrdiller/KinectFusionApp
- Paper: Whelan T, Salas-Moreno R F, Glocker B, et al. ElasticFusion: Real-time dense SLAM and light source estimation[J]. The International Journal of Robotics Research, 2016, 35(14): 1697-1716.
  - Code: https://github.com/mp3guy/ElasticFusion
- Work from the same team as ElasticFusion; Stefan Leutenegger at Imperial College London (Google Scholar)
  - Paper: Whelan T, Kaess M, Johannsson H, et al. Real-time large-scale dense RGB-D SLAM with volumetric fusion[J]. The International Journal of Robotics Research, 2015, 34(4-5): 598-626.
  - Code: https://github.com/mp3guy/Kintinuous
- Paper: Choi S, Zhou Q Y, Koltun V. Robust reconstruction of indoor scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 5556-5565.
  - Code: https://github.com/qianyizh/ElasticReconstruction ; author's homepage
- Paper: Han L, Fang L. FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing[C]. RSS, 2018.
  - Code (never released): https://github.com/lhanaf/FlashFusion ; project page
- Paper: Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation[J]. Journal of Field Robotics, 2019, 36(2): 416-446.
  - Code: https://github.com/introlab/rtabmap ; Video ; project page
- Paper: Lan Z, Yew Z J, Lee G H. Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 9690-9698.
  - Code: https://github.com/ziquan111/RobustPCLReconstruction ; Video
- Paper: Wang C, Guo X. Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2019: 49-53.
  - Code: https://github.com/chaowang15/plane-opt-rgbd
- Paper: Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 6919-6925.
  - Code: https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping
- Paper: Schöps T, Sattler T, Pollefeys M. SurfelMeshing: Online surfel-based mesh reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
  - Code: https://github.com/puzzlepaint/surfelmeshing
- Paper: Concha Belenguer A, Civera Sancho J. DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence[C]//Proc. IEEE/RSJ Int. Conf. Intell. Rob. Syst. 2015 (ART-2015-92153).
  - Code: https://github.com/alejocb/dpptam
  - Related research, superpixel-based monocular SLAM: Using Superpixels in Monocular SLAM, ICRA 2014 ; Google Scholar
- Paper: Yang Z, Gao F, Shen S. Real-time monocular dense mapping on aerial robots using visual-inertial fusion[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4552-4559.
  - Code: https://github.com/dvorak0/VI-MEAN ; Video
- Paper: Pizzoli M, Forster C, Scaramuzza D. REMODE: Probabilistic, monocular dense reconstruction in real time[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 2609-2616.
  - Original open-source code: https://github.com/uzh-rpg/rpg_open_remode
  - Version combined with ORB-SLAM2: https://github.com/ayushgaud/ORB_SLAM2
- Dyson Robotics Laboratory, Imperial College London
- Paper: Czarnowski J, Laidlow T, Clark R, et al. DeepFactors: Real-Time Probabilistic Dense Monocular SLAM[J]. arXiv preprint arXiv:2001.05049, 2020.
  - Code: https://github.com/jczarnowski/DeepFactors (not yet released)
  - Other papers: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM—learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2560-2568.
- Prof. Shaojie Shen's team at HKUST
  - Paper: Ling Y, Wang K, Shen S. Probabilistic dense reconstruction from a moving camera[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6364-6371.
  - Code: https://github.com/ygling2008/probabilistic_mapping
  - The code for another dense mapping paper was never released (GitHub): Ling Y, Shen S. Real-time dense mapping for online processing and navigation[J]. Journal of Field Robotics, 2019, 36(5): 1004-1036.
- Paper: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems. 2015.
  - Code (not open-sourced by the authors; a re-implementation by He Yijia): https://github.com/HeYijia/ORB_SLAM2
  - Semi-dense mapping with line segments added
    - Paper: He S, Qin X, Zhang Z, et al. Incremental 3D line segment extraction from semi-dense SLAM[C]//2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018: 1658-1663.
    - Code: https://github.com/shidahe/semidense-lines
    - The author's follow-up work that uses this to guide remote grasping: https://github.com/atlas-jj/ORB-SLAM-free-space-carving
- Paper: Reijgwart V, Millane A, Oleynikova H, et al. Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps[J]. IEEE Robotics and Automation Letters, 2019, 5(1): 227-234.
  - Code: https://github.com/ethz-asl/voxgraph
- Paper: Dubé R, Cramariuc A, Dugas D, et al. SegMap: 3D segment mapping using data-driven descriptors[J]. arXiv preprint arXiv:1804.09557, 2018.
  - Code: https://github.com/ethz-asl/segmap
- Paper: Kern A, Bobbe M, Khedar Y, et al. OpenREALM: Real-time Mapping for Unmanned Aerial Vehicles[J]. arXiv preprint arXiv:2009.10492, 2020.
  - Code: https://github.com/laxnpander/OpenREALM
- Paper: Millane A, Taylor Z, Oleynikova H, et al. C-blox: A scalable and consistent TSDF-based dense mapping approach[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 995-1002.
  - Code: https://github.com/ethz-asl/cblox
- GTSAM:https://github.com/borglab/gtsam ;官网
- g2o:https://github.com/RainerKuemmerle/g2o
- ceres:http://ceres-solver.org/
- 论文:Liu H, Chen M, Zhang G, et al. Ice-ba: Incremental, consistent and efficient bundle adjustment for visual-inertial slam[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982.
- 代码:https://github.com/baidu/ICE-BA
- 论文:Dong J, Lv Z. miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework[J]. arXiv preprint arXiv:1909.00903, 2019.
- 代码:https://github.com/dongjing3309/minisam ; 文档
- 论文:Aloise I, Della Corte B, Nardi F, et al. Systematic Handling of Heterogeneous Geometric Primitives in Graph-SLAM Optimization[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2738-2745.
- 代码:https://srrg.gitlab.io/sashago-website/index.html#
- 论文:Hsiao M, Kaess M. MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 1274-1280.
- 代码:https://bitbucket.org/rpl_cmu/mh-isam2_lib/src/master/
- 论文:Blanco-Claraco J L. A Modular Optimization Framework for Localization and Mapping[J]. Proc. of Robotics: Science and Systems (RSS), Freiburg im Breisgau, Germany, 2019, 2.
- 代码:https://github.com/MOLAorg/mola ;Video ;使用文档
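上面列出的 GTSAM、g2o、Ceres、miniSAM 等库,核心都是求解非线性最小二乘问题。下面用一段纯 Python 的极简示意(仅为说明原理,并非任何一个库的真实实现)演示高斯牛顿迭代拟合指数曲线 y = exp(a·x + b):

```python
import math

# 高斯牛顿极简示意:拟合 y = exp(a*x + b)
# 仅为演示非线性最小二乘的求解流程,非 GTSAM/g2o/Ceres 的实现
def gauss_newton(xs, ys, a=0.0, b=0.0, iters=50):
    for _ in range(iters):
        # 累加正规方程 H * dx = g,其中 H = J^T J,g = -J^T r
        H = [[0.0, 0.0], [0.0, 0.0]]
        g = [0.0, 0.0]
        for x, y in zip(xs, ys):
            pred = math.exp(a * x + b)
            r = pred - y                 # 残差
            J = [x * pred, pred]         # 残差对 a、b 的雅可比
            for i in range(2):
                for j in range(2):
                    H[i][j] += J[i] * J[j]
                g[i] += -J[i] * r
        # 2x2 线性方程直接用克莱姆法则求解增量
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        da = (g[0] * H[1][1] - g[1] * H[0][1]) / det
        db = (H[0][0] * g[1] - H[1][0] * g[0]) / det
        a, b = a + da, b + db
    return a, b

# 用真值 a=0.3, b=0.2 生成无噪声数据,检验是否收敛到真值
xs = [i * 0.1 for i in range(20)]
ys = [math.exp(0.3 * x + 0.2) for x in xs]
a, b = gauss_newton(xs, ys)
```

实际的 SLAM 优化库在此基础上还会利用因子图的稀疏结构、鲁棒核函数和 LM 阻尼等技术,上面的示意省略了这些。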
这一部分整理之后发布在知乎(2020 年 4 月 19 日):https://zhuanlan.zhihu.com/p/130530891
- 研究方向:机器人感知、结构,服务型、运输、制造业、现场机器人
- 研究所主页:https://www.ri.cmu.edu/
- 下属 Field Robotic Center 主页:https://frc.ri.cmu.edu/
- 发表论文:https://www.ri.cmu.edu/pubs/
- 👦 Michael Kaess:个人主页 ,谷歌学术
- 👦 Sebastian Scherer:个人主页 ,谷歌学术
- 📜 Kaess M, Ranganathan A, Dellaert F. iSAM: Incremental smoothing and mapping[J]. IEEE Transactions on Robotics, 2008, 24(6): 1365-1378.
- 📜 Hsiao M, Westman E, Zhang G, et al. Keyframe-based dense planar SLAM[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 5110-5117.
- 📜 Kaess M. Simultaneous localization and mapping with infinite planes[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015: 4605-4611.
- 研究方向:多模态环境理解,语义导航,自主信息获取
- 实验室主页:https://existentialrobotics.org/index.html
- 发表论文汇总:https://existentialrobotics.org/pages/publications.html
- 👦 Nikolay Atanasov:个人主页 谷歌学术
- 机器人状态估计与感知课程 ppt:https://natanaso.github.io/ece276a2019/schedule.html
- 📜 语义 SLAM 经典论文:Bowman S L, Atanasov N, Daniilidis K, et al. Probabilistic data association for semantic slam[C]//2017 IEEE international conference on robotics and automation (ICRA). IEEE, 2017: 1722-1729.
- 📜 实例网格模型定位与建图:Feng Q, Meng Y, Shan M, et al. Localization and Mapping using Instance-specific Mesh Models[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4985-4991.
- 📜 基于事件相机的 VIO:Zihao Zhu A, Atanasov N, Daniilidis K. Event-based visual inertial odometry[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5391-5399.
- 研究方向:SLAM、VINS、语义定位与建图等
- 实验室主页:https://sites.udel.edu/robot/
- 发表论文汇总:https://sites.udel.edu/robot/publications/
- Github 地址:https://github.com/rpng?page=2
- 📜 Geneva P, Eckenhoff K, Lee W, et al. Openvins: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019.(代码:https://github.com/rpng/open_vins )
- 📜 Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326.(代码:https://github.com/rpng/R-VIO )
- 📜 Zuo X, Geneva P, Yang Y, et al. Visual-Inertial Localization With Prior LiDAR Map Constraints[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3394-3401.
- 📜 Zuo X, Ye W, Yang Y, et al. Multimodal localization: Stereo over LiDAR map[J]. Journal of Field Robotics, 2020(左星星博士谷歌学术)
- 👦 黄国权教授主页
- 研究方向:位姿估计与导航,路径规划,控制与决策,机器学习与强化学习
- 实验室主页:http://acl.mit.edu/
- 发表论文:http://acl.mit.edu/publications (实验室的学位论文也可以在这里找到)
- 👦 Jonathan P. How 教授:个人主页 谷歌学术
- 👦 Kasra Khosoussi(SLAM 图优化):谷歌学术
- 📜 物体级 SLAM:Mu B, Liu S Y, Paull L, et al. Slam with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609.(代码:https://github.com/BeipengMu/objectSLAM)
- 📜 物体级 SLAM 导航:Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.
- 📜 SLAM 的图优化:Khosoussi, K., Giamou, M., Sukhatme, G., Huang, S., Dissanayake, G., and How, J. P., Reliable Graphs for SLAM[J]. International Journal of Robotics Research (IJRR), 2019.
- 研究方向:移动机器人环境感知
- 实验室主页:http://web.mit.edu/sparklab/
- 👦 Luca Carlone 教授:个人主页 谷歌学术
- 📜 SLAM 经典综述:Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age[J]. IEEE Transactions on robotics, 2016, 32(6): 1309-1332.
- 📜 VIO 流形预积分:Forster C, Carlone L, Dellaert F, et al. On-Manifold Preintegration for Real-Time Visual--Inertial Odometry[J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.
- 📜 开源语义 SLAM:Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.(代码:https://github.com/MIT-SPARK/Kimera )
- 研究方向:水下或陆地移动机器人导航与建图
- 实验室主页:https://marinerobotics.mit.edu/ (隶属于 MIT 计算机科学与人工智能实验室)
- 👦 John Leonard 教授:谷歌学术
- 发表论文汇总:https://marinerobotics.mit.edu/biblio
- 📜 面向物体的 SLAM:Finman R, Paull L, Leonard J J. Toward object-based place recognition in dense rgb-d maps[C]//ICRA Workshop Visual Place Recognition in Changing Environments, Seattle, WA. 2015.
- 📜 拓展 KinectFusion:Whelan T, Kaess M, Fallon M, et al. Kintinuous: Spatially extended kinectfusion[J]. 2012.
- 📜 语义 SLAM 概率数据关联:Doherty K, Fourie D, Leonard J. Multimodal semantic slam with probabilistic data association[C]//2019 international conference on robotics and automation (ICRA). IEEE, 2019: 2419-2425.
- 研究方向:视觉、激光、惯性导航系统,移动设备大规模三维建模与定位
- 实验室主页:http://mars.cs.umn.edu/index.php
- 发表论文汇总:http://mars.cs.umn.edu/publications.php
- 👦 Stergios I. Roumeliotis:个人主页 ,谷歌学术
- 📜 移动设备 VIO:Wu K, Ahmed A, Georgiou G A, et al. A Square Root Inverse Filter for Efficient Vision-aided Inertial Navigation on Mobile Devices[C]//Robotics: Science and Systems. 2015, 2.(项目主页:http://mars.cs.umn.edu/research/sriswf.php )
- 📜 移动设备大规模三维半稠密建图:Guo C X, Sartipi K, DuToit R C, et al. Resource-aware large-scale cooperative three-dimensional mapping using multiple mobile devices[J]. IEEE Transactions on Robotics, 2018, 34(5): 1349-1369. (项目主页:http://mars.cs.umn.edu/research/semi_dense_mapping.php )
- 📜 VIO 相关研究:http://mars.cs.umn.edu/research/vins_overview.php
- 研究方向:自主微型无人机
- 实验室主页:https://www.kumarrobotics.org/
- 发表论文:https://www.kumarrobotics.org/publications/
- 研究成果视频:https://www.youtube.com/user/KumarLabPenn/videos
- 📜 无人机半稠密 VIO:Liu W, Loianno G, Mohta K, et al. Semi-Dense Visual-Inertial Odometry and Mapping for Quadrotors with SWAP Constraints[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-6.
- 📜 语义数据关联:Liu X, Chen S W, Liu C, et al. Monocular Camera Based Fruit Counting and Mapping with Semantic Data Association[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2296-2303.
- 研究方向:三维重构、语义分割、视觉 SLAM、图像定位、深度神经网络
- 👦 Srikumar Ramalingam:个人主页 谷歌学术
- 📜 点面 SLAM:Taguchi Y, Jian Y D, Ramalingam S, et al. Point-plane SLAM for hand-held 3D sensors[C]//2013 IEEE international conference on robotics and automation. IEEE, 2013: 5182-5189.
- 📜 点线定位:Ramalingam S, Bouaziz S, Sturm P. Pose estimation using both points and lines for geo-localization[C]//2011 IEEE International Conference on Robotics and Automation. IEEE, 2011: 4716-4723.(视频)
- 📜 2D 3D 定位:Ataer-Cansizoglu E, Taguchi Y, Ramalingam S. Pinpoint SLAM: A hybrid of 2D and 3D simultaneous localization and mapping for RGB-D sensors[C]//2016 IEEE international conference on robotics and automation (ICRA). IEEE, 2016: 1300-1307.(视频)
- 研究方向:SLAM,图像时空重构
- 👦 个人主页,谷歌学术
- 📜 因子图:Dellaert F. Factor graphs and GTSAM: A hands-on introduction[R]. Georgia Institute of Technology, 2012. (GTSAM 代码:http://borg.cc.gatech.edu/ )
- 📜 多机器人分布式 SLAM:Cunningham A, Wurm K M, Burgard W, et al. Fully distributed scalable smoothing and mapping with robust multi-robot data association[C]//2012 IEEE International Conference on Robotics and Automation. IEEE, 2012: 1093-1100.
- 📜 Choudhary S, Trevor A J B, Christensen H I, et al. SLAM with object discovery, modeling and mapping[C]//2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014: 1018-1025.
- 研究方向:机器人控制、定位与导航
- 实验室主页:http://ivalab.gatech.edu/
- 👦 Patricio Vela 个人主页
- 👦 赵轶璞 个人主页 谷歌学术
- 📜 Zhao Y, Smith J S, Karumanchi S H, et al. Closed-Loop Benchmarking of Stereo Visual-Inertial SLAM Systems: Understanding the Impact of Drift and Latency on Tracking Accuracy[J]. arXiv preprint arXiv:2003.01317, 2020.
- 📜 Zhao Y, Vela P A. Good feature selection for least squares pose optimization in VO/VSLAM[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1183-1189.(代码:https://github.com/ivalab/FullResults_GoodFeature )
- 📜 Zhao Y, Vela P A. Good line cutting: Towards accurate pose tracking of line-assisted VO/VSLAM[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 516-531. (代码:https://github.com/ivalab/GF_PL_SLAM )
- 研究方向:SLAM,不确定性建模
- 实验室主页:http://montrealrobotics.ca/
- 👦 Liam Paull 教授:个人主页 谷歌学术
- 📜 Mu B, Liu S Y, Paull L, et al. Slam with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609.(代码:https://github.com/BeipengMu/objectSLAM)
- 📜 Murthy Jatavallabhula K, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019.(代码:https://github.com/montrealrobotics/gradSLAM )
- 研究方向:移动机器人软硬件设计
- 实验室主页:https://introlab.3it.usherbrooke.ca/
- 📜 激光视觉稠密重建:Labbé M, Michaud F. RTAB‐Map as an open‐source lidar and visual simultaneous localization and mapping library for large‐scale and long‐term online operation[J]. Journal of Field Robotics, 2019, 36(2): 416-446.
- 研究方向:移动机器人、无人机环境感知与导航,VISLAM,事件相机
- 实验室主页:http://rpg.ifi.uzh.ch/index.html
- 发表论文汇总:http://rpg.ifi.uzh.ch/publications.html
- Github 代码公开地址:https://github.com/uzh-rpg
- 📜 Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE international conference on robotics and automation (ICRA). IEEE, 2014: 15-22.
- 📜 VO/VIO 轨迹评估工具 rpg_trajectory_evaluation:https://github.com/uzh-rpg/rpg_trajectory_evaluation
- 📜 事件相机项目主页:http://rpg.ifi.uzh.ch/research_dvs.html
- 👦 人物:Davide Scaramuzza 张子潮
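上面提到的 rpg_trajectory_evaluation 等轨迹评估工具,常用指标之一是绝对轨迹误差(ATE)。下面是一段纯 Python 示意(假设两条轨迹已完成对齐且时间戳逐帧对应,并非该工具的真实实现;真实工具还包含时间戳关联与 SE(3)/Sim(3) 对齐):

```python
import math

# ATE RMSE 极简示意:估计轨迹与真值轨迹已对齐、逐帧对应
def ate_rmse(gt, est):
    assert len(gt) == len(est)
    se = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(gt, est):
        # 累加每帧平移误差的平方
        se += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(se / len(gt))

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
rmse = ate_rmse(gt, est)   # 每帧平移误差均为 0.1,故 RMSE = 0.1
```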
- 研究方向:定位、三维重建、语义分割、机器人视觉
- 实验室主页:http://www.cvg.ethz.ch/index.php
- 发表论文:http://www.cvg.ethz.ch/publications/
- 📜 视觉语义里程计:Lianos K N, Schonberger J L, Pollefeys M, et al. Vso: Visual semantic odometry[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 234-250.
- 📜 视觉语义定位:CVPR 2018 Semantic visual localization
- 📜 大规模户外建图:Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517.
- 代码:https://github.com/AndreiBarsan/DynSLAM
- 作者博士学位论文:Barsan I A. Simultaneous localization and mapping in dynamic scenes[D]. ETH Zurich, Department of Computer Science, 2017.
- 👦 Marc Pollefeys:个人主页,谷歌学术
- 👦 Johannes L. Schönberger:个人主页,谷歌学术
- 研究方向:机器人视觉场景与物体理解、机器人操纵
- 实验室主页:https://www.imperial.ac.uk/dyson-robotics-lab/
- 发表论文:https://www.imperial.ac.uk/dyson-robotics-lab/publications/
- 代表性工作:MonoSLAM、CodeSLAM、ElasticFusion、KinectFusion
- 📜 ElasticFusion:Whelan T, Leutenegger S, Salas-Moreno R, et al. ElasticFusion: Dense SLAM without a pose graph[C]. Robotics: Science and Systems, 2015.(代码:https://github.com/mp3guy/ElasticFusion )
- 📜 Semanticfusion:McCormac J, Handa A, Davison A, et al. Semanticfusion: Dense 3d semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and automation (ICRA). IEEE, 2017: 4628-4635.(代码:https://github.com/seaun163/semanticfusion )
- 📜 Code-SLAM:Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM—learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2560-2568.
- 👦 Andrew Davison:谷歌学术
- 研究方向:SLAM、目标跟踪、运动结构、场景增强、移动机器人运动规划、导航与建图等
- 实验室主页:http://www.robots.ox.ac.uk/
- 主动视觉实验室:http://www.robots.ox.ac.uk/ActiveVision/
- 牛津机器人学院:https://ori.ox.ac.uk/
- 发表论文汇总:
- 代表性工作:
- 📜 Klein G, Murray D. PTAM: Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM international symposium on mixed and augmented reality. IEEE, 2007: 225-234.
- 📜 RobotCar 数据集:https://robotcar-dataset.robots.ox.ac.uk/
- 👦 人物(谷歌学术):David Murray Maurice Fallon
- 部分博士学位论文可以在这里搜到:https://ora.ox.ac.uk/
- 研究方向:三维重建、机器人视觉、深度学习、视觉 SLAM 等
- 实验室主页:https://vision.in.tum.de/research/vslam
- 发表论文汇总:https://vision.in.tum.de/publications
- 代表作:DSO、LDSO、LSD_SLAM、DVO_SLAM
- 📜 DSO:Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(3): 611-625.(代码:https://github.com/JakobEngel/dso )
- 📜 LSD-SLAM: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European conference on computer vision. Springer, Cham, 2014: 834-849.(代码:https://github.com/tum-vision/lsd_slam )
- Github 地址:https://github.com/tum-vision
- 👦 Daniel Cremers 教授:个人主页 谷歌学术
- 👦 Jakob Engel(LSD-SLAM,DSO 作者):个人主页 谷歌学术
- 研究方向:智能体自主环境理解、导航与物体操纵
- 实验室主页:https://ev.is.tuebingen.mpg.de/
- 👦 负责人 Jörg Stückler(前 TUM 教授):个人主页 谷歌学术
- 📜 发表论文汇总:https://ev.is.tuebingen.mpg.de/publications
- Kasyanov A, Engelmann F, Stückler J, et al. Keyframe-based visual-inertial online SLAM with relocalization[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 6662-6669.
- 📜 Strecke M, Stuckler J. EM-Fusion: Dynamic Object-Level SLAM with Probabilistic Data Association[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 5865-5874.
- 📜 Usenko V, Demmel N, Schubert D, et al. Visual-Inertial Mapping with Non-Linear Factor Recovery[J]. IEEE Robotics and Automation Letters (RA-L), 2020, 5.
- 研究方向:多机器人导航与协作,环境建模与状态估计
- 实验室主页:http://ais.informatik.uni-freiburg.de/index_en.php
- 发表论文汇总:http://ais.informatik.uni-freiburg.de/publications/index_en.php (学位论文也可以在这里找到)
- 👦 Wolfram Burgard:谷歌学术
- 开放数据集:http://aisdatasets.informatik.uni-freiburg.de/
- 📜 RGB-D SLAM:Endres F, Hess J, Sturm J, et al. 3-D mapping with an RGB-D camera[J]. IEEE transactions on robotics, 2013, 30(1): 177-187.(代码:https://github.com/felixendres/rgbdslam_v2 )
- 📜 跨季节的 SLAM:Naseer T, Ruhnke M, Stachniss C, et al. Robust visual SLAM across seasons[C]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015: 2529-2535.
- 📜 博士学位论文:Robust Graph-Based Localization and Mapping 2015
- 📜 博士学位论文:Discovering and Leveraging Deep Multimodal Structure for Reliable Robot Perception and Localization 2019
- 📜 博士学位论文:Robot Localization and Mapping in Dynamic Environments 2019
- 研究方向:视觉 SLAM、物体 SLAM、非刚性 SLAM、机器人、增强现实
- 实验室主页:http://robots.unizar.es/slamlab/
- 发表论文:http://robots.unizar.es/slamlab/?extra=3 (论文好像没更新,可以访问下面实验室大佬的谷歌学术查看最新论文)
- 👦 J. M. M. Montiel:谷歌学术
- 📜 Mur-Artal R, Tardós J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
- Gálvez-López D, Salas M, Tardós J D, et al. Real-time monocular object slam[J]. Robotics and Autonomous Systems, 2016, 75: 435-449.
- 📜 Strasdat H, Montiel J M M, Davison A J. Real-time monocular SLAM: Why filter?[C]//2010 IEEE International Conference on Robotics and Automation. IEEE, 2010: 2657-2664.
- 📜 Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. arXiv preprint arXiv:1904.06577, 2019.
- Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585, 2019.
- 研究方向:自主机器人、人工嗅觉、计算机视觉
- 实验室主页:http://mapir.uma.es/mapirwebsite/index.php/topics-2.html
- 发表论文汇总:http://mapir.isa.uma.es/mapirwebsite/index.php/publications-menu-home.html
- 📜 Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.(代码:https://github.com/rubengooj/pl-slam )
- 👦 Francisco-Angel Moreno
- 👦 Ruben Gomez-Ojeda 点线 SLAM
- 📜 Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 4211-4216.(代码:https://github.com/rubengooj/pl-svo )
- 📜 Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526.(代码:https://github.com/rubengooj/stvo-pl )
- 📜 Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479, 2017.(代码:https://github.com/rubengooj/pl-slam )
- 研究方向:SLAM,单目稠密重建,传感器融合
- 👦 个人主页:https://sites.google.com/view/alejoconcha/ 谷歌学术
- Github:https://github.com/alejocb
- 📜 IROS 2015 单目平面重建:DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence (代码:https://github.com/alejocb/dpptam )
- 📜 IROS 2017 开源 RGB-D SLAM:RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System(代码:https://github.com/alejocb/rgbdtam )
- 📜 ICRA 2016:Visual-inertial direct SLAM
- 📜 ICRA 2014:Using Superpixels in Monocular SLAM
- RSS 2014:Manhattan and Piecewise-Planar Constraints for Dense Monocular Mapping
- 研究方向:AR/VR,机器人视觉,机器学习,目标识别与三维重建
- 实验室主页:https://www.tugraz.at/institutes/icg/home/
- 👦 Friedrich Fraundorfer 教授:团队主页 谷歌学术
- 📜 Visual Odometry: Part I The First 30 Years and Fundamentals
- 📜 Visual Odometry: Part II: Matching, Robustness, Optimization, and Applications
- 📜 Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 154-160.(代码:https://github.com/fabianschenk/RESLAM )
- 👦 Dieter Schmalstieg 教授:团队主页 谷歌学术
- 📜 教科书:Augmented Reality: Principles and Practice
- 📜 Arth C, Pirchheim C, Ventura J, et al. Instant outdoor localization and slam initialization from 2.5 d maps[J]. IEEE transactions on visualization and computer graphics, 2015, 21(11): 1309-1318.
- 📜 Hachiuma R, Pirchheim C, Schmalstieg D, et al. DetectFusion: Detecting and Segmenting Both Known and Unknown Dynamic Objects in Real-time SLAM[J]. arXiv preprint arXiv:1907.09127, 2019.
- 研究方向:SLAM,机器人运动规划,控制
- 实验室主页:http://lrm.put.poznan.pl/
- Github 主页:https://github.com/LRMPUT
- 📜 Wietrzykowski J. On the representation of planes for efficient graph-based slam with high-level features[J]. Journal of Automation Mobile Robotics and Intelligent Systems, 2016, 10.(代码:https://github.com/LRMPUT/PlaneSLAM )
- 📜 Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 2019.(代码:https://github.com/LRMPUT/PlaneLoc )
- 📜 PUTSLAM:http://lrm.put.poznan.pl/putslam/
- 研究方向:SLAM,几何视觉
- 👦 个人主页:https://alexandervakhitov.github.io/ ,谷歌学术
- 📜 点线 SLAM:ICRA 2017 PL-SLAM: Real-time monocular visual SLAM with points and lines
- 📜 点线定位:Pumarola A, Vakhitov A, Agudo A, et al. Relative localization for aerial manipulation with PL-SLAM[M]//Aerial Robotic Manipulation. Springer, Cham, 2019: 239-248.
- 📜 学习型线段:IEEE Access 2019 Learnable line segment descriptor for visual SLAM(代码:https://github.com/alexandervakhitov/lld-slam )
- 研究方向:脑启发式机器人,采矿机器人,机器人视觉
- 实验室主页:https://www.qut.edu.au/research/centre-for-robotics
- 开源代码:https://research.qut.edu.au/qcr/open-source-code/
- 👦 Niko Sünderhauf:个人主页 ,谷歌学术
- 📜 RA-L 2018 二次曲面 SLAM:QuadricSLAM: Dual quadrics from object detections as landmarks in object-oriented SLAM
- 📜 Nicholson L, Milford M, Sunderhauf N. QuadricSLAM: Dual quadrics as SLAM landmarks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018: 313-314.
- 📜 Semantic SLAM 项目主页:http://www.semanticslam.ai/
- 📜 IROS 2017:Meaningful maps with object-oriented semantic mapping
- 👦 Michael Milford:谷歌学术 https://scholar.google.com/citations?user=TDSmCKgAAAAJ&hl=zh-CN&oi=ao
- 📜 ICRA 2012:SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights (代码:https://michaelmilford.com/seqslam/)
- 📜 Ball D, Heath S, Wiles J, et al. OpenRatSLAM: an open source brain-based SLAM system[J]. Autonomous Robots, 2013, 34(3): 149-176.(代码:https://openslam-org.github.io/openratslam.html )
- 📜 Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics, 2019, 113(5-6): 515-545. (代码:https://github.com/cognav/NeuroSLAM )
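上面 SeqSLAM 的核心思想是用图像序列而非单帧进行地点识别:在参考路线与查询路线的差异矩阵中,沿候选路线累加差异分数,取总分最小的一段作为匹配。下面是一段高度简化的纯 Python 示意(假设两次经过速度一致、图像描述子简化为一个标量,并非原论文的实现):

```python
# SeqSLAM 思想的极简示意:用序列差异和代替单帧差异做地点匹配
# 假设:两段路线速度一致,描述子简化为标量(真实系统用降采样图像块)
def seq_match(ref, query, seq_len):
    """对 query 末尾长度为 seq_len 的子序列,在 ref 中找差异和最小的起点。"""
    q = query[-seq_len:]
    best_start, best_score = -1, float("inf")
    for s in range(len(ref) - seq_len + 1):
        # 累加整段序列的差异分数,单帧偶然相似会被序列平均掉
        score = sum(abs(ref[s + i] - q[i]) for i in range(seq_len))
        if score < best_score:
            best_start, best_score = s, score
    return best_start, best_score

# ref 为参考路线的描述子序列;query 末段与 ref[3:8] 近似(带微小扰动)
ref   = [0.0, 0.5, 1.0, 4.0, 4.2, 4.4, 4.6, 4.8, 2.0, 2.5]
query = [9.0, 9.0, 4.01, 4.19, 4.41, 4.61, 4.79]
start, score = seq_match(ref, query, seq_len=5)   # 期望匹配到 ref 的下标 3
```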
- 研究方向:机器人感知、理解与学习 (集合了昆士兰科技大学,澳大利亚国立大学,阿德莱德大学,昆士兰大学等学校机器人领域的研究者)
- 实验室主页:https://www.roboticvision.org/
- 人物:https://www.roboticvision.org/rv_person_category/researchers/
- 发表论文汇总:https://www.roboticvision.org/publications/scientific-publications/
- 👦 Yasir Latif:个人主页,谷歌学术
- 📜 Latif Y, Cadena C, Neira J. Robust loop closing over time for pose graph SLAM[J]. The International Journal of Robotics Research, 2013, 32(14): 1611-1626.
- 📜 Latif Y, Cadena C, Neira J. Robust loop closing over time[C]//Proc. Robotics: Science Systems. 2013: 233-240.(代码:https://github.com/ylatif/rrr )
- 👦 Ian D Reid:谷歌学术:https://scholar.google.com/citations?user=ATkNLcQAAAAJ&hl=zh-CN&oi=sra
- 📜 ICRA 2019:Real-time monocular object-model aware sparse SLAM
- 📜 Reid I. Towards semantic visual SLAM[C]//2014 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, 2014: 1-1.
- 人工智能研究中心:https://www.airc.aist.go.jp/en/intro/
- 👦 Ken Sakurada:个人主页,谷歌学术
- 📜 Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework[C]//Proceedings of the 27th ACM International Conference on Multimedia. 2019: 2292-2295.(代码:https://github.com/xdspacelab/openvslam )
- 👦 Shuji Oishi:谷歌学术
- 📜 极稠密特征点建图:Yokozuka M, Oishi S, Thompson S, et al. VITAMIN-E: visual tracking and MappINg with extremely dense feature points[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2019: 9641-9650.
- 📜 Oishi S, Inoue Y, Miura J, et al. SeqSLAM++: View-based robot localization and navigation[J]. Robotics and Autonomous Systems, 2019, 112: 13-21.
- 研究方向:视觉里程计,定位,AR/VR
- 👦 个人主页,谷歌学术
- 📜 平面 SLAM:ECCV 2018:Linear RGB-D SLAM for planar environments
- 📜 光照变化下的鲁棒 SLAM:ICRA 2017:Robust visual localization in changing lighting conditions
- 📜 线面 SLAM:CVPR 2018:Indoor RGB-D Compass from a Single Line and Plane
- 📜 博士学位论文:Low-Drift Visual Odometry for Indoor Robotics
- 研究方向:空中机器人在复杂环境下的自主运行,包括状态估计、建图、运动规划、多机器人协同以及低成本传感器和计算组件的实验平台开发。
- 实验室主页:http://uav.ust.hk/
- 发表论文:http://uav.ust.hk/publications/
- 👦 沈劭劼教授:谷歌学术
- 代码公开地址:https://github.com/HKUST-Aerial-Robotics
- 📜 Qin T, Li P, Shen S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.(代码:https://github.com/HKUST-Aerial-Robotics/VINS-Mono )
- 📜 Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 6919-6925.(代码:https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping )
- 研究方向:无人车;无人船;室内定位;机器学习。
- 实验室主页:https://www.ram-lab.com/
- 发表论文:https://www.ram-lab.com/publication/
- 👦 刘明教授:谷歌学术
- 📜 Ye H, Chen Y, Liu M. Tightly coupled 3d lidar inertial odometry and mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3144-3150.(代码:https://github.com/hyye/lio-mapping )
- 📜 Zhang J, Tai L, Boedecker J, et al. Neural slam: Learning to explore with external memory[J]. arXiv preprint arXiv:1706.09520, 2017.
- 研究方向:工业、物流、手术机器人,三维影像,机器学习
- 实验室主页:http://ri.cuhk.edu.hk/
- 👦 刘云辉教授:http://ri.cuhk.edu.hk/yhliu
- 👦 李浩昂:个人主页,谷歌学术
- 📜 Li H, Yao J, Bazin J C, et al. A monocular SLAM system leveraging structural regularity in Manhattan world[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 2518-2525.
- 📜 Li H, Yao J, Lu X, et al. Combining points and lines for camera pose estimation and optimization in monocular visual odometry[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 1289-1296.
- 📜 消失点检测:Lu X, Yao J, Li H, et al. 2-Line Exhaustive Searching for Real-Time Vanishing Point Estimation in Manhattan World[C]//Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE, 2017: 345-353.(代码:https://github.com/xiaohulugo/VanishingPointDetection )
- 👦 郑帆:个人主页,谷歌学术
- 📜 Zheng F, Tang H, Liu Y H. Odometry-vision-based ground vehicle motion estimation with se (2)-constrained se (3) poses[J]. IEEE transactions on cybernetics, 2018, 49(7): 2652-2663.(代码:https://github.com/izhengfan/se2clam )
- 📜 Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE (2)-XYZ Constraints[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3556-3562.(代码:https://github.com/izhengfan/se2lam )
- 研究方向:SFM/SLAM,三维重建,增强现实
- 实验室主页:http://www.zjucvg.net/
- Github 代码地址:https://github.com/zju3dv
- 👦 章国锋教授:个人主页,谷歌学术
- 📜 ICE-BA:Liu H, Chen M, Zhang G, et al. Ice-ba: Incremental, consistent and efficient bundle adjustment for visual-inertial slam[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982.(代码:https://github.com/zju3dv/EIBA )
- 📜 RK-SLAM:Liu H, Zhang G, Bao H. Robust keyframe-based monocular SLAM for augmented reality[C]//2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2016: 1-10.(项目主页:http://www.zjucvg.net/rkslam/rkslam.html )
- 📜 RD-SLAM:Tan W, Liu H, Dong Z, et al. Robust monocular SLAM in dynamic environments[C]//2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2013: 209-218.
- 研究方向:视觉 SLAM,SFM,多源导航,微型无人机
- 👦 个人主页:http://drone.sjtu.edu.cn/dpzou/index.php , 谷歌学术
- 📜 Co-SLAM:Zou D, Tan P. Coslam: Collaborative visual slam in dynamic environments[J]. IEEE transactions on pattern analysis and machine intelligence, 2012, 35(2): 354-366.(代码:https://github.com/danping/CoSLAM )
- 📜 StructSLAM:Zhou H, Zou D, Pei L, et al. StructSLAM: Visual SLAM with building structure lines[J]. IEEE Transactions on Vehicular Technology, 2015, 64(4): 1364-1375.(项目主页:http://drone.sjtu.edu.cn/dpzou/project/structslam.php )
- 📜 StructVIO:Zou D, Wu Y, Pei L, et al. StructVIO: visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.
- 研究方向:语义定位与建图、SLAM、在线学习与增量学习
- 👦 个人主页:http://www.adv-ci.com/blog/ 谷歌学术
- 布老师的课件:http://www.adv-ci.com/blog/course/
- 实验室 2018 年暑期培训资料:https://github.com/zdzhaoyong/SummerCamp2018
- 📜 开源的通用 SLAM 框架:Zhao Y, Xu S, Bu S, et al. GSLAM: A general SLAM framework and benchmark[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 1110-1120.(代码:https://github.com/zdzhaoyong/GSLAM )
- 📜 Bu S, Zhao Y, Wan G, et al. Map2DFusion: Real-time incremental UAV image mosaicing based on monocular slam[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4564-4571.(代码:https://github.com/zdzhaoyong/Map2DFusion )
- 📜 Wang W, Zhao Y, Han P, et al. TerrainFusion: Real-time Digital Surface Model Reconstruction based on Monocular SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 7895-7902.
- 研究方向:概率机器人、SLAM、自主导航、视觉激光感知、场景分析与分配、无人飞行器
- 实验室主页:https://www.ipb.uni-bonn.de/
- 👦 个人主页:https://www.ipb.uni-bonn.de/people/cyrill-stachniss/ 谷歌学术
- 发表论文:https://www.ipb.uni-bonn.de/publications/
- 开源代码:https://github.com/PRBonn
- 📜 IROS 2019 激光语义 SLAM:Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537.(代码:https://github.com/PRBonn/semantic_suma/ )
- Cyrill Stachniss 教授 SLAM 公开课:youtube ; bilibili
- 波恩大学另外一个智能自主系统实验室:http://www.ais.uni-bonn.de/research.html
- Mobile Perception Lab:http://mpl.sist.shanghaitech.edu.cn/
- 👦 Laurent Kneip:个人主页;谷歌学术
- 📜 Zhou Y, Li H, Kneip L. Canny-vo: Visual odometry with rgb-d cameras based on geometric 3-d–2-d edge alignment[J]. IEEE Transactions on Robotics, 2018, 35(1): 184-199.
- 自主移动机器人实验室:https://robotics.shanghaitech.edu.cn/zh
- 👦 Sören Schwertfeger:个人主页;谷歌学术
- 📜 Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots[J]. Sensors, 2019, 19(10): 2251.(代码:https://github.com/STAR-Center/VINS-RGBD )
- 学院官网:https://robotics.umich.edu/
- 研究方向:https://robotics.umich.edu/research/focus-areas/
- 感知机器人实验室(PeRL)
- 实验室主页:http://robots.engin.umich.edu/About/
- 👦 Ryan M. Eustice 谷歌学术
- 📜 激光雷达数据集:Pandey G, McBride J R, Eustice R M. Ford campus vision and lidar data set[J]. The International Journal of Robotics Research, 2011, 30(13): 1543-1552. | 数据集
- APRIL robotics lab
- 实验室主页:https://april.eecs.umich.edu/
- 👦 Edwin Olson 个人主页 | 谷歌学术
- 📜 Olson E. AprilTag: A robust and flexible visual fiducial system[C]//2011 IEEE International Conference on Robotics and Automation. IEEE, 2011: 3400-3407. | 代码
- 📜 Wang X, Marcotte R, Ferrer G, et al. AprilSAM: Real-time smoothing and mapping[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 2486-2493. | 代码
- 研究方向:复杂多样环境中自主运行的机器人和智能系统
- 实验室主页:https://asl.ethz.ch/
- 发表论文:https://asl.ethz.ch/publications-and-sources/publications.html
- youtube | Github
- 👦 Cesar Cadena 个人主页
- 📜 Schneider T, Dymczyk M, Fehr M, et al. maplab: An open framework for research in visual-inertial mapping and localization[J]. IEEE Robotics and Automation Letters, 2018, 3(3): 1418-1425. | 代码
- 📜 Dubé R, Cramariuc A, Dugas D, et al. SegMap: 3d segment mapping using data-driven descriptors[J]. arXiv preprint arXiv:1804.09557, 2018. | 代码
- 📜 Millane A, Taylor Z, Oleynikova H, et al. C-blox: A scalable and consistent tsdf-based dense mapping approach[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 995-1002. | 代码
- 研究方向:MAV 导航与控制;人机交互的自然语言理解;自主海洋机器人的语义理解
- 实验室主页:http://groups.csail.mit.edu/rrg/index.php
- 👦 Nicholas Roy:Google Scholar
- 📜 Greene W N, Ok K, Lommel P, et al. Multi-level mapping: Real-time dense monocular SLAM[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 833-840. video
- 📜 ICRA 2020 Metrically-Scaled Monocular SLAM using Learned Scale Factors | video
- 📜 ICRA 2019 Robust Object-based SLAM for High-speed Autonomous Navigation
- 研究方向:机器人视觉,无人机,自主导航,多机器人协同
- 实验室主页:https://v4rl.ethz.ch/the-group.html
- 👦 Margarita Chli:个人主页 | Google Scholar
- 📜 Schmuck P, Chli M. CCM‐SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams[J]. Journal of Field Robotics, 2019, 36(4): 763-781. code | video
- 📜 Bartolomei L, Karrer M, Chli M. Multi-robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020)(virtual). 2020. code | video
- 📜 Schmuck P, Chli M. Multi-uav collaborative monocular slam[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 3863-3870.
- 研究方向:控制,多智能体,定位
- 个人主页:https://personal.ntu.edu.sg/elhxie/research.html | Google Scholar
- 👦 Wang Han:个人主页 | Github
- 📜 Wang H, Wang C, Xie L. Intensity scan context: Coding intensity and geometry relations for loop closure detection[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 2095-2101. | Code
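Intensity Scan Context 的核心思想是把一帧 LiDAR 点云按极坐标划分为“环 × 扇区”网格,并在格子里同时编码几何与强度信息作为回环检测的全局描述子。下面是一个极简的纯 Python 示意(只保留强度通道,函数名与参数均为示意,并非该论文开源代码的实现):

```python
import math

def intensity_scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Bin a 2D LiDAR scan into a ring x sector polar grid, keeping the
    max intensity per cell (a simplified, intensity-only sketch of the idea).

    points: iterable of (x, y, intensity) tuples in the sensor frame.
    """
    grid = [[0.0] * num_sectors for _ in range(num_rings)]
    for x, y, intensity in points:
        r = math.hypot(x, y)
        if r >= max_range:
            continue  # outside the descriptor's radial extent
        ring = int(r / max_range * num_rings)
        theta = math.atan2(y, x) % (2 * math.pi)
        sector = int(theta / (2 * math.pi) * num_sectors)
        grid[ring][sector] = max(grid[ring][sector], intensity)
    return grid

grid = intensity_scan_context([(1.0, 0.0, 0.5), (0.0, 10.0, 0.9)])
```

实际系统中还需要对两帧描述子做“列循环移位不变”的相似度比较,才能在旋转变化下完成回环检测。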
- 📜 Wang H, Wang C, Xie L. Lightweight 3-D Localization and Mapping for Solid-State LiDAR[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1801-1807. | Code
- 📜 Wang C, Yuan J, Xie L. Non-iterative SLAM[C]//2017 18th International Conference on Advanced Robotics (ICAR). IEEE, 2017: 83-90.
这一部分的内容不太完整,陆续丰富,欢迎补充
- 1) SLAMcn:http://www.slamcn.org/index.php/
- 2) SLAM 最新研究更新 Recent_SLAM_Research :https://github.com/YiChenCityU/Recent_SLAM_Research
- 3) 西北工大智能系统实验室 SLAM 培训:https://github.com/zdzhaoyong/SummerCamp2018
- 4) IROS 2019 视觉惯导导航的挑战与应用研讨会:http://udel.edu/~ghuang/iros19-vins-workshop/index.html
- 5) 泡泡机器人 VIO 相关资料:https://github.com/PaoPaoRobot/Awesome-VIO
- 6) 崔华坤:主流 VIO 论文推导及代码解析:https://github.com/StevenCui/VIO-Doc
- 7) 李言:SLAM 中的几何与学习方法
- 8) 黄山老师状态估计视频:bilibili
- 9) 谭平老师-SLAM 6小时课程:bilibili
- 10) 2020 年 SLAM 技术及应用暑期学校:视频-bilibili | 课件
- 1) 事件相机相关研究与发展:https://github.com/uzh-rpg/event-based_vision_resources
- 2) 加州大学圣地亚哥分校语境机器人研究所 Nikolay Atanasov 教授机器人状态估计与感知课程 ppt:https://natanaso.github.io/ece276a2019/schedule.html
- 3) 波恩大学 Mobile Sensing and Robotics Course 公开课 :youtube ,bilibili
- 泡泡机器人 SLAM:paopaorobot_slam
今天(2020.04.25)刚想到的一个点:就算前面整理了大量的开源工作,直接看原版代码仍然会有很大困难。感谢国内 SLAM 爱好者将自己的代码注释分享出来,促进交流、共同进步。这一小节的内容将陆续发掘,期待大家推荐代码注释(可以在 issue 中留言)。
本期更新于 2021 年 7 月 5 日
共 20 篇论文,其中 7 项(待)开源工作
[4,9,13,14,16,17] LiDAR 相关
[1,2,6,7] Mapping
- [1] Bokovoy A, Muravyev K, Yakovlev K. MAOMaps: A Photo-Realistic Benchmark For vSLAM and Map Merging Quality Assessment[J]. arXiv preprint arXiv:2105.14994, 2021.
- 用于视觉 SLAM 和地图合并质量评估的逼真基准
- 俄罗斯科学院;开源数据集
- [2] Demmel N, Sommer C, Cremers D, et al. Square Root Bundle Adjustment for Large-Scale Reconstruction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 11723-11732.
- 大规模重建的平方根 BA
- TUM;代码开源
- [3] Chen Y, Zhao L, Zhang Y, et al. Anchor Selection for SLAM Based on Graph Topology and Submodular Optimization[J]. IEEE Transactions on Robotics, 2021.
- 基于图拓扑和子模块优化的SLAM锚点选择
- 悉尼科技大学
- [4] Zhou L, Koppel D, Kaess M. LiDAR SLAM with Plane Adjustment for Indoor Environment[J]. IEEE Robotics and Automation Letters, 2021.
- 室内环境中平面调整的 LiDAR SLAM
- Magic Leap,CMU
- [5] Liu D, Parra A, Chin T J. Spatiotemporal Registration for Event-based Visual Odometry[J]. arXiv preprint arXiv:2103.05955, 2021.
- 基于事件的视觉里程计的时空配准
- 阿德莱德大学;开源数据集(待公开)
- [6] Wimbauer F, Yang N, von Stumberg L, et al. MonoRec: Semi-supervised dense reconstruction in dynamic environments from a single moving camera[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 6112-6122.
	- MonoRec:动态环境中单个移动相机的半监督稠密重建
	- TUM;代码开源
- [7] Qin T, Zheng Y, Chen T, et al. RoadMap: A Light-Weight Semantic Map for Visual Localization towards Autonomous Driving[J]. arXiv preprint arXiv:2106.02527, 2021.(ICRA2021)
- 一种用于自动驾驶视觉定位的轻量级语义地图
- 华为
- [8] Tschopp F, Nieto J, Siegwart R Y, et al. Superquadric Object Representation for Optimization-based Semantic SLAM[J]. 2021.
- 基于优化的语义 SLAM 的超二次曲面的物体表示
- ETH;Microsoft
- [9] Miller I D, Cowley A, Konkimalla R, et al. Any Way You Look at It: Semantic Crossview Localization and Mapping With LiDAR[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 2397-2404.
- Any Way You Look at It:使用 LiDAR 进行语义跨视图定位与建图
- 宾夕法尼亚大学;代码开源
- [10] Lu Y, Xu X, Ding M, et al. A Global Occlusion-Aware Approach to Self-Supervised Monocular Visual Odometry[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(3): 2260-2268.
- 自监督单目视觉里程计的全局遮挡感知方法
- 中国人民大学
- [11] Li S, Liu S, Zhao Q, et al. Quantized Self-supervised Local Feature for Real-time Robot Indirect VSLAM[J]. IEEE/ASME Transactions on Mechatronics, 2021.
- 实时机器人间接 VSLAM 的量化自监督局部特征
- 上海交大;期刊:中科院二区,JCR Q1,IF 5.3
- [12] Seiskari O, Rantalankila P, Kannala J, et al. HybVIO: Pushing the Limits of Real-time Visual-inertial Odometry[J]. arXiv preprint arXiv:2106.11857, 2021.
- HybVIO: 突破实时视觉惯性里程计的极限
- Spectacular AI,阿尔托大学,坦佩雷大学
- [13] Li L, Kong X, Zhao X, et al. SA-LOAM: Semantic-aided LiDAR SLAM with Loop Closure[J]. arXiv preprint arXiv:2106.11516, 2021. (ICRA2021)
- 具有闭环的语义辅助的 LiDAR SLAM
- 浙大
- [14] Li K, Ouyang Z, Hu L, et al. Robust SRIF-based LiDAR-IMU Localization for Autonomous Vehicles[J]. 2021. (ICRA2021)
- 用于自动驾驶汽车的鲁棒的基于 SRIF 的 LiDAR-IMU 定位
- 上海科技大学
- [15] Kumar H, Payne J J, Travers M, et al. Periodic SLAM: Using Cyclic Constraints to Improve the Performance of Visual-Inertial SLAM on Legged Robots[J].
- 周期性 SLAM:使用循环约束提高腿式机器人视觉惯性 SLAM 的性能
- [16] Zhou P, Guo X, Pei X, et al. T-LOAM: Truncated Least Squares LiDAR-Only Odometry and Mapping in Real Time[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021.
- 截断最小二乘法的 LiDAR 实时里程计和建图
- 武汉理工
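T-LOAM 标题中的“截断最小二乘”是一类鲁棒代价:残差在阈值内按平方计入,超过阈值则饱和为常数,使粗大外点对优化完全不再产生梯度(不同于 Huber 的线性尾部)。一个极简示意(非论文原始公式):

```python
def truncated_least_squares(residual, truncation=0.5):
    """Truncated least-squares cost: quadratic inside the truncation bound,
    constant beyond it, so gross outliers stop influencing the optimum."""
    return min(residual * residual, truncation * truncation)
```

内点 `truncated_least_squares(0.1)` 仍按平方计价,而外点 `truncated_least_squares(5.0)` 被截断为常数 0.25。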
- [17] Jia Y, Luo H, Zhao F, et al. Lvio-Fusion: A Self-adaptive Multi-sensor Fusion SLAM Framework Using Actor-critic Method[J]. arXiv preprint arXiv:2106.06783, 2021.
- Lvio-Fusion:使用Actor-critic方法的自适应多传感器融合SLAM框架
- 北邮,中科院计算机所;代码开源
- [18] Huang R, Fang C, Qiu K, et al. AR Mapping: Accurate and Efficient Mapping for Augmented Reality[J]. arXiv preprint arXiv:2103.14846, 2021.
- AR Mapping:用于增强现实的准确高效建图
- 阿里巴巴
- [19] Kim A, Ošep A, Leal-Taixé L. EagerMOT: 3D Multi-Object Tracking via Sensor Fusion[J]. arXiv preprint arXiv:2104.14682, 2021. (ICRA2021)
- EagerMOT:通过传感器融合进行 3D 多目标跟踪
- TUM;代码开源
- [20] Wang J, Zhong Y, Dai Y, et al. Deep Two-View Structure-from-Motion Revisited[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 8953-8962.
- 重新审视两视图 SFM
- 澳大利亚国立、西工大、NVIDIA
本期更新于 2021 年 6 月 13 日
共 20 篇论文,其中 6 项(待)开源工作
[1] SLAM 中的隐私保护问题近年来受到关注
[4,5,6] 基于线的 SLAM/SFM
- [1] Geppert M, Larsson V, Speciale P, et al. Privacy Preserving Localization and Mapping from Uncalibrated Cameras[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 1809-1819.
- 未校准相机的隐私保护定位和建图
- ETH, Microsoft
- [2] Hermann M, Ruf B, Weinmann M. Real-time dense 3D Reconstruction from monocular video data captured by low-cost UAVs[J]. arXiv preprint arXiv:2104.10515, 2021.
- 低成本无人机捕获的单目视频数据的实时稠密重建
- KIT
- [3] Zhang J, Zhu C, Zheng L, et al. ROSEFusion: Random Optimization for Online Dense Reconstruction under Fast Camera Motion[J]. arXiv preprint arXiv:2105.05600, 2021. (SIGGRAPH 2021)
- 快速相机运动下在线稠密重建的随机优化
- 国防科大
- [4] Xu B, Wang P, He Y, et al. Leveraging Structural Information to Improve Point Line Visual-Inertial Odometry[J]. arXiv preprint arXiv:2105.04064, 2021.
- 利用结构信息改进点线 VIO
- 武汉大学;东北大学;代码开源
- [5] Liu Z, Shi D, Li R, et al. PLC-VIO: Visual-Inertial Odometry Based on Point-Line Constraints[J]. IEEE Transactions on Automation Science and Engineering, 2021.
- PLC-VIO:基于点线约束的 VIO
- 国防科大
- [6] Mateus A, Tahri O, Aguiar A P, et al. On Incremental Structure from Motion Using Lines[J]. IEEE Transactions on Robotics, 2021.
- 使用线的增量式 SFM
- 里斯本大学
- [7] Patel M, Bandopadhyay A, Ahmad A. Collaborative Mapping of Archaeological Sites using multiple UAVs[J]. arXiv preprint arXiv:2105.07644, 2021.
- 使用多无人机协同绘制考古遗址
- 印度理工学院;数据集&项目主页
- [8] Chiu C Y, Sastry S S. Simultaneous Localization and Mapping: A Rapprochement of Filtering and Optimization-Based Approaches[J]. 2021.
- SLAM:滤波和优化方法的结合
- 加州大学伯克利分校硕士学位论文
- [9] Karkus P, Cai S, Hsu D. Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 2815-2825.
- 可微 SLAM 网络:用于视觉导航的学习型粒子 SLAM
- 新加坡国立大学;项目主页
- [10] Ok K, Liu K, Roy N. Hierarchical Object Map Estimation for Efficient and Robust Navigation[C]//Proc. ICRA. 2021
- 用于高效鲁棒导航的层次化物体地图估计
- MIT
- [11] Adu-Bredu A, Del Coro N, Liu T, et al. GODSAC*: Graph Optimized DSAC* for Robot Relocalization[J]. arXiv preprint arXiv:2105.00546, 2021.
- 用于机器人重定位的图优化 DSAC*
- 密歇根大学;代码开源
- [12] Xu D, Vedaldi A, Henriques J F. Moving SLAM: Fully Unsupervised Deep Learning in Non-Rigid Scenes[J]. arXiv preprint arXiv:2105.02195, 2021.
- Moving SLAM:非刚性场景中的完全无监督深度学习
- 港科,牛津大学
- [13] Ma J, Ye X, Zhou H, et al. Loop-Closure Detection Using Local Relative Orientation Matching[J]. IEEE Transactions on Intelligent Transportation Systems, 2021.
- 使用局部相对方向匹配的闭环检测
- 武汉大学
- [14] Çatal O, Jansen W, Verbelen T, et al. LatentSLAM: unsupervised multi-sensor representation learning for localization and mapping[J]. arXiv preprint arXiv:2105.03265, 2021.
- LatentSLAM:用于定位和建图的无监督多传感器表示学习
- 根特大学,安特卫普大学;数据集
- [15] Zhao S, Zhang H, Wang P, et al. Super Odometry: IMU-centric LiDAR-Visual-Inertial Estimator for Challenging Environments[J]. arXiv preprint arXiv:2104.14938, 2021.
- 超级里程计:用于挑战性环境的以 IMU 为中心的 LiDAR-Visual-Inertial 状态估计器
- CMU
- [16] Nguyen T M, Yuan S, Cao M, et al. VIRAL SLAM: Tightly Coupled Camera-IMU-UWB-Lidar SLAM[J]. arXiv preprint arXiv:2105.03296, 2021.
- VIRAL SLAM:紧耦合相机-IMU-UWB-Lidar SLAM
- 南洋理工大学
- [17] Nguyen T M, Yuan S, Cao M, et al. MILIOM: Tightly Coupled Multi-Input Lidar-Inertia Odometry and Mapping[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5573-5580.
- MILIOM:紧耦合多源激光雷达-惯性里程计和建图
- 南洋理工大学
- [18] Sakaridis C, Dai D, Van Gool L. ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding[J]. arXiv preprint arXiv:2104.13395, 2021.
- ACDC: 用于语义驾驶场景理解的具有对应关系的不利条件数据集
- ETH;数据集和 benchmark
- [19] Huang Z, Zhou H, Li Y, et al. VS-Net: Voting with Segmentation for Visual Localization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 6101-6111.
- VS-Net:通过分割投票进行视觉定位
- 浙大,港中文;代码开源
- [20] Kim A, Ošep A, Leal-Taixé L. EagerMOT: 3D Multi-Object Tracking via Sensor Fusion[J]. arXiv preprint arXiv:2104.14682, 2021. (ICRA2021)
- EagerMOT:通过传感器融合进行 3D 多目标跟踪
- TUM;代码开源
本期更新于 2021 年 5 月 11 日
共 20 篇论文,其中 5 篇来自于 CVPR2021,7 项开源工作
[6, 7, 8] Event-based
[10] VDO-SLAM 作者博士学位论文
[3, 4, 5, 17] 线、平面
[14] NeuralRecon
- [1] Jang Y, Oh C, Lee Y, et al. Multirobot Collaborative Monocular SLAM Utilizing Rendezvous[J]. IEEE Transactions on Robotics, 2021.
- 利用集合点的多机器人协作单目 SLAM
- 首尔大学
- [2] Luo H, Pape C, Reithmeier E. Hybrid Monocular SLAM Using Double Window Optimization[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 4899-4906.
- 使用双窗口优化的混合单目 SLAM
- 汉诺威大学
- [3] Yunus R, Li Y, Tombari F. ManhattanSLAM: Robust Planar Tracking and Mapping Leveraging Mixture of Manhattan Frames[J]. arXiv preprint arXiv:2103.15068, 2021.
- ManhattanSLAM:利用混合曼哈顿世界的鲁棒平面跟踪与建图
- TUM
- [4] Wang Q, Yan Z, Wang J, et al. Line Flow Based Simultaneous Localization and Mapping[J]. IEEE Transactions on Robotics, 2021.
- 基于线流的 SLAM
- 北大(去年 9 月的 Preprint)
- [5] Vakhitov A, Ferraz L, Agudo A, et al. Uncertainty-Aware Camera Pose Estimation From Points and Lines[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 4659-4668.
- 基于点线不确定性感知的相机位姿估计
- SLAMCore;代码开源
- [6] Liu D, Parra A, Chin T J. Spatiotemporal Registration for Event-based Visual Odometry[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 4937-4946.
- 基于事件的视觉里程计的时空配准
- 阿德莱德大学
- [7] Jiao J, Huang H, Li L, et al. Comparing Representations in Tracking for Event Camera-based SLAM[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 1369-1376.
- 比较基于事件相机 SLAM 在跟踪中的表示
- 港科,港大;代码开源
- [8] Zhou Y, Gallego G, Shen S. Event-based stereo visual odometry[J]. IEEE Transactions on Robotics, 2021.
- 基于事件的双目视觉里程计
- 港科、柏林工业大学;代码开源(去年 8 月的 Preprint)
- [9] Min Z, Dunn E. VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough[J]. arXiv preprint arXiv:2104.06800, 2021.
- VOLDOR-SLAM: 当基于特征或直接法不够好时
- 史蒂文斯理工学院;代码开源
- [10] Ila V, Henein M, Li H. Meta Information In Graph-based Simultaneous Localisation And Mapping[J]. 2020
- 基于图的 SLAM 中的元信息
- 澳大利亚国立大学 Mina Henein 博士学位论文
- VDO-SLAM: A Visual Dynamic Object-aware SLAM System.
- Dynamic SLAM: The Need for Speed.
- [11] Li S, Wu X, Cao Y, et al. Generalizing to the Open World: Deep Visual Odometry with Online Adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 13184-13193.
- 推广到开放世界:具有在线适应功能的深度视觉里程计
- 北京大学
- Li S, Wang X, Cao Y, et al. Self-supervised deep visual odometry with online adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 6339-6348.
- [12] Qiu K, Chen S, Zhang J, et al. Compact 3D Map-Based Monocular Localization Using Semantic Edge Alignment[J]. arXiv preprint arXiv:2103.14826, 2021.
- 使用语义边缘对齐的基于 3D 紧凑地图的单目定位
- 阿里巴巴
- [13] Cheng W, Yang S, Zhou M, et al. Road Mapping and Localization using Sparse Semantic Visual Features[J]. IEEE Robotics and Automation Letters, 2021.
- 使用稀疏语义视觉特征的道路建图和定位
- 阿里巴巴
- [14] Sun J, Xie Y, Chen L, et al. NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 15598-15607.
- NeuralRecon:从单目视频中进行实时连贯 3D 重建
- 浙大,商汤;代码开源
- [15] Shan T, Englot B, Ratti C, et al. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping[J]. arXiv preprint arXiv:2104.10831, 2021. (ICRA2021)
- LVI-SAM:通过平滑的紧耦合的激光雷达-视觉-惯性里程计和建图框架
- MIT;代码开源
- [16] Zhang K, Yang T, Ding Z, et al. The Visual-Inertial-Dynamical UAV Dataset[J]. arXiv preprint arXiv:2103.11152, 2021.
- 视觉-惯性-动力 UAV 数据集
- 浙大;数据集
- [17] Amblard V, Osedach T P, Croux A, et al. Lidar-Monocular Surface Reconstruction Using Line Segments[J]. arXiv preprint arXiv:2104.02761, 2021.
- 使用线段的 Lidar - 单目表面重建
- MIT
- [18] Wei B, Trigoni N, Markham A. iMag+: An Accurate and Rapidly Deployable Inertial Magneto-Inductive SLAM System[J]. IEEE Transactions on Mobile Computing, 2021.
- iMag+: 一种准确且可快速部署的惯性磁感应 SLAM 系统
- 诺森比亚大学、牛津大学
- [19] Huang R, Fang C, Qiu K, et al. AR Mapping: Accurate and Efficient Mapping for Augmented Reality[J]. arXiv preprint arXiv:2103.14846, 2021.
- AR Mapping:用于增强现实的准确高效建图
- 阿里巴巴
- [20] Wang J, Zhong Y, Dai Y, et al. Deep Two-View Structure-from-Motion Revisited[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 8953-8962.
- 重新审视两视图 SFM
- 澳大利亚国立、西工大、NVIDIA
本期更新于 2021 年 4 月 15 日
共 23 篇论文,其中 9 项(待)开源工作
[3, 4, 5] 线、面 SLAM
[6, 7] 滤波方法
[10] DynaSLAM 作者博士学位论文
[9, 12, 13, 17, 23] LiDAR
- [1] Liu P, Zuo X, Larsson V, et al. MBA-VO: Motion Blur Aware Visual Odometry[J]. arXiv preprint arXiv:2103.13684, 2021.
- MBA-VO: 运动模糊感知的视觉里程计
- ETH,Microsoft
- [2] Chen H, Hu W, Yang K, et al. Panoramic annular SLAM with loop closure and global optimization[J]. arXiv preprint arXiv:2102.13400, 2021.
- 具有闭环和全局优化的全景环形 SLAM
- 浙大、KIT
- [3] Wang X, Christie M, Marchand E. TT-SLAM: Dense Monocular SLAM for Planar Environments[C]//IEEE International Conference on Robotics and Automation, ICRA'21. 2021.
- TT-SLAM: 面向平面环境的单目稠密 SLAM
- 法国雷恩大学
- [4] Lim H, Kim Y, Jung K, et al. Avoiding Degeneracy for Monocular Visual SLAM with Point and Line Features[J]. arXiv preprint arXiv:2103.01501, 2021. (ICRA2021)
- 使用点线特征避免单目视觉 SLAM 退化
- 韩国科学技术院
- [5] Lu J, Fang Z, Gao Y, et al. Line-based visual odometry using local gradient fitting[J]. Journal of Visual Communication and Image Representation, 2021, 77: 103071.
- 使用局部梯度拟合的基于线的视觉里程计
- 上海工程技术大学
- [6] Wang J, Meng Z, Wang L. A UPF-PS SLAM Algorithm for Indoor Mobile Robot with Non-Gaussian Detection Model[J]. IEEE/ASME Transactions on Mechatronics, 2021.
- 具有非高斯检测模型的室内移动机器人 UPF-PS SLAM 算法
- 清华大学
- [7] Gao L, Battistelli G, Chisci L. PHD-SLAM 2.0: Efficient SLAM in the Presence of Missdetections and Clutter[J]. IEEE Transactions on Robotics, 2021.
- PHD-SLAM 2.0:漏检和杂波污染情况下的高效 SLAM
- 电子科大,佛罗伦萨大学;代码开源
- [8] Labbé M, Michaud F. Multi-session visual SLAM for illumination invariant localization in indoor environments[J]. arXiv preprint arXiv:2103.03827, 2021.
- 用于室内环境光照不变定位的多会话视觉 SLAM
- 谢布鲁克大学
- [9] Yokozuka M, Koide K, Oishi S, et al. LiTAMIN2: Ultra Light LiDAR-based SLAM using Geometric Approximation applied with KL-Divergence[J]. arXiv preprint arXiv:2103.00784, 2021. (ICRA2021)
- 使用几何近似和 KL 散度的超轻型基于 LiDAR 的 SLAM
- 日本先进工业科学技术研究所
- [10] Visual SLAM in dynamic environments. 2020
- 动态环境中视觉 SLAM
- 萨拉戈萨大学 Berta Bescos 博士学位论文,Github
- DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes. 2018
- DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM. 2020
- Empty Cities: a Dynamic-Object-Invariant Space for Visual SLAM. 2021
- [11] Zhan H, Weerasekera C S, Bian J W, et al. DF-VO: What Should Be Learnt for Visual Odometry?[J]. arXiv preprint arXiv:2103.00933, 2021.
- DF-VO: 视觉里程计应该学习什么?
- 阿德莱德大学;代码开源
- 相关工作:Zhan H, Weerasekera C S, Bian J W, et al. Visual odometry revisited: What should be learnt?[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 4203-4210.
- [12] Westfechtel T, Ohno K, Akegawa T, et al. Semantic Mapping of Construction Site From Multiple Daily Airborne LiDAR Data[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 3073-3080.
- 来自多个日常机载激光雷达数据的施工现场语义建图
- 东京大学
- [13] Habich T L, Stuede M, Labbé M, et al. Have I been here before? Learning to Close the Loop with LiDAR Data in Graph-Based SLAM[J]. arXiv preprint arXiv:2103.06713, 2021.
- 在图 SLAM 中学习使用 LiDAR 数据的闭环
- 汉诺威大学
- [14] Jayasuriya M, Arukgoda J, Ranasinghe R, et al. UV-Loc: A Visual Localisation Strategy for Urban Environments[J]. 2021
- 城市环境中视觉定位策略
- 悉尼科技大学
- pole-like landmarks and ground surface boundaries
- [15] Sarlin P E, Unagar A, Larsson M, et al. Back to the Feature: Learning Robust Camera Localization from Pixels to Pose[J]. arXiv preprint arXiv:2103.09213, 2021. (CVPR2021)
- 回归到特征:从像素中学习鲁棒的相机定位
- ETH;代码开源
- [16] Zhang J, Sui W, Wang X, et al. Deep Online Correction for Monocular Visual Odometry[J]. arXiv preprint arXiv:2103.10029, 2021. (ICRA2021)
- 单目视觉里程计深度在线校正
- 地平线,华中科大
- [17] Lin J, Zheng C, Xu W, et al. R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping[J]. arXiv preprint arXiv:2102.12400, 2021.
- R2LIVE:一种鲁棒的、实时的、LiDAR-惯导-视觉紧耦合的状态估计和建图方法
- 香港大学;代码开源
- [18] Cao S, Lu X, Shen S. GVINS: Tightly Coupled GNSS-Visual-Inertial for Smooth and Consistent State Estimation[J]. arXiv preprint arXiv:2103.07899, 2021.
- GVINS: 用于平滑和一致性状态估计的紧耦合 GNSS-视觉-惯性系统
- 港科;代码开源
- [19] Zhu P, Geneva P, Ren W, et al. Distributed Visual-Inertial Cooperative Localization[J]. arXiv preprint arXiv:2103.12770, 2021.
- 分布式视觉-惯性协同定位
- 加利福尼亚大学、特拉华大学;video
- [20] Peng X, Liu Z, Wang Q, et al. Accurate Visual-Inertial SLAM by Feature Re-identification[J]. arXiv preprint arXiv:2102.13438, 2021.
- 基于特征重识别的视觉惯性 SLAM
- 三星
- [21] Reinke A, Chen X, Stachniss C. Simple But Effective Redundant Odometry for Autonomous Vehicles[J]. arXiv preprint arXiv:2105.11783, 2021. (ICRA2021)
- 用于自动驾驶的简易且有效的冗余里程计
- 波恩大学;代码开源(待公开)
- [22] Ram K, Kharyal C, Harithas S S, et al. RP-VIO: Robust Plane-based Visual-Inertial Odometry for Dynamic Environments[J]. arXiv preprint arXiv:2103.10400, 2021.
- RP-VIO: 动态环境中鲁棒的基于平面的 VIO
- 印度理工学院海得拉巴机器人研究中心;代码开源(基于 VINS)
- [23] Kramer A, Harlow K, Williams C, et al. ColoRadar: The Direct 3D Millimeter Wave Radar Dataset[J]. arXiv preprint arXiv:2103.04510, 2021.
- 直接 3D 毫米波雷达数据集
- 科罗拉多大学博尔德分校;开源数据集
本期更新于 2021 年 3 月 20 日
共 21 篇论文,其中 8 项开源工作
[3][4] VIO 数据集
[9][10] 杆状特征
- [1] Ferrera M, Eudes A, Moras J, et al. OV²SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1399-1406.
- 适用于实时应用的完全在线、多功能 SLAM
- IFREMER;代码开源
- [2] Gladkova M, Wang R, Zeller N, et al. Tight-Integration of Feature-Based Relocalization in Monocular Direct Visual Odometry[J]. arXiv preprint arXiv:2102.01191, 2021.
- 单目直接法视觉里程计中基于特征重定位的紧耦合
- TUM
- [3] Zhang H, Jin L, Ye C. The VCU-RVI Benchmark: Evaluating Visual Inertial Odometry for Indoor Navigation Applications with an RGB-D Camera[C]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 6209-6214.
- VCU-RVI Benchmark:用于评估 RGB-D 室内导航应用的 VIO
- 弗吉尼亚联邦大学;数据集地址
- [4] Minoda K, Schilling F, Wüest V, et al. VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1343-1350.
- VIODE:用于解决 VIO 在动态环境中挑战的仿真数据集
- 东京大学,洛桑联邦理工;数据集地址
- [5] Younes G. A Unified Hybrid Formulation for Visual SLAM[D]. University of Waterloo, 2021.
- 一种视觉 SLAM 统一混合的架构
- 滑铁卢大学,博士学位论文,Google Scholar
- [6] Shakeri M, Loo S Y, Zhang H, et al. Polarimetric Monocular Dense Mapping Using Relative Deep Depth Prior[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 4512-4519.
- 利用相对深度先验的偏振相机单目密集建图
- 阿尔伯塔大学
- [7] Wang H, Wang C, Xie L. Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1715-1721.
- 大规模环境中强度特征辅助的 LiDAR 定位与建图
- NTU, CMU
- [8] Doan A, Latif Y, Chin T, et al. HM⁴: Hidden Markov Model with Memory Management for Visual Place Recognition[J]. IEEE Robotics and Automation Letters, 2021, 6(1): 167-174.
- 用于视觉位置识别的具有内存管理功能的隐马尔可夫模型
- 阿德莱德大学
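HM⁴ 这类基于隐马尔可夫模型的场景识别,核心是前向算法的“预测-更新-归一化”递推:先用转移模型预测当前所在位置的分布,再乘以观测似然并归一化。下面用纯 Python 给出单步递推的示意(转移矩阵与似然均为假设的玩具数据,非论文实现):

```python
def forward_step(belief, transition, likelihood):
    """One step of the HMM forward algorithm over N places:
    predict with the transition model, update with the observation
    likelihood, then normalise back to a probability distribution."""
    n = len(belief)
    # predict: propagate the previous belief through the transition model
    predicted = [sum(belief[j] * transition[j][i] for j in range(n))
                 for i in range(n)]
    # update: weight each place by how well it explains the observation
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    total = sum(posterior) or 1.0
    return [p / total for p in posterior]

# toy example: two places, robot starts at place 0, observation favours place 1
belief = forward_step([1.0, 0.0], [[0.7, 0.3], [0.3, 0.7]], [0.1, 0.9])
```

时序上的递推使识别结果比单帧检索更平滑,这也是此类方法相对逐帧匹配的主要优势。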
- [9] Li L, Yang M, Weng L, et al. Robust Localization for Intelligent Vehicles Based on Pole-Like Features Using the Point Cloud[J]. IEEE Transactions on Automation Science and Engineering, 2021.
- 基于杆状点云特征的智能车鲁棒定位
- 上海交大;期刊:中科院一区,JCR Q1,IF 4.9
- [10] Tschopp F, von Einem C, Cramariuc A, et al. Hough²Map – Iterative Event-Based Hough Transform for High-Speed Railway Mapping[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 2745-2752.
- [11] Ma X, Liang X. Point-line-based RGB-D SLAM and Bundle Adjustment Uncertainty Analysis[J]. arXiv preprint arXiv:2102.07110, 2021.
- 基于点线的 RGB-D SLAM 和 BA 的不确定性分析
- 上交
- [12] Rosinol A, Violette A, Abate M, et al. Kimera: from slam to spatial perception with 3d dynamic scene graphs[J]. arXiv preprint arXiv:2101.06894, 2021.
- Kimera: 从 SLAM 到具有 3D 动态场景图的空间感知
- MIT;项目主页
- [13] Liu Y, Liu J, Hao Y, et al. A Switching-Coupled Backend for Simultaneous Localization and Dynamic Object Tracking[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1296-1303.
- 一种同时用于定位与动态物体跟踪的可切换松耦合后端
- 清华大学
- [14] Feng Q, Atanasov N. Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning[J]. arXiv preprint arXiv:2101.01844, 2021.
- 使用 2D-3D 联合学习从航空图像进行室外地形建图的 Mesh 重建
- UCSD Nikolay Atanasov
- [15] Wong Y S, Li C, Nießner M, et al. RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects[J]. Eurographics, 2021.
- RigidFusion: 移动刚体运动场景的 RGB-D 重建
- UCL, TUM;补充材料
- [16] Sun S, Melamed D, Kitani K. IDOL: Inertial Deep Orientation-Estimation and Localization[J]. arXiv preprint arXiv:2102.04024, 2021.(AAAI 2021)
- 基于学习的惯性传感器方向与位置估计
- CMU;代码与数据集
- [17] Qin C, Zhang Y, Liu Y, et al. Semantic loop closure detection based on graph matching in multi-objects scenes[J]. Journal of Visual Communication and Image Representation, 2021, 76: 103072.
- 基于图匹配的多目标场景闭环检测
- 东北大学
- [18] Wang H, Wang C, Xie L. Lightweight 3-D Localization and Mapping for Solid-State LiDAR[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1801-1807.
- 固态 LiDAR 轻量化 3D 定位与建图
- 南洋理工大学;代码开源
- [19] Jiang Z, Taira H, Miyashita N, et al. VIO-Aided Structure from Motion Under Challenging Environments[J]. arXiv preprint arXiv:2101.09657, 2021. (ICIT2021)
- 在具有挑战场景下 VIO 辅助的 SFM
- 东京工业大学
- [20] Liu Y, Yixuan Y, Liu M. Ground-aware Monocular 3D Object Detection for Autonomous Driving[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 919-926.
- 用于自动驾驶的地面感知单目 3D 目标检测
- 港科;代码开源
- [21] Singh G, Akrigg S, Di Maio M, et al. Road: The road event awareness dataset for autonomous driving[J]. arXiv preprint arXiv:2102.11585, 2021.
- 自动驾驶场景的道路事件感知数据集
- ETH;代码开源
- 道路事件由智能体、智能体所执行的动作和智能体所处的环境组成。
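按上面的描述,一个道路事件可以表示为(智能体、动作、位置)三元组。下面用一个 dataclass 做最简单的示意(字段名为示意,并非该数据集的真实标注格式):

```python
from dataclasses import dataclass

@dataclass
class RoadEvent:
    """A road event as the triplet described above: who acts (agent),
    what they do (action), and where it happens (location)."""
    agent: str
    action: str
    location: str

event = RoadEvent(agent="pedestrian", action="crossing", location="crosswalk")
```

这种三元组表示让“同一智能体在不同位置执行不同动作”可以被组合式地标注与查询。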
本期更新于 2021 年 2 月 13 日
共 20 篇论文,其中 2 项开源工作
[1] 长期定位
[10] Building Fusion;[11] Mesh Reconstruction
- [1] Rotsidis A, Lutteroth C, Hall P, et al. ExMaps: Long-Term Localization in Dynamic Scenes using Exponential Decay[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021: 2867-2876.
- 使用指数衰减在动态场景中的长期定位
- 巴斯大学
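ExMaps 的“指数衰减”思路是:动态场景中,地图点/地标的权重随时间指数下降,久未被重新观测的地标逐渐失去对定位的影响。一个极简示意(以半衰期参数化,非论文原始打分函数):

```python
import math

def decayed_weight(initial_weight, elapsed_time, half_life):
    """Exponentially decay a landmark's weight so stale observations in a
    dynamic scene gradually lose influence on localization (illustrative)."""
    decay_rate = math.log(2) / half_life
    return initial_weight * math.exp(-decay_rate * elapsed_time)
```

例如半衰期取 10 天时,`decayed_weight(1.0, 10.0, 10.0)` 约为 0.5;地标每次被重新观测时可将权重重置,从而区分静态结构与临时物体。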
- [2] Wen T, Xiao Z, Wijaya B, et al. High Precision Vehicle Localization based on Tightly-coupled Visual Odometry and Vector HD Map[C]//2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020: 672-679.
- 基于视觉里程计和矢量高清地图紧耦合的高精度车辆定位
- 清华大学
- [3] Lee S J, Kim D, Hwang S S, et al. Local to Global: Efficient Visual Localization for a Monocular Camera[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021: 2231-2240.
- Local to Global:单目相机高效视觉定位
- 人工特征用于实时里程计,基于学习的特征用于定位,地图对齐
- [4] Jung K Y, Kim Y E, Lim H J, et al. ALVIO: Adaptive Line and Point Feature-based Visual Inertial Odometry for Robust Localization in Indoor Environments[J]. arXiv preprint arXiv:2012.15008, 2020.
- 基于自适应点线特征的视觉惯性里程计在室内环境中的鲁棒定位
- 韩国高等科学技术学院
- [5] Yang A J, Cui C, Bârsan I A, et al. Asynchronous Multi-View SLAM[J]. arXiv preprint arXiv:2101.06562, 2021.
- 异步多视点 SLAM
- 多伦多大学
- [6] Lyu Y, Nguyen T M, Liu L, et al. SPINS: Structure Priors aided Inertial Navigation System[J]. arXiv preprint arXiv:2012.14053, 2020.
- 结构先验辅助的惯性导航系统
- 南洋理工
- [7] Pan Y, Xu X, Ding X, et al. GEM: online globally consistent dense elevation mapping for unstructured terrain[J]. IEEE Transactions on Instrumentation and Measurement, 2020.
- 非结构化地形的在线全局一致稠密高程图
- 浙大;期刊:中科院三区,JCR Q1,IF 3.6
- [8] Tian R, Zhang Y, Zhu D, et al. Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry[J]. arXiv preprint arXiv:2101.05995, 2021.
- 基于平面几何单目视觉里程计的准确鲁棒尺度恢复
- 东北大学,香港中文大学
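这类基于平面几何的尺度恢复,常见做法是利用已知的相机安装高度:单目 VO 的重建只差一个全局尺度,用真实的相机离地高度除以重建出的(无尺度)地面高度即可得到尺度因子。极简示意(函数名与数值均为假设,非论文完整流程):

```python
def monocular_scale(camera_height_m, estimated_ground_height):
    """Recover metric scale for monocular VO from the known camera mounting
    height: ratio of the true height above the ground plane to the
    up-to-scale height of the fitted ground plane (illustrative sketch)."""
    if estimated_ground_height <= 0:
        raise ValueError("ground plane estimate must be positive")
    return camera_height_m / estimated_ground_height

# e.g. camera mounted 1.65 m above the road, plane fitted at 0.33 scene units
scale = monocular_scale(1.65, 0.33)
```

论文的“准确鲁棒”主要来自对地面平面拟合本身的筛选与滤波,上面只示意尺度因子的最终计算。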
- [9] Fang B, Mei G, Yuan X, et al. Visual SLAM for robot navigation in healthcare facility[J]. Pattern Recognition, 2021: 107822.
- 用于医疗机构机器人导航的视觉 SLAM
- 合肥工业大学;期刊:中科院二区,JCR Q1,IF 7.2
- [10] Zheng T, Zhang G, Han L, et al. Building Fusion: Semantic-aware Structural Building-scale 3D Reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
- Building Fusion:语义感知结构化建筑规模的三维重建
- 清华-伯克利深圳学院,清华大学
- [11] Feng Q, Atanasov N. Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning[J]. arXiv preprint arXiv:2101.01844, 2021.
- 使用 2D-3D 联合学习从空中图像进行室外地形的网格重建
- 加州大学圣地亚哥分校 Nikolay A. Atanasov
- [12] Ma T, Wang Y, Wang Z, et al. ASD-SLAM: A Novel Adaptive-Scale Descriptor Learning for Visual SLAM[C]//2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020: 809-816.
- 一种新的视觉 SLAM 自适应尺度描述符学习方法
- 上交;代码开源
- [13] Li B, Hu M, Wang S, et al. Self-supervised Visual-LiDAR Odometry with Flip Consistency[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021: 3844-3852.
- 具有翻转一致性的自监督视觉-雷达里程计
- 浙大
- [14] Akilan T, Johnson E, Sandhu J, et al. A Hybrid Learner for Simultaneous Localization and Mapping[J]. arXiv preprint arXiv:2101.01158, 2021.
- 用于 SLAM 的混合学习器
- Lakehead University
- [15] Cowley A, Miller I D, Taylor C J. UPSLAM: Union of Panoramas SLAM[J]. arXiv preprint arXiv:2101.00585, 2021.
- UPSLAM: 全景 SLAM
- 宾夕法尼亚大学
- [16] Jiang Z, Taira H, Miyashita N, et al. VIO-Aided Structure from Motion Under Challenging Environments[J]. arXiv preprint arXiv:2101.09657, 2021.
- 挑战场景下 VIO 辅助的 SFM
- 东京工业大学
- [17] Zhai C, Wang M, Yang Y, et al. Robust Vision-Aided Inertial Navigation System for Protection Against Ego-Motion Uncertainty of Unmanned Ground Vehicle[J]. IEEE Transactions on Industrial Electronics, 2020.
- 鲁棒的视觉辅助惯性导航系统,避免地面无人车的自我运动估计不确定性
- 北理工;期刊:中科院一区 JCR Q1,IF 7.5
- [18] Palieri M, Morrell B, Thakur A, et al. LOCUS: A Multi-Sensor Lidar-Centric Solution for High-Precision Odometry and 3D Mapping in Real-Time[J]. IEEE Robotics and Automation Letters, 2020, 6(2): 421-428.
- 用于实时高精度里程计和 3D 建图的以激光雷达为中心的多传感器框架
- 加州理工
- [19] Chiu H, Li J, Ambrus R, et al. Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving[J]. arXiv preprint arXiv:2012.13755, 2020.
- 自动驾驶场景的 3D 多模态、多目标跟踪
- 斯坦福大学
- [20] Deng J, Shi S, Li P, et al. Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection[J]. arXiv preprint arXiv:2012.15712, 2020.(AAAI 2021)
- Voxel R-CNN:面向基于高性能体素的 3D 目标检测
- 中科大,港中文;代码开源
本期更新于 2021 年 1 月 02 日
共 18 篇论文,其中 6 项(待)开源工作
[9] CodeVIO;[10] CamVox
- [1] Mascaro R, Wermelinger M, Hutter M, et al. Towards automating construction tasks: Large‐scale object mapping, segmentation, and manipulation[J]. Journal of Field Robotics, 2020.
- 【挖掘机抓石头】实现自动化的施工任务:大型物体建图,分割和操纵
- ETH,期刊:中科院二区,JCR Q1
- [2] Yang X, Yuan Z, Zhu D, et al. Robust and Efficient RGB-D SLAM in Dynamic Environments[J]. IEEE Transactions on Multimedia, 2020.
- 动态环境中鲁棒高效 RGB-D SLAM
- 华中科大,期刊:中科院二区,JCR Q1,IF 5.5
- [3] Yazdanpour M, Fan G, Sheng W. ManhattanFusion: Online Dense Reconstruction of Indoor Scenes from Depth Sequences[J]. IEEE Transactions on Visualization and Computer Graphics, 2020.
- ManhattanFusion:从深度序列中对室内场景进行在线稠密重建
- 北肯塔基大学,期刊:中科院二区,JCR Q1,IF 4.3
- [4] Fourie D, Rypkema N R, Teixeira P V, et al. Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation[J]. IROS 2020
- 面向欠定导航的实时非高斯 SLAM
- MIT
- [5] Garg S, Sünderhauf N, Dayoub F, et al. Semantics for Robotic Mapping, Perception and Interaction: A Survey[J]. arXiv preprint arXiv:2101.00443, 2020.
- 用于机器人建图、感知和交互的语义
- 昆士兰科技大学,阿德莱德大学、澳大利亚机器人中心,962 篇论文综述
- [6] Nubert J, Khattak S, Hutter M. Self-supervised Learning of LiDAR Odometry for Robotic Applications[J]. arXiv preprint arXiv:2011.05418, 2020.
- 用于机器人应用的基于自监督学习的 LiDAR 里程计
- ETH,代码开源
- [7] Thomas H, Agro B, Gridseth M, et al. Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor Navigation[J]. arXiv preprint arXiv:2012.05897, 2020.
- 自主室内导航 LIDAR 分割的自监督学习
- 多伦多大学,Apple
- [8] Huynh L, Nguyen P, Matas J, et al. Boosting Monocular Depth Estimation with Lightweight 3D Point Fusion[J]. arXiv preprint arXiv:2012.10296, 2020.
- 轻量级 3D 点融合(立体匹配/SLAM)提升单目深度估计
- 奥卢大学
- [9] Zuo X, Merrill N, Li W, et al. CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth[J]. arXiv preprint arXiv:2012.10133, 2020.
- CodeVIO: 具有学习可优化稠密深度的 VIO
- ETH,浙大,ICRA2021 投稿论文,video
- [10] Zhu Y, et al. CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System[J]. arXiv preprint arXiv:2011.11357, 2020.
- 低成本、精确的Lidar辅助视觉 SLAM 系统
- 南方科技大学,代码开源,ICRA2021 投稿论文
- [11] Gong Z, Liu P, Wen F, et al. Graph-Based Adaptive Fusion of GNSS and VIO Under Intermittent GNSS-Degraded Environment[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1-16.
- 间歇性 GNSS 退化环境下的基于图的自适应 GNSS-VIO 融合
- 上海交大
- [12] Wu Y, Li Y, Li W, et al. Robust Lidar-Based Localization Scheme for Unmanned Ground Vehicle via Multisensor Fusion[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020.
- 基于激光雷达的多传感器融合无人地面车辆鲁棒定位方法
- 广东工业大学,期刊:中科院一区,JCR Q1, IF 8.8
- [13] Zhu P, Ren W. Multi-Robot Joint Visual-Inertial Localization and 3-D Moving Object Tracking[J]. IROS 2020
- 多机器人联合视觉惯性定位和 3D 运动目标跟踪
- 加州大学河滨分校
- [14] Xu M, Sünderhauf N, Milford M. Probabilistic Visual Place Recognition for Hierarchical Localization[J]. IEEE Robotics and Automation Letters, 2020, 6(2): 311-318.
- 用于分层定位的概率视觉场景识别
- 昆士兰科技大学, 代码开源
- [15] Li D, Miao J, Shi X, et al. RaP-Net: A Region-wise and Point-wise Weighting Network to Extract Robust Keypoints for Indoor Localization[J]. arXiv preprint arXiv:2012.00234, 2020.
- RaP-Net: 区域和点加权网络用于室内定位的鲁棒关键点提取
- 清华、北交大、北航、Intel, 代码开源
- [16] Bui M, Birdal T, Deng H, et al. 6D Camera Relocalization in Ambiguous Scenes via Continuous Multimodal Inference[J]. arXiv preprint arXiv:2004.04807, 2020.(ECCV 2020)
- [17] Alves N, et al. Low-latency Perception in Off-Road Dynamical Low Visibility Environments[J]. arXiv preprint arXiv:2012.13014, 2020.
- [18] Chen C, Al-Halah Z, Grauman K. Semantic Audio-Visual Navigation[J]. arXiv preprint arXiv:2012.11583, 2020.
- 语义视-听导航
- UT Austin, Facebook,项目主页
本期更新于 2020 年 12 月 07 日
共 20 篇论文,其中 6 项(待)开源工作
其中近一半来自于 IROS 2020 的录用论文和 ICRA 2021 的投稿论文
- [1] Kim C, Kim J, Kim H J. Edge-based Visual Odometry with Stereo Cameras using Multiple Oriented Quadtrees[J]. IROS 2020
- 使用多个定向四叉树的基于边的双目视觉里程计
- 首尔国立大学
- [2] Jaenal A, Zuniga-Noël D, Gomez-Ojeda R, et al. Improving Visual SLAM in Car-Navigated Urban Environments with Appearance Maps[J]. IROS 2020
- 通过外观地图改善城市环境汽车导航的视觉 SLAM
- 马拉加大学;video
- [3] Chen L, Zhao Y, Xu S, et al. DenseFusion: Large-Scale Online Dense Pointcloud and DSM Mapping for UAVs[J]. IROS 2020
- DenseFusion:无人机的大规模在线密集点云和 DSM 建图
- 西工大,自动化所
- 前期工作:Wang W, Zhao Y, Han P, et al. TerrainFusion: Real-time Digital Surface Model Reconstruction based on Monocular SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 7895-7902.
- [4] Arndt C, Sabzevari R, Civera J. From Points to Planes-Adding Planar Constraints to Monocular SLAM Factor Graphs[J]. IROS 2020
- From Points to Planes:在单目 SLAM 因子图中添加平面约束
- 西班牙萨拉戈萨大学
- [5] Giubilato R, Le Gentil C, Vayugundla M, et al. GPGM-SLAM: Towards a Robust SLAM System for Unstructured Planetary Environments with Gaussian Process Gradient Maps[C]//IROS Workshop on Planetary Exploration Robots: Challenges and Opportunities (PLANROBO20). ETH Zurich, Department of Mechanical and Process Engineering, 2020.
- GPGM-SLAM:具有高斯过程梯度图的非结构化行星环境的鲁棒 SLAM 系统
- DLR,TUM
- [6] Chang Y, Tian Y, How J P, et al. Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping[J]. arXiv preprint arXiv:2011.04087, 2020.
- Kimera-Multi: 分布式多机器人度量语义 SLAM 系统
- MIT
- [7] Sharma A, Dong W, Kaess M. Compositional Scalable Object SLAM[J]. arXiv preprint arXiv:2011.02658, 2020.
- 组合式、可扩展的物体级 SLAM
- CMU;ICRA2021 投稿论文;待开源
- [8] Wang W, Hu Y, Scherer S. TartanVO: A Generalizable Learning-based VO[J]. arXiv preprint arXiv:2011.00359, 2020.
- TartanVO:一种通用的基于学习的 VO
- CMU,代码开源
- [9] Wimbauer F, Yang N, von Stumberg L, et al. MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera[J]. arXiv preprint arXiv:2011.11814, 2020.
- 动态环境中单个移动相机的半监督稠密重构
- TUM;项目主页
- 相关研究:
- CVPR 2020 D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry
- ECCV 2018 Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry
- [10] Almalioglu Y, Santamaria-Navarro A, Morrell B, et al. Unsupervised Deep Persistent Monocular Visual Odometry and Depth Estimation in Extreme Environments[J]. arXiv preprint arXiv:2011.00341, 2020.
- 极端环境下的无监督持久性的单目视觉里程计和深度估计
- 牛津大学,NASA;ICRA2021 投稿论文
- [11] Zou Y, Ji P, Tran Q H, et al. Learning monocular visual odometry via self-supervised long-term modeling[J]. arXiv preprint arXiv:2007.10983, 2020. (ECCV 2020)
- 通过自监督长期建模学习单目视觉里程计
- 弗吉尼亚理工大学;项目主页+video
- [12] Nubert J, Khattak S, Hutter M. Self-supervised Learning of LiDAR Odometry for Robotic Applications[J]. arXiv preprint arXiv:2011.05418, 2020.
- 应用于机器人的自监督学习 LiDAR 里程计
- ETH;代码开源
- [13] Chancán M, Milford M. DeepSeqSLAM: A Trainable CNN+ RNN for Joint Global Description and Sequence-based Place Recognition[J]. arXiv preprint arXiv:2011.08518, 2020.
- [14] Zhao S, Wang P, Zhang H, et al. TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint[J]. arXiv preprint arXiv:2012.03455, IROS 2020.
- TP-TIO: 一种使用深度 ThermalPoint 网络的红外视觉-惯性里程计
- CMU,东北大学,video
- [15] Jaekel J, Mangelson J G, Scherer S, et al. A Robust Multi-Stereo Visual-Inertial Odometry Pipeline[J]. IROS 2020.
- 一种鲁棒的多立体视觉惯性里程计框架
- CMU
- [16] Huang H, Ye H, Jiao J, et al. Geometric Structure Aided Visual Inertial Localization[J]. arXiv preprint arXiv:2011.04173, 2020.
- 几何结构辅助的视觉惯性定位
- 港科,ICRA 2021 投稿论文
- [17] Ding Z, Yang T, Zhang K, et al. VID-Fusion: Robust Visual-Inertial-Dynamics Odometry for Accurate External Force Estimation[J]. arXiv preprint arXiv:2011.03993, 2020.
- VID-Fusion: 用于准确外力估计的鲁棒视觉-惯性-动力学里程计
- 浙大 FAST Lab
- [18] Li K, Li M, Hanebeck U D. Towards high-performance solid-state-lidar-inertial odometry and mapping[J]. arXiv preprint arXiv:2010.13150, 2020.
- 面向高性能的固态激光雷达惯性里程计与建图
- 卡尔斯鲁厄理工学院;代码开源
- [19] Milano F, Loquercio A, Rosinol A, et al. Primal-Dual Mesh Convolutional Neural Networks[J]. Advances in Neural Information Processing Systems, 2020, 33.
- 原始对偶网格卷积神经网络
- ETH;代码开源
- [20] Li H, Gordon A, Zhao H, et al. Unsupervised Monocular Depth Learning in Dynamic Scenes[J]. arXiv preprint arXiv:2010.16404, 2020. (CoRL 2020)
- 动态环境中无监督单目深度学习
- Google;代码开源
本期更新于 2020 年 11 月 09 日
共 22 篇论文,其中 7 项(待)开源工作
9,10,11:SLAM 中动态物体跟踪,动态物体级 SLAM 今年很火
3,7,8,14,18:线段相关
- [1] Bhutta M, Kuse M, Fan R, et al. Loop-box: Multi-Agent Direct SLAM Triggered by Single Loop Closure for Large-Scale Mapping[J]. IEEE Transactions on Cybernetics, 2020. (arXiv preprint arXiv:2009.13851)
- [2] Zhou B, He Y, Qian K, et al. S4-SLAM: A real-time 3D LIDAR SLAM system for ground/watersurface multi-scene outdoor applications[J]. Autonomous Robots, 2020: 1-22.
- S4-SLAM:用于地面/水面多场景户外应用的实时 3D LIDAR SLAM 系统
- 东南大学;期刊:中科院三区,JCR Q1,IF 3.6
- [3] Li Y, Yunus R, Brasch N, et al. RGB-D SLAM with Structural Regularities[J]. arXiv preprint arXiv:2010.07997, 2020.
- 具有结构规律的 RGB-D SLAM
- TUM
- [4] Rodríguez J J G, Lamarca J, Morlana J, et al. SD-DefSLAM: Semi-Direct Monocular SLAM for Deformable and Intracorporeal Scenes[J]. arXiv preprint arXiv:2010.09409, 2020.
- SD-DefSLAM:适用于可变形和体内场景的半直接法单目 SLAM
- 萨拉戈萨大学;ICRA 2021 投稿论文;Video
- [5] Millane A, Oleynikova H, Lanegger C, et al. Freetures: Localization in Signed Distance Function Maps[J]. arXiv preprint arXiv:2010.09378, 2020.
- [6] Long R, Rauch C, Zhang T, et al. RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects[J]. arXiv preprint arXiv:2010.10841, 2020.
- RigidFusion: 在具有动态刚体物体的大型场景中进行机器人定位与建图
- 爱丁堡大学机器人中心
- [8] Han J, Dong R, Kan J. A novel loop closure detection method with the combination of points and lines based on information entropy[J]. Journal of Field Robotics. 2020
- 一种新的基于信息熵的点线闭环检测方法
- 北京林业大学;期刊:中科院二区,JCR Q1,IF 3.58
- [9] Bescos B, Campos C, Tardós J D, et al. DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM[J]. arXiv preprint arXiv:2010.07820, 2020.
- DynaSLAM II: 多目标跟踪与 SLAM 紧耦合
- 萨拉戈萨大学;一作是 DynaSLAM 的作者,二作是 ORB-SLAM3 的作者
- [10] Bescos B, Cadena C, Neira J. Empty Cities: a Dynamic-Object-Invariant Space for Visual SLAM[J]. arXiv preprint arXiv:2010.07646, 2020.
- [11] Ballester I, Fontan A, Civera J, et al. DOT: Dynamic Object Tracking for Visual SLAM[J]. arXiv preprint arXiv:2010.00052, 2020.
- 视觉 SLAM 的动态物体跟踪
- 萨拉戈萨大学
- [12] Wu S C, Tateno K, Navab N, et al. SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion[J]. arXiv preprint arXiv:2010.13662, 2020.
- SCFusion:具有语义补全的实时增量式场景重建
- TUM
- [13] Mallick A, Stückler J, Lensch H. Learning to Adapt Multi-View Stereo by Self-Supervision[J]. arXiv preprint arXiv:2009.13278, 2020.
- 通过自监督学习的自适应多视图立体匹配
- 图宾根大学,马普所 Jörg Stückler,BMVC 2020
- 基于 ECCV 2018 MVSNet: Depth Inference for Unstructured Multi-view Stereo,代码
- [14] Li X, Li Y, Ornek E P, et al. Co-Planar Parametrization for Stereo-SLAM and Visual-Inertial Odometry[J]. IEEE Robotics and Automation Letters, 2020.
- 双目 SLAM 和 VIO 的共面参数化
- 北京大学,代码开源(暂未放出)
- [15] Liu Z, Zhang F. BALM: Bundle Adjustment for Lidar Mapping[J]. arXiv preprint arXiv:2010.08215, 2020.
- BALM:激光雷达建图中的 BA 优化
- 香港大学,代码开源
- [16] Nguyen T M, Yuan S, Cao M, et al. VIRAL-Fusion: A Visual-Inertial-Ranging-Lidar Sensor Fusion Approach[J]. arXiv preprint arXiv:2010.12274, 2020.
- VIRAL-Fusion: 视觉-惯性-测距-激光雷达传感器融合方法
- 南洋理工
- [17] Liu J, Gao W, Hu Z. Optimization-Based Visual-Inertial SLAM Tightly Coupled with Raw GNSS Measurements[J]. arXiv preprint arXiv:2010.11675, 2020.
- 基于优化的视觉惯性 SLAM 与原始 GNSS 测量紧耦合
- 中科院自动化所;ICRA 2021 投稿论文
- [18] Taubner F, Tschopp F, Novkovic T, et al. LCD--Line Clustering and Description for Place Recognition[J]. arXiv preprint arXiv:2010.10867, 2020. (3DV 2020)
- LCD: 用于位置识别的线段聚类和描述
- ETH;代码开源
- [19] Triebel R. 3D Scene Reconstruction from a Single Viewport. ECCV 2020
- 单视角进行三维场景重建
- TUM;代码开源
- [20] Hidalgo-Carrió J, Gehrig D, Scaramuzza D. Learning Monocular Dense Depth from Events[J]. arXiv preprint arXiv:2010.08350, 2020.(3DV 2020)
- [21] Yang B. Learning to reconstruct and segment 3D objects[J]. arXiv preprint arXiv:2010.09582, 2020.
- 学习重建和分割 3D 物体
- 牛津大学 Bo Yang 博士学位论文
- [22] von Stumberg L, Wenzel P, Yang N, et al. LM-Reloc: Levenberg-Marquardt Based Direct Visual Relocalization[J]. arXiv preprint arXiv:2010.06323, 2020.
- LM-Reloc:基于 Levenberg-Marquardt 的直接法视觉重定位
- TUM
- 相关论文:von Stumberg L, Wenzel P, Khan Q, et al. Gn-net: The gauss-newton loss for multi-weather relocalization[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 890-897.
本期更新于 2020 年 9 月 28 日
共 20 篇论文,其中 6 项(待)开源工作
4-5:机器人自主探索
8-11:多路标 SLAM
13: Jan Czarnowski 博士学位论文
17-20:增强现实相关的几项很好玩的工作
- [1] Zhao Y, Smith J S, Vela P A. Good graph to optimize: Cost-effective, budget-aware bundle adjustment in visual SLAM[J]. arXiv preprint arXiv:2008.10123, 2020.
- Good Graph to Optimize: 视觉 SLAM 中具有成本效益、可感知预算的 BA
- 佐治亚理工学院 Yipu Zhao
- 作者有很多 Good 系列的文章
- IROS 2018 Good feature selection for least squares pose optimization in VO/VSLAM
- ECCV 2018 Good line cutting: Towards accurate pose tracking of line-assisted VO/VSLAM
- T-RO 2020 Good Feature Matching: Towards Accurate, Robust VO/VSLAM with Low Latency
- [2] Fu Q, Yu H, Wang X, et al. FastORB-SLAM: a Fast ORB-SLAM Method with Coarse-to-Fine Descriptor Independent Keypoint Matching[J]. arXiv preprint arXiv:2008.09870, 2020.
- FastORB-SLAM:一种采用由粗到精、不依赖描述子的关键点匹配的快速 ORB-SLAM 方法
- 湖南大学
- [3] Wenzel P, Wang R, Yang N, et al. 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving[J]. arXiv preprint arXiv:2009.06364, 2020.
- 4Seasons:自动驾驶中多天气 SLAM 的跨季节数据集
- 慕尼黑工业大学 Nan Yang
- 数据集网页:http://www.4seasons-dataset.com/
- [4] Duong T, Yip M, Atanasov N. Autonomous Navigation in Unknown Environments with Sparse Bayesian Kernel-based Occupancy Mapping[J]. arXiv preprint arXiv:2009.07207, 2020.
- 基于稀疏贝叶斯核占有地图的未知环境自主导航
- UCSD Nikolay A. Atanasov
- 项目主页 | 代码开源
- [5] Bartolomei L, Karrer M, Chli M. Multi-robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020)(virtual). 2020.
- [6] Kern A, Bobbe M, Khedar Y, et al. OpenREALM: Real-time Mapping for Unmanned Aerial Vehicles[J]. arXiv preprint arXiv:2009.10492, 2020.
- [7] Du Z J, Huang S S, Mu T J, et al. Accurate RGB-D SLAM in Dynamic Environment using Observationally Consistent Conditional Random Fields. 2020
- 动态环境中使用观察一致 CRF 的精确 RGB-D SLAM
- 清华大学
- [8] Holynski A, Geraghty D, Frahm J M, et al. Reducing Drift in Structure from Motion using Extended Features[J]. arXiv preprint arXiv:2008.12295, 2020.
- 使用扩展特征减小 SfM 中的漂移
- 华盛顿大学,Facebook
- [9] Fu Q, Wang J, Yu H, et al. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line[J]. arXiv preprint arXiv:2009.07462, 2020.
- PL-VINS: 实时点线单目视觉惯性 SLAM
- 湖南大学 | 代码开源
- [10] Company-Corcoles J P, Garcia-Fidalgo E, Ortiz A. LiPo-LCD: Combining Lines and Points for Appearance-based Loop Closure Detection[J]. arXiv preprint arXiv:2009.09897, 2020.(BMVC 2020)
- LiPo-LCD:结合点和线的基于外观的闭环检测
- 巴利阿里群岛大学
- [11] Wang Q, Yan Z, Wang J, et al. Line Flow based SLAM[J]. arXiv preprint arXiv:2009.09972, 2020.
- 基于线流的 SLAM
- 北大
- [12] Badias A, Alfaro I, Gonzalez D, et al. MORPH-DSLAM: Model Order Reduction for PHysics-based Deformable SLAM[J]. arXiv preprint arXiv:2009.00576, 2020.
- MORPH-DSLAM:基于物理的可变形 SLAM 的模型降阶
- 萨拉戈萨大学
- [13] Czarnowski J. Learned representations for real-time monocular SLAM[J]. 2020.
- 实时单目 SLAM 的学习表示
- 帝国理工学院 Jan Czarnowski 博士学位论文 | 导师 Andrew Davison
- 代表作
- [14] Li J, Pei L, Zou D, et al. Attention-SLAM: A Visual Monocular SLAM Learning from Human Gaze[J]. arXiv preprint arXiv:2009.06886, 2020.
- Attention-SLAM:从人类视线中学习的单目视觉 SLAM
- 上海交通大学
- [15] Cremona J, Uzal L, Pire T. WGANVO: Monocular Visual Odometry based on Generative Adversarial Networks[J]. arXiv preprint arXiv:2007.13704, 2020.
- [16] Labbé Y, Carpentier J, Aubry M, et al. CosyPose: Consistent multi-view multi-object 6D pose estimation[J]. arXiv preprint arXiv:2008.08465, 2020.(ECCV 2020)
- CosyPose:一致的多视图多物体 6D 位姿估计
- Object SLAM @ 物体位姿估计
- [17] Yang X, Zhou L, Jiang H, et al. Mobile3DRecon: Real-time Monocular 3D Reconstruction on a Mobile Phone[J]. IEEE Transactions on Visualization and Computer Graphics, 2020.
- Mobile3DRecon:手机上的实时单目三维重建
- 商汤、浙大
- [18] Ungureanu D, Bogo F, Galliani S, et al. HoloLens 2 Research Mode as a Tool for Computer Vision Research[J]. arXiv preprint arXiv:2008.11239, 2020.
- HoloLens 2 研究模式作为计算机视觉研究的工具
- 三星 AI 中心,微软
- [19] Mori S, Erat O, Broll W, et al. InpaintFusion: Incremental RGB-D Inpainting for 3D Scenes[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(10): 2994-3007.
- InpaintFusion:3D 场景的增量式 RGB-D 修复
- 格拉茨工业大学 期刊:中科院二区,JCR Q1, IF 4.558
- [20] AAR: Augmenting a Wearable Augmented Reality Display with an Actuated Head-Mounted Projector. 2020
- 使用可驱动的头戴式投影仪增强可穿戴的增强现实显示
- 滑铁卢大学
- 在 AR 眼镜上再装个投影仪。。。。会玩
本期更新于 2020 年 8 月 27 日
共 30 篇论文,其中 11 项(待)开源工作
这个月公开的论文比较多,且有意思、高质量的工作也不少,多来自于 IROS、RAL(大部分也同步发表于 IROS),比如融合视觉、惯导、LiDAR 的 LIC-Fusion 2.0 和融合物体语义的视惯里程计 OrcVIO,其他:
4-6、15:多机/多地图
8-13:结构化/室内 SLAM
- [1] Geppert M, Larsson V, Speciale P, et al. Privacy Preserving Structure-from-Motion[J]. 2020.
- 具有隐私保护的 SFM
- 苏黎世联邦理工
- 相关论文:
- CVPR 2019 Privacy Preserving Image-Based Localization
- ECCV 2020 Privacy Preserving Visual SLAM
- [2] Zhang Z, Scaramuzza D. Fisher Information Field: an Efficient and Differentiable Map for Perception-aware Planning[J]. arXiv preprint arXiv:2008.03324, 2020.
- Fisher 信息场:一种用于感知规划的高效且可微分的地图
- 苏黎世大学张子潮 | 代码开源
- 相关工作:ICRA 2019 Beyond Point Clouds: Fisher Information Field for Active Visual Localization
- [3] Zhou Y, Gallego G, Shen S. Event-based Stereo Visual Odometry[J]. arXiv preprint arXiv:2007.15548, 2020.
- [4] Yue Y, Zhao C, Wu Z, et al. Collaborative Semantic Understanding and Mapping Framework for Autonomous Systems[J]. IEEE/ASME Transactions on Mechatronics, 2020.
- 自主系统的协作式语义理解和建图框架
- 南洋理工大学 | 期刊:中科院二区,JCR Q1,IF 5.6
- [5] Do H, Hong S, Kim J. Robust Loop Closure Method for Multi-Robot Map Fusion by Integration of Consistency and Data Similarity[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 5701-5708.
- 融合一致性和数据相似性的多机器人地图融合的闭环方法
- 韩国启明大学 | Google Scholar
- [6] Zhan Z, Jian W, Li Y, et al. A SLAM Map Restoration Algorithm Based on Submaps and an Undirected Connected Graph[J]. arXiv preprint arXiv:2007.14592, 2020.
- 基于子图和无向连通图的 SLAM 地图恢复算法
- 武汉大学
- [7] Chen H, Zhang G, Ye Y. Semantic Loop Closure Detection with Instance-Level Inconsistency Removal in Dynamic Industrial Scenes[J]. IEEE Transactions on Industrial Informatics, 2020.
- 动态工业场景中具有实例级不一致消除功能的语义闭环检测
- 厦门大学 | 期刊:中科院一区,JCR Q1,IF 9.1
- [8] Li Y, Brasch N, Wang Y, et al. Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 6583-6590.
- Structure-SLAM:室内环境中的低漂移单目 SLAM
- TUM | 代码开源
- [9] Liu J, Meng Z. Visual SLAM with Drift-Free Rotation Estimation in Manhattan World[J]. IEEE Robotics and Automation Letters, 2020.
- 曼哈顿世界中无漂移旋转估计的视觉 SLAM
- 港中文刘云辉教授组 Haoang Li
- [10] Hou J, Yu L, Fei S. A highly robust automatic 3D reconstruction system based on integrated optimization by point line features[J]. Engineering Applications of Artificial Intelligence, 2020, 95: 103879.
- 基于点线联合优化的自动三维重建
- 苏州大学、东南大学 | 期刊:中科院二区,JCR Q1,IF 4.2
- [11] Li H, Kim P, Zhao J, et al. Globally Optimal and Efficient Vanishing Point Estimation in Atlanta World[J]. 2020.
- 亚特兰大世界中全局最优且高效的消失点估计
- 港中文,西蒙弗雷泽大学
- [12] Wang X, Christie M, Marchand E. Relative Pose Estimation and Planar Reconstruction via Superpixel-Driven Multiple Homographies[C]//IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'20. 2020.
- 通过超像素驱动的多个单应性的相对姿势估计和平面重建
- 雷恩大学
- [13] Zuñiga-Noël D, Jaenal A, Gomez-Ojeda R, et al. The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments[J]. The International Journal of Robotics Research, 2020: 0278364920938439.
- [14] Zuo X, Yang Y, Geneva P, et al. LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking[J]. arXiv preprint arXiv:2008.07196, 2020.
- LIC-Fusion 2.0:滑动窗口平面特征跟踪的 LiDAR-惯性-视觉里程计
- 浙大、ETH Google Scholar
- 相关论文:LIC-Fusion: LiDAR-Inertial-Camera Odometry
- [15] Alliez P, Bonardi F, Bouchafa S, et al. Real-Time Multi-SLAM System for Agent Localization and 3D Mapping in Dynamic Scenarios[C]//International Confererence on Intelligent Robots and Systems (IROS 2020). 2020.
- 实时 Multi-SLAM 系统,用于动态场景中智能体定位和 3D 建图
- 法国国家信息与自动化研究所
- [16] Shao X, Zhang L, Zhang T, et al. A Tightly-coupled Semantic SLAM System with Visual, Inertial and Surround-view Sensors for Autonomous Indoor Parking[J].2020.
- 具有视觉、惯性和全景传感器的紧密耦合语义 SLAM 系统,用于自主室内停车
- 同济大学
- [17] Shan M, Feng Q, Atanasov N. OrcVIO: Object residual constrained Visual-Inertial Odometry[J]. arXiv preprint arXiv:2007.15107, 2020.(IROS 2020)
- 物体残差约束的 VIO
- 加州大学圣地亚哥分校 | Nikolay A. Atanasov | Mo Shan
- [18] Seok H, Lim J. ROVINS: Robust Omnidirectional Visual Inertial Navigation System[J]. IEEE Robotics and Automation Letters, 2020.
- 鲁棒的全方位视觉惯性导航系统
- 韩国汉阳大学
- [19] Liu W, Caruso D, Ilg E, et al. TLIO: Tight Learned Inertial Odometry[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 5653-5660.
- [20] Sartipi K, Do T, Ke T, et al. Deep Depth Estimation from Visual-Inertial SLAM[J]. arXiv preprint arXiv:2008.00092, 2020.
- 视觉惯性 SLAM 深度估计
- 明尼苏达大学 | 代码开源
- [21] Gomez C, Hernandez A C, Derner E, et al. Object-Based Pose Graph for Dynamic Indoor Environments[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 5401-5408.
- 动态室内环境中基于物体的位姿图
- 马德里卡洛斯三世大学 | Google Scholar(有很多基于物体的机器人导航的工作)
- [22] Wang H, Wang C, Xie L. Online Visual Place Recognition via Saliency Re-identification[J]. arXiv preprint arXiv:2007.14549, 2020.(IROS 2020)
- 通过显著性重识别进行视觉场景重识别
- CMU | 代码开源
- [23] Jau Y Y, Zhu R, Su H, et al. Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints[J]. arXiv preprint arXiv:2007.15122, 2020.(IROS 2020)
- 具有几何约束的基于深度关键点的相机姿势估计
- 加州大学圣地亚哥分校 | 代码开源
- [24] Gong X, Liu Y, Wu Q, et al. An Accurate, Robust Visual Odometry and Detail-preserving Reconstruction System[J]. 2020.
- 准确、鲁棒的视觉里程计和保留细节的重建系统
- 南京航空航天大学 | 代码开源(暂未公开)
- [25] Wei P, Hua G, Huang W, et al. Unsupervised Monocular Visual-inertial Odometry Network[J].IJCAI 2020
- 无监督单目视惯里程计网络
- 北大 | 代码开源 (暂未公开)
- [26] Li D, Shi X, Long Q, et al. DXSLAM: A Robust and Efficient Visual SLAM System with Deep Features[J]. arXiv preprint arXiv:2008.05416, 2020.(IROS 2020)
- DXSLAM:基于深度特征的鲁棒高效视觉 SLAM 系统
- 清华大学 | 代码开源
- [27] Tahara T, Seno T, Narita G, et al. Retargetable AR: Context-aware Augmented Reality in Indoor Scenes based on 3D Scene Graph[J]. arXiv preprint arXiv:2008.07817, 2020.
- 可重定位的 AR:基于室内 3D 场景图中的情景感知增强现实
- 索尼
- [28] Li X, Tian Y, Zhang F, et al. Object Detection in the Context of Mobile Augmented Reality[J]. arXiv preprint arXiv:2008.06655, 2020.(ISMAR 2020)
- 移动增强现实环境下的目标检测
- OPPO
- [29] Liu C, Shen S. An Augmented Reality Interaction Interface for Autonomous Drone[J]. arXiv preprint arXiv:2008.02234, 2020.(IROS 2020)
- 自主无人机增强现实交互界面
- 港科沈劭劼组
- [30] Du R, Turner E L, Dzitsiuk M, et al. DepthLab: Real-time 3D Interaction with Depth Maps for Mobile Augmented Reality[J]. 2020.
- DepthLab:移动增强现实中基于深度图的实时 3D 交互
本期更新于 2020 年 7 月 27 日
共 20 篇论文,其中 8 项(待)开源工作
本月月初 ECCV,IROS 放榜,不少新论文出现
2 隐私保护的视觉 SLAM,11 秦通大佬的 AVP-SLAM
月底 ORB-SLAM3 又制造了大新闻,谷歌学术都没来得及收录,国内公众号都出解析了
- [1] Carlos Campos, Richard Elvira, et al.ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM[J]. arXiv preprint arXiv:2007.11898, 2020.
- [2] Shibuya M, Sumikura S, Sakurada K. Privacy Preserving Visual SLAM[J]. arXiv preprint arXiv:2007.10361, 2020.(ECCV 2020)
- 保护隐私的视觉 SLAM
- 日本国立先进工业科学技术研究院;东京工业大学
- 作者 Shinya Sumikura 谷歌学术,OpenVSLAM 的作者 | 项目主页 | Video
- 相关论文:CVPR 2019 Privacy preserving image-based localization
- [3] Tompkins A, Senanayake R, Ramos F. Online Domain Adaptation for Occupancy Mapping[J]. arXiv preprint arXiv:2007.00164, 2020.(RSS 2020)
- [4] Li Y, Zhang T, Nakamura Y, et al. SplitFusion: Simultaneous Tracking and Mapping for Non-Rigid Scenes[J]. arXiv preprint arXiv:2007.02108, 2020.(IROS 2020)
- SplitFusion:非刚性场景的 SLAM
- 东京大学
- [5] Dai W, Zhang Y, Li P, Fang Z, Scherer S. RGB-D SLAM in Dynamic Environments Using Points Correlations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020.
- 动态环境中使用点关联的 RGB-D SLAM
- 浙江大学;期刊:PAMI 中科院一区,JCR Q1,IF 17.86
- [6] Huang H, Ye H, Sun Y, et al. GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models[J]. IEEE Robotics and Automation Letters, 2020.
- 高斯混合模型的结构一致视觉定位
- 香港科技大学 | 代码开源
- [7] Zuo X, Ye W, Yang Y, et al. Multimodal localization: Stereo over LiDAR map[J]. Journal of Field Robotics, 2020.
- 多模式定位:在 LiDAR 先验地图中使用双目相机定位
- 浙江大学、特拉华大学 | 作者谷歌学术 | 期刊:中科院二区,JCR Q2,IF 4.19
- [8] Shan T, Englot B, Meyers D, et al. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping[J]. arXiv preprint arXiv:2007.00258, 2020.(IROS 2020)
- [9] Rozenberszki D, Majdik A. LOL: Lidar-Only Odometry and Localization in 3D Point Cloud Maps[J]. arXiv preprint arXiv:2007.01595, 2020.(ICRA 2020)
- [10] You R, Hou H. Real-Time Pose Estimation by Fusing Visual and Inertial Sensors for Autonomous Driving[J]. 2020.
- 通过融合视觉和惯性传感器进行自动驾驶的实时位姿估计
- 瑞典查尔默斯理工大学 硕士学位论文 | 代码开源
- [11] Qin T, Chen T, Chen Y, et al. AVP-SLAM: Semantic Visual Mapping and Localization for Autonomous Vehicles in the Parking Lot[J]. arXiv preprint arXiv:2007.01813, 2020.(IROS 2020)
- AVP-SLAM:停车场中自动驾驶车辆的语义 SLAM
- 华为秦通 | 知乎文章
- [12] Gomez C, Silva A C H, Derner E, et al. Object-Based Pose Graph for Dynamic Indoor Environments[J]. IEEE Robotics and Automation Letters, 2020.
- 动态室内环境中基于物体的位姿图
- 西班牙马德里卡洛斯三世大学
- [13] Costante G, Mancini M. Uncertainty Estimation for Data-Driven Visual Odometry[J]. IEEE Transactions on Robotics, 2020.
- 数据驱动视觉里程计的不确定性估计
- 意大利佩鲁贾大学 | 代码开源(还未放出) | 期刊:中科院二区,JCR Q1,IF 7.0
- [14] Min Z, Yang Y, Dunn E. VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR 2020: 4898-4909.
- 对数逻辑密集光流残差的视觉里程计
- 史蒂文斯技术学院 | 代码开源(还未放出)
- [15] Zou Y, Ji P, Tran Q H, et al. Learning Monocular Visual Odometry via Self-Supervised Long-Term Modeling[J]. arXiv preprint arXiv:2007.10983, 2020.(ECCV 2020)
- 自监督长期建模学习单目视觉里程计
- 弗吉尼亚理工大学 | 项目主页 | 代码 Coming soon
- [16] Wei P, Hua G, Huang W, et al. Unsupervised Monocular Visual-inertial Odometry Network[J].2020
- 无监督单目视惯里程计
- 北京大学 | 代码开源(还未放出)
- [17] Chen Y, Zhang B, Zhou J, et al. Real-time 3D unstructured environment reconstruction utilizing VR and Kinect-based immersive teleoperation for agricultural field robots[J]. Computers and Electronics in Agriculture, 2020, 175: 105579.
- 利用 VR 和基于 Kinect 的沉浸式远程操作技术对农业机器人进行实时 3D 非结构化环境重构
- 南京农业大学 | 期刊:中科院二区,JCR Q1,IF 4.0
- [18] Choi J, Son M G, Lee Y Y, et al. Position-based augmented reality platform for aiding construction and inspection of offshore plants[J]. The Visual Computer, 2020: 1-11.
- 基于位置的增强现实平台,用于辅助海上工厂的建设和检查
- 韩国光州科学技术院 | 期刊:中科院四区,JCR Q3
- [19] Brazil G, Pons-Moll G, Liu X, et al. Kinematic 3D Object Detection in Monocular Video[J]. arXiv preprint arXiv:2007.09548, 2020.(ECCV 2020)
- [20] Yu Z, Jin L, Gao S. P2Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation[J]. arXiv preprint arXiv:2007.07696, 2020.(ECCV 2020)
- 无监督室内深度估计的块匹配和平面正则化
- 上海科技大学 | 代码开源
本期更新于 2020 年 6 月 27 日
共 20 篇论文,其中 3 项开源工作
4,5,6,12,13 线、边、平面、物体多路标 SLAM
2,3 多机器人 SLAM
7,16 拓扑相关
11:深度学习用于定位和建图的调研
- [1] Zhang T, Zhang H, Li Y, et al. FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow[C]. ICRA 2020.
- FlowFusion:基于光流的动态稠密 RGB-D SLAM
- 东京大学;作者谷歌学术
- [2] Lajoie P Y, Ramtoula B, Chang Y, et al. DOOR-SLAM: Distributed, online, and outlier resilient SLAM for robotic teams[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 1656-1663.
- DOOR-SLAM:适用于机器人团队的分布式、在线、对外点鲁棒的 SLAM
- 加拿大蒙特利尔理工学院;代码开源
- [3] Chakraborty K, Deegan M, Kulkarni P, et al. JORB-SLAM: A Jointly optimized Multi-Robot Visual SLAM[J].
- 多机器人 SLAM 联合优化
- 密歇根大学机器人研究所
- [4] Zhang H, Ye C. Plane-Aided Visual-Inertial Odometry for 6-DOF Pose Estimation of a Robotic Navigation Aid[J]. IEEE Access, 2020, 8: 90042-90051.
- 用于机器人导航 6 自由度位姿估计的平面辅助 VIO
- 弗吉尼亚联邦大学;开源期刊;谷歌学术
- [5] Ali A J B, Hashemifar Z S, Dantu K. Edge-SLAM: edge-assisted visual simultaneous localization and mapping[C]//Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services. 2020: 325-337.
- Edge-SLAM:边缘计算辅助的视觉 SLAM
- 布法罗大学
- [6] Mateus A, Ramalingam S, Miraldo P. Minimal Solvers for 3D Scan Alignment With Pairs of Intersecting Lines[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 7234-7244.
- 利用相交线对进行 3D 扫描对齐的最小求解器
- 葡萄牙里斯本大学,谷歌
- [7] Xue W, Ying R, Gong Z, et al. SLAM Based Topological Mapping and Navigation[C]//2020 IEEE/ION Position, Location and Navigation Symposium (PLANS). IEEE, 2020: 1336-1341.
- 基于 SLAM 的拓扑建图与导航
- 上交
- [8] Lee W, Eckenhoff K, Geneva P, et al. Intermittent GPS-aided VIO: Online Initialization and Calibration[J].2020
- 间歇性 GPS 辅助 VIO:在线初始化和校准
- 特拉华大学,黄国权教授
- [9] Alliez P, Bonardi F, Bouchafa S, et al. Indoor Localization and Mapping: Towards Tracking Resilience Through a Multi-SLAM Approach[C]//28th Mediterranean Conference on Control and Automation (MED 2020). 2020.
- 室内定位和制图:通过多传感器 SLAM 方法实现弹性跟踪
- [10] Nam D V, Gon-Woo K. Robust Stereo Visual Inertial Navigation System Based on Multi-Stage Outlier Removal in Dynamic Environments[J]. Sensors, 2020, 20(10): 2922.
- 动态环境中基于多阶段离群值剔除的鲁棒双目视觉惯性导航系统
- 韩国忠北国立大学,开源期刊,作者主页
- [11] Chen C, Wang B, Lu C X, et al. A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence[J]. arXiv preprint arXiv:2006.12567, 2020.
- 深度学习用于定位和建图的调研:走向空间机器智能时代
- 牛津大学;所有涉及到的论文的列表:Github
- [12] Li J, Koreitem K, Meger D, et al. View-Invariant Loop Closure with Oriented Semantic Landmarks[C]. ICRA 2020.
- 面向语义路标的视图不变闭环
- 麦吉尔大学;谷歌学术
- [13] Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems[J]. IEEE Access, 2020.
- VPS-SLAM:航空机器人的视觉平面语义 SLAM
- 马德里理工大学自动化与机器人研究中心,MIT 航空航天控制实验室
- 代码开源
- [14] Shi T, Cui H, Song Z, et al. Dense Semantic 3D Map Based Long-Term Visual Localization with Hybrid Features[J]. arXiv preprint arXiv:2005.10766, 2020.
- 使用混合特征的基于密集 3D 语义地图的长距离视觉定位
- 中科院自动化所
- [15] Metrically-Scaled Monocular SLAM using Learned Scale Factors [C]. ICRA 2020 Best Paper Award in Robot Vision
- 通过学习尺度因子的单目度量 SLAM
- MIT;作者主页
- [16] Chaplot D S, Salakhutdinov R, Gupta A, et al. Neural Topological SLAM for Visual Navigation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR 2020: 12875-12884.
- 用于视觉导航的神经拓扑 SLAM
- CMU;项目主页
- [17] Min Z, Yang Y, Dunn E. VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR 2020: 4898-4909.
- 基于对数逻辑稠密光流残差的视觉里程计
- 史蒂文斯理工学院;代码开源(还未放出)
- [18] Loo S Y, Mashohor S, Tang S H, et al. DeepRelativeFusion: Dense Monocular SLAM using Single-Image Relative Depth Prediction[J]. arXiv preprint arXiv:2006.04047, 2020.
- DeepRelativeFusion:使用单张图像相对深度预测的单目稠密 SLAM
- 哥伦比亚大学
- [19] Choudhary S, Sekhar N, Mahendran S, et al. Multi-user, Scalable 3D Object Detection in AR Cloud[C]. CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Seattle, WA, 2020.
- AR 云进行多用户可扩展的 3D 目标检测
- Magic Leap ;项目主页
- [20] Tang F, Wu Y, Hou X, et al. 3D Mapping and 6D Pose Computation for Real Time Augmented Reality on Cylindrical Objects[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2019.
- 圆柱物体上实时增强现实的 3D 建图和 6D 位姿计算
- 中科院自动化所
本期更新于 2020 年 5 月 23 日
共 20 篇论文,其中 5 项开源工作
最近不知道是不是受疫情影响,论文好像有点少了
Voxgraph:苏黎世理工开源的实时体素建图
Neural-SLAM:CMU 开源的主动神经 SLAM
- [1] Wang W, Zhu D, Wang X, et al. TartanAir: A Dataset to Push the Limits of Visual SLAM[J]. arXiv preprint arXiv:2003.14338, 2020.
- TartanAir:突破视觉 SLAM 极限的数据集
- CMU,港中文;数据集公开:http://theairlab.org/tartanair-dataset/
- 朱德龙师兄参与的一项工作,上个月推荐过了,这个月刚完善网站再推荐一遍,并在 CVPR 2020 组织了 workshop
- [2] Reijgwart V, Millane A, Oleynikova H, et al. Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps[J]. IEEE Robotics and Automation Letters, 2019, 5(1): 227-234.
- 使用 SDF 子图的全局一致体素建图
- 苏黎世联邦理工;代码开源
- [3] Fontán A, Civera J, Triebel R. Information-Driven Direct RGB-D Odometry[J].2020.
- 信息驱动的直接法 RGB-D SLAM
- 萨拉戈萨大学, TUM
- [4] Murai R, Saeedi S, Kelly P H J. BIT-VO: Visual Odometry at 300 FPS using Binary Features from the Focal Plane[J]. arXiv preprint arXiv:2004.11186, 2020.
- BIT-VO:使用焦平面的二进制特征以 300 FPS 运行的视觉里程计
- 帝国理工;项目主页、演示视频
- [5] Du S, Guo H, Chen Y, et al. GPO: Global Plane Optimization for Fast and Accurate Monocular SLAM Initialization[J]. ICRA 2020.
- 准确快速单目 SLAM 初始化的全局平面优化
- 中科院自动化所,字节跳动
- [6] Li F, Fu C, Gostar A K, et al. Advanced Mapping Using Planar Features Segmented from 3D Point Clouds[C]//2019 International Conference on Control, Automation and Information Sciences (ICCAIS). IEEE, 2019: 1-6.
- 利用 3D 点云分割的平面进行建图
- 重庆大学
- [7] Zou Y, Chen L, Jiang J. Lightweight Indoor Modeling Based on Vertical Planes and Lines[C]//2020 11th International Conference on Information and Communication Systems (ICICS). IEEE, 2020: 136-142.
- 基于垂直平面和线段的室内轻量化建图
- 国防科大;ICICS:CCF C 类会议
- [8] Nobis F, Papanikolaou O, Betz J, et al. Persistent Map Saving for Visual Localization for Autonomous Vehicles: An ORB-SLAM Extension[J]. arXiv preprint arXiv:2005.07429, 2020.
- ORB-SLAM 的扩展应用:持久保存地图用于自动驾驶车辆的视觉定位
- TUM 汽车技术研究所;代码开源
- [9] Li X, He Y, Lin J, et al. Leveraging Planar Regularities for Point Line Visual-Inertial Odometry[J]. arXiv preprint arXiv:2004.11969, 2020.
- 利用平面规律的点线 VIO
- 北京大学;IROS 2020 投稿论文
- [10] Liu J, Gao W, Hu Z. Bidirectional Trajectory Computation for Odometer-Aided Visual-Inertial SLAM[J]. arXiv preprint arXiv:2002.00195, 2020.
- 里程计辅助视惯 SLAM 的双向轨迹计算
- 中科院自动化所;解决 SLAM 在转弯之后容易退化的问题
- [11] Liu R, Marakkalage S H, Padmal M, et al. Collaborative SLAM based on Wifi Fingerprint Similarity and Motion Information[J]. IEEE Internet of Things Journal, 2019.
- 基于 Wifi 指纹相似度和运动信息的协作式 SLAM
- 新加坡科技设计大学;期刊:中科院一区,JCR Q1,IF 11.2
- [12] Jung J H, Heo S, Park C G. Observability Analysis of IMU Intrinsic Parameters in Stereo Visual-Inertial Odometry[J]. IEEE Transactions on Instrumentation and Measurement, 2020.
- 双目视觉惯性里程计中 IMU 内参的可观测性分析
- 韩国首尔大学;期刊:中科院三区,JCR Q2,IF 3.0
- [13] Wu Y, Zhang Y, Zhu D, et al. EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association[J]. arXiv preprint arXiv:2004.12730, 2020.
- [14] Vasilopoulos V, Pavlakos G, Schmeckpeper K, et al. Reactive Navigation in Partially Familiar Planar Environments Using Semantic Perceptual Feedback[J]. arXiv preprint arXiv:2002.08946, 2020.
- 使用语义感知反馈的部分熟悉平面环境中的反应性导航
- 宾夕法尼亚大学
- [15] Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural slam[C]. ICLR 2020.
- [16] Li S, Wang X, Cao Y, et al. Self-Supervised Deep Visual Odometry with Online Adaptation[C]. CVPR. 2020.
- 在线自适应的自监督视觉里程计
- 北京大学
- [17] Li W, Gu J, Chen B, et al. Incremental Instance-Oriented 3D Semantic Mapping via RGB-D Cameras for Unknown Indoor Scene[J]. Discrete Dynamics in Nature and Society, 2020, 2020.
- RGB-D 相机室内增量式三维实例语义建图
- 河北工业大学;期刊:中科院三区,JCR Q3Q4 开源期刊
- [18] Tiwari L, Ji P, Tran Q H, et al. Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction[J]. arXiv preprint arXiv:2004.10681, 2020.
- 伪 RGB-D 用于改善单目 SLAM 和深度预测(单目 SLAM + 单目深度估计)
- 印度德里 Indraprastha 信息技术学院(IIIT-Delhi)
- [19] Wald J, Dhamo H, Navab N, et al. Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions[C]. CVPR 2020.
- [20] Sommer C, Sun Y, Bylow E, et al. PrimiTect: Fast Continuous Hough Voting for Primitive Detection[C]. ICRA 2020.
- 用于基元检测的快速连续霍夫投票
- TUM
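[20] 的连续霍夫投票是在经典离散霍夫投票基础上的改进。离散版本的原理可以用最简单的 2D 直线霍夫变换来说明,以下是一个与论文实现无关、仅作原理演示的 Python 小示例(参数离散化方式为假设):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=2.0):
    """在离散化的 (theta, rho) 参数空间中投票,返回得票最高的直线参数。"""
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        # 过点 (x, y) 的所有直线满足法线式 rho = x*cos(theta) + y*sin(theta)
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.clip(((rhos + rho_max) / (2 * rho_max) * n_rho).astype(int),
                       0, n_rho - 1)
        acc[np.arange(n_theta), bins] += 1
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], (r + 0.5) / n_rho * 2 * rho_max - rho_max

# 合成数据:水平线 y = 1.01 上的点(theta ≈ pi/2,rho ≈ 1.01)
pts = [(x, 1.01) for x in np.linspace(-1.0, 1.0, 50)]
theta, rho = hough_lines(pts)
```

论文中的"连续"投票避免了上面这种固定网格离散化带来的量化误差,此处仅演示投票-取峰值的基本思路。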
本期更新于 2020 年 4 月 25 日
共 22 篇论文,其中 7 项开源工作, 1 项公开数据集;
2、8、12 跟线段有关
9、10 VIO 相关
TartanAir 突破视觉 SLAM 极限的数据集,投稿于 IROS 2020
VPS-SLAM 平面语义 SLAM 比较有意思,代码开源
- [1] Wang W, Zhu D, Wang X, et al. TartanAir: A Dataset to Push the Limits of Visual SLAM[J]. arXiv preprint arXiv:2003.14338, 2020.
- TartanAir:突破视觉 SLAM 极限的数据集
- CMU,港中文;数据集公开:http://theairlab.org/tartanair-dataset/
- 朱德龙师兄的工作,置顶推荐一下
- [2] Gomez-Ojeda R. Robust Visual SLAM in Challenging Environments with Low-texture and Dynamic Illumination[J]. 2020.
- 低纹理和动态光照挑战环境下的鲁棒视觉 SLAM
- 西班牙马拉加大学,点线 SLAM 作者的博士学位论文
- [3] Yang S, Li B, Cao Y P, et al. Noise-resilient reconstruction of panoramas and 3D scenes using robot-mounted unsynchronized commodity RGB-D cameras[J]. ACM Transactions on Graphics, 2020.
- 使用安装在机器人上的未同步消费级 RGB-D 相机对全景图和三维场景进行抗噪声重建
- 清华大学胡事民教授,期刊:中科院二区,JCR Q1,IF 7.176
- [4] Huang J, Yang S, Mu T J, et al. ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings[J]. arXiv preprint arXiv:2003.12980, 2020.
- ClusterVO:对运动实例进行聚类,并同时估计自身与周围物体运动的视觉里程计
- 清华大学胡事民教授;演示视频
- Huang J, Yang S, Zhao Z, et al. ClusterSLAM: A SLAM Backend for Simultaneous Rigid Body Clustering and Motion Estimation[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 5875-5884.
- [5] Quenzel J, Rosu R A, Läbe T, et al. Beyond Photometric Consistency: Gradient-based Dissimilarity for Improving Visual Odometry and Stereo Matching[C]. International Conference on Robotics and Automation (ICRA), 2020.
- 超越光度一致性:用于改善视觉里程计和立体匹配的基于梯度的差异
- 波恩大学智能自主实验室
- [6] Yang Y, Tang D, Wang D, et al. Multi-camera visual SLAM for off-road navigation[J]. Robotics and Autonomous Systems, 2020: 103505.
- 用于越野导航的多相机 SLAM
- 北理工自动化学院
- [7] Cheng W. Methods for large-scale image-based localization using structure-from-motion point clouds[J]. 2020.
- 利用 SFM 点云在大规模环境下的基于图像的定位
- 南洋理工大学博士学位论文;相关代码
- [8] Sun T, Song D, Yeung D Y, et al. Semi-semantic Line-Cluster Assisted Monocular SLAM for Indoor Environments[C]//International Conference on Computer Vision Systems. Springer, Cham, 2019: 63-74.
- 室内环境中半语义线段簇辅助单目 SLAM
- 香港科技大学机器人与多感知实验室 RAM-LAB
- [9] Nagy B, Foehn P, Scaramuzza D. Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO[J]. arXiv preprint arXiv:2003.13493, 2020.
- 比 FAST 更快:用于高速 VIO 的 GPU 加速前端
- 苏黎世大学、苏黎世联邦理工;代码开源
- [10] Li J, Yang B, Huang K, et al. Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors[C]//Chinese Conference on Pattern Recognition and Computer Vision (PRCV). Springer, Cham, 2019: 283-295.
- 具有多平面先验的稳健高效的视觉惯性里程计
- 浙大 CAD&CG 实验室,章国峰;章老师主页上是显示将会开源
- [11] Debeunne C, Vivet D. A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping[J]. Sensors, 2020, 20(7): 2068.
- 视觉-激光 SLAM 综述
- 图卢兹大学;开源期刊,中科院三区,JCR Q2Q3
- [12] Yu H, Zhen W, Yang W, et al. Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences[J]. arXiv preprint arXiv:2004.00740, 2020.
- 在先验雷达地图中通过 2D-3D 线段关联实现单目视觉定位
- CMU,武汉大学;代码开源
- [13] Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems[J]. IEEE Access, 2020.
- [14] Liao Z, Wang W, Qi X, et al. Object-oriented SLAM using Quadrics and Symmetry Properties for Indoor Environments[J]. arXiv preprint arXiv:2004.05303, 2020.
- [15] Ma Q M, Jiang G, Lai D Z. Robust Line Segments Matching via Graph Convolution Networks[J]. arXiv preprint arXiv:2004.04993, 2020.
- 图卷积神经网络线段匹配
- 西安电子科大;代码开源
- [16] Li R, Wang S, Gu D. DeepSLAM: A Robust Monocular SLAM System with Unsupervised Deep Learning[J]. IEEE Transactions on Industrial Electronics, 2020.
- DeepSLAM:无监督深度学习的单目 SLAM
- 中国国家国防科技创新研究院,期刊:中科院一区,JCR Q1,IF 8.4
- ICRA 2018 无监督单目 VO:Undeepvo: Monocular visual odometry through unsupervised deep learning
- Cognitive Computation 2018 SLAM 从几何到深度学习:挑战与机遇:Ongoing evolution of visual SLAM from geometry to deep learning: challenges and opportunities
- [17] Baker L, Ventura J, Zollmann S, et al. SPLAT: Spherical Localization and Tracking in Large Spaces[J].2020.
- SPLAT:大场景中球形定位与跟踪
- 新西兰奥塔哥大学
- [18] Valentini I, Ballestin G, Bassano C, et al. Improving Obstacle Awareness to Enhance Interaction in Virtual Reality[C]. IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 2020.
- 增强障碍意识以提升虚拟现实中的互动
- 意大利热那亚大学;video
- [19] Stylianidis E, Valari E, Pagani A, et al. Augmented Reality Geovisualisation for Underground Utilities[J]. 2020.
- 增强现实地理可视化
- 希腊亚里士多德大学
- [20] Sengupta S, Jayaram V, Curless B, et al. Background Matting: The World is Your Green Screen[J]. arXiv preprint arXiv:2004.00626, 2020.
- 背景抠图
- 华盛顿大学;代码开源
- [21] Wang L, Wei H. Avoiding non-Manhattan obstacles based on projection of spatial corners in indoor environment[J]. IEEE/CAA Journal of Automatica Sinica, 2020.
- 室内环境中基于空间角投影避免非曼哈顿障碍物
- 北大、上海理工、复旦;期刊:自动化学报英文版
- [22] Spencer J, Bowden R, Hadfield S. Same Features, Different Day: Weakly Supervised Feature Learning for Seasonal Invariance[J]. arXiv preprint arXiv:2003.13431, 2020.
- 不同时间的相同特征:季节性不变的弱监督特征学习
- 英国萨里大学;代码开源(还未放出)
本期 23 篇论文,其中 7 项开源工作;
1、2 多相机 SLAM 系统
9、10 VIO
21、22 3D 目标检测
12-19 八篇跟 semantic/deep learning 有关,趋势?
注:没有特意整理 CVPR,ICRA 新的论文,大部分都半年前就有预印版了,在这个仓库里基本上也早收录了
2020 年 3 月 29 日更新
- [1] Kuo J, Muglikar M, Zhang Z, et al. Redesigning SLAM for Arbitrary Multi-Camera Systems[C]. ICRA 2020.
- [2] Won C, Seok H, Cui Z, et al. OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems[J]. arXiv preprint arXiv:2003.08056, 2020.
- OmniSLAM:宽基线和多相机的全向定位和建图
- 韩国汉阳大学计算机科学系
- [3] Colosi M, Aloise I, Guadagnino T, et al. Plug-and-Play SLAM: A Unified SLAM Architecture for Modularity and Ease of Use[J]. arXiv preprint arXiv:2003.00754, 2020.
- 即插即用型 SLAM:模块化且易用的 SLAM 统一框架
- 意大利罗马萨皮恩扎大学;代码开源
- 作者之前一篇类似的文章,教你怎么模块化一个 SLAM 系统:
- Schlegel D, Colosi M, Grisetti G. Proslam: Graph SLAM from a programmer's perspective[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-9.
- 代码开源
- [4] Wu X, Vela P, Pradalier C. Robust Monocular Edge Visual Odometry through Coarse-to-Fine Data Association[J].2020.
- 通过从粗到细的数据关联实现鲁棒的单目基于边的视觉里程计
- 佐治亚理工学院
- [5] Rosinol A, Gupta A, Abate M, et al. 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans[J]. arXiv preprint arXiv:2002.06289, 2020.
- 3D 动态场景图:具有位置,物体和人的可操作空间感知
- MIT;Kimera 的作者;演示视频;Google Scholar
- [6] Zeng T, Li X, Si B. StereoNeuroBayesSLAM: A Neurobiologically Inspired Stereo Visual SLAM System Based on Direct Sparse Method[J]. arXiv preprint arXiv:2003.03091, 2020.
- 类脑双目直接稀疏 SLAM
- 沈自所斯老师
- [7] Oleynikova H, Taylor Z, Siegwart R, et al. Sparse 3d topological graphs for micro-aerial vehicle planning[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1-9.
- 微型飞行器路径规划的稀疏 3D 拓扑图
- 苏黎世联邦理工;作者主页;路径规划与建图部分代码开源,相关论文:
- Oleynikova H, Taylor Z, Fehr M, et al. Voxblox: Incremental 3d euclidean signed distance fields for on-board mav planning[C]//2017 Ieee/rsj International Conference on Intelligent Robots and Systems (iros). IEEE, 2017: 1366-1373.
- [8] Ye H, Huang H, Liu M. Monocular Direct Sparse Localization in a Prior 3D Surfel Map[J]. arXiv preprint arXiv:2002.09923, 2020.
- 在 Surfel 地图中的单目稀疏直接法定位
- 港科大 RAM 实验室
- Tips:构造稀疏点的全局平面信息
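[8] 的 Tips 提到"构造稀疏点的全局平面信息":从一组稀疏 3D 点提取平面时,常用的一步是基于 SVD 的最小二乘平面拟合。以下是一个与论文实现无关、仅作说明的最小示例(假设输入点大致共面):

```python
import numpy as np

def fit_plane(points):
    """SVD 最小二乘平面拟合:返回单位法向量 n 与偏移 d,满足 n·x + d = 0。"""
    centroid = points.mean(axis=0)
    # 去质心后最小奇异值对应的右奇异向量即方差最小的方向,也就是平面法向
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n.dot(centroid)

# 合成数据:平面 z = 1 附近的带噪点
rng = np.random.default_rng(0)
pts = rng.random((200, 3))
pts[:, 2] = 1.0 + 0.01 * rng.standard_normal(200)
n, d = fit_plane(pts)  # n ≈ ±(0, 0, 1),d ≈ ∓1(法向符号有二义性)
```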
- [9] Zhao Y, Smith J S, Karumanchi S H, et al. Closed-Loop Benchmarking of Stereo Visual-Inertial SLAM Systems: Understanding the Impact of Drift and Latency on Tracking Accuracy[C]. ICRA 2020.
- [10] Giubilato R, Chiodini S, Pertile M, et al. MiniVO: Minimalistic Range Enhanced Monocular System for Scale Correct Pose Estimation[J]. 2020.
- MiniVO:用于尺度正确位姿估计的极简距离增强单目系统
- 意大利帕多瓦大学
- Tips:1D LiDAR 矫正单目尺度
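[10] 中"1D LiDAR 矫正单目尺度"的关键一步,可以抽象为在单目估计深度(差一个全局尺度)与激光测距之间做闭式最小二乘尺度对齐。以下为一个假设性的最小示例(`mono_depths`、`lidar_ranges` 为假想的已完成数据关联的观测对):

```python
import numpy as np

def estimate_scale(mono_depths, lidar_ranges):
    """闭式最小二乘求尺度 s,最小化 ||s * mono - lidar||^2。"""
    mono = np.asarray(mono_depths, dtype=float)
    lidar = np.asarray(lidar_ranges, dtype=float)
    return float(mono.dot(lidar) / mono.dot(mono))

# 假想数据:单目深度 = 真实距离 / 未知尺度(此处为 2.5)
true_ranges = np.array([1.0, 2.0, 3.0, 4.0])
s = estimate_scale(true_ranges / 2.5, true_ranges)  # 恢复出 s ≈ 2.5
```

实际系统中通常还需配合 RANSAC 等外点剔除,这里只演示尺度恢复的核心算式。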
- [11] Huang W, Liu H, Wan W. An Online Initialization and Self-Calibration Method for Stereo Visual-Inertial Odometry[J]. IEEE Transactions on Robotics, 2020.
- 一种双目视惯里程计的在线初始化和自标定方法
- 北京大学;Google Scholar;作者另外一篇文章:
- Huang W, Liu H. Online initialization and automatic camera-IMU extrinsic calibration for monocular visual-inertial SLAM[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 5182-5189.
- [12] Landgraf Z, Falck F, Bloesch M, et al. Comparing View-Based and Map-Based Semantic Labelling in Real-Time SLAM[J]. arXiv preprint arXiv:2002.10342, 2020.
- 在实时 SLAM 中比较基于视图与基于地图的语义标注
- 帝国理工学院计算机系戴森机器人实验室
- [13] Singh G, Wu M, Lam S K. Fusing Semantics and Motion State Detection for Robust Visual SLAM[C]//The IEEE Winter Conference on Applications of Computer Vision. 2020: 2764-2773.
- 融合语义和运动状态检测以实现鲁棒的视觉 SLAM
- 南洋理工大学
- [14] Gupta A, Iyer G, Kodgule S. DeepEvent-VO: Fusing Intensity Images and Event Streams for End-to-End Visual Odometry[J].
- DeepEvent-VO:融合强度图像和事件流的端到端视觉里程计
- CMU;代码开源
- [15] Wagstaff B, Peretroukhin V, Kelly J. Self-Supervised Deep Pose Corrections for Robust Visual Odometry[J]. arXiv preprint arXiv:2002.12339, 2020.
- 鲁棒视觉里程计的自监督深度位姿矫正
- 多伦多大学 STARS 实验室;代码开源
- [16] Ye X, Ji X, Sun B, et al. DRM-SLAM: Towards dense reconstruction of monocular SLAM with scene depth fusion[J]. Neurocomputing, 2020.
- DRM-SLAM:通过场景深度融合实现单目 SLAM 的稠密重建
- 大连理工大学;期刊:中科院二区,JCR Q1,IF 3.824
- [17] Yang N, von Stumberg L, Wang R, et al. D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry[C]. CVPR 2020.
- D3VO:单目视觉里程计中针对深度、位姿和不确定性的深度网络
- TUM 计算机视觉组;个人主页
- [18] Chen C, Rosa S, Miao Y, et al. Selective sensor fusion for neural visual-inertial odometry[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 10542-10551.
- 用于神经视觉惯性里程计的选择性传感器融合
- 牛津大学计算机科学系;Google Scholar
- [19] Towards the Probabilistic Fusion of Learned Priors into Standard Pipelines for 3D Reconstruction[C]. ICRA 2020.
- 将学习的先验信息融合到标准的三维重建中
- 帝国理工学院戴森机器人实验室
- [20] Wu L, Wan W, Yu X, et al. A novel augmented reality framework based on monocular semi‐dense simultaneous localization and mapping[J]. Computer Animation and Virtual Worlds, 2020: e1922.
- 基于单目半稠密 SLAM 的新型 AR 框架
- 上海大学;期刊:中科院四区,JCR Q4,IF 0.794
- [21] Shi W. Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud[C]. CVPR 2020.
- [22] Chen Y, Tai L, Sun K, et al. MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships[C]. CVPR 2020.
- MonoPair: 使用成对空间关系的单目 3D 对象检测
- 阿里巴巴
- [23] Chen X, Song J, Hilliges O. Monocular neural image based rendering with continuous view control[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 4090-4100.
- 基于连续视图控制的单目神经图像绘制
- 苏黎世联邦理工 AIT 实验室
- 类似于百度自动驾驶仿真 AADS 采用的新视图合成?
这个月赶论文,看的论文比较少,本期 17 篇,其中 3 项开源工作;
1、2、3、4 建图相关
7、8、9 动态相关
10、11 视惯融合
13、14、15 AR相关
2020 年 2 月 25 日更新
- [1] Muglikar M, Zhang Z, Scaramuzza D. Voxel map for visual slam[C]. ICRA 2020.
- 使用体素图的视觉 SLAM
- 苏黎世大学,张子潮
- [2] Ye X, Ji X, Sun B, et al. DRM-SLAM: Towards Dense Reconstruction of Monocular SLAM with Scene Depth Fusion[J]. Neurocomputing, 2020.
- 通过场景深度融合实现单目 SLAM 的稠密重建
- 大连理工大学,期刊:中科院二区, IF 4.0
- [3] Nardi F, Grisetti G, Nardi D. High-Level Environment Representations for Mobile Robots. 2019.
- 移动机器人的高层环境表示
- 罗马大学博士学位论文
- [4] Puligilla S S, Tourani S, Vaidya T, et al. Topological Mapping for Manhattan-like Repetitive Environments[J]. arXiv preprint arXiv:2002.06575, 2020.
- [5] Li X, Ling H. Hybrid Camera Pose Estimation with Online Partitioning for SLAM[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 1453-1460.
- 在线分割 SLAM 中的混合相机位姿估计
- 天普大学,凌海滨教授
- [6] Karimian A, Yang Z, Tron R. Statistical Outlier Identification in Multi-robot Visual SLAM using Expectation Maximization[J]. arXiv preprint arXiv:2002.02638, 2020.
- 使用期望最大化(EM)识别多机器人视觉 SLAM 中的外点
- 波士顿大学
- [7] Henein M, Zhang J, Mahony R, et al. Dynamic SLAM: The Need For Speed[J]. arXiv preprint arXiv:2002.08584, 2020.
- 满足速度估计需求的动态 SLAM
- 澳大利亚国立大学,作者主要研究动态 SLAM Google Scholar
- [8] Nair G B, Daga S, Sajnani R, et al. Multi-object Monocular SLAM for Dynamic Environments[J]. arXiv preprint arXiv:2002.03528, 2020.
- 用于动态环境的多目标单目 SLAM
- 印度海得拉巴国际信息技术学院(IIIT-H)
- [9] Cheng J, Zhang H, Meng M Q H. Improving Visual Localization Accuracy in Dynamic Environments Based on Dynamic Region Removal[J]. IEEE Transactions on Automation Science and Engineering, 2020.
- 通过动态区域剔除来提升动态环境中视觉定位的准确性
- 港中文;中科院二区 JCR Q1
- [10] Patel N, Khorrami F, Krishnamurthy P, et al. Tightly Coupled Semantic RGB-D Inertial Odometry for Accurate Long-Term Localization and Mapping[C]//2019 19th International Conference on Advanced Robotics (ICAR). IEEE, 2019: 523-528.
- 用于精确、长期定位和建图的紧耦合语义 RGB-D 惯性里程计
- 纽约大学
- [11] Chiodini S, Giubilato R, Pertile M, et al. Retrieving Scale on Monocular Visual Odometry Using Low Resolution Range Sensors[J]. IEEE Transactions on Instrumentation and Measurement, 2020.
- 使用低分辨率距离传感器恢复单目视觉里程计的尺度
- 意大利帕多瓦大学,期刊:中科院三区 JCR Q1Q2
- [12] Jin S, Chen L, Sun R, et al. A novel vSLAM framework with unsupervised semantic segmentation based on adversarial transfer learning[J]. Applied Soft Computing, 2020: 106153.
- 基于对抗迁移学习的无监督语义分割的新型 vSLAM 框架
- 苏州大学,期刊:中科院二区 JCR Q1
- [13] Liu R, Zhang J, Chen S, et al. Towards SLAM-based outdoor localization using poor GPS and 2.5 D building models[C]//2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2019: 1-7.
- 使用粗糙的 GPS 和 2.5D 建筑模型实现基于 SLAM 的户外定位
- 浙江工业大学; 代码开源
- [14] Miyamoto K, Shiraga T, Okato Y. User-Selected Object Data Augmentation for 6DOF CNN Localization[J].
- 6 自由度 CNN 定位的用户选择目标数据增强
- [15] Gui Z W. Register Based on Large Scene for Augmented Reality System[J]. Journal of Internet Technology, 2020, 21(1): 99-111.
- 基于大场景的增强现实三维注册
- 北理工,期刊:中科院四区, IF 0.7
- [16] Gao G, Lauri M, Wang Y, et al. 6D Object Pose Regression via Supervised Learning on Point Clouds[C]. ICRA 2020.
- [17] Habib R, Saii M. Object Pose Estimation in Monocular Image Using Modified FDCM[J]. Computer Science, 2020, 21(1).
- 使用改进的 FDCM 估计单目图像中的物体位姿
- 类似于旋转物体检测?
本期 26 篇论文,其中 7 项开源工作,1 项开放数据集;
5、6、10 关于线段的 SLAM
7 基于事件相机的 SLAM 综述
8、9、10 视惯融合
16、17 AR+SLAM
2020 年 1 月 28 日更新
- [1] Rückert D, Innmann M, Stamminger M. FragmentFusion: A Light-Weight SLAM Pipeline for Dense Reconstruction[C]//2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 342-347.
- FragmentFusion:一种轻量级的用于稠密重建的方案
- 德国埃朗根-纽伦堡大学
- [2] Chen Y, Shen S, Chen Y, et al. Graph-Based Parallel Large Scale Structure from Motion[J]. arXiv preprint arXiv:1912.10659, 2019.
- 基于图的并行大尺度的 SFM
- 中科院自动化所,代码开源
- [3] Sommer C, Sun Y, Guibas L, et al. From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized 3D Point Clouds[J]. arXiv preprint arXiv:2001.07360, 2020.
- 从平面到角点:无组织的点云中的多用途基本体检测
- 慕尼黑工业大学,代码开源
- [4] Zhao Y, Vela P A. Good feature matching: Towards accurate, robust VO/VSLAM with low latency[J]. IEEE Transactions on Robotics, 2019.
- [5] Luo X, Tan Z, Ding Y. Accurate Line Reconstruction for Point and Line-Based Stereo Visual Odometry[J]. IEEE Access, 2019, 7: 185108-185120.
- 基于双目点线视觉里程计的精确线段重构
- 浙江大学超大规模集成电路设计研究院,IEEE Access 开源期刊
- [6] Ma J, Wang X, He Y, et al. Line-Based Stereo SLAM by Junction Matching and Vanishing Point Alignment[J]. IEEE Access, 2019, 7: 181800-181811.
- 通过节点匹配与消失点对齐的基于线的双目 SLAM
- 武汉大学、中科院自动化所,IEEE Access 开源期刊
- [7] 马艳阳, 叶梓豪, 刘坤华, 等. 基于事件相机的定位与建图算法: 综述[J]. 自动化学报, 2020, 46: 1-11.
- 中山大学
- [8] Wen S, et al. Joint optimization based on direct sparse stereo visual-inertial odometry[J]. Autonomous Robots, 2020: 1-19.
- 基于直接稀疏双目视觉惯性里程计的联合优化
- 燕山大学;期刊 Autonomous Robots:中科院三区,JCR Q1,IF 2.244
- [9] Chen C, Zhu H, Wang L, et al. A Stereo Visual-Inertial SLAM Approach for Indoor Mobile Robots in Unknown Environments Without Occlusions[J]. IEEE Access, 2019, 7: 185408-185421.
- 无遮挡未知环境中室内移动机器人的双目视觉惯性 SLAM 方法
- 中国矿业大学,代码开源(还未放出),IEEE Access 开源期刊
- [10] Yan D, Wu C, Wang W, et al. Invariant Cubature Kalman Filter for Monocular Visual Inertial Odometry with Line Features[J]. arXiv preprint arXiv:1912.11749, 2019.
- 基于线特征的单目视觉惯性里程计的不变容积卡尔曼滤波
- 石家庄铁道大学、北京交通大学
- [11] Xu J, et al. Edge Assisted Mobile Semantic Visual SLAM[J].
- 边缘计算辅助的移动端语义视觉 SLAM
- 清华、大工、微软
- [12] Zhao Z, et al. Visual Semantic SLAM with Landmarks for Large-Scale Outdoor Environment[J]. arXiv preprint arXiv:2001.01028, 2020.
- 用于大规模室外环境的具有路标的视觉语义 SLAM
- 西安交大、北京交大
- [13] Wang L, et al. Object-Aware Hybrid Map for Indoor Robot Visual Semantic Navigation[C]//2019 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2019: 1166-1172.
- 用于室内机器人视觉语义导航的对象感知混合地图
- 燕山大学、加拿大阿尔伯塔大学、伦敦大学
- [14] Czarnowski, J., Laidlow, T., Clark, R., & Davison, A. J. (2020). DeepFactors: Real-Time Probabilistic Dense Monocular SLAM. IEEE Robotics and Automation Letters, 5(2), 721–728. doi:10.1109/lra.2020.2965415
- DeepFactors:实时的概率单目稠密 SLAM
- 帝国理工学院戴森机器人实验室,代码开源
- [15] Tripathi N, Sistu G, Yogamani S. Trained Trajectory based Automated Parking System using Visual SLAM[J]. arXiv preprint arXiv:2001.02161, 2020.
- 使用视觉 SLAM 基于轨迹训练的自动停车系统
- 爱尔兰法雷奥视觉系统公司
- [16] Wang C, et al. NEAR: The NetEase AR Oriented Visual Inertial Dataset[C]//2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 366-371.
- NEAR:面向网易 AR 的视觉惯性数据集
- 网易,数据集地址
- [17] Huang N, Chen J, Miao Y. Optimization for RGB-D SLAM Based on Plane Geometrical Constraint[C]//2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 326-331.
- 基于平面几何约束优化的 RGB-D SLAM
- 北理工
- [18] Wu Y C, Chan L, Lin W C. Tangible and Visible 3D Object Reconstruction in Augmented Reality[C]//2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2019: 26-36.
- 增强现实中有形且可见的三维物体重建
- 台湾国立交通大学计算机科学系
- [19] Feigl T, et al. Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments[J].
- 大型工业动态环境中 ARCore,ARKit 和 Hololens 的定位局限性
- 德国弗里德里希-亚历山大大学
- [20] Yang X, Yang J, He H, et al. A Hybrid 3D Registration Method of Augmented Reality for Intelligent Manufacturing[J]. IEEE Access, 2019, 7: 181867-181883.
- 用于智能制造的增强现实混合三维注册方法
- 广东工业大学,开源期刊
- [21] Speciale P. Novel Geometric Constraints for 3D Computer Vision Applications[D]. ETH Zurich, 2019.
- 适用于 3D 计算机视觉应用的新型几何约束
- 苏黎世联邦理工博士学位论文、微软,Google Scholar
- [22] Patil V, Van Gansbeke W, Dai D, et al. Don't Forget The Past: Recurrent Depth Estimation from Monocular Video[J]. arXiv preprint arXiv:2001.02613, 2020.
- 不要忘记过去:从单目视频进行循环式(recurrent)深度估计
- 苏黎世联邦理工,代码开源(还未放出)
- [23] Chiu H K, et al. Probabilistic 3D Multi-Object Tracking for Autonomous Driving[J]. arXiv preprint arXiv:2001.05673, 2020.
- 用于自动驾驶的概率 3D 多目标跟踪
- 斯坦福大学、丰田研究所,代码开源
- [24] Zhou B, et al. Robust Real-time UAV Replanning Using Guided Gradient-based Optimization and Topological Paths[J]. arXiv preprint arXiv:1912.12644, 2019.
- 使用基于梯度优化和拓扑路径引导进行鲁棒且实时的无人机重规划
- 港科大,代码开源:Fast-Planner, TopoTraj
- [25] Object-based localization,2019.
- 专利:基于物体的定位
- [26] Device pose estimation using 3d line clouds,2019.
- 专利:使用 3D 线云的设备位姿估计
本期 23 篇论文,其中 5 项开源工作;
比较有意思的有 TextSLAM、VersaVIS 和单目 3D 目标检测。
- [1] Tanke J, Kwon O H, Stotko P, et al. Bonn Activity Maps: Dataset Description[J]. arXiv preprint arXiv:1912.06354, 2019.
- 包含人体跟踪、姿态和环境语义三维重建的数据集
- 波恩大学,项目、数据集主页
- [2] An S, Che G, Zhou F, et al. Fast and Incremental Loop Closure Detection Using Proximity Graphs[J]. arXiv preprint arXiv:1911.10752, 2019.
- 使用邻近图的快速增量式闭环检测
- 京东、北航,代码开源
- [3] Li B, Zou D, Sartori D, et al. TextSLAM: Visual SLAM with Planar Text Features[J]. arXiv preprint arXiv:1912.05002, 2019.
- TextSLAM:基于平面文本的视觉 SLAM
- 上交邹丹平老师
- [4] Bundle Adjustment Revisited
- 再谈 BA
- 北京大学
- [5] Lange M, Raisch C, Schilling A. LVO: Line only stereo Visual Odometry[C]//2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2019: 1-8.
- 双目线 VO
- 图宾根大学
- 相关论文:Vakhitov A, Lempitsky V. Learnable Line Segment Descriptor for Visual SLAM[J]. IEEE Access, 2019, 7: 39923-39934. 代码开源
- [6] Liu W, Mo Y, Jiao J. An efficient edge-feature constraint visual SLAM[C]//Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing. ACM, 2019: 13.
- 一种高效的基于边缘特征约束的视觉 SLAM
- 北邮
- [7] Pan L, Wang P, Cao J, et al. Dense RGB-D SLAM with Planes Detection and Mapping[C]//IECON 2019-45th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2019, 1: 5192-5197.
- 使用平面检测与建图的稠密 RGB-D SLAM
- 新加坡国立大学
- [8] Ji S, Qin Z, Shan J, et al. Panoramic SLAM from a multiple fisheye camera rig[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 159: 169-183.
- 多鱼眼相机的全景 SLAM
- 武汉大学
- [9] Lecrosnier L, Boutteau R, Vasseur P, et al. Vision based vehicle relocalization in 3D line-feature map using Perspective-n-Line with a known vertical direction[C]//2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, 2019: 1263-1269.
- 使用具有已知垂直方向的透视线在 3D 线特征图中进行基于视觉的车辆重定位
- 诺曼底大学
- [10] de Souza Muñoz M E, Menezes M C, de Freitas E P, et al. A Parallel RatSlam C++ Library Implementation[C]//Latin American Workshop on Computational Neuroscience. Springer, Cham, 2019: 173-183.
- 并行 RatSLAM C++ 库实现
- [11] Tschopp F, Riner M, Fehr M, et al. VersaVIS: An Open Versatile Multi-Camera Visual-Inertial Sensor Suite[J]. arXiv preprint arXiv:1912.02469, 2019.
- VersaVIS:开源多功能多相机的视觉惯性传感器套件
- 苏黎世联邦理工学院,代码开源
- [12] Geneva P, Eckenhoff K, Lee W, et al. Openvins: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019.
- Openvins:用于视觉惯性估计的研究平台
- 特拉华大学,代码开源
- [13] Barrau A, Bonnabel S. A Mathematical Framework for IMU Error Propagation with Applications to Preintegration[J]. 2019.
- IMU 误差传播的数学框架及其在预积分中的应用
- PSL Research University
- [14] Ke T, Wu K J, Roumeliotis S I. RISE-SLAM: A Resource-aware Inverse Schmidt Estimator for SLAM[C]. IROS 2019.
- 用于 SLAM 的资源感知的逆施密特估计器
- [15] Dong Y, Wang S, Yue J, et al. A Novel Texture-Less Object Oriented Visual SLAM System[J]. IEEE Transactions on Intelligent Transportation Systems, 2019.
- 一种新型的面向低纹理物体的视觉 SLAM 系统
- 同济大学,期刊 中科院二区,JCR Q1,IF 6
- [16] Peng J, Shi X, Wu J, et al. An Object-Oriented Semantic SLAM System towards Dynamic Environments for Mobile Manipulation[C]//2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 2019: 199-204.
- 用于动态环境中移动机器人抓取的面向物体的语义 SLAM
- 上海交大机械学院,AIM:CCF 人工智能 C 类会议
- [17] Kim U H, Kim S H, Kim J H. SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation[J]. arXiv preprint arXiv:1911.05939, 2019.
- SimVODIS:同时进行视觉里程计、目标检测和实例分割
- 韩国高等科学技术院
- [18] Howard Mahe, Denis Marraud, Andrew I. Comport. Real-time RGB-D semantic keyframe SLAM based on image segmentation learning from industrial CAD models. International Conference on Advanced Robotics, Dec 2019, Belo Horizonte, Brazil. ffhal-02391499
- 基于工业 CAD 模型图像分割学习的实时 RGB-D 语义关键帧 SLAM
- 相关工作:基于稠密类级别分割的仅使用语义的视觉里程表 Mahé H, Marraud D, Comport A I. Semantic-only Visual Odometry based on dense class-level segmentation[C]//2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018: 1989-1995.
- [19] Chandu S, Bhavan Jasani G J, Manglik A. Deleting 3D Objects in Augmented Reality using RGBD-SLAM[J].2019
- 使用 RGBD-SLAM 在增强现实中删除 3D 对象
- 亚马逊
- [20] Sartipi K, DuToit R C, Cobar C B, et al. Decentralized visual-inertial localization and mapping on mobile devices for augmented reality[R]. Google, Tech. Rep., Aug. 2019.[Online]. Available: http://mars. cs. umn. edu/tr/decentralizedvi19. pdf.
- 用于移动设备增强现实的分布式视觉惯导定位与建图
- 明尼苏达大学
- [21] Yan Z, Zha H. Flow-based SLAM: From geometry computation to learning[J]. Virtual Reality & Intelligent Hardware, 2019, 1(5): 435-460.
- 基于流的 SLAM:从几何计算到学习的方法
- 北京大学,Google Scholar
- [22] Li J, Liu Y, Yuan X, et al. Depth Based Semantic Scene Completion With Position Importance Aware Loss[J]. IEEE Robotics and Automation Letters, 2019, 5(1): 219-226.
- [23] Simonelli A, Bulò S R R, Porzi L, et al. Disentangling Monocular 3D Object Detection[J]. arXiv preprint arXiv:1905.12365, 2019.
- 解耦单目 3D 目标检测
- 意大利特伦托大学,作者其他论文
- 使用虚拟相机进行单阶段单目 3D 目标检测:Simonelli A, Bulò S R, Porzi L, et al. Single-Stage Monocular 3D Object Detection with Virtual Cameras[J]. arXiv preprint arXiv:1912.08035, 2019.
- [1] Jatavallabhula K M, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019.
- [2] Lee S J, Hwang S S. Elaborate Monocular Point and Line SLAM With Robust Initialization[C]//ICCV 2019: 1121-1129.
- 具有鲁棒初始化的单目点线 SLAM
- 韩国韩东国际大学
- [3] Wen F, Ying R, Gong Z, et al. Efficient Algorithms for Maximum Consensus Robust Fitting[J]. IEEE Transactions on Robotics, 2019.
- 最大一致性稳健拟合的有效算法
- 期刊:中科院二区,JCR Q1,代码开源
- 上海交通大学电子工程系/脑启发式应用技术中心
- [4] Civera J, Lee S H. RGB-D Odometry and SLAM[M]//RGB-D Image Analysis and Processing. Springer, Cham, 2019: 117-144.
- RGB-D 里程计与 SLAM
- 专著
- [5] Wang H, Li J, Ran M, et al. Fast Loop Closure Detection via Binary Content[C]//2019 IEEE 15th International Conference on Control and Automation (ICCA). IEEE, 2019: 1563-1568.
- 通过二进制内容进行快速闭环检测
- 南洋理工大学,ICCA 会议
- [6] Liu W, Wu S, Wu Z, et al. Incremental Pose Map Optimization for Monocular Vision SLAM Based on Similarity Transformation[J]. Sensors, 2019, 19(22): 4945.
- 基于相似度变换的单目 SLAM 增量式位姿图优化
- 北航,开源期刊
- [7] Wang S, Yue J, Dong Y, et al. A synthetic dataset for Visual SLAM evaluation[J]. Robotics and Autonomous Systems, 2019: 103336.
- 用于视觉 SLAM 评估的合成数据集
- 同济大学,期刊中科院三区, JCR Q2, IF 2.809
- [8] Han B, Li X, Yu Q, et al. A Novel Visual Odometry Aided by Vanishing Points in the Manhattan World[J].
- 曼哈顿世界消失点辅助的新型视觉里程计
- 国防科大,作者 Google Scholar
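[8] 这类"消失点辅助"方法中,消失点估计通常归结为求一组(近似)平行直线在图像上的公共交点;齐次坐标下可以用 SVD 求直线方程组的零空间。以下为一个简化的 2D 示例(与论文实现无关,仅演示几何原理):

```python
import numpy as np

def vanishing_point(lines):
    """一组齐次 2D 直线的最小二乘公共交点。

    lines 的每一行为 (a, b, c),表示直线 a*x + b*y + c = 0;残差最小的交点
    即最小奇异值对应的右奇异向量。假设消失点有限(第三分量非零)。
    """
    _, _, vt = np.linalg.svd(np.asarray(lines, dtype=float))
    vp = vt[-1]
    return vp[:2] / vp[2]  # 去齐次化

# 两条相交于 (1, 2) 的直线:y = 2x 即 2x - y = 0;x = 1 即 x - 1 = 0
vp = vanishing_point([[2.0, -1.0, 0.0], [1.0, 0.0, -1.0]])  # ≈ (1, 2)
```

真实场景中直线由带噪检测给出,多条直线时该最小二乘解即对噪声的折中。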
- [9] Yuan Z, Zhu D, Chi C, et al. Visual-Inertial State Estimation with Pre-integration Correction for Robust Mobile Augmented Reality[C]//Proceedings of the 27th ACM International Conference on Multimedia. ACM, 2019: 1410-1418.
- 用于鲁棒的移动增强现实中基于预积分校正的视觉惯性状态估计
- 华中科大,会议 ACM MM:CCF A 类会议
- [10] Zhong D, Han L, Fang L. iDFusion: Globally Consistent Dense 3D Reconstruction from RGB-D and Inertial Measurements[C]//Proceedings of the 27th ACM International Conference on Multimedia. ACM, 2019: 962-970.
- iDFusion:RGB-D 和惯导的全局一致性稠密三维建图
- 清华大学、港科,会议 ACM MM:CCF A 类会议,Google Scholar
- [11] Duhautbout T, Moras J, Marzat J. Distributed 3D TSDF Manifold Mapping for Multi-Robot Systems[C]//2019 European Conference on Mobile Robots (ECMR). IEEE, 2019: 1-8.
- 多机器人系统的分布式三维 TSDF 流形建图
- TSDF 开源库:https://github.com/personalrobotics/OpenChisel
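上面提到的 TSDF(截断符号距离函数)建图中,单个体素的更新一般是对截断后的符号距离做加权滑动平均(Curless & Levoy 的经典做法)。以下为一个与 OpenChisel 具体实现无关的最小示例:

```python
def tsdf_update(voxel, sdf, trunc=0.1, max_weight=100.0):
    """用一次新观测更新单个 TSDF 体素(加权滑动平均)。

    voxel: 含 'd'(当前截断符号距离)与 'w'(累计权重)的字典。
    sdf:   本次观测沿相机光线得到的符号距离。
    """
    d_obs = max(-trunc, min(trunc, sdf))            # 截断符号距离
    voxel['d'] = (voxel['d'] * voxel['w'] + d_obs) / (voxel['w'] + 1.0)
    voxel['w'] = min(voxel['w'] + 1.0, max_weight)  # 限制累计权重上限
    return voxel

v = {'d': 0.0, 'w': 0.0}
tsdf_update(v, 0.05)   # 第一次观测:d = 0.05,w = 1
tsdf_update(v, 0.15)   # 0.15 被截断为 0.10,平均后 d = 0.075,w = 2
```

实际系统中每帧观测需对视锥内所有体素做此更新,并按观测角度/距离设置非均匀权重,此处从简。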
- [12] Schorghuber M, Steininger D, Cabon Y, et al. SLAMANTIC-Leveraging Semantics to Improve VSLAM in Dynamic Environments[C]//ICCV Workshops. 2019: 0-0.
- SLAMANTIC:在动态环境中利用语义来改善VSLAM
- 奥地利理工学院, 代码开源
- [13] Lee C Y, Lee H, Hwang I, et al. Spatial Perception by Object-Aware Visual Scene Representation[C]//ICCV Workshops 2019: 0-0.
- 由物体视觉场景表示的空间感知
- 首尔大学
- [14] Peng J, Shi X, Wu J, et al. An Object-Oriented Semantic SLAM System towards Dynamic Environments for Mobile Manipulation[C]//2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 2019: 199-204.
- 用于动态环境移动操作的物体级语义 SLAM 系统
- 上海交大,AIM:CCF 人工智能 C 类会议
- [15] Cui L, Ma C. SOF-SLAM: A semantic visual SLAM for Dynamic Environments[J]. IEEE Access, 2019.
- SOF-SLAM:一种用于动态环境的语义视觉 SLAM
- 北航,IEEE Access 开源期刊
- [16] Zheng L, Tao W. Semantic Object and Plane SLAM for RGB-D Cameras[C]//Chinese Conference on Pattern Recognition and Computer Vision (PRCV). Springer, Cham, 2019: 137-148.
- RGB-D 物体语义与平面级 SLAM
- 平面分割采用 PEAC;PRCV:2019 年在西安举办的第二届中国模式识别与计算机视觉大会
- 华中科大
- [17] Kim U H, Kim S H, Kim J H. SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation[J]. arXiv preprint arXiv:1911.05939, 2019.
- SimVODIS:同时进行视觉里程计、目标检测和实例分割
- [1] Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework[J]. arXiv preprint arXiv:1910.01122, 2019.
- OpenVSLAM: 通用的视觉 SLAM 框架
- 代码开源
- 日本国家先进工业科学技术研究所,其他工作:Yokozuka M, Oishi S, Simon T, et al. VITAMIN-E: VIsual Tracking And Mapping with Extremely Dense Feature Points[J]. arXiv preprint arXiv:1904.10324, 2019.
- [2] Chen Y, Huang S, Fitch R, et al. On-line 3D active pose-graph SLAM based on key poses using graph topology and sub-maps[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 169-175.
- 使用图拓扑与子地图、基于关键位姿的在线 3D 主动位姿图 SLAM
- 悉尼科技大学
- [3] Pfrommer B, Daniilidis K. TagSLAM: Robust SLAM with Fiducial Markers[J]. arXiv preprint arXiv:1910.00679, 2019.
- TagSLAM:具有基准标记的鲁棒 SLAM
- 宾夕法尼亚大学通用机器人,自动化,感应和感知实验室,项目主页
- [4] Lin T Y, Clark W, Eustice R M, et al. Adaptive Continuous Visual Odometry from RGB-D Images[J]. arXiv preprint arXiv:1910.00713, 2019.
- RGB-D 图像的自适应连续视觉里程计
- 密西根大学
- [5] Y Yang, P Geneva, K Eckenhoff, G Huang. Visual-Inertial Odometry with Point and Line Features, 2019.
- 点线 VIO
- 特拉华大学
- [6] Tarrio J J, Smitt C, Pedre S. SE-SLAM: Semi-Dense Structured Edge-Based Monocular SLAM[J]. arXiv preprint arXiv:1909.03917, 2019.
- SE-SLAM:基于边的单目半稠密 SLAM
- 阿根廷巴尔塞罗研究所
- [7] Wu X, Pradalier C. Robust Semi-Direct Monocular Visual Odometry Using Edge and Illumination-Robust Cost[J]. arXiv preprint arXiv:1909.11362, 2019.
- 利用边缘和光照鲁棒代价的单目半直接法视觉里程计
- 佐治亚理工学院
- [8] Pan Z, Chen H, Li S, et al. ClusterMap Building and Relocalization in Urban Environments for Unmanned Vehicles[J]. Sensors, 2019, 19(19): 4252.
- 无人驾驶车辆在城市环境中的 ClusterMap 构建和重定位
- 哈工大深圳,港中文,期刊 Sensors:开源期刊,中科院三区 JCR Q2Q3 IF 3.014
- [9] Zhang M, Zuo X, Chen Y, et al. Localization for Ground Robots: On Manifold Representation, Integration, Re-Parameterization, and Optimization[J]. arXiv preprint arXiv:1909.03423, 2019.
- 地面机器人的定位:流形表示,积分,重新参数化和优化
- 阿里巴巴人工智能实验室
- [10] Kirsanov P, Gaskarov A, Konokhov F, et al. DISCOMAN: Dataset of Indoor SCenes for Odometry, Mapping And Navigation[J]. arXiv preprint arXiv:1909.12146, 2019.
- DISCOMAN:用于里程计、建图和导航的室内场景数据集
- 三星 AI 中心,数据集随论文正式发表放出
- [11] Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems, 2019: 1-10.
- 在线目标检测和定位的双目视觉 SLAM
- 阿根廷国际信息科学中心,代码开源
- [12] Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.
- [13] Feng Q, Meng Y, Shan M, et al. Localization and Mapping using Instance-specific Mesh Models[J].IROS 2019
- 使用特定实例网格模型进行定位和建图
- 加州大学圣地亚哥分校语境机器人研究所,课题组
- [14] Liao Z, Shi J, Qi X, et al. Coarse-To-Fine Visual Localization Using Semantic Compact Map[J]. arXiv preprint arXiv:1910.04936, 2019.
- 使用语义紧凑地图的由粗到精的视觉定位
- 北航,face++
- [15] Doherty K, Baxter D, Schneeweiss E, et al. Probabilistic Data Association via Mixture Models for Robust Semantic SLAM[J]. arXiv preprint arXiv:1909.11213, 2019.
- 鲁棒的语义 SLAM 中混合模型的概率数据关联
- MIT,好像就是之前 ICRA 2019 多模态概率数据关联
- [16] Jung E, Yang N, Cremers D. Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light[J]. arXiv preprint arXiv:1910.06632, 2019.
- Multi-Frame GAN:弱光照双目视觉里程计的图像增强
- 慕尼黑工业大学、澳大利亚国立大学,Artisense 自动驾驶公司,LSD、DSO 作者,Google Scholar
- [17] Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics, 2019: 1-31.
- NeuroSLAM:针对 3D 环境的脑启发式 SLAM 系统
- 昆士兰科技大学,Rat SLAM 作者,代码开源
- [18] Zeng T, Si B. A Brain-Inspired Compact Cognitive Mapping System[J]. arXiv preprint arXiv:1910.03913, 2019.
- 脑启发的紧凑型认知地图系统
- 沈自所
- [19] Zhou Y, Qi H, Huang J, et al. NeurVPS: Neural Vanishing Point Scanning via Conic Convolution[J]. arXiv preprint arXiv:1910.06316, 2019.
- NeurVPS:通过圆锥卷积的神经消失点扫描
- 加州伯克利,代码开源
- [20] Alhashim I, Wonka P. High Quality Monocular Depth Estimation via Transfer Learning[J]. arXiv preprint arXiv:1812.11941, 2018.
- 通过迁移学习进行高质量单目深度估计
- 阿卜杜拉国王科技大学,代码开源
- [21] Pei L, Liu K, Zou D, et al. IVPR: An Instant Visual Place Recognition Approach based on Structural Lines in Manhattan World[J]. IEEE Transactions on Instrumentation and Measurement, 2019.
- IVPR:基于曼哈顿世界中的结构线的即时视觉位置识别方法
- 上交裴凌老师,期刊:中科院三区,JCR Q1Q2,IF 2.98
- [22] Sjanic Z. Particle Filtering Approach for Data Association[C]//22nd International Conference on Information Fusion. 2019.
- 粒子滤波算法用于数据关联
- [1] Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585, 2019.
- ORBSLAM-Atlas:一个鲁棒而准确的多地图系统
- 西班牙萨拉戈萨大学,ORB-SLAM 作者
- [2] Yang Y, Dong W, Kaess M. Surfel-Based Dense RGB-D Reconstruction With Global And Local Consistency[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 5238-5244.
- 具有全局和局部一致性的基于面元的 RGB-D 稠密重建
- CMU 机器人研究所
- 相关研究:Schöps T, Sattler T, Pollefeys M. SurfelMeshing: Online Surfel-Based Mesh Reconstruction[J]. arXiv preprint arXiv:1810.00729, 2018.
- [3] Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems, 2019: 1-10.
- 双目视觉 SLAM 系统上的在线目标检测与定位
- 期刊:中科院四区,JCR Q4,IF 2.4
- [4] Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment[C] IROS 2019.
- [5] Zhang Y, Yang J, Zhang H, et al. Bundle Adjustment for Monocular Visual Odometry Based on Detected Traffic Sign Features[C]//2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019: 4350-4354.
- 基于交通标志特征检测的单目视觉里程计 BA 优化
- 北理工、华盛顿大学
- [6] Zhang X, Wang W, Qi X, et al. Point-Plane SLAM Using Supposed Planes for Indoor Environments[J]. Sensors, 2019, 19(17): 3795.
- 室内环境中使用假设平面的点-平面 SLAM
- 北京航空航天大学机器人研究所 开源期刊
- [7] Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE(2)-XYZ Constraints[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3556-3562.
- 使用 SE(2)-XYZ 约束的地面车辆视觉里程计定位与建图
- 香港中文大学 代码开源
- [8] Li H, Xing Y, Zhao J, et al. Leveraging Structural Regularity of Atlanta World for Monocular SLAM[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 2412-2418.
- 利用亚特兰大世界结构规律的单目 SLAM
- 香港中文大学刘云辉教授课题组(上面那篇的郑帆博士是他学生),香港中文大学天石机器人研究所
- [9] Sun J, Wang Y, Shen Y. Fully Scaled Monocular Direct Sparse Odometry with A Distance Constraint[C]//2019 5th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2019: 271-275.
- 具有距离约束的全尺寸单目直接稀疏里程计
- 北理工
- [10] Dong J, Lv Z. miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework[J]. arXiv preprint arXiv:1909.00903, 2019.
- miniSAM:一种灵活的因子图非线性最小二乘优化框架
- Facebook,代码开源,Google Scholar
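miniSAM 这类框架所基于的"因子图 + 非线性最小二乘"思想,可以用一个极简的一维位姿图示例来说明。以下为纯 NumPy 的示意性草图:玩具问题、变量名均为本文虚构,并非 miniSAM 的实际 API,仅用于演示由因子残差构建并求解高斯牛顿法线性方程的过程:

```python
import numpy as np

# 玩具一维位姿图:三个位姿,一个先验因子、两个里程计因子,
# 以及一个与里程计略有冲突的回环因子。
# 每个因子贡献一个残差 r(x);高斯牛顿法反复求解由全部因子
# 构成的法方程 J^T J dx = -J^T r。

def residuals(x):
    x0, x1, x2 = x
    return np.array([
        x0 - 0.0,        # 先验:x0 固定在原点
        x1 - x0 - 1.0,   # 里程计:x0 -> x1 测量为 +1.0
        x2 - x1 - 1.0,   # 里程计:x1 -> x2 测量为 +1.0
        x2 - x0 - 2.2,   # 回环:x0 -> x2 测量为 +2.2
    ])

# 残差向量的雅可比(此处因子均为线性,雅可比为常数矩阵)
J = np.array([
    [ 1.0,  0.0, 0.0],
    [-1.0,  1.0, 0.0],
    [ 0.0, -1.0, 1.0],
    [-1.0,  0.0, 1.0],
])

def gauss_newton(x, iters=10):
    for _ in range(iters):
        r = residuals(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # 法方程
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

x_opt = gauss_newton(np.zeros(3))
print(x_opt)  # ≈ [0.0, 1.0667, 2.1333]
```

注意回环测量与里程计累积之间 0.2 的不一致被最小二乘解均摊到了两条里程计边上,这正是位姿图优化"分配误差"的直观效果。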
- [11] Campos C, Montiel J M M, Tardós J D. Fast and Robust Initialization for Visual-Inertial SLAM[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 1288-1294.
- 视惯 SLAM 快速鲁棒的初始化
- 西班牙萨拉戈萨大学,ORB-SLAM 课题组
- [12] He L, Yang M, Li H, et al. Graph Matching Pose SLAM based on Road Network Information[C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019: 1274-1279.
- 基于路网信息的图匹配位姿 SLAM
- 上海交通大学系统控制与信息处理教育部重点实验室
- [13] Gu T, Yan R. An Improved Loop Closure Detection for RatSLAM[C]//2019 5th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2019: 884-888.
- 一种改进的 RatSLAM 闭环检测方法
- 四川大学
- [14] Zhang J, Gui M, Wang Q, et al. Hierarchical Topic Model Based Object Association for Semantic SLAM[J]. IEEE transactions on visualization and computer graphics, 2019.
- 基于层次主题模型的语义 SLAM 对象关联
- 期刊:中科院三区, JCR Q1,IF 3.78
- [15] Gählert N, Wan J J, Weber M, et al. Beyond Bounding Boxes: Using Bounding Shapes for Real-Time 3D Vehicle Detection from Monocular RGB Images[C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019: 675-682.
- 超越边界框:使用边界形状从单目 RGB 图像中进行实时 3D 车辆检测
- 德国耶拿大学
- [16] Yang N, Wang R, Stuckler J, et al. Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 817-833.
- [17] Liu H, Ma H, Zhang L. Visual Odometry based on Semantic Supervision[C]//2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019: 2566-2570.
- 基于语义监督的视觉里程计
- 清华大学;会议 ICIP:CCF 计算机图形学与多媒体 C 类会议
- [18] Wald J, Avetisyan A, Navab N, et al. RIO: 3D Object Instance Re-Localization in Changing Indoor Environments[J]. arXiv preprint arXiv:1908.06109, 2019.
- RIO:变化的室内环境中的 3D 物体实例重定位
- TUM,Google,项目主页,开放数据集
- [19] Su Y, Rambach J, Minaskan N, et al. Deep Multi-State Object Pose Estimation for Augmented Reality Assembly[C]. IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2019
- 面向增强现实装配的深度多状态物体姿态估计
- 德国人工智能研究中心
- [20] Huang X, Dai Z, Chen W, et al. Improving Keypoint Matching Using a Landmark-Based Image Representation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 1281-1287.
- 利用基于地标的图像表示改进关键点匹配
- 广东工业大学
- [21] Puscas M M, Xu D, Pilzer A, et al. Structured Coupled Generative Adversarial Networks for Unsupervised Monocular Depth Estimation[J]. arXiv preprint arXiv:1908.05794, 2019.
- 无监督单目深度估计的结构耦合生成对抗网络
- 华为、牛津大学 代码开源(还未放出)
- [22] Yang B, Xu X, Li J, et al. Landmark Generation in Visual Place Recognition Using Multi-Scale Sliding Window for Robotics[J]. Applied Sciences, 2019, 9(15): 3146.
- 基于多尺度滑动窗口的机器人视觉地点识别中的地标生成
- 东南大学 期刊:开源期刊,中科院三区,JCR Q3
- [23] Hofstetter I, Sprunk M, Schuster F, et al. On Ambiguities in Feature-Based Vehicle Localization and their A Priori Detection in Maps[C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019: 1192-1198.
- 基于特征的车辆定位中的歧义及其在地图中的先验检测
- SLAM 中的物体数据关联可参考
- [24] Kümmerle J, Sons M, Poggenhans F, et al. Accurate and Efficient Self-Localization on Roads using Basic Geometric Primitives[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 5965-5971.
- 基于几何元素在道路中进行准确有效的自定位
- 德国卡尔斯鲁厄理工学院
- [1] Wei X, Huang J, Ma X. Real-Time Monocular Visual SLAM by Combining Points and Lines[C]//2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019: 103-108.
- 点线结合的单目视觉 SLAM
- 中国科学院上海高等研究院 ICME:CCF 计算机图形学与多媒体 B 类会议
- [2] Fu Q, Yu H, Lai L, et al. A Robust RGB-D SLAM System with Points and Lines for Low Texture Indoor Environments[J]. IEEE Sensors Journal, 2019.
- 低纹理室内环境的点线联合的鲁棒 RGB-D SLAM 系统
- 湖南大学机器人视觉感知与控制国家工程实验室 期刊 IEEE Sensors Journal:中科院三区,JCR Q1Q2,IF 2.69
- [3] Zhao W, Qian K, Ma Z, et al. Stereo Visual SLAM Using Bag of Point and Line Word Pairs[C]//International Conference on Intelligent Robotics and Applications. Springer, Cham, 2019: 651-661.
- 利用点线词袋对的双目 SLAM
- 东南大学
- [4] Hachiuma R, Pirchheim C, Schmalstieg D, et al. DetectFusion: Detecting and Segmenting Both Known and Unknown Dynamic Objects in Real-time SLAM[C]//Proceedings British Machine Vision Conference (BMVC). 2019.
- DetectFusion:在实时的 SLAM 中检测和分割已知与未知的动态对象
- 日本庆应义塾大学、格拉茨理工大学 BMVC:CCF 人工智能 C 类会议
- 相关论文:
- Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects, Martin Rünz and Lourdes Agapito, 2017 IEEE International Conference on Robotics and Automation (ICRA)
- Ishikawa Y, Hachiuma R, Ienaga N, et al. Semantic Segmentation of 3D Point Cloud to Virtually Manipulate Real Living Space[C]//2019 12th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR). IEEE, 2019: 1-7.
- [5] Prokhorov D, Zhukov D, Barinova O, et al. Measuring robustness of Visual SLAM[C]//2019 16th International Conference on Machine Vision Applications (MVA). IEEE, 2019: 1-6.
- 视觉 SLAM 的鲁棒性评估
- 三星 AI 研究中心 MVA:CCF 人工智能 C 类会议
- [6] Ryohei Y, Kanji T, Koji T. Invariant Spatial Information for Loop-Closure Detection[C]//2019 16th International Conference on Machine Vision Applications (MVA). IEEE, 2019: 1-6.
- 用于闭环检测的不变空间信息
- 日本福井大学 MVA:CCF 人工智能 C 类会议
- [7] Yang B, Xu X, Li J. Keyframe-Based Camera Relocalization Method Using Landmark and Keypoint Matching[J]. IEEE Access, 2019, 7: 86854-86862.
- 使用路标和关键点匹配基于关键帧的相机重定位方法
- 东南大学 期刊 IEEE Access:开源期刊
- [8] Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM[C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019: 121-128.
- 视觉 SLAM 中基于网络不确定性的语义特征选择
- 滑铁卢大学、多伦多大学 代码开源
- 相关论文(学位论文):Ganti P. SIVO: Semantically Informed Visual Odometry and Mapping[D]. University of Waterloo, 2018.
- [9] Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM[J]. arXiv preprint arXiv:1907.09760, 2019.
- 不仅看到而且还观察:基于场景级三维多目标理解的概率 SLAM 变分观测模型
- 首尔国立大学 代码开源 Google Scholar
- [10] Hu L, Xu W, Huang K, et al. Deep-SLAM++: Object-level RGBD SLAM based on class-specific deep shape priors[J]. arXiv preprint arXiv:1907.09691, 2019.
- 基于特定类深度形状先验的对象级 RGBD SLAM
- 上海科技大学
- [11] Torres Cámara J M. Map Slammer. Densifying Scattered KSLAM 3D Maps with Estimated Depth[J]. 2019.
- [12] Cieslewski T, Bloesch M, Scaramuzza D. Matching Features without Descriptors: Implicitly Matched Interest Points (IMIPs)[C]//British Machine Vision Conference (BMVC). 2019.
- 没有描述符的特征匹配:隐含匹配的兴趣点
- 苏黎世理工、帝国理工 代码开源 BMVC:CCF 人工智能 C 类会议
- [13] Zheng J, Zhang J, Li J, et al. Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling[J]. arXiv preprint arXiv:1908.00222, 2019.
- [14] Jikai Lu, Jianhui Chen, James J. Little, Pan-tilt-zoom SLAM for Sports Videos.[C]//British Machine Vision Conference (BMVC) 2019.
- 用于体育视频的 PTZ(平移-俯仰-变焦)相机 SLAM
- 不列颠哥伦比亚大学 代码开源
- [15] Saran V, Lin J, Zakhor A. Augmented Annotations: Indoor Dataset Generation with Augmented Reality[J]. ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2019, 4213: 873-879.
- 增强注释:具有增强现实的室内数据集生成
- 加州大学伯克利分校 项目主页
- [16] Liao M, Song B, He M, et al. SynthText3D: Synthesizing Scene Text Images from 3D Virtual Worlds[J]. arXiv preprint arXiv:1907.06007, 2019.
- 从3D虚拟世界合成场景文本图像
- 华中科大、北大、Face++ 代码开源
- [17] Shooting Labels by Virtual Reality. 2019
- 利用虚拟现实拍摄语义标签
- 意大利博洛尼亚大学 代码开源
- [18] Ku J, Pon A D, Waslander S L. Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2019: 11867-11876.
- 利用精确的提案和形状重建进行单目三维物体检测
- 多伦多大学
- 相关研究:Ku J, Pon A D, Walsh S, et al. Improving 3D Object Detection for Pedestrians with Virtual Multi-View Synthesis Orientation Estimation[J]. arXiv preprint arXiv:1907.06777, 2019.
- [19] Chiang H, Lin Y, Liu Y, et al. A Unified Point-Based Framework for 3D Segmentation[J]. arXiv preprint arXiv:1908.00478, 2019.
- 一种统一的基于点的三维分割框架
- 国立台湾大学、亚马逊 代码开源
- [20] Zhang Y, Lu Z, Xue J H, et al. A New Rotation-Invariant Deep Network for 3D Object Recognition[C]//2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019: 1606-1611.
- 一种新的具有旋转不变性的三维物体识别深度网络
- 清华大学 ICME:CCF 计算机图形学与多媒体 B 类会议
- [21] Gao Y, Yuille A L. Estimation of 3D Category-Specific Object Structure: Symmetry, Manhattan and/or Multiple Images[J]. International Journal of Computer Vision, 2019: 1-26.
- 特定类别物体 3D 结构的估计:对称性、曼哈顿假设和/或多视图
- 中科大 期刊 International Journal of Computer Vision:中科院一区,JCR Q1,IF 12.389
- [22] Palazzi A, Bergamini L, Calderara S, et al. Semi-parametric Object Synthesis[J]. arXiv preprint arXiv:1907.10634, 2019.
- 半参数的对象合成
- 摩德纳大学 代码开源
- [23] Christiansen P H, Kragh M F, Brodskiy Y, et al. UnsuperPoint: End-to-end Unsupervised Interest Point Detector and Descriptor[J]. arXiv preprint arXiv:1907.04011, 2019.
- 端到端无监督兴趣点检测器和描述符
- 土耳其哈塞特大学 相关代码:SuperPointPretrainedNetwork,lf-net-release
- [24] Chen B X, Tsotsos J K. Fast Visual Object Tracking with Rotated Bounding Boxes[J]. arXiv preprint arXiv:1907.03892, 2019.
- 带旋转边界框的快速视觉目标跟踪
- 约克大学、多伦多大学 代码开源
- [25] Brazil G, Liu X. M3D-RPN: Monocular 3D Region Proposal Network for Object Detection[J]. arXiv preprint arXiv:1907.06038, 2019.
- 用于物体检测的单目 3D 区域提议网络
- 密歇根州立大学
- [26] Zhou Q, Sattler T, Pollefeys M, et al. To Learn or Not to Learn: Visual Localization from Essential Matrices[J]. arXiv preprint arXiv:1908.01293, 2019.
- 学习与否:基于本质矩阵的视觉定位
- 慕尼黑、苏黎世
- [1] Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//IEEE International Conference on Robotics and Automation(ICRA) 2019. 2019.
- 一种实时稳健的基于边缘的SLAM系统
- 奥地利格拉茨科技大学 Google Scholar
- 代码开源 项目主页
- [2] Christensen K, Hebert M. Edge-Direct Visual Odometry[J]. arXiv preprint arXiv:1906.04838, 2019.
- 边缘直接法视觉里程计
- CMU
- [3] Dong E, Xu J, Wu C, et al. Pair-Navi: Peer-to-Peer Indoor Navigation with Mobile Visual SLAM[C]//IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 2019: 1189-1197.
- 使用移动视觉 SLAM 进行点对点室内导航
- 清华大学 Google Scholar
- 会议:IEEE INFOCOM:CCF 计算机网络 A 类会议
- [4] Zhou H, Fan H, Peng K, et al. Monocular Visual Odometry Initialization With Points and Line Segments[J]. IEEE Access, 2019, 7: 73120-73130.
- 利用点线初始化的单目视觉里程计
- 国防科大、清华大学、港中文
- IEEE Access:开源期刊
- [5] He M, Zhu C, Huang Q, et al. A review of monocular visual odometry[J]. The Visual Computer, 2019: 1-13.
- 单目视觉里程计综述
- 河海大学
- 期刊 The Visual Computer:中科院四区,JCR Q3,IF 1.39
- [6] Bujanca M, Gafton P, Saeedi S, et al. SLAMBench 3.0: Systematic Automated Reproducible Evaluation of SLAM Systems for Robot Vision Challenges and Scene Understanding[C]//IEEE International Conference on Robotics and Automation (ICRA). 2019.
- 用于机器人视觉挑战和场景理解的 SLAM 系统自动可重复性评估
- 爱丁堡大学,伦敦帝国理工学院
- [7] Wang Y, Zell A. Improving Feature-based Visual SLAM by Semantics[C]//2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS). IEEE, 2018: 7-12.
- 利用语义信息提高特征点法的 SLAM
- 图宾根大学
- [8] Mo J, Sattar J. Extending Monocular Visual Odometry to Stereo Camera System by Scale Optimization[C]. International Conference on Intelligent Robots and Systems (IROS), 2019.
- 通过尺度优化将单目视觉里程计扩展到双目相机系统
- 明尼苏达大学交互式机器人和视觉实验室
- 代码开源
- [9] A Modular Optimization framework for Localization and mApping (MOLA). 2019
- 用于定位和建图的模块化优化框架
- 西班牙阿尔梅利亚大学
- 代码开源
- [10] Ye W, Zhao Y, Vela P A. Characterizing SLAM Benchmarks and Methods for the Robust Perception Age[J]. arXiv preprint arXiv:1905.07808, 2019.
- 表征鲁棒感知时代的 SLAM 基准和方法
- 乔治亚理工学院
- [11] Bürki M, Cadena C, Gilitschenski I, et al. Appearance‐based landmark selection for visual localization[J]. Journal of Field Robotics. 2019
- 基于外观的用于视觉定位的路标选择
- ETH,MIT 期刊:中科院二区,JCR Q1,IF 5.0
- [12] Hsiao M, Kaess M. MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree[J]. 2019.
- MH-iSAM2:使用贝叶斯树和 Hypo 树的多假设 iSAM
- CMU 代码开源
- [13] Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 134-144.
- [14] Wang K, Gao F, Shen S. Real-time Scalable Dense Surfel Mapping[C]//Proc. of the IEEE Intl. Conf. on Robot. and Autom.(ICRA). 2019.
- 实时可扩展的稠密面元建图
- 港科大沈劭劼课题组
- 代码开源
- [15] Zhao Y, Xu S, Bu S, et al. GSLAM: A General SLAM Framework and Benchmark[J]. arXiv preprint arXiv:1902.07995, 2019.
- 通用SLAM框架和基准
- 西北工业大学,自动化所 代码开源
- [16] Nellithimaru A K, Kantor G A. ROLS: Robust Object-Level SLAM for Grape Counting[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). 2019.
- 用于葡萄计数的鲁棒的物体级 SLAM
- CMU
- [17] Nejad Z Z, Ahmadabadian A H. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs[J]. Machine Vision and Applications, 2019: 1-10.
- ARM CPU上地面车辆的高效单目视觉里程计
- 伊朗德黑兰托西技术大学
- 代码开源 期刊:中科院四区,JCR Q2Q3,IF 1.3
- [18] Aloise I, Della Corte B, Nardi F, et al. Systematic Handling of Heterogeneous Geometric Primitives in Graph-SLAM Optimization[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2738-2745.
- SLAM 图优化中异构几何基元的系统处理
- 罗马大学
- 代码开源
- [19] Guo R, Peng K, Fan W, et al. RGB-D SLAM Using Point–Plane Constraints for Indoor Environments[J]. Sensors, 2019, 19(12): 2721.
- 室内环境中使用点-平面约束的 RGB-D SLAM
- 国防科大 期刊:开源期刊,中科院三区,JCR Q2Q3,IF 3.0
- [20] Laidlow T, Czarnowski J, Leutenegger S. DeepFusion: Real-Time Dense 3D Reconstruction for Monocular SLAM using Single-View Depth and Gradient Predictions[J].2019.
- DeepFusion:使用单视图深度和梯度预测的单目 SLAM 实时稠密三维重建
- 帝国理工学院的戴森机器人实验室
- [21] Saeedi S, Carvalho E, Li W, et al. Characterizing Visual Localization and Mapping Datasets[C]//2019 IEEE International Conference on Robotics and Automation (ICRA). 2019.
- 视觉定位与建图数据集的表征
- 帝国理工学院计算机系 数据集地址
- [22] Sun T, Sun Y, Liu M, et al. Movable-Object-Aware Visual SLAM via Weakly Supervised Semantic Segmentation[J]. arXiv preprint arXiv:1906.03629, 2019.
- 通过弱监督语义分割的可移动对象感知视觉SLAM
- 港科大
- [23] Ghaffari M, Clark W, Bloch A, et al. Continuous Direct Sparse Visual Odometry from RGB-D Images[J]. arXiv preprint arXiv:1904.02266, 2019.
- RGB-D图像连续直接稀疏视觉里程计
- 密歇根大学 代码开源
- [24] Houseago C, Bloesch M, Leutenegger S. KO-Fusion: Dense Visual SLAM with Tightly-Coupled Kinematic and Odometric Tracking[J]. 2019
- KO-Fusion:具有紧耦合运动和测距跟踪的稠密视觉SLAM
- 帝国理工学院戴森机器人实验室
- [25] Iqbal A, Gans N R. Localization of Classified Objects in SLAM using Nonparametric Statistics and Clustering[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 161-168.
- 使用非参数统计和聚类在 SLAM 中对分类物体进行定位
- 德克萨斯大学计算机工程学院
- [26] Semantic Mapping for View-Invariant Relocalization. 2019
- 用于视角不变重定位的语义地图
- 加拿大蒙特利尔麦吉尔大学
- [27] Hou Z, Ding Y, Wang Y, et al. Visual Odometry for Indoor Mobile Robot by Recognizing Local Manhattan Structures[C]//Asian Conference on Computer Vision. Springer, Cham, ACCV2018: 168-182.
- 通过识别曼哈顿结构的室内机器人视觉里程计
- 南京理工大学
- [28] Guclu O, Caglayan A, Burak Can A. RGB-D Indoor Mapping Using Deep Features[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). 2019.
- 使用深度特征的 RGB-D 室内建图
- 土耳其 Ahi Evran University
- [29] Sualeh M, Kim G W. Simultaneous Localization and Mapping in the Epoch of Semantics: A Survey[J]. International Journal of Control, Automation and Systems, 2019, 17(3): 729-742.
- 语义时代的 SLAM 综述
- 韩国忠北国立大学
- [30] Guerra W, Tal E, Murali V, et al. FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality[J]. arXiv preprint arXiv:1905.11377, 2019.
- [31] Stotko P, Krumpen S, Hullin M B, et al. SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence[J]. IEEE transactions on visualization and computer graphics, 2019, 25(5): 2102-2112.
- SLAMCast:用于沉浸式多客户端实时远程呈现的大规模实时3D重建和流媒体
- 波恩大学 Google Scholar
- [32] Jörgensen E, Zach C, Kahl F. Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss[J]. arXiv preprint arXiv:1906.08070, 2019.
- 使用交并比(IoU)损失端到端训练的单目三维物体检测与立方框拟合
- 瑞典查尔姆斯理工大学 演示视频
- [33] Wang B H, Chao W L, Wang Y, et al. LDLS: 3-D Object Segmentation Through Label Diffusion From 2-D Images[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2902-2909.
- 通过二维图像的标签扩散进行三维物体分割
- 康奈尔大学
- 代码开源 期刊:IEEE Robotics and Automation 中科院二区 ,JCR Q1Q2 ,IF 4.8
- [34] Yang B, Wang J, Clark R, et al. Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds[J]. arXiv preprint arXiv:1906.01140, 2019.
- 学习点云上三维实例分割的目标 3D 边界框
- 牛津大学 Google Scholar
- 代码开源
- [35] Ahmed, Mariam. (2019). Pushing Boundaries with 3D Boundaries for Object Recognition. 10.13140/RG.2.2.33079.98728.
- 利用三维边界框推动边界进行物体检测
- 新加坡国立大学
- [36] Wu D, Zhuang Z, Xiang C, et al. 6D-VNet: End-To-End 6-DoF Vehicle Pose Estimation From Monocular RGB Images[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). 2019.
- 6D-VNet:单目 RGB 图像的端到端 6 自由度车辆姿态估计
- 深圳大学 代码开源
- [1] A Modular Optimization Framework for Localization and Mapping. [C] RSS 2019
- 用于定位与建图的模块化优化框架
- 西班牙阿尔梅里亚大学 Google Scholar
- 代码开源(还未放出) 演示视频
- [2] Wang C, Guo X. Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction[J]. arXiv preprint arXiv:1905.08853, 2019.
- 用于室内 RGB-D 重建的基于平面的几何和纹理优化
- 德克萨斯大学达拉斯分校 Google Scholar
- 代码开源
- [3] Wang J, Song J, Zhao L, et al. A submap joining algorithm for 3D reconstruction using an RGB-D camera based on point and plane features[J]. Robotics and Autonomous Systems, 2019.
- 一种基于点特征和平面特征的RGB-D相机三维重建子地图连接算法
- 悉尼科技大学 Google Scholar 中科院三区,JCR Q2,IF 2.809
- [4] Joshi N, Sharma Y, Parkhiya P, et al. Integrating Objects into Monocular SLAM: Line Based Category Specific Models[J]. arXiv preprint arXiv:1905.04698, 2019.
- 将物体集成到单目 SLAM 中:基于线的特定类别模型
- 印度海德拉巴大学
- Parkhiya P, Khawad R, Murthy J K, et al. Constructing Category-Specific Models for Monocular Object-SLAM[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-9.
- [5] Niu J, Qian K. A hand-drawn map-based navigation method for mobile robots using objectness measure[J]. International Journal of Advanced Robotic Systems, 2019, 16(3): 1729881419846339.
- 一种基于手绘地图的使用物体度量移动机器人导航方法
- 东南大学 期刊:中科院四区, JCR Q4,IF 1.0
- [6] Robust Object-based SLAM for High-speed Autonomous Navigation. 2019
- 用于高速自主导航的鲁棒物体级 SLAM
- MIT
- [7] Cheng J, Sun Y, Meng M Q H. Robust Semantic Mapping in Challenging Environments[J]. Robotica, 1-15, 2019.
- 具有挑战环境下的鲁棒的语义建图
- 香港中文大学,香港科技大学 期刊:中科院四区, JCR Q4,IF 1.267
- [8] Sun B, Mordohai P. Oriented Point Sampling for Plane Detection in Unorganized Point Clouds[J]. arXiv preprint arXiv:1905.02553, 2019.
- 无组织点云中平面检测的定向点采样
- 美国史蒂文斯理工学院
- [9] Palazzolo E, Behley J, Lottes P, et al. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals[J]. arXiv preprint arXiv:1905.02082, 2019.
- ReFusion:利用残差的 RGB-D 相机动态环境下的三维重建
- 德国波恩大学 代码开源
- [10] Goldman M, Hassner T, Avidan S. Learn Stereo, Infer Mono: Siamese Networks for Self-Supervised, Monocular, Depth Estimation[J]. arXiv preprint arXiv:1905.00401, 2019.
- 学习双目,推断单目:用于自监督单目深度估计的孪生网络
- 以色列特拉维夫大学 代码开源(还未放出)
- [11] Mukherjee A, Chakaborty S, Saha S K. Detection of loop closure in SLAM: A DeconvNet based approach[J]. Applied Soft Computing, 2019.
- 基于 DeconvNet 的 SLAM 闭环检测方法
- 印度贾达普大学 期刊:中科院二区,JCR Q1,IF 4.0
- [12] Huang K, Xiao J, Stachniss C. Accurate Direct Visual-Laser Odometry with Explicit Occlusion Handling and Plane Detection[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2019.
- 具有显式遮挡处理和平面检测的精确直接法视觉-激光里程计
- 国防科大
- [13] Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots[J]. Sensors, 2019, 19(10): 2251.
- 用于地面机器人的 RGBD-惯导轨迹估计与建图
- 上海科技大学 Google Scholar
- 代码开源 演示视频 期刊:开源,中科院三区,JCR Q2Q3
- [14] Xiong X, Chen W, Liu Z, et al. DS-VIO: Robust and Efficient Stereo Visual Inertial Odometry based on Dual Stage EKF[J]. arXiv preprint arXiv:1905.00684, 2019.
- DS-VIO:基于双阶段 EKF 的鲁棒高效双目视觉惯性里程计
- 哈工大 Google Scholar
- [15] Xing B Y, Pan F, Feng X X, et al. Autonomous Landing of a Micro Aerial Vehicle on a Moving Platform Using a Composite Landmark[J]. International Journal of Aerospace Engineering, 2019, 2019.
- 使用复合路标的在移动平台上自主着陆的微型飞行器
- 北京理工大学
- [16] Ozawa T, Nakajima Y, Saito H. Simultaneous 3D Tracking and Reconstruction of Multiple Moving Rigid Objects[C]//2019 12th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR). IEEE, 2019: 1-5.
- 多运动刚体运动三维跟踪与重建
- 日本庆应义塾大学 Google Scholar
- [17] Ens B, Lanir J, Tang A, et al. Revisiting Collaboration through Mixed Reality: The Evolution of Groupware[J]. International Journal of Human-Computer Studies, 2019.
- 通过混合现实重温协作:群件的发展
- 澳大利亚莫纳什大学,以色列海法大学,加拿大卡尔加里大学 期刊:中科院三区,JCR Q1Q2,IF 2.3
- [18] Song S, Yu F, Zeng A, et al. Semantic scene completion from a single depth image[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2017: 1746-1754.
- 从单张深度图像进行语义场景补全
- 普林斯顿大学 Google Scholar
- 代码开源 项目主页
- Wang H, Sridhar S, Huang J, et al. Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation[J]. arXiv preprint arXiv:1901.02970, 2019.
- A Survey of 3D Indoor Scene Synthesis[J]. Journal of Computer Science and Technology 34(3):594-608 · May 2019
- [19] Howard-Jenkins H, Li S, Prisacariu V. Thinking Outside the Box: Generation of Unconstrained 3D Room Layouts[C]//Asian Conference on Computer Vision. Springer, Cham, ACCV2018: 432-448.
- 跳出边界框的思考:无约束 3D 房间布局的生成
- 牛津大学
- [20] Qian Y, Ramalingam S, Elder J H. LS3D: Single-View Gestalt 3D Surface Reconstruction from Manhattan Line Segments[C]//Asian Conference on Computer Vision. Springer, Cham, ACCV 2018: 399-416.
- LS3D:基于曼哈顿线段的单视图格式塔三维表面重建
- 英国约克大学,美国犹他大学 Google Scholar 会议 ACCV:CCF 人工智能 C 类会议
- [21] Deng X, Mousavian A, Xiang Y, et al. PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking[J]. arXiv preprint arXiv:1905.09304, 2019.
- 一种用于6D目标姿态跟踪的 Rao-Blackwellized 粒子滤波器
- 英伟达,华盛顿大学,斯坦福大学 演示视频
- [1] Delmas P, Gee T. Stereo camera visual odometry for moving urban environments[J]. Integrated Computer-Aided Engineering, 2019 (Preprint): 1-14.
- 用于移动城市环境的双目里程计
- 奥克兰大学 期刊 Integrated Computer-Aided Engineering:中科院二区,JCR Q1,IF 3.667
- [2] Guo R, Zhou D, Peng K, et al. Plane Based Visual Odometry for Structural and Low-Texture Environments Using RGB-D Sensors[C]//2019 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2019: 1-4.
- 用于结构化低纹理的平面 RGB-D 视觉里程计
- 国防科大
- [3] Wang X, Xue F, Yan Z, Dong W, Wang Q, Zha H. Continuous-time Stereo Visual Odometry Based on Dynamics Model. 2019.
- 基于动力学模型的连续时间的双目视觉里程计
- 北京大学,上海交大
- [4] Strecke M, Stückler J. EM-Fusion: Dynamic Object-Level SLAM with Probabilistic Data Association[J]. arXiv preprint arXiv:1904.11781, 2019.
- 具有概率数据关联的动态物体级 SLAM
- 德国马克斯普朗克智能系统研究所 实验室主页
- Usenko V, Demmel N, Schubert D, et al. Visual-Inertial Mapping with Non-Linear Factor Recovery[J]. arXiv preprint arXiv:1904.06504, 2019.
- [5] Guclu O, Can A B. k-SLAM: A fast RGB-D SLAM approach for large indoor environments[J]. Computer Vision and Image Understanding, 2019.
- 大型室内环境的快速 RGB-D SLAM 方法
- 土耳其哈西德佩大学 JCR Q2,IF 2.776
- [6] Yokozuka M, Oishi S, Simon T, et al. VITAMIN-E: VIsual Tracking And Mapping with Extremely Dense Feature Points[J]. arXiv preprint arXiv:1904.10324, 2019.
- VITAMIN-E:使用极稠密特征点的视觉跟踪与建图
- 日本国家先进工业科学技术研究所
- [7] Zubizarreta J, Aguinaga I, Montiel J M M. Direct Sparse Mapping[J]. arXiv preprint arXiv:1904.06577, 2019.
- 直接法稀疏建图
- 西班牙萨拉戈萨大学 代码开源(还未放出)
- 作者 2018 年 ECCV 一篇文章:可变形贴图中 SLAM 的相机跟踪 Camera Tracking for SLAM in Deformable Maps
- [8] Feng G, Ma L, Tan X. Line Model-Based Drift Estimation Method for Indoor Monocular Localization[C]//2018 IEEE 88th Vehicular Technology Conference (VTC-Fall). IEEE, 2019: 1-5.
- 基于线模型的室内单目定位漂移估计方法
- 哈工大 VTC 无线通信会议,一年两届
- [9] Castro G, Nitsche M A, Pire T, et al. Efficient on-board Stereo SLAM through constrained-covisibility strategies[J]. Robotics and Autonomous Systems, 2019.
- 通过受约束的共视性策略实现高效的机载双目 SLAM
- 阿根廷布宜诺斯艾利斯大学博士 双目 PTAM 作者
- [10] Canovas B, Rombaut M, Nègre A, et al. A Coarse and Relevant 3D Representation for Fast and Lightweight RGB-D Mapping[C]//VISAPP 2019-International Conference on Computer Vision Theory and Applications. 2019.
- 用于快速轻量级 RGB-D 建图的粗略而相关的 3D 表示
- 格勒诺布尔计算机科学实验室
- [11] Ziquan Lan, Zi Jian Yew, Gim Hee Lee. Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes[C], ICRA 2019.
- 鲁棒的室外大场景点云重建
- 新加坡国立 代码开源(还未放出)
- [12] Shi T, Shen S, Gao X, et al. Visual Localization Using Sparse Semantic 3D Map[J]. arXiv preprint arXiv:1904.03803, 2019.
- 利用稀疏语义三维地图进行视觉定位
- 中国科学院自动化研究所模式识别国家重点实验室
- [13] Yang S, Kuang Z F, Cao Y P, et al. Probabilistic Projective Association and Semantic Guided Relocalization for Dense Reconstruction[C]//ICRA 2019.
- 稠密重建的概率投影关联和语义引导重定位
- 清华大学 谷歌学术
- [14] Xiao L, Wang J, Qiu X, et al. Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment[J]. Robotics and Autonomous Systems, 2019.
- Dynamic-SLAM:动态环境中基于深度学习的语义单目视觉定位与建图
- 中国科学院电子研究所传感器技术国家重点实验室 期刊 中科院三区 JCR Q2
- [15] Zhou L, Wang S, Ye J, et al. Do not Omit Local Minimizer: a Complete Solution for Pose Estimation from 3D Correspondences[J]. arXiv preprint arXiv:1904.01759, 2019.
- 不要忽略局部极小值:基于 3D 对应关系的位姿估计的完整解
- CMU
- [16] Miraldo P, Saha S, Ramalingam S. Minimal Solvers for Mini-Loop Closures in 3D Multi-Scan Alignment[C]. CVPR 2019.
- 三维多视角对齐中微型闭环的最小求解器
- 美国犹他大学
- ICRA 2019:POSEAMM: A Unified Framework for Solving Pose Problems using an Alternating Minimization Method
- [17] Piao J C, Kim S D. Real-time Visual-Inertial SLAM Based on Adaptive Keyframe Selection for Mobile AR Applications[J]. IEEE Transactions on Multimedia, 2019.
- 基于自适应关键帧选择的移动增强现实应用的实时视觉惯性 SLAM
- 中国延边大学,韩国延世大学 期刊 中科院二区,JCR Q2,IF 4.368
- [18] Puigvert J R, Krempel T, Fuhrmann A. Localization Service Using Sparse Visual Information Based on Recent Augmented Reality Platforms[C]//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 415-416.
- 基于最近增强现实平台的稀疏视觉信息定位服务
- Cologne Intelligence ISMAR:AR 领域顶级会议
- [19] Zillner J, Mendez E, Wagner D. Augmented Reality Remote Collaboration with Dense Reconstruction[C]//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 38-39.
- 具有稠密重建的增强现实远程协作
- DAQRI 智能眼镜:https://daqri.com/products/smart-glasses/ ISMAR:CCF 计算机图形学与多媒体 B 类会议
- [20] Grandi, Jerônimo & Debarba, Henrique & Maciel, Anderson. Characterizing Asymmetric Collaborative Interactions in Virtual and Augmented Realities. IEEE Conference on Virtual Reality and 3D User Interfaces. 2019.
- 表征虚拟现实和增强现实中的非对称协作交互
- 巴西南里奥格兰德联邦大学 演示视频
- [21] Chen Y S, Lin C Y. Virtual Object Replacement Based on Real Environments: Potential Application in Augmented Reality Systems[J]. Applied Sciences, 2019, 9(9): 1797.
- 基于真实环境的虚拟对象替换:在增强现实系统中的潜在应用
- 台湾科技大学 Applied Sciences 开源期刊
- [22] Ferraguti F, Pini F, Gale T, et al. Augmented reality based approach for on-line quality assessment of polished surfaces[J]. Robotics and Computer-Integrated Manufacturing, 2019, 59: 158-167.
- 基于增强现实的抛光表面在线质量评估方法
- 意大利摩德纳大学 中科院二区,JCR Q1,IF 4.031
- [23] Wang J, Liu H, Cong L, et al. CNN-MonoFusion: Online Monocular Dense Reconstruction Using Learned Depth from Single View[C]//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 57-62.
- CNN-MonoFusion:使用单视图学习深度的在线单目稠密重建
- 网易 AR 研究所 ISMAR:AR 领域顶级会议,CCF 计算机图形学与多媒体 B 类会议
- [24] He Z, Rosenberg K T, Perlin K. Exploring Configuration of Mixed Reality Spaces for Communication[C]//Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019: LBW0222.
- 探索混合现实空间的通信配置
- 纽约大学 CHI:CCF 人机交互与普适计算 A 类会议
- [25] von Stumberg L, Wenzel P, Khan Q, et al. GN-Net: The Gauss-Newton Loss for Deep Direct SLAM[J]. arXiv preprint arXiv:1904.11932, 2019.
- GN-Net:高斯牛顿损失的深度直接法 SLAM
- 慕尼黑工业大学 Google Scholar
- [26] Wang R, Yang N, Stueckler J, et al. DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation[J]. arXiv preprint arXiv:1904.10097, 2019.
- DirectShape:视觉车辆姿态形状估计的形状先验光度对准
- 慕尼黑工业大学 Google Scholar
- [27] Feng M, Hu S, Ang M, et al. 2D3D-MatchNet: Learning to Match Keypoints Across 2D Image and 3D Point Cloud[J]. arXiv preprint arXiv:1904.09742, 2019.
- 2D3D-MatchNet:学习匹配 2D 图像和 3D 点云的关键点
- 新加坡国立大学
- [28] Wei Y, Liu S, Zhao W, et al. Conditional Single-view Shape Generation for Multi-view Stereo Reconstruction[J]. arXiv preprint arXiv:1904.06699, 2019.
- 多视角立体重建的条件单视图外形生成
- 清华大学 代码开源
- [29] Behl A, Paschalidou D, Donné S, et al. Pointflownet: Learning representations for rigid motion estimation from point clouds[C]. CVPR 2019.
- PointFlowNet: learning representations for rigid motion estimation from point clouds
- University of Tübingen; code to be open-sourced (not yet released)
- [30] Xue F, Wang X, Li S, et al. Beyond Tracking: Selecting Memory and Refining Poses for Deep Visual Odometry[J]. arXiv preprint arXiv:1904.01892, 2019.
- Beyond tracking: selecting memory and refining poses for deep visual odometry
- Peking University
- [31] Hou J, Dai A, Nießner M. 3D-SIC: 3D Semantic Instance Completion for RGB-D Scans[J]. arXiv preprint arXiv:1904.12012, 2019.
- 3D semantic instance completion for RGB-D scans
- Technical University of Munich
- [32] Phalak A, Chen Z, Yi D, et al. DeepPerimeter: Indoor Boundary Estimation from Posed Monocular Sequences[J]. arXiv preprint arXiv:1904.11595, 2019.
- DeepPerimeter: indoor boundary estimation from posed monocular sequences
- Magic Leap; Google Scholar
- [33] Yang Z, Liu S, Hu H, et al. RepPoints: Point Set Representation for Object Detection[J]. arXiv preprint arXiv:1904.11490, 2019.
- RepPoints: point set representation for object detection
- Peking University
- [34] Jiang S, Xu T, Li J, et al. Foreground Feature Enhancement for Object Detection[J]. IEEE Access, 2019, 7: 49223-49231.
- Foreground feature enhancement for object detection
- Beijing Institute of Technology
- [35] Zakharov S, Shugurov I, Ilic S. DPOD: 6D Pose Object Detector and Refiner[J]. 2019.
- DPOD: 6D object pose detection and refinement
- Technical University of Munich; Siemens
- [36] Liu C, Yang Z, Xu F, et al. Image Generation from Bounding Box-represented Semantic Labels[J]. Computers & Graphics, 2019.
- Image generation from bounding-box-represented semantic labels
- Tsinghua University; Computers & Graphics: CAS Tier 4, JCR Q3, IF 1.352
- [37] Qiu Z, Yan F, Zhuang Y, et al. Outdoor Semantic Segmentation for UGVs Based on CNN and Fully Connected CRFs[J]. IEEE Sensors Journal, 2019.
- Outdoor semantic segmentation for UGVs based on CNNs and fully connected CRFs
- Dalian University of Technology; point-cloud processing code; IEEE Sensors Journal: CAS Tier 3, JCR Q2, IF 2.698
- [38] Ma X, Wang Z, Li H, et al. Accurate Monocular 3D Object Detection via Color-Embedded 3D Reconstruction for Autonomous Driving[J]. arXiv preprint arXiv:1903.11444, 2019.
- Accurate monocular 3D object detection via color-embedded 3D reconstruction for autonomous driving
- Dalian University of Technology
- [39] Sindagi V A, Zhou Y, Tuzel O. MVX-Net: Multimodal VoxelNet for 3D Object Detection[J]. arXiv preprint arXiv:1904.01649, 2019.
- MVX-Net: multimodal VoxelNet for 3D object detection
- Johns Hopkins University; personal homepage
- [40] Li J, Lee G H. USIP: Unsupervised Stable Interest Point Detection from 3D Point Clouds[J]. arXiv preprint arXiv:1904.00229, 2019.
- USIP: unsupervised stable interest point detection from 3D point clouds
- National University of Singapore; code to be open-sourced (not yet released)
- [41] Scheerlinck C, Rebecq H, Stoffregen T, et al. CED: Color event camera dataset[J]. arXiv preprint arXiv:1904.10772, CVPRW 2019.
- CED: a color event camera dataset
- University of Zurich; project page; Google Scholar
- Related event-based vision work: Event-based Vision: A Survey, 2019
- Focus Is All You Need: Loss Functions for Event-Based Vision, CVPR 2019
- [42] Stoffregen T, Gallego G, Drummond T, et al. Event-based motion segmentation by motion compensation[J]. arXiv preprint arXiv:1904.01293, 2019.
- Event-based motion segmentation by motion compensation
- Australian Centre for Robotic Vision; University of Zurich
- [43] Xiao Y, Ruan X, Chai J, et al. Online IMU Self-Calibration for Visual-Inertial Systems[J]. Sensors, 2019, 19(7): 1624.
- Online IMU self-calibration for visual-inertial systems
- Beijing University of Technology; Sensors (open-access journal)
- [44] Eckenhoff K, Geneva P, Huang G. Closed-form preintegration methods for graph-based visual–inertial navigation[J]. The International Journal of Robotics Research, 2018.
- Closed-form preintegration methods for graph-based visual-inertial navigation
- University of Delaware; open-source code
- [45] Joshi B, Rahman S, Kalaitzakis M, et al. Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain[J]. arXiv preprint arXiv:1904.02215, 2019.
- Experimental comparison of open-source visual-inertial state estimation algorithms in the underwater domain
- University of South Carolina, Columbia; Google Scholar
- [46] Xia L, Meng Q, Chi D, et al. An Optimized Tightly-Coupled VIO Design on the Basis of the Fused Point and Line Features for Patrol Robot Navigation[J]. Sensors, 2019, 19(9): 2004.
- A tightly coupled VIO fusing point and line features for patrol-robot navigation
- Northeast Electric Power University; Sensors (open-access journal)
- [47] Ye H, Chen Y, Liu M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping[J]. arXiv preprint arXiv:1904.06993, 2019.
- Tightly coupled 3D LiDAR-inertial odometry and mapping
- Hong Kong University of Science and Technology; Google Scholar
- Related: Focal Loss in 3D Object Detection[J]. IEEE Robotics and Automation Letters, 2019, 4(2): 1263-1270.
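The focal loss referenced above scales cross-entropy by a modulating factor (1 − p_t)^γ so that well-classified examples contribute little to the total loss. A minimal binary-case sketch (the defaults α = 0.25, γ = 2 follow the original RetinaNet formulation, not necessarily this LiDAR paper):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    p is the predicted probability of the positive class; y is 0 or 1.
    With gamma = 0 and alpha = 1 this reduces to plain cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

As γ grows, confident correct predictions (p_t near 1) are down-weighted more aggressively, which helps with the extreme foreground/background imbalance typical of dense 3D detection.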
- [48] Usenko V, Demmel N, Schubert D, et al. Visual-Inertial Mapping with Non-Linear Factor Recovery[J]. arXiv preprint arXiv:1904.06504, 2019.
- Visual-inertial mapping with non-linear factor recovery
- Technical University of Munich; Google Scholar
- [49] Qiu X, Zhang H, Fu W, et al. Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End[J]. Sensors, 2019, 19(8): 1941.
- Monocular visual-inertial odometry with an unbiased linear system model and a robust feature-tracking front end
- University of Toronto; Google Scholar; Sensors (open-access journal)
- [50] Liu Y, Knoll A, Chen G. A New Method for Atlanta World Frame Estimation[J]. arXiv preprint arXiv:1904.12717, 2019.
- A new method for Atlanta-world frame estimation
- Technical University of Munich
- [51] Zhao Y, Qi J, Zhang R. CBHE: Corner-based Building Height Estimation for Complex Street Scene Images[J]. arXiv preprint arXiv:1904.11128, 2019.
- CBHE: corner-based building height estimation for complex street-scene images
- University of Melbourne
- [1] Rambach J, Lesur P, Pagani A, et al. SlamCraft: Dense Planar RGB Monocular SLAM[C]. International Conference on Machine Vision Applications (MVA), 2019.
- [2] Liu C, Yang J, Ceylan D, et al. Planenet: Piece-wise planar reconstruction from a single rgb image[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2018: 2579-2588.
- PlaneNet: piece-wise planar reconstruction from a single RGB image
- Washington University in St. Louis; Google Scholar; open-source code on GitHub
- [3] Weng X, Kitani K. Monocular 3D Object Detection with Pseudo-LiDAR Point Cloud[J]. arXiv preprint arXiv:1903.09847, 2019.
- Monocular 3D object detection with a pseudo-LiDAR point cloud
- CMU; Google Scholar
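The pseudo-LiDAR idea above converts a predicted depth map into a 3D point cloud by back-projecting each pixel through the pinhole model x = (u − c_x)·z / f_x, y = (v − c_y)·z / f_y. A minimal sketch (plain nested lists stand in for image tensors; the function name is illustrative):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into camera-frame 3D points (pseudo-LiDAR).

    depth is a list of rows; depth[v][u] is the metric depth z at pixel (u, v).
    (fx, fy, cx, cy) are the pinhole intrinsics. Non-positive depths are skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue
            x = (u - cx) * z / fx   # pinhole inverse projection
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

The resulting (x, y, z) cloud can then be fed to any LiDAR-based 3D detector, which is the core trick of the pseudo-LiDAR line of work.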
- [4] Hassan M, Hemayed E. A Fast Linearly Back-End SLAM for Navigation Based on Monocular Camera[J]. International Journal of Civil Engineering and Technology, 2018: 627-645.
- Fast linear back-end optimization for monocular SLAM
- Fayoum University, Egypt
- [5] Chen B, Yuan D, Liu C, et al. Loop Closure Detection Based on Multi-Scale Deep Feature Fusion[J]. Applied Sciences, 2019, 9(6): 1120.
- Loop-closure detection based on multi-scale deep feature fusion
- School of Automation, Central South University
- [6] Ling Y, Shen S. Real-time dense mapping for online processing and navigation[J]. Journal of Field Robotics.
- Real-time dense mapping for online processing and navigation
- Shaojie Shen's group; open-source code on GitHub
- [7] Chen-Hsuan Lin, Oliver Wang, et al. Photometric Mesh Optimization for Video-Aligned 3D Object Reconstruction[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Photometric mesh optimization for video-aligned 3D object reconstruction
- CMU PhD student; personal homepage; open-source code on GitHub
- [8] Tang F, Li H, Wu Y. FMD Stereo SLAM: Fusing MVG and Direct Formulation Towards Accurate and Fast Stereo SLAM[J]. 2019.
- Fast, accurate stereo SLAM fusing multi-view geometry and a direct formulation
- Yihong Wu's group, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
- [9] Duff T, Kohn K, Leykin A, et al. PLMP-Point-Line Minimal Problems in Complete Multi-View Visibility[J]. arXiv preprint arXiv:1903.10008, 2019.
- PLMP: point-line minimal problems in complete multi-view visibility
- Georgia Institute of Technology
- [10] Lee S H, Civera J. Loosely-Coupled Semi-Direct Monocular SLAM[J]. IEEE Robotics and Automation Letters, 2019.
- Loosely coupled semi-direct monocular SLAM
- University of Zaragoza; Google Scholar; open-source code; demo video
- Lee S H, de Croon G. Stability-based scale estimation for monocular SLAM[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 780-787.
- Lee S H, Civera J. Closed-Form Optimal Triangulation Based on Angular Errors[J]. arXiv preprint arXiv:1903.09115, 2019.
- [12] Jinyu Li, Bangbang Yang, Danpeng Chen, Nan Wang, Guofeng Zhang*, Hujun Bao*. Survey and Evaluation of Monocular Visual-Inertial SLAM Algorithms for Augmented Reality[J]. Journal of Virtual Reality & Intelligent Hardware, 2019.
- Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality
- Prof. Guofeng Zhang's group; project page; GitHub evaluation tool
- [13] Pablo Speciale, Johannes L. Schönberger, Sing Bing Kang. Privacy Preserving Image-Based Localization[J]. 2019.
- [14] Li M, Zhang W, Shi Y, et al. Bionic Visual-based Data Conversion for SLAM[C]//2018 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2018: 1607-1612.
- Bionic-vision-based data conversion for SLAM
- Key Laboratory of Biomimetic Robots and Systems (Ministry of Education), Beijing Institute of Technology
- [15] Cheng J, Sun Y, Chi W, et al. An Accurate Localization Scheme for Mobile Robots Using Optical Flow in Dynamic Environments[C]//2018 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2018: 723-728.
- An accurate localization scheme for mobile robots using optical flow in dynamic environments
- The Chinese University of Hong Kong; lab homepage
- [16] Zichao Zhang, Davide Scaramuzza. Beyond Point Clouds: Fisher Information Field for Active Visual Localization[C]. IEEE International Conference on Robotics and Automation (ICRA), 2019.
- [17] Georges Younes, Daniel Asmar, John Zelek. A Unified Formulation for Visual Odometry[J]. arXiv preprint arXiv:1903.04253, 2019.
- A unified formulation for visual odometry
- University of Waterloo, Canada; American University of Beirut; Google Scholar
- Younes G, Asmar D, Shammas E, et al. Keyframe-based monocular SLAM: design, survey, and future directions[J]. Robotics and Autonomous Systems, 2017, 98: 67-88.
- 2018: FDMO: Feature Assisted Direct Monocular Odometry
- [1] Han L, Gao F, Zhou B, et al. FIESTA: Fast Incremental Euclidean Distance Fields for Online Motion Planning of Aerial Robots[J]. arXiv preprint arXiv:1903.02144, 2019.
- FIESTA: fast incremental Euclidean distance fields for real-time motion planning of aerial robots
- Shaojie Shen's group
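FIESTA maintains an incremental Euclidean distance field (ESDF) for planning. As a non-incremental point of reference, a brute-force EDF over a small occupancy grid (assuming at least one occupied cell; names are illustrative) can be sketched as:

```python
import math

def euclidean_distance_field(grid):
    """Brute-force EDF: distance (in cells) from every cell to the
    nearest occupied cell. grid is a list of rows; truthy = occupied.
    Assumes the grid contains at least one occupied cell.
    """
    occupied = [(i, j)
                for i, row in enumerate(grid)
                for j, v in enumerate(row) if v]
    # For each cell, take the minimum Euclidean distance over all obstacles.
    return [[min(math.hypot(i - oi, j - oj) for oi, oj in occupied)
             for j, _ in enumerate(row)]
            for i, row in enumerate(grid)]
```

This is O(cells × obstacles) per full rebuild; the point of incremental methods like FIESTA is to update only the cells whose nearest obstacle changed when the map changes.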
- [2] ICRA 2019: Multimodal Semantic SLAM with Probabilistic Data Association
- Multimodal semantic SLAM with probabilistic data association
- Marine Robotics Group, MIT
- [3] Zhang F, Rui T, Yang C, et al. LAP-SLAM: A Line-Assisted Point-Based Monocular VSLAM[J]. Electronics, 2019, 8(2): 243.
- LAP-SLAM: a line-assisted point-based monocular visual SLAM
- Army Engineering University of PLA
- [4] Zhang H, Jin L, Zhang H, et al. A Comparative Analysis of Visual-Inertial SLAM for Assisted Wayfinding of the Visually Impaired[C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 210-217.
- A comparative analysis of visual-inertial SLAM for assisted wayfinding of the visually impaired
- Virginia Commonwealth University
- [5] Chen Z, Liu L. Creating Navigable Space from Sparse Noisy Map Points[J]. arXiv preprint arXiv:1903.01503, 2019.
- Creating navigable space from sparse, noisy map points
- [6] Antigny N, Uchiyama H, Servières M, et al. Solving monocular visual odometry scale factor with adaptive step length estimates for pedestrians using handheld devices[J]. Sensors, 2019, 19(4): 953.
- Solving the monocular visual odometry scale factor with adaptive pedestrian step-length estimates on handheld devices; AR applications in urban environments
- IFSTTAR (French Institute of Science and Technology for Transport, Development and Networks); ResearchGate; YouTube
- [7] Zhou D, Dai Y, Li H. Ground Plane based Absolute Scale Estimation for Monocular Visual Odometry[J]. arXiv preprint arXiv:1903.00912, 2019.
- Ground-plane-based absolute scale estimation for monocular visual odometry
- Baidu
- [8] Duong N D, Kacete A, Soladie C, et al. Accurate Sparse Feature Regression Forest Learning for Real-Time Camera Relocalization[C]//2018 International Conference on 3D Vision (3DV). IEEE, 2018: 643-652.
- Real-time camera relocalization via sparse-feature regression-forest learning
- Video
- [9] Patra S, Gupta K, Ahmad F, et al. EGO-SLAM: A Robust Monocular SLAM for Egocentric Videos[C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 31-40.
- EGO-SLAM: robust monocular SLAM for egocentric videos
- Indian Institute of Technology
- [10] Rosinol A, Sattler T, Pollefeys M, et al. Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities[J]. arXiv preprint arXiv:1903.01067, 2019.
- Incremental visual-inertial 3D mesh generation with structural regularities
- MIT Laboratory for Information and Decision Systems (LIDS); project page
- [11] Wang Z. Structure from Motion with Higher-level Environment Representations[J]. 2019.
- Structure from motion with higher-level environment representations
- Australian National University; master's thesis
- [12] Vakhitov A, Lempitsky V. Learnable Line Segment Descriptor for Visual SLAM[J]. IEEE Access, 2019.
- A learnable line-segment descriptor for visual SLAM, built on ORB-SLAM2
- Samsung AI Center, Moscow
- [13] Grinvald M, Furrer F, Novkovic T, et al. Volumetric Instance-Aware Semantic Mapping and 3D Object Discovery[J]. arXiv preprint arXiv:1903.00268, 2019.
- Volumetric instance-aware semantic mapping and 3D object discovery, based on Mask R-CNN
- ETH Zurich
wuyanminmax[AT]gmail.com