
[Feature] Support 3D semantic segmentation demo #532

Merged
13 commits merged into open-mmlab:master on May 19, 2021

Conversation

Wuziyi616
Contributor

@Wuziyi616 Wuziyi616 commented May 10, 2021

I added a demo script for 3D point cloud segmentation models, following the previous demos. I tested it with my trained PN2_SSG weights and it works smoothly. The docs still need to be finished after the PN2 segmentor PR is merged.

Overview of this PR:

  • I compressed some test data files (mainly KITTI's png images). I understand this is unrelated to this PR, but I don't think it warrants a separate PR, so I pushed it here.
  • I added some unit tests for functions in mmdet3d/apis/inference.py, e.g. inference_multi_modality_detector(), which were missing from previous PRs.
  • Main feature: support the point cloud segmentation demo (PN++ on ScanNet). In the original implementation, the core visualization function is show_result_meshlab(), which handles both 3D single-modality detection and multi-modality detection, but its logic is hard to extend. So I wrote three visualization functions: 3D detection results on points, 3D segmentation results on points, and 3D boxes projected onto 2D images. show_result_meshlab() remains the interface for the demos, but internally it calls these three functions according to the task argument. For example, if the task is multi-modality detection, it calls the 3D detection visualization on points and the 3D-to-2D projection visualization on images. In the future, if we want a monocular 3D demo @Tai-Wang, we can simply call the 3D-to-2D projection visualization. I believe show_result_meshlab() is much more extensible now (see the sketch after this list).
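
A minimal sketch of the task-based dispatch described above (not the exact mmdet3d code; the three helper names, their signatures, and the task strings are assumptions based on this description):

```python
# Minimal sketch of the dispatch described above, NOT the exact mmdet3d
# implementation. Helper names, signatures, and task strings are assumptions.

def show_det_result_meshlab(data, result, out_dir):
    """Stub: visualize predicted 3D boxes on the point cloud."""

def show_seg_result_meshlab(data, result, out_dir):
    """Stub: color each point by its predicted semantic label."""

def show_proj_det_result_meshlab(data, result, out_dir):
    """Stub: project predicted 3D boxes onto the 2D image."""

def show_result_meshlab(data, result, out_dir, task='det'):
    """Single demo interface that dispatches to the visualization helpers."""
    assert task in ('det', 'seg', 'multi_modality-det', 'mono-det')
    if task in ('det', 'multi_modality-det'):
        show_det_result_meshlab(data, result, out_dir)        # 3D det on points
    if task == 'seg':
        show_seg_result_meshlab(data, result, out_dir)        # 3D seg on points
    if task in ('multi_modality-det', 'mono-det'):
        show_proj_det_result_meshlab(data, result, out_dir)   # 3D -> 2D on images
    return out_dir
```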

@Wuziyi616
Contributor Author

Wuziyi616 commented May 10, 2021

This PR can be reviewed now. However, there is currently no pre-trained PN++ weight in the model zoo, so it should probably be merged after I finish benchmarking (at least on ScanNet)?

@Wuziyi616 Wuziyi616 changed the title [Feature] Support 3D semantic segmentation demo [Feature] Support 3D semantic segmentation demo (Review this after PR#528 is merged) May 10, 2021
@Wuziyi616 Wuziyi616 added the WIP label May 11, 2021
@Wuziyi616 Wuziyi616 changed the title [Feature] Support 3D semantic segmentation demo (Review this after PR#528 is merged) [Feature] Support 3D semantic segmentation demo May 13, 2021
@Wuziyi616
Contributor Author

Wuziyi616 commented May 13, 2021

Demo results (PN2-SSG on ScanNet scene0000_00.bin):
[screenshot: scene0000_00_online]
PN2-MSG:
[screenshot: scene0000_00_online]

@codecov

codecov bot commented May 13, 2021

Codecov Report

Merging #532 (b7978dd) into master (eb77ff1) will increase coverage by 0.07%.
The diff coverage is 64.91%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #532      +/-   ##
==========================================
+ Coverage   50.69%   50.77%   +0.07%     
==========================================
  Files         197      197              
  Lines       14874    14923      +49     
  Branches     2419     2426       +7     
==========================================
+ Hits         7541     7577      +36     
- Misses       6836     6851      +15     
+ Partials      497      495       -2     
Flag Coverage Δ
unittests 50.77% <64.91%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmdet3d/apis/__init__.py 100.00% <ø> (ø)
mmdet3d/apis/inference.py 59.19% <64.91%> (+3.19%) ⬆️
mmdet3d/core/visualizer/show_result.py 66.95% <0.00%> (+2.60%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update eb77ff1...b7978dd. Read the comment docs.

@Wuziyi616 Wuziyi616 removed the WIP label May 13, 2021
@Wuziyi616 Wuziyi616 requested a review from Tai-Wang May 13, 2021 05:10
docs/0_demo.md Outdated

The visualization results, including the point cloud and its predicted 3D segmentation mask, will be saved in `${OUT_DIR}/PCD_NAME`.

Example on ScanNet data using [PointNet++-SSG]() model:
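
(The example itself is not included in this excerpt. As a hedged illustration only, running this demo through the Python API might look roughly like the sketch below; the `init_model` / `inference_segmentor` / `show_result_meshlab` names and all file paths are assumptions, not the documented example.)

```python
# Hedged sketch only -- not the actual docs/demo.md example. API names and
# all file paths below are assumptions.
from mmdet3d.apis import inference_segmentor, init_model, show_result_meshlab

config_file = 'configs/pointnet2/pointnet2_ssg_scannet_seg.py'  # assumed path
checkpoint_file = 'checkpoints/pointnet2_ssg_scannet.pth'       # assumed path
pcd_file = 'demo/data/scannet/scene0000_00.bin'                 # assumed path

model = init_model(config_file, checkpoint_file, device='cuda:0')
result, data = inference_segmentor(model, pcd_file)
# writes the colored point cloud to ${OUT_DIR}/PCD_NAME for viewing in MeshLab
show_result_meshlab(data, result, out_dir='demo_out', task='seg')
```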
Member
TODO: provide the link to the pretrained model.

mmdet3d/apis/inference.py Outdated
Member

@Tai-Wang Tai-Wang left a comment

Can the demo data for ScanNet be compressed further, e.g. by sampling half of all the points?
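
(For illustration, subsampling the demo point cloud could be as simple as the sketch below; the file name and the 6-value per-point layout are assumptions, not part of this PR.)

```python
# Hedged sketch of randomly keeping half of the demo points. The file name
# and the per-point layout (6 float32 values: xyz + rgb) are assumptions.
import numpy as np

points = np.fromfile('demo/data/scannet/scene0000_00.bin',
                     dtype=np.float32).reshape(-1, 6)
keep = np.random.choice(len(points), len(points) // 2, replace=False)
points[keep].tofile('demo/data/scannet/scene0000_00_half.bin')
```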

Contributor Author

@Wuziyi616 Wuziyi616 left a comment

This PR is also ready for review now.

@ZwwWayne
Collaborator

need to resolve conflicts before merge

@Wuziyi616
Contributor Author

need to resolve conflicts before merge

Done.

@ZwwWayne ZwwWayne merged commit 3bac800 into open-mmlab:master May 19, 2021
@Wuziyi616 Wuziyi616 deleted the seg_related_unittest branch May 22, 2021 09:04
tpoisonooo pushed a commit to tpoisonooo/mmdetection3d that referenced this pull request Sep 5, 2022
* changing the onnxwrapper script

* gpu_issue

* Update wrapper.py

* Update wrapper.py

* Update runtime.txt

* Update runtime.txt

* Update wrapper.py