[Feature] Support 3D semantic segmentation demo #532
Conversation
This PR can be reviewed now, but there is currently no pre-trained PN++ weight in the model zoo, so it should be merged after I finish benchmarking (at least on ScanNet)?
Force-pushed from c56328a to c618be4
Codecov Report
```
@@            Coverage Diff             @@
##           master     #532      +/-   ##
==========================================
+ Coverage   50.69%   50.77%   +0.07%
==========================================
  Files         197      197
  Lines       14874    14923      +49
  Branches     2419     2426       +7
==========================================
+ Hits         7541     7577      +36
- Misses       6836     6851      +15
+ Partials      497      495       -2
```
docs/0_demo.md (outdated)

> The visualization results, including a point cloud and its predicted 3D segmentation mask, will be saved in `${OUT_DIR}/PCD_NAME`.
>
> Example on ScanNet data using the [PointNet++-SSG]() model:
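For reference, a demo script of the kind this PR adds would typically take a point cloud, a config, and a checkpoint. The following is only a hypothetical skeleton mirroring that structure; the argument names and defaults are assumptions, not the actual mmdet3d demo interface:

```python
# Hypothetical skeleton of a 3D point cloud segmentation demo script.
# Argument names and defaults are illustrative assumptions.
from argparse import ArgumentParser


def parse_args(argv=None):
    parser = ArgumentParser(description='3D point cloud segmentation demo')
    parser.add_argument('pcd', help='path to the input point cloud file')
    parser.add_argument('config', help='model config file')
    parser.add_argument('checkpoint', help='model checkpoint file')
    parser.add_argument('--out-dir', default='demo',
                        help='directory where visualization results are saved')
    return parser.parse_args(argv)


# Example invocation with dummy paths (no model is actually loaded here).
args = parse_args(['scene0000_00.bin', 'pn2_ssg_scannet.py', 'pn2_ssg.pth'])
```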
TODO: provide the link to the pretrained model.
Could the demo data for ScanNet be compressed further, e.g. by sampling half of all points?
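The suggested compression could be as simple as random subsampling. A minimal sketch, assuming the demo point cloud is a list of xyz+rgb tuples (the data format here is an assumption for illustration):

```python
import random

# Illustrative sketch: keep a random half of the points in a demo cloud.
# The (N, 6) xyz+rgb tuple format is an assumption, not the actual demo data.
random.seed(0)
points = [tuple(random.random() for _ in range(6)) for _ in range(10000)]

half = random.sample(points, len(points) // 2)  # drop 50% of the points
```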
This PR is also ready for review now.
Need to resolve conflicts before merge.
Force-pushed from 6493973 to b7978dd
Done.
* changing the onnxwrapper script
* gpu_issue
* Update wrapper.py
* Update wrapper.py
* Update runtime.txt
* Update runtime.txt
* Update wrapper.py
I added a demo script for 3D point cloud segmentation models following the previous demos. I tested it with my trained PN2_SSG weights and it works smoothly. The docs need to be finished after the PN2 segmentor PR is merged.
Overview of this PR:

* Expose functions in `mmdet3d/apis/inference.py` that were missed in previous PRs, e.g. `inference_multi_modality_detector()`.
* Refactor the `show_result_meshlab()` function, which previously handled both 3D single-modality and multi-modality detection. Its logic was not extensible, so I wrote three visualization functions: 3D detection visualization on points, 3D segmentation visualization on points, and 3D-to-2D projection visualization on images. `show_result_meshlab()` is still the interface for demos, but its inner logic now calls these three functions according to the `task` argument. For example, if the task is multi-modality detection, it calls 3D detection visualization on points and 3D-to-2D projection visualization on images. In the future, if we want a mono-3D demo @Tai-Wang, we can simply call the 3D-to-2D projection visualization on images. I believe `show_result_meshlab()` is much more extensible now.
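The task-based dispatch described above can be sketched as follows. This is only an illustrative outline; the function names, task strings, and return values are assumptions, not the exact mmdet3d API:

```python
# Illustrative sketch of dispatching to visualization functions by task.
# All names and task strings here are assumptions for illustration.


def vis_det_on_points(out_dir):
    """Visualize 3D detection results on the point cloud (stub)."""
    return f'{out_dir}/det_points'


def vis_seg_on_points(out_dir):
    """Visualize 3D segmentation results on the point cloud (stub)."""
    return f'{out_dir}/seg_points'


def vis_proj_on_images(out_dir):
    """Project 3D results to 2D and visualize on images (stub)."""
    return f'{out_dir}/proj_images'


def show_result_meshlab(out_dir, task):
    """Single demo interface: dispatch to vis functions by `task`."""
    if task == 'det':
        return [vis_det_on_points(out_dir)]
    if task == 'seg':
        return [vis_seg_on_points(out_dir)]
    if task == 'multi_modality-det':
        # Multi-modality detection needs both point and image views.
        return [vis_det_on_points(out_dir), vis_proj_on_images(out_dir)]
    if task == 'mono-det':
        # A future mono-3D demo could reuse only the image projection.
        return [vis_proj_on_images(out_dir)]
    raise ValueError(f'unsupported task: {task}')
```

The design point is that adding a new demo type means adding one branch (or one new visualization function) rather than rewriting a monolithic function.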