[Feature] Support PAConv operation #598
Conversation
Codecov Report
@@ Coverage Diff @@
## master #598 +/- ##
==========================================
+ Coverage 50.92% 51.00% +0.07%
==========================================
Files 197 201 +4
Lines 15056 15220 +164
Branches 2444 2467 +23
==========================================
+ Hits 7668 7763 +95
- Misses 6884 6939 +55
- Partials 504 518 +14
It would be better if you read the paper before reviewing my code.
Implement the PAConv layer. PAConv is similar to `Conv2d` or an MLP, serving as a basic building block of a model. The basic idea of PAConv is simple: it dynamically generates convolutional kernels by assembling weight kernels from its weight bank. Every PAConv has a weight bank consisting of several (typically 16) trainable weight kernels, and a `ScoreNet` implemented by MLPs. Given a point pair (`p_c` is a center and `p_1` is its neighbor, queried by KNN), it uses `ScoreNet` to predict scores for this point pair, uses the scores to assemble the weight bank into one kernel, and then runs convolution with the assembled kernel on the point pair to get output features.

PAConv has two implementations, one called `PAConv` and the other called `PAConvCUDA`. The differences are:

- `PAConv` is easy to understand and can run on both CPU and GPU. Its inputs are features already grouped by KNN, so it only needs to compute the scores, assemble the weights, and produce the output.
- `PAConvCUDA` is trickier and can only run on GPU because it utilizes a custom CUDA op, `assign_score_withk`. Its inputs are features NOT grouped by KNN; the grouping happens on the fly inside the op (hard to describe). This saves memory because we don't need to pre-compute grouped KNN features, which have a very large tensor size (typically `(B, C, npoint, K)`).

The operation is somewhat complex and I am not sure if I should describe its detailed process in the comments. Also, the original code is messy; although I tried my best to add comments and make it clear, it may still not meet our standard. Please kindly help me improve it (e.g. some function names).
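To illustrate the score-based kernel assembly described above, here is a minimal NumPy sketch. All names and shapes are illustrative assumptions, not the actual mmdetection3d API; the ScoreNet is replaced by random softmax-normalized scores, and it follows the pre-grouped-input path of the `PAConv` variant:

```python
import numpy as np

rng = np.random.default_rng(0)

B, npoint, K = 2, 8, 4      # batch, center points, KNN neighbors per center
c_in, c_out, m = 6, 16, 16  # in/out channels, weight-bank size (typically 16)

# Weight bank: m trainable kernels, each mapping c_in -> c_out.
weight_bank = rng.standard_normal((m, c_in, c_out))

# Stand-in for ScoreNet output: one normalized score vector per point pair.
logits = rng.standard_normal((B, npoint, K, m))
scores = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Neighbor features already grouped by KNN (the `PAConv` variant's input).
grouped_feats = rng.standard_normal((B, npoint, K, c_in))

# Assemble one kernel per point pair as a score-weighted sum of the bank,
# then apply that kernel to the pair's features.
kernels = np.einsum('bnkm,mio->bnkio', scores, weight_bank)  # (B, npoint, K, c_in, c_out)
out = np.einsum('bnki,bnkio->bnko', grouped_feats, kernels)  # (B, npoint, K, c_out)

print(out.shape)  # (2, 8, 4, 16)
```

A pooling step over the `K` neighbor axis (e.g. max) would typically follow to produce one feature per center point, but that is outside the scope of this sketch.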
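To make the memory argument for `PAConvCUDA` concrete, a quick back-of-the-envelope comparison of the pre-grouped tensor against the ungrouped input (all shape values below are illustrative assumptions, not taken from this PR):

```python
# Size of pre-grouped KNN features (B, C, npoint, K) in float32, versus
# the ungrouped (B, C, N) features that the on-the-fly variant consumes.
B, C, npoint, K, N = 16, 64, 1024, 16, 4096  # assumed example shapes

grouped_bytes = B * C * npoint * K * 4    # float32 = 4 bytes per element
ungrouped_bytes = B * C * N * 4

print(grouped_bytes // 2**20, "MiB")    # 64 MiB
print(ungrouped_bytes // 2**20, "MiB")  # 16 MiB
```

With these example shapes the pre-grouped tensor is 4x larger, and the gap grows linearly with `K`, which is why skipping the pre-grouping saves memory.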