Hello author, I found that during evaluation the MinkUNet model outputs from SPConv and TorchSparse++ differ (in artifact-p2 `evaluate.py`, the cosine similarity between the two models' outputs is approximately 0.81). I made sure every backend uses the same input point clouds. By contrast, the cosine similarity between the MinkowskiEngine and TorchSparse++ outputs is approximately 0.99. I am not very familiar with this field and may have made a naive mistake. Looking forward to your reply.

The relevant line in artifact-p2 `evaluate.py`:

```python
out = model(inputs["pts_input"])
```
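For reference, this is a minimal sketch of how I compute the cosine similarity between the outputs of two backends. The helper name `output_cosine_similarity` and the variable names `out_spconv` / `out_torchsparse` are my own; it assumes both outputs are per-point logit tensors of shape `(num_points, num_classes)` with rows in the same point order:

```python
import torch
import torch.nn.functional as F

def output_cosine_similarity(out_a: torch.Tensor, out_b: torch.Tensor) -> float:
    """Mean per-point cosine similarity between two backends' outputs.

    Assumes both tensors have shape (num_points, num_classes) and that the
    rows correspond to the same points in the same order.
    """
    assert out_a.shape == out_b.shape, "outputs must be aligned point-for-point"
    sim = F.cosine_similarity(out_a.float(), out_b.float(), dim=1)
    return sim.mean().item()

# Hypothetical usage: out_spconv / out_torchsparse are the `out` tensors
# captured from `out = model(inputs["pts_input"])` under each backend.
# print(output_cosine_similarity(out_spconv, out_torchsparse))
```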