Question about training #66
Hi @typhoonlee, the default train config uses two 11GB GPUs. A pair of GTX 1080 Ti or a pair of RTX 2080 Ti should be fine. And yes, the training takes around a week. If you have more GPUs, you can set the
Hello, I'm sorry to disturb you. What is the difference between run.py, eval.py, and the offline package kitti_native_evaluation, and what is the relationship between them? What are their respective results? What should I do if I want to get experimental results for comparison with the paper?
Hi @typhoonlee, the printouts from run.py and eval.py are more or less internal/debug metrics related to this work. To compare with other works, kitti_native_evaluation is the right way to go. Printout from run.py and eval.py: Accuracy and Recall are at the point level, and the car class is split into front-view car and side-view car subclasses. kitti_native_evaluation: KITTI 3D and BEV object detection scores (mAP), as used in the paper. Hope this helps. Thank you.
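For reference, a minimal sketch of invoking the offline evaluator from Python. The binary path, argument order (ground-truth label directory, then results directory), and directory names are assumptions based on the commonly used kitti_native_evaluation tool; adjust them to your own build and data layout.

```python
import subprocess

def build_eval_command(gt_label_dir, result_dir,
                       binary="./kitti_native_evaluation/evaluate_object_3d_offline"):
    """Build the command line for the offline KITTI evaluator.

    NOTE: binary path and argument order are assumptions; check the
    usage of your compiled evaluate_object_3d_offline binary.
    """
    return [binary, gt_label_dir, result_dir]

cmd = build_eval_command("data/kitti/training/label_2", "results/car_auto_T3")
# subprocess.run(cmd, check=True)  # uncomment once the binary is compiled
print(cmd)
```

The evaluator then writes the 3D and BEV mAP scores that are comparable with the numbers reported in the paper.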
Hi @typhoonlee, those figures in the paper are drawn using OpenCV (boxes in the RGB image) and Open3D (point cloud with colors). Check the visualization case
Hi @typhoonlee, for the figures in the paper, we modified the visualization a bit to make it clean. To remove the "lines", you just need to delete the To remove the color of the points or add a custom color for each point, simply modify Not quite sure about the warning; it seems related to presetting the viewing angle, though. If it does not break the code, it might be fine. Sorry for the delayed reply. Thanks,
Thank you very much~
@WeijingShi what do the multiple lines from the bounding box signify?
Hi @typhoonlee, unfortunately, the pre-trained models detect cars and pedestrians separately. Checkpoints/car_auto_T3_train only detects cars.
Hi @abhigoku10, the lines are the graph connections to the node that outputs the box.
Thank you for your patience in answering! In the final prediction figures in the paper, why are cars drawn with green boxes, pedestrians with red, and cyclists with blue? How can I get the prediction results of all categories in one figure? Does that require retraining? Also, why does kitti_native_evaluation give only car results and no pedestrian or cyclist results, while the paper still reports 3D mAP for both? How did you obtain them?
Can you share the code for the result visualization? It will be very helpful to my study, thank you very much! |
Hi @typhoonlee, we provide The qualitative visualization is the combined result of a car detection model and a ped-cyl model. The visualization is drawn using Open3D. You can select the points within the detection box and set their color in The following function may be helpful: Point-GNN/dataset/kitti_dataset.py Line 143 in 48f3d79
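A minimal sketch of the point-selection-and-coloring step described above, using NumPy. For simplicity it assumes an axis-aligned box; Point-GNN's detection boxes are rotated, so a real implementation would first transform points into the box frame. The function name and the final Open3D hand-off are illustrative, not part of the repository.

```python
import numpy as np

def color_points_in_box(points, colors, box_min, box_max, box_color):
    """Paint the points that fall inside an axis-aligned box.

    points: (N, 3) xyz array; colors: (N, 3) RGB values in [0, 1].
    NOTE: axis-aligned boxes are a simplification of the rotated
    boxes Point-GNN predicts.
    """
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    colors = colors.copy()
    colors[inside] = box_color  # overwrite color of in-box points
    return colors, inside

pts = np.array([[0.5, 0.5, 0.5], [2.0, 2.0, 2.0]])
cols = np.zeros_like(pts)
cols, mask = color_points_in_box(pts, cols, np.zeros(3), np.ones(3), [1.0, 0.0, 0.0])
# The painted array can then be attached to an Open3D point cloud:
#   pcd.colors = o3d.utility.Vector3dVector(cols)
# before calling o3d.visualization.draw_geometries([pcd]).
```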
How do you get the visualization results after the two models are combined? |
Hi @typhoonlee, if offline visualization is fine for you, you can just combine the result files and read them as a label file for visualization: Point-GNN/dataset/kitti_dataset.py Line 703 in 48f3d79
Point-GNN/dataset/kitti_dataset.py Line 1286 in 48f3d79
If you want an online combination, it's not supported by the current code. You would need to rewrite run.py to load the two models together and run them at the same time.
Hi, thank you for your patient answer last time~
What type of GPU did you use during the training process, and how many GPUs did you use? Training is very slow on my machine.