About the 2D backbone #7
Hi, sorry, it seems I didn't make it clear in the readme.
I use a similar procedure to step (3) to train a 2D backbone for the Waymo dataset. I can send you the relevant processing code and config file if needed. Best,
Thanks a lot for your reply! It is really clear!
Hi, I have sent them to your email.
Thanks! I received your email.
Sorry to bother you again. Sincere appreciation!
Hi,
Best,
Oh, I got it. I forgot to adopt the fade strategy for the last 5 epochs.
Sincerely,
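For reference, the fade strategy is usually implemented by removing the GT-paste augmentation from the training pipeline for the final epochs and resuming from the last pre-fade checkpoint. A rough mmdet3d-style config sketch (the `ObjectSample` transform is the usual name for GT sampling; paths and epoch counts here are placeholders, not this repo's actual settings):

```python
# Fade sketch: drop GT database sampling for the last 5 epochs.
# Assumes `train_pipeline` is defined earlier in the same config file,
# as in typical mmdet3d configs.
train_pipeline_fade = [
    step for step in train_pipeline
    if step['type'] != 'ObjectSample'  # remove GT-paste augmentation
]
data = dict(train=dict(pipeline=train_pipeline_fade))
load_from = 'work_dirs/transfusion_L/epoch_15.pth'  # last pre-fade checkpoint
runner = dict(type='EpochBasedRunner', max_epochs=5)  # the remaining epochs
```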
That is not normal; could you provide the full TP metrics, such as mATE, mAOE, and mASE?
You can get very bad mAOE and mASE if you use the newest version of mmdet3d to generate the .pkl files and then train TransFusion.
OK, I list the TP metric results below:
I think this might be the key to my problem! I created the nuScenes metadata with the newest release of mmdet3d, and only downgraded it after I noticed the version mismatch with the mmdet3d version used by the TransFusion repo.
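As a sanity check, it helps to confirm that the environment used to generate the .pkl files matches the training environment, since the info files are not portable across major mmdet3d versions (newer releases changed the box and coordinate conventions). A quick way to print the relevant versions on both machines:

```python
# Print the library versions involved in .pkl generation and training;
# run this in both environments and compare the output.
import torch
import mmcv
import mmdet
import mmdet3d

for name, mod in [('torch', torch), ('mmcv', mmcv),
                  ('mmdet', mmdet), ('mmdet3d', mmdet3d)]:
    print(f'{name}: {mod.__version__}')
```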
Nice discussion above! Hi @XuyangBai, I have a follow-up question regarding the training of the LC model. To load the TransFusion-L model when training the LC model, should we change the
Hi @YunzeMan, I usually use the following code to combine the pretrained TransFusion-L and the 2D backbone:
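A minimal sketch of such checkpoint merging (not the author's exact script), assuming the usual mmdet prefixes (`backbone.`, `neck.`) in the 2D checkpoint and mmdet3d's `img_backbone.`/`img_neck.` names in the LC model, with placeholder paths:

```python
import torch

# Copy the image branch of a 2D detector checkpoint into a TransFusion-L
# checkpoint. Key prefixes and file names are assumptions; adjust them to
# your actual checkpoints.
lidar = torch.load('transfusion_L.pth', map_location='cpu')
img = torch.load('resnet50_fpn_2d.pth', map_location='cpu')

merged = dict(lidar['state_dict'])
for k, v in img['state_dict'].items():
    if k.startswith('backbone.'):
        merged['img_backbone.' + k[len('backbone.'):]] = v
    elif k.startswith('neck.'):
        merged['img_neck.' + k[len('neck.'):]] = v

torch.save(dict(state_dict=merged), 'transfusion_L_plus_2d.pth')
```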
And then set the checkpoint path in the config accordingly.
Hi @XuyangBai @SxJyJay, it takes me 4 days to train TransFusion-L (8 V100 GPUs, epoch=20, samples_per_gpu=2), which seems too long. How long did you spend training TransFusion-L? Thanks!!
@WWW2323 About 2 days for me using 8 V100 GPUs.
Also about 2 days for me using 8 RTX3090 GPUs.
@XuyangBai Hi, I have finished the whole training process of TransFusion. I made no modifications except for replacing DLA-34 with ResNet50+FPN as you suggested. The final results on the nuScenes validation set are: Besides, I find that the mAP drop may be caused by a much lower AP on some classes, such as trailer, traffic cone, and barrier. I list the AP of my results (on the val set) vs. the reported results (on the test set) below: I don't know whether my results are within an acceptable error margin, or whether they are caused by the bias of different image backbones (i.e., DLA-34 vs. ResNet50+FPN).
Hi @SxJyJay, you can see the detailed results on the val set below.
I think it is within an acceptable error margin. The slightly worse performance might come from training variance. As for the gap between the validation and test sets, that is normal, because the two generally have different distributions. Also, you could try using more queries during inference to get a better result at the cost of longer inference time (see Table 13 in the supplementary). Besides, if you are using a different version
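On the query count: TransFusion selects its queries from heatmap peaks rather than learning fixed query embeddings, so the number can be raised at test time without retraining. A hedged override sketch; `pts_bbox_head` and `num_proposals` assume the repo's config layout, so verify them against your own config:

```python
# Inference-time override: evaluate a trained model with more object queries.
_base_ = ['./transfusion_nusc_voxel_LC.py']

model = dict(
    pts_bbox_head=dict(
        num_proposals=300,  # paper default is 200; more queries trade
    )                       # inference speed for recall
)
```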
Hello @XuyangBai, I want to use your results on the nuScenes validation set to run an object-tracking experiment, but I don't have enough computing power for training. I wonder if you could provide the JSON files of the validation set results? Here is my email: 304886938@qq.com. Looking forward to your reply!
Thank you. On the validation set, the performance I reproduced seems close to yours.
My problems are perfectly solved. Hence, I am closing this issue.
Hi, I also plan to train a 2D backbone for Waymo and nuScenes. Could you please send me the relevant code for training the 2D backbone? It would be very helpful! My email is xxlbigbrother@gmail.com
Hi, could you please share your environment (CUDA, PyTorch, MMCV, mmdet, mmdet3d)? I am training on 4 A100s and the displayed ETA is 20 days, which confuses me; I want to rule out the influence of the environment.
TorchVision: 0.9.0
Hi, my runtime environment is shown below:
Besides, I think you can check the time consumed on fetching data versus running one forward pass, to identify where the bottleneck is. Maybe your problem is caused by slow I/O.
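A minimal timing sketch, assuming `dataloader` and `model` have already been built (e.g. with mmdet3d's helpers) and that `forward_fn(batch)` wraps one forward pass with the batch already on the right device:

```python
import time
import torch

it = iter(dataloader)
for i in range(10):
    t0 = time.time()
    batch = next(it)           # time spent fetching/decoding a batch (I/O + CPU)
    t1 = time.time()
    with torch.no_grad():
        forward_fn(batch)      # time spent on the model forward
    torch.cuda.synchronize()   # CUDA is asynchronous; wait before timing
    t2 = time.time()
    print(f'batch {i}: data {t1 - t0:.3f}s, forward {t2 - t1:.3f}s')
```

The `data_time` entry that mmcv's runner already logs tells the same story: if it is close to the total iteration `time`, the dataloader (disk or CPU) is the bottleneck.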
Thanks for your reply! The strange thing is that my GPU utilization stays at 100% and basically does not jump back and forth. Does that mean the speed of the CPU loading data is normal?
@SxJyJay Hi, can you provide the trained TransFusion and TransFusion-L models? My reproduced results are 63.9 mAP (LiDAR) and 64.4 mAP (LiDAR+Camera), which is strange. Thanks so much!
@wzmsltw Hi, you can leave me your email, and I will send checkpoints to you.
@SxJyJay My email address is wzmsltw@gmail.com. Thanks so much for your help!
@SxJyJay Hi, when will you send the checkpoints? Really looking forward to it. Thanks again~
Sorry for the delay. I had something urgent yesterday. I have sent them to you!
Hi, I also plan to train a 2D backbone for Waymo and nuScenes. Could you please send me the relevant code for training the 2D backbone on these datasets (specifically Waymo), if it's not too much trouble? My email is xpydgqb@gmail.com
Hi, I'm also trying to reproduce TransFusion-L, but my mAP and NDS (60.34 & 66.46) are much lower than the author's. Could you please send me your training log of TransFusion-L? I notice an obvious drop in loss at epoch 16, when the fade strategy kicks in, in others' training, but mine shows no difference between with and without the fade strategy. Thank you! My mail is: kiki_jiang@sjtu.edu.cn
@JamesHao-ml @yangsijing1995 @wangyd-0312 @Young98CN @zzj403 @jqfromsjtu Hi, I have sent the checkpoints to you. Sorry for the late reply, as I just finished a deadline.
@xpyqiubai @xxlbigbrother @kuangpanda Hi, I have sent the data processing code for Waymo and KITTI to you. Sorry for the late reply.
Thanks!
@SxJyJay Hi SxJyJay, can you send the trained checkpoints on nuScenes to me? I need the trained TransFusion and TransFusion-L models as well as the relevant data processing code.
I have sent the relevant checkpoints and data processing code to your email.
Thank you very much!
Hi @SxJyJay, I have reproduced TransFusion-L with 65.4 mAP; however, my reproduced TransFusion-LC model can only achieve 65.6 mAP, which is a large gap from yours (67.25). Can you send me your training logs and checkpoints of both TransFusion-L and TransFusion-LC so I can check what went wrong? My email is hustminrui@126.com. Thank you!
Hi, I have sent you the relevant pretrained weights.
Thanks a lot!
@SxJyJay Hi SxJyJay, the results of my reproduced TransFusion-LC model are very low. Could you please send the trained checkpoints on nuScenes to me? I need the trained TransFusion and TransFusion-L models as well as the relevant data processing code. Thank you very much! My email is 982330532@qq.com
@SxJyJay Hi, could you send the checkpoints to me? I need the trained TransFusion-L,
@SxJyJay Hi, can you provide the trained TransFusion and TransFusion-L models?
Hi @SxJyJay, I am trying to reproduce TransFusion-L, but I can't reach the reported results.
I uploaded my reproduced checkpoints to Google Drive. You can access them using the following links:
Hi @SxJyJay, thank you very much!
@SxJyJay Thank you so much for your kind sharing!
@maokp @kuangpanda @cxd520314wang @SxJyJay I am interested in training a 2D backbone on the Waymo dataset. Could you share the relevant code and checkpoints with me at khoche@kth.se? Thanks in advance!
After commenting out that part, I get an error.
How did you solve that?
@xpyqiubai @xxlbigbrother @kuangpanda @SxJyJay I'm interested in training a 2D backbone on the Waymo dataset. Could you please share the relevant code and checkpoints (if possible) with gopi231091@gmail.com? Thank you very much!
This is fantastic, thank you so much for sharing!
Hello, thank you very much for sharing; this is very helpful for me, as I only have one GPU. I also want to inspect the parameters after training, so could you send me a TransFusion work_dir file? Thank you very much. gzr321654987@126.com
@xpyqiubai @xxlbigbrother @kuangpanda @SxJyJay Could you please provide the necessary code and any available checkpoints for training a 2D backbone on the Waymo dataset? If possible, send them to friendship1@dgist.ac.kr. Your assistance is greatly appreciated!
@SxJyJay Are these the checkpoints trained on Waymo? If not, could you also send the checkpoints to bk190196@gmail.com? Thank you!
Hi, I have some questions about training TransFusion-LC.
You mentioned in the supplementary materials that a 2D backbone pre-trained on autonomous driving datasets is required and is kept frozen while training TransFusion-LC (i.e., DLA-34 and ResNet-50 pre-trained on nuScenes and Waymo, respectively). However, I can't find the relevant pre-trained models in this repo's readme.md, or the relevant configuration entries in the config files (e.g., transfusion_nusc_voxel_LC.py). Or maybe you have provided them and I missed something important?
Could you please provide the relevant pre-trained 2D backbone models, or instructions for pre-training them? Thanks a lot!
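For context, "frozen" here typically just means the image branch receives no gradients. In plain PyTorch terms (the `img_backbone`/`img_neck` module names are assumptions based on mmdet3d conventions, not confirmed from this repo):

```python
# Freeze the pretrained image branch so only the LiDAR and fusion parts train.
for module in (model.img_backbone, model.img_neck):
    module.eval()                   # also keep BatchNorm statistics fixed
    for p in module.parameters():
        p.requires_grad = False
# Note: a full implementation would re-apply eval() inside the model's
# train() override, since the runner calls model.train() every epoch.
```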