
[Bug] Issues with visualization demos: multi_modality_demo_noann.py not running and lack of diverse environment data for inference #3031

Wangquans opened this issue Sep 5, 2024 · 0 comments

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

main branch https://github.com/open-mmlab/mmdetection3d

Environment

sys.platform: linux
Python: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 4090
CUDA_HOME: /usr/local/cuda-11.8
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.0.0+cu118
PyTorch compiling details: PyTorch built with:

  • GCC 9.3
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.8
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  • CuDNN 8.6
    • Built with CuDNN 8.7
  • Magma 2.6.1
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.15.1+cu118
OpenCV: 4.9.0
MMEngine: 0.10.4
MMDetection: 3.2.0
MMDetection3D: 1.3.0+5c0613b
spconv2.0: True

Reproduces the problem - code sample

# The script below is projects/BEVFusion/demo/multi_modality_demo_noann.py
# Copyright (c) OpenMMLab. All rights reserved.
from argparse import ArgumentParser

import mmcv

from mmdet3d.apis import inference_multi_modality_detector, init_model
from mmdet3d.registry import VISUALIZERS


def parse_args():
    parser = ArgumentParser()
    parser.add_argument('pcd', help='Point cloud file')
    parser.add_argument('img', help='image file')
    # parser.add_argument('ann', help='ann file')
    parser.add_argument('config', help='Config file')
    parser.add_argument('checkpoint', help='Checkpoint file')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--cam-type',
        type=str,
        default='CAM_FRONT',
        help='choose camera type to inference')
    parser.add_argument(
        '--score-thr', type=float, default=0.0, help='bbox score threshold')
    parser.add_argument(
        '--out-dir', type=str, default='demo', help='dir to save results')
    parser.add_argument(
        '--show',
        action='store_true',
        help='show online visualization results')
    parser.add_argument(
        '--snapshot',
        action='store_true',
        help='whether to save online visualization results')
    args = parser.parse_args()
    return args


def main(args):
    # build the model from a config file and a checkpoint file
    model = init_model(args.config, args.checkpoint, device=args.device)

    # init visualizer
    visualizer = VISUALIZERS.build(model.cfg.visualizer)
    visualizer.dataset_meta = model.dataset_meta

    # test a single image and point cloud sample (args.ann was removed;
    # an empty string is passed as the ann file instead)
    result, data = inference_multi_modality_detector(model, args.pcd,
                                                     args.img, '',
                                                     args.cam_type)
    points = data['inputs']['points']
    if isinstance(result.img_path, list):
        img = []
        for img_path in result.img_path:
            single_img = mmcv.imread(img_path)
            single_img = mmcv.imconvert(single_img, 'bgr', 'rgb')
            img.append(single_img)
    else:
        img = mmcv.imread(result.img_path)
        img = mmcv.imconvert(img, 'bgr', 'rgb')
    data_input = dict(points=points, img=img)

    # show the results
    visualizer.add_datasample(
        'result',
        data_input,
        data_sample=result,
        draw_gt=False,
        show=args.show,
        wait_time=-1,
        out_file=args.out_dir,
        pred_score_thr=args.score_thr,
        vis_task='multi-modality_det')


if __name__ == '__main__':
    args = parse_args()
    main(args)
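The only change relative to the official demo/multi_modality_demo.py is that the positional ann argument is commented out and an empty string is forwarded to inference_multi_modality_detector; that call is what fails in the traceback below, since the API (at least in mmdet3d 1.3.0) loads the ann file unconditionally. A minimal sketch of the call with a real infos pkl supplied instead ('demo_infos.pkl' is a hypothetical placeholder):

    # 'demo_infos.pkl' is a placeholder; any infos pkl describing this sample works.
    result, data = inference_multi_modality_detector(
        model, args.pcd, args.img, 'demo_infos.pkl', args.cam_type)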

Reproduces the problem - command or script

python projects/BEVFusion/demo/multi_modality_demo_noann.py \
    demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927647951.pcd.bin \
    demo/data/nuscenes/ \
    work_dirs/without_pretrained/nuscenes_lidar_cam/nuscenes_lidar_cam.py \
    work_dirs/without_pretrained/nuscenes_lidar_cam/epoch_6.pth \
    --cam-type all --score-thr 0.2 --show --snapshot

Reproduces the problem - error message

/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmdet/models/task_modules/builder.py:17: UserWarning: build_sampler would be deprecated soon, please use mmdet.registry.TASK_UTILS.build()
warnings.warn('build_sampler would be deprecated soon, please use '
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmdet/models/task_modules/builder.py:39: UserWarning: build_assigner would be deprecated soon, please use mmdet.registry.TASK_UTILS.build()
warnings.warn('build_assigner would be deprecated soon, please use '
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
09/05 23:37:17 - mmengine - INFO - Loads checkpoint by http backend from path: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth
Loads checkpoint by local backend from path: work_dirs/without_pretrained/nuscenes_lidar_cam/epoch_6.pth
/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the save_dir argument.
  warnings.warn(f'Failed to add {vis_backend.__class__}, '
Traceback (most recent call last):
  File "/home/robot/1.code/mmdetection3d/projects/BEVFusion/demo/multi_modality_demo_noann.py", line 78, in <module>
    main(args)
  File "/home/robot/1.code/mmdetection3d/projects/BEVFusion/demo/multi_modality_demo_noann.py", line 49, in main
    result, data = inference_multi_modality_detector(model, args.pcd, args.img, '',
  File "/home/robot/1.code/mmdetection3d/mmdet3d/apis/inference.py", line 233, in inference_multi_modality_detector
    data_list = mmengine.load(ann_file)['data_list']
  File "/home/robot/anaconda3/envs/cuda11.8-bev/lib/python3.9/site-packages/mmengine/fileio/io.py", line 832, in load
    raise TypeError(f'Unsupported format: {file_format}')
TypeError: Unsupported format:
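For reference, the failure is mechanical: mmengine.load() infers the parser from the file extension, and the empty string passed in place of args.ann has no extension, hence the blank 'Unsupported format:' message. A minimal repro outside the demo:

    import mmengine

    mmengine.load('')         # TypeError: Unsupported format:
    mmengine.load('a/b.pkl')  # picks the pickle handler (file must exist)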

Additional information

My issue:
I am looking for data from special environmental conditions (such as the rainy and night scenes in the nuScenes dataset) to test the performance of my improved model and strengthen my paper. For this I need the inference visualization program in the demo folder of mmdet3d, but it only provides a daytime scenario and lacks more diverse environments for testing, which would let me demonstrate my model's effectiveness more intuitively.

I designed my own program and successfully extracted night and rainy data from the nuScenes dataset (see the sketch below), and I also modified the create_data.py script to meet the pkl requirements. The images and point clouds cause no issues, but the resulting pkl files consistently fail to meet the requirements of the inference program. I need help figuring out what to do next: does mmdet3d provide a method for generating pkl files compatible with the demo folder's inference program?
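For context, this is roughly how the rainy/night samples can be selected (a sketch assuming the nuscenes-devkit is installed; nuScenes scene descriptions are free text containing tags such as 'rain' and 'night'):

    from nuscenes import NuScenes

    nusc = NuScenes(version='v1.0-trainval', dataroot='data/nuscenes')

    # Collect every sample token from scenes whose description mentions rain or night.
    tokens = []
    for scene in nusc.scene:
        desc = scene['description'].lower()
        if 'rain' in desc or 'night' in desc:
            tok = scene['first_sample_token']
            while tok:  # 'next' is an empty string on the last sample
                tokens.append(tok)
                tok = nusc.get('sample', tok)['next']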

The demo appears to be designed for a specific data format, and there is no clear documentation or example for creating compatible pkl files for diverse environmental conditions. I have attempted to modify the create_data.py script to generate pkl files suitable for the demo inference, but I consistently run into problems with the pkl structure. Clearer guidelines or examples for creating these pkl files would be extremely helpful; my best guess at the expected structure follows.
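Judging from the traceback, mmdet3d/apis/inference.py only consumes the data_list entries of the pkl, i.e. per sample the image path and calibration for each camera. A guessed minimal pkl builder (key names copied from the nuScenes infos that tools/create_data.py writes; the file names here are hypothetical and the matrices are placeholders, so verify against an officially generated infos file):

    import mmengine
    import numpy as np

    # Placeholder calibration; substitute the real matrices from the nuScenes devkit.
    cam2img = np.eye(3).tolist()    # 3x3 camera intrinsics
    lidar2cam = np.eye(4).tolist()  # 4x4 lidar-to-camera extrinsics

    info = dict(
        lidar_points=dict(
            lidar_path='my_sample__LIDAR_TOP.pcd.bin',  # hypothetical file name
            num_pts_feats=5),
        images=dict(
            CAM_FRONT=dict(
                img_path='my_sample__CAM_FRONT.jpg',  # hypothetical file name
                cam2img=cam2img,
                lidar2cam=lidar2cam),
            # ...one entry per camera when --cam-type all is used
        ))

    mmengine.dump(dict(metainfo=dict(), data_list=[info]), 'demo_infos.pkl')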

Additionally, I'm facing issues running the multi_modality_demo_noann.py script itself. Both problems relate to visualization, so I'd like to address them together.

My main questions are:

1. How can I generate appropriate pkl files for different environmental conditions (like rain and night) that are compatible with the demo inference program?
2. What might be causing the multi_modality_demo_noann.py script to fail, and how can I resolve this issue?

Any guidance on these visualization-related problems would be greatly appreciated, as they are crucial for demonstrating the effectiveness of my improved model across various environmental conditions.
