ValueError: size shape must match input shape. Input is 2D, size is 3 #1719

Closed
zhiweil20 opened this issue Jun 28, 2022 · 7 comments

@zhiweil20

I'm running into the same problem as #1048.

When my mask files are RGB, I get ValueError: size shape must match input shape. Input is 2D, size is 3. When I change the mask files to grayscale images, I get the same error as in issue #1048:

RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

My custom dataset is defined with

CLASSES = ('background', 'trachea')

PALETTE = [[0, 0, 0], [255,255,255]]
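
The dataset class is registered following the standard mmseg custom-dataset pattern, roughly like this (a sketch; the class name TracheaDataset and the file suffixes here are placeholders for my actual code):

```python
# rough sketch of the dataset definition; class name and suffixes are placeholders
from mmseg.datasets.builder import DATASETS
from mmseg.datasets.custom import CustomDataset


@DATASETS.register_module()
class TracheaDataset(CustomDataset):
    CLASSES = ('background', 'trachea')
    PALETTE = [[0, 0, 0], [255, 255, 255]]

    def __init__(self, **kwargs):
        super().__init__(img_suffix='.jpg', seg_map_suffix='.png', **kwargs)
```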

The last comment in issue #1048 says the problem was solved by making the mask pixel values equal to the label values.
What does this mean? How should I change my config file or my mask files?

@xiexinch (Collaborator)

Hi @zhiweil20,
The pixel values in the annotation files should be the class labels, not the color values.
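
As a quick check (a minimal sketch; the mask path below is just a placeholder), a correct annotation should be single channel and its unique values should be the label indices:

```python
import numpy as np
from PIL import Image

# placeholder path: point this at one of your SegmentationClass masks
mask = np.array(Image.open('data/VOCdevkit/VOC2012/SegmentationClass/example.png'))
print(mask.shape)       # should be (H, W), i.e. single channel, not (H, W, 3)
print(np.unique(mask))  # should be [0 1] for background/trachea, not [0 255] or RGB colors
```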

@zhiweil20 (Author)

> Hi @zhiweil20, the pixel values in the annotation files should be the class labels, not the color values.

Sorry, I don't really understand the dataset concepts. What do you mean by the annotation files?
My folder structure looks like this:
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│   ├── VOCdevkit
│   │   ├── VOC2012
│   │   │   ├── JPEGImages
│   │   │   ├── SegmentationClass
│   │   │   ├── ImageSets
│   │   │   │   ├── Segmentation

Which file should I modify?

@xiexinch (Collaborator)

It's the SegmentationClass folder.

ann_dir='SegmentationClass',
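
In the config it is referenced roughly like this (a sketch based on the standard VOC layout in configs/_base_/datasets/pascal_voc12.py; train_pipeline is defined elsewhere in the config, and the paths are assumptions about your setup):

```python
# sketch of the train dataset entry in the config; not a complete config file
dataset_type = 'PascalVOCDataset'
data_root = 'data/VOCdevkit/VOC2012'
train = dict(
    type=dataset_type,
    data_root=data_root,
    img_dir='JPEGImages',
    ann_dir='SegmentationClass',  # masks whose pixel values are class labels
    split='ImageSets/Segmentation/train.txt',
    pipeline=train_pipeline)
```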

@zhiweil20 (Author)

> It's the SegmentationClass folder.
>
> ann_dir='SegmentationClass',

But the SegmentationClass folder contains the segmentation label pictures. How can I change the picture files?

@xiexinch (Collaborator) commented Jun 28, 2022

I don't know what your custom dataset looks like exactly, so I can't give more specific advice.
You may need to write a small script that uses an image-processing package such as cv2 or PIL to read each mask picture and convert the color values to label values.
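
Something along these lines (a rough sketch; the paths and the assumption that white marks the trachea class are guesses about your data):

```python
import os

import numpy as np
from PIL import Image

# placeholder paths: RGB masks in, single-channel label masks out
src_dir = 'data/VOCdevkit/VOC2012/SegmentationClass_rgb'
dst_dir = 'data/VOCdevkit/VOC2012/SegmentationClass'
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    rgb = np.array(Image.open(os.path.join(src_dir, name)).convert('RGB'))
    # assumption: white [255, 255, 255] marks 'trachea', everything else is background
    label = np.all(rgb == 255, axis=-1).astype(np.uint8)  # 0 = background, 1 = trachea
    out_name = os.path.splitext(name)[0] + '.png'
    Image.fromarray(label).save(os.path.join(dst_dir, out_name))
```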

@zhiweil20 (Author)

> I don't know what your custom dataset looks like exactly, so I can't give more specific advice. You may need to write a small script that uses an image-processing package such as cv2 or PIL to read each mask picture and convert the color values to label values.

Sorry, my custom dataset follows the VOC style, but there is no "Annotations" folder containing XML files. Since pascal_voc12.py sets ann_dir = 'SegmentationClass', I think the mask pictures are the annotation files.
But now I get an error like this:

Traceback (most recent call last):
File "tools/train.py", line 242, in
main()
File "tools/train.py", line 238, in main
meta=meta)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/apis/train.py", line 194, in train_segmentor
runner.run(data_loaders, cfg.workflow)
File "/home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 134, in run
iter_runner(iter_loaders[i], **kwargs)
File "/home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 61, in train
outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)
File "/home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 75, in train_step
return self.module.train_step(*inputs[0], **kwargs[0])
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/segmentors/base.py", line 138, in train_step
losses = self(**data_batch)
File "/home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/segmentors/base.py", line 108, in forward
return self.forward_train(img, img_metas, **kwargs)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py", line 144, in forward_train
gt_semantic_seg)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py", line 88, in _decode_head_forward_train
self.train_cfg)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/decode_heads/decode_head.py", line 204, in forward_train
losses = self.losses(seg_logits, gt_semantic_seg)
File "/home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
return old_func(*args, **kwargs)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/decode_heads/decode_head.py", line 265, in losses
seg_logit, seg_label, ignore_index=self.ignore_index)
File "/home/ecnu-lzw/lzw/mmsegmentation/mmseg/models/losses/accuracy.py", line 49, in accuracy
correct = correct[:, target != ignore_index]
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1623448265233/work/c10/cuda/CUDACachingAllocator.cpp:1055 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f7353e80a22 in /home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: + 0x10ac3 (0x7f73540e2ac3 in /home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a7 (0x7f73540e4167 in /home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x54 (0x7f7353e6a5a4 in /home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #4: + 0xa2bb12 (0x7f73cd761b12 in /home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: + 0xa2bbb1 (0x7f73cd761bb1 in /home/ecnu-lzw/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

frame #23: __libc_start_main + 0xe7 (0x7f7408f62c87 in /lib/x86_64-linux-gnu/libc.so.6)

Aborted (core dumped)

Maybe something is wrong with my environment?

@zhiweil20 (Author)

> I don't know what your custom dataset looks like exactly, so I can't give more specific advice. You may need to write a small script that uses an image-processing package such as cv2 or PIL to read each mask picture and convert the color values to label values.

Thank you very much. I used cv2 to change the mask color values to label values, and now it trains. Thanks a lot!
