-
Are you suggesting that applying the post-processing code to the ground-truth center heatmap and ground-truth center offsets gives these weird results?
-
Yes, you can see the ground-truth inputs in the other images seem to be correct; it's just the combination of it all to make the instance segmentation that seems to be wrong (i.e. the fragmentation occurring). The complete source can be found at https://github.com/5had3z/stereo-to-all/tree/feature/detr_head, and the code for displaying the instance output is shown below (`nnet_training/utilities/model_trainer.py`). I can't see any reason why this is happening — the instance code looks good to me — so I wanted to know if anyone else has had this issue before.

```python
@staticmethod
def show_instance(batch_data, nnet_outputs, batch_size, img_norm):
    plt.figure("Instance Prediction Estimation")
    seg_pred = torch.argmax(nnet_outputs['seg'], dim=1)
    for i in range(batch_size):
        plt.subplot(*ModelTrainer.col_maj_2_row_maj(3, batch_size, 3 * i + 1))
        plt.imshow(np.moveaxis(img_norm(batch_data['l_img'][i]).cpu().numpy(), 0, 2))
        plt.xlabel("Input Image")

        instance_gt, _ = get_instance_segmentation(
            batch_data['seg'][i],
            batch_data['center'][i].unsqueeze(0),
            batch_data['offset'][i].unsqueeze(0),
            CityScapesDataset.cityscapes_things, nms_kernel=7)
        plt.subplot(*ModelTrainer.col_maj_2_row_maj(3, batch_size, 3 * i + 2))
        plt.imshow(instance_gt.squeeze(0).cpu().numpy())
        plt.xlabel("Ground Truth Instances")

        instance_pred, _ = get_instance_segmentation(
            seg_pred[i].unsqueeze(0),
            nnet_outputs['center'][i].unsqueeze(0),
            nnet_outputs['offset'][i].unsqueeze(0),
            CityScapesDataset.cityscapes_things, nms_kernel=7)
        plt.subplot(*ModelTrainer.col_maj_2_row_maj(3, batch_size, 3 * i + 3))
        plt.imshow(instance_pred.squeeze(0).cpu().numpy())
        plt.xlabel("Predicted Instances")

    plt.suptitle("Instance Prediction versus Ground Truth")
    plt.show(block=False)
```
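For context, the grouping that `get_instance_segmentation` performs can be sketched roughly like this — a minimal, hypothetical NumPy reimplementation (not the repo's actual code, and the center-detection/NMS step is assumed to have already produced the center list): each foreground pixel adds its predicted offset to its own coordinate and is assigned to the nearest detected center.

```python
import numpy as np

def group_pixels(centers, offsets, foreground):
    """Assign each foreground pixel to the nearest instance center.

    centers:    (K, 2) array of (y, x) instance-center coordinates.
    offsets:    (2, H, W) array of per-pixel (dy, dx) vectors pointing
                toward the pixel's instance center.
    foreground: (H, W) boolean mask of 'thing' pixels.
    Returns an (H, W) map of instance ids (0 = background, 1..K).
    """
    h, w = foreground.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Location each pixel votes for after applying its offset.
    voted = np.stack([ys + offsets[0], xs + offsets[1]], axis=-1)  # (H, W, 2)
    # Distance from every voted location to every center: (H, W, K).
    dists = np.linalg.norm(voted[..., None, :] - centers[None, None], axis=-1)
    ids = np.argmin(dists, axis=-1) + 1  # nearest center, 1-indexed
    ids[~foreground] = 0
    return ids
```

Fragmentation like the one shown happens exactly when the voted locations inside one object scatter between several centers — i.e. when the offsets (or their scale) are inconsistent with the heatmap peaks.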
-
That's interesting, because I have tested my post-processing code with the GT center and offset and did not encounter your problems. I think there might be something wrong either in how you use the post-processing or in a bug elsewhere. Here is my code for your reference:

```python
from .instance_post_processing import find_instance_center, get_instance_segmentation, get_panoptic_segmentation


def _test_find_instance_center(ctr_hmp):
    return find_instance_center(ctr_hmp)


def _test_get_instance_segmentation(sem_seg, ctr_hmp, offsets, thing_list):
    return get_instance_segmentation(sem_seg, ctr_hmp, offsets, thing_list)


def _test_get_panoptic_segmentation(sem, ctr_hmp, offsets, thing_list, label_divisor, stuff_area, void_label):
    return get_panoptic_segmentation(sem, ctr_hmp, offsets, thing_list, label_divisor, stuff_area, void_label)


if __name__ == '__main__':
    import numpy as np

    from segmentation.data import build_default_dataset
    from segmentation.utils import (
        save_annotation, save_instance_annotation, save_center_image, save_panoptic_annotation)

    # Test panoptic dataset
    dataset_panoptic = build_default_dataset(
        'cityscapes_panoptic', './datasets/cityscapes', 'val', (1024, 2048), is_train=False)
    dataset_panoptic_dict = dataset_panoptic.__getitem__(2)

    semantic_tensor = dataset_panoptic_dict['semantic']
    ignore_map = semantic_tensor == dataset_panoptic.ignore_label
    # semantic_tensor[semantic_tensor == dataset_panoptic.ignore_label] = 0
    save_annotation(semantic_tensor.numpy(), '.', 'semantic',
                    add_colormap=True, colormap=dataset_panoptic.create_label_colormap())
    semantic_tensor[semantic_tensor == dataset_panoptic.ignore_label] = 0

    center_tensor = dataset_panoptic_dict['center']
    offset_tensor = dataset_panoptic_dict['offset']
    image_tensor = dataset_panoptic_dict['image']
    image_array = dataset_panoptic.reverse_transform(image_tensor)

    # Test center
    center_points = dataset_panoptic_dict['center_points']
    save_center_image(image_array, center_points, '.', 'gt_center')
    recovered_center_points = _test_find_instance_center(center_tensor.unsqueeze(0)).numpy().tolist()
    save_center_image(image_array, recovered_center_points, '.', 'recovered_center')

    # Test get instance segmentation
    instance, center = _test_get_instance_segmentation(semantic_tensor.unsqueeze(0),
                                                       center_tensor.unsqueeze(0),
                                                       offset_tensor.unsqueeze(0),
                                                       thing_list=dataset_panoptic.thing_list)
    recovered_ins_seg = instance.squeeze(0).numpy().astype('uint8')
    save_instance_annotation(recovered_ins_seg, '.', 'recovered_ins_seg')
    foreground = np.zeros_like(recovered_ins_seg)
    foreground[recovered_ins_seg > 0] = 1
    save_annotation(foreground, '.', 'foreground',
                    add_colormap=False, scale_values=True)

    # Test get panoptic segmentation
    panoptic, center = _test_get_panoptic_segmentation(semantic_tensor.unsqueeze(0),
                                                       center_tensor.unsqueeze(0),
                                                       offset_tensor.unsqueeze(0),
                                                       thing_list=dataset_panoptic.thing_list,
                                                       label_divisor=dataset_panoptic.label_divisor,
                                                       stuff_area=4096,
                                                       void_label=(dataset_panoptic.ignore_label *
                                                                   dataset_panoptic.label_divisor))
    recovered_pan_seg = panoptic.squeeze(0).numpy()
    print(recovered_pan_seg.dtype)

    # Convert panoptic to semantic
    pan_to_sem = recovered_pan_seg // dataset_panoptic.label_divisor
    save_annotation(pan_to_sem, '.', 'pan_to_sem',
                    add_colormap=True, colormap=dataset_panoptic.create_label_colormap())

    # Convert panoptic to instance
    ins_id = recovered_pan_seg % dataset_panoptic.label_divisor
    pan_to_ins = recovered_pan_seg.copy()
    pan_to_ins[ins_id == 0] = 0
    print(np.unique(recovered_pan_seg))
    print(np.unique(recovered_ins_seg))
    print(np.unique(pan_to_ins))
    save_instance_annotation(pan_to_ins, '.', 'pan_to_ins')

    recovered_pan_seg[ignore_map.numpy()] = dataset_panoptic.ignore_label * dataset_panoptic.label_divisor
    save_panoptic_annotation(recovered_pan_seg, '.', 'recovered_pan_seg',
                             label_divisor=dataset_panoptic.label_divisor,
                             colormap=dataset_panoptic.create_label_colormap())
```

You can try this code under the
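The `// label_divisor` and `% label_divisor` lines in the script above rely on the usual panoptic id encoding, which is worth spelling out. A small sketch (the `label_divisor` value of 1000 is an assumed example; the dataset object supplies the real one):

```python
# Panoptic ids pack the semantic class and the per-class instance index
# into one integer: panoptic_id = semantic_id * label_divisor + instance_id.
# "Stuff" pixels use instance_id 0, which is why the script zeroes out
# pan_to_ins where ins_id == 0.

def encode(semantic_id, instance_id, label_divisor=1000):
    return semantic_id * label_divisor + instance_id

def decode(panoptic_id, label_divisor=1000):
    # Inverse of encode: recover (semantic_id, instance_id).
    return panoptic_id // label_divisor, panoptic_id % label_divisor
```

For example, `encode(26, 3)` gives `26003`, and `decode(26003)` recovers `(26, 3)`.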
-
build_default_dataset is missing from master, so I'm just trying to get it working from build_test_loader_from_cfg (although it's giving me a hard time trying to do so...).
-
Okay, I found it: I forgot to rescale the xy values of the offset tensor when applying image augmentations. Reading through your code, I knew it couldn't be wrong, which is why I asked whether this has happened to anyone before, to maybe nudge me in the right direction. I guess I'm the only one to forget to rescale, haha.
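For anyone who hits the same bug: offsets are vectors in pixel units, so a spatial resize has to scale the vector components too, not just resample the grid. A minimal sketch of the scaling step (a hypothetical helper; the resampling of the grid itself is elided):

```python
import numpy as np

def rescale_offsets(offsets, scale_y, scale_x):
    """Scale the values of a (2, H, W) offset map after a resize.

    offsets[0] holds dy components, offsets[1] holds dx components.
    If the image is resized by (scale_y, scale_x), each offset vector
    must be multiplied by the same factors, otherwise pixels vote for
    center locations at the *old* scale and instances fragment.
    """
    out = offsets.copy()
    out[0] *= scale_y
    out[1] *= scale_x
    return out
```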
-
Has anyone had issues with the post-processing outputs from get_instance_segmentation() giving fragmented instances? Examples are shown below. You can see that the offsets and centers are reasonable, and this issue also occurs with the ground-truth data, so it's not an inference issue but rather something to do with the post-processing.