
Please upload files for Nuscenes #15

Open
kaxapatel opened this issue Jul 15, 2022 · 8 comments

Comments

@kaxapatel

Good work. Are you planning to upload the dataset_nuscenes and train_nuscenes files? I really want to test it.

@edwardzhou130
Owner

Thanks for your interest in our work!

You can find our dataloader for the nuScenes segmentation task in our PolarNet repo (https://github.com/edwardzhou130/PolarSeg/blob/master/dataloader/dataset_nuscenes.py). You will need to change the dataloader to load the panoptic nuScenes annotations rather than the semantic annotations.
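For anyone adapting that dataloader, here is a minimal sketch of the label-loading change. It assumes the nuscenes-devkit panoptic table can be queried with the lidar sample_data token the same way the lidarseg table is, and the names (nusc, data_path, lidar_sd_token, load_panoptic_labels) are placeholders rather than code from this repo:

import os
import numpy as np

def load_panoptic_labels(nusc, data_path, lidar_sd_token):
    # lidarseg annotations are .bin files of uint8 semantic ids;
    # panoptic annotations are .npz files holding one uint16 array under the 'data' key
    panoptic_filename = os.path.join(data_path, nusc.get('panoptic', lidar_sd_token)['filename'])
    panoptic = np.load(panoptic_filename)['data'].astype(np.int64).reshape((-1, 1))
    return panoptic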

Our panoptic segmentation results on nuScenes in the paper were trained and evaluated on our self-generated gt labels, which are different from the official gt released by nuScenes (published after CVPR 2021). For this reason we currently have no plans to release those scripts and weights. However, you can find reproduced results of our method in the Panoptic nuScenes paper.

@kaxapatel
Author

Thank you for your reply. My concern is regarding the preprocessing step. Can you upload the instance_preprocess_nuscenes.py file? I also want to point out that for the panoptic model you used some augmentation, so how can I use the code from PolarSeg? Can you guide me?

@edwardzhou130
Owner

The preprocessing for nuScenes is similar to SemanticKITTI. You will need to add a function like this

# module-level imports assumed by this method: errno, os, pickle, and numpy as np
def save_instance(self, out_dir, min_points=10):
    'instance data preparation'
    instance_dict = {label: [] for label in self.thing_list}
    for data_path in self.im_idx:
        print('process instance for:' + data_path)
        # get x,y,z,ref, semantic label and instance label
        raw_data = np.fromfile(data_path, dtype=np.float32).reshape((-1, 4))
        annotated_data = np.fromfile(data_path.replace('velodyne', 'labels')[:-3] + 'label',
                                     dtype=np.uint32).reshape((-1, 1))
        sem_data = annotated_data & 0xFFFF  # semantic label lives in the lower 16 bits
        sem_data = np.vectorize(self.learning_map.__getitem__)(sem_data)
        inst_data = annotated_data
        # instance mask: keep only points belonging to thing classes
        mask = np.zeros_like(sem_data, dtype=bool)
        for label in self.thing_list:
            mask[sem_data == label] = True
        # create unique instance list
        inst_label = inst_data[mask].squeeze()
        unique_label = np.unique(inst_label)
        num_inst = len(unique_label)
        inst_count = 0
        for inst in unique_label:
            # get instance index
            index = np.where(inst_data == inst)[0]
            # get semantic label
            class_label = sem_data[index[0]]
            # skip small instances
            if index.size < min_points:
                continue
            # save this instance's points as an individual .bin file
            _, dir2 = data_path.split('/sequences/', 1)
            new_save_dir = out_dir + '/sequences/' + dir2.replace('velodyne', 'instance')[:-4] + '_' + str(inst_count) + '.bin'
            if not os.path.exists(os.path.dirname(new_save_dir)):
                try:
                    os.makedirs(os.path.dirname(new_save_dir))
                except OSError as exc:
                    if exc.errno != errno.EEXIST:
                        raise
            inst_fea = raw_data[index]
            inst_fea.tofile(new_save_dir)
            instance_dict[int(class_label)].append(new_save_dir)
            inst_count += 1
    # save the lookup from class label to instance file paths
    with open(out_dir + '/instance_path.pkl', 'wb') as f:
        pickle.dump(instance_dict, f)

to save all instance point clouds in a database. The differences between nuScenes and SemanticKITTI are:

  1. The path to load and save the data.
  2. The class label: Panoptic nuScenes saves the gt as 1000*semantic + instance (see the sketch after this list).
  3. You probably also need to create a similar yaml file for nuScenes.
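To make point 2 concrete, here is a small, hedged sketch of how the panoptic nuScenes label could be split so it plugs into save_instance above; the function name and the learning_map argument are illustrative, not code from this repo:

import numpy as np

def split_panoptic_nuscenes(panoptic, learning_map):
    # Panoptic nuScenes encodes each point label as semantic_id * 1000 + instance_id
    sem_data = panoptic // 1000
    # the full panoptic value is already unique per instance, so it can play
    # the same role as inst_data (= annotated_data) in save_instance above
    inst_data = panoptic
    # remap raw nuScenes class ids to training ids, analogous to the SemanticKITTI yaml
    sem_data = np.vectorize(learning_map.__getitem__)(sem_data)
    return sem_data, inst_data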

@jwma0725

jwma0725 commented Jul 21, 2022

@edwardzhou130 Thanks for your work. Is the panoptic label you generated yourself in the same format as the 1000*semantic + instance encoding in nuScenes? And I also want to know if the pre-trained model is the same as for SemanticKITTI. Thanks again.

@edwardzhou130
Owner

@jwma0725 I used the same format as SemanticKITTI (the instance id is stored in the upper 16 bits). The model is trained separately for SemanticKITTI and nuScenes because their classes and data distributions are different. But you can use the pretrained model on one dataset as the pretrained weight for the other (I did not try this though).
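For reference, a minimal sketch of that SemanticKITTI-style packing, with the instance id in the upper 16 bits and the semantic class in the lower 16 bits; the helper names are assumptions, not functions from this repo:

import numpy as np

def encode_kitti_style(sem_data, inst_data):
    # lower 16 bits = semantic class, upper 16 bits = instance id
    return (inst_data.astype(np.uint32) << 16) | (sem_data.astype(np.uint32) & 0xFFFF)

def decode_kitti_style(label):
    # mirrors 'annotated_data & 0xFFFF' in save_instance above
    sem_data = label & 0xFFFF
    inst_data = label >> 16
    return sem_data, inst_data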

@jwma0725

@edwardzhou130 Thanks for your reply. I still have some questions, as follows:

  1. Can you tell me how to get the pre-trained model for nuScenes? Or are you planning to upload the pre-trained nuScenes model?
  2. Is there any difference between the “Panoptic_SemKITTI.pt” model and the “pretrained_weight/Panoptic_SemKITTI_PolarNet.pt” model?
    Thanks!

@edwardzhou130
Owner

  1. We have no plans to upload our previous model because it was trained and evaluated on a different annotation set than the official Panoptic nuScenes.
  2. pretrained_weight/Panoptic_SemKITTI_PolarNet.pt is the saved weight I got from running the training script, and it can be used to reproduce the results we reported in the paper (see the sketch below for how it might be loaded).
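In case it helps, a minimal, hedged sketch of loading that checkpoint; it assumes the .pt file stores a plain state_dict for the model, which may need adjusting depending on how the checkpoint was actually saved:

import torch

def load_pretrained(model, path='pretrained_weight/Panoptic_SemKITTI_PolarNet.pt'):
    # assumes the checkpoint holds a state_dict; map to CPU so it also works without a GPU
    state = torch.load(path, map_location='cpu')
    model.load_state_dict(state)
    return model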

@kaxapatel
Author

Thank you for your response @edwardzhou130. I am trying what you suggested, but I get this error when I run the dataloader:
[Screenshot from 2022-07-24 13-05-22]
I have changed lidarseg to panopticseg in line 124.
