Question about coordinate system between lidar and camera. #2
Hi @SubChange,
We are aware of the problem with the rotation and position of some objects. We believe it is related to a bug in CARLA (carla-simulator/carla#8226). We are currently working on resolving it. In the meantime, I recommend applying a convex hull or a similar method to the points of the objects and using that instead.
The purpose of the images is to provide a more understandable way to visualize the scene. However, the primary purpose of the dataset is for use with point clouds; the labels were not designed for images. There shouldn't be a calibration error, as the positions of the three sensors are the same. We are working on a new dataset that includes images, and I’ll notify you once it is published. I’ll also keep you updated on fixes for the current errors.
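One way to follow that suggestion is to fit a hull to the object's own points and derive a tight box from it, rather than trusting the stored cuboid pose. Below is a minimal sketch; the thread's code uses Open3D, but this sketch uses `scipy.spatial.ConvexHull` to keep it dependency-light, and `hull_box` is a hypothetical helper name, not part of the dataset's tooling.

```python
# Sketch: replace a misaligned cuboid label with a box fitted
# directly to the object's lidar points via a convex hull.
import numpy as np
from scipy.spatial import ConvexHull

def hull_box(points):
    """Given an (N, 3) array of object points, return the convex-hull
    vertices and a tight axis-aligned box (center, size) around them."""
    hull = ConvexHull(points)
    verts = points[hull.vertices]          # points on the hull surface
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    center = (lo + hi) / 2.0
    size = hi - lo
    return verts, center, size

# toy example: random interior points of a unit cube plus its 8 corners
rng = np.random.default_rng(0)
corners = np.array(np.meshgrid([0, 1], [0, 1], [0, 1])).T.reshape(-1, 3)
pts = np.vstack([rng.uniform(0, 1, (100, 3)), corners])
verts, center, size = hull_box(pts)
print(center, size)   # box centered at [0.5 0.5 0.5] with size [1. 1. 1.]
```

Open3D offers the same idea directly via `PointCloud.compute_convex_hull()` or `PointCloud.get_oriented_bounding_box()` if a rotated box is preferred.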
Hi,
Yes, I agree that this is a good way to correct the label accuracy. I will try it.
Though the primary purpose of the dataset is for use with point clouds, it would be much better with an accurate calibration matrix between the lidar coordinate system and the camera coordinate system. What do you mean by "there should be no calibration errors"? I list an example of a label file in the appendix below. According to the label file, from the extrinsics and intrinsics it can be seen that there is no translation between the lidar and camera coordinate systems.
However, based on the above settings, the labels still show a large deviation when projected onto the image.
Thank you very much. I'm looking forward to your new dataset release. Meanwhile, I'm also interested in the methods you used to generate this dataset. If there are any recommendations for your papers or open-source tools, please let me know. I believe it is also excellent work, and I am interested in it.
If you need me to provide any tests on this dataset, please feel free to contact me. I'm very glad to contribute to this dataset.
Appendix: part of the label JSON of the scene discussed above:
{
"openlabel": {
"metadata": {
"schema_version": "1.0.0",
"rec_file": "023_urban_01_01_00.log"
},
"coordinate_systems": {
"odom": {
"type": "scene_cs",
"parent": "",
"children": ["ego_train"]
},
"ego_train": {
"type": "local_cs",
"parent": "odom",
"children": ["tele15", "pandar64", "camera"],
"pose_wrt_parent": {
"matrix4x4": [1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
}
},
"tele15": {
"type": "sensor_cs",
"parent": "ego_train",
"children": [],
"pose_wrt_parent": {
"matrix4x4": [1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
}
},
"pandar64": {
"type": "sensor_cs",
"parent": "ego_train",
"children": [],
"pose_wrt_parent": {
"matrix4x4": [1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
}
},
"camera": {
"type": "sensor_cs",
"parent": "ego_train",
"children": [],
"pose_wrt_parent": {
"matrix4x4": [6.123233995736766e-17, 6.123233995736766e-17, 1.0, 0.0, -1.0, 3.749399456654644e-33, 6.123233995736766e-17, 0.0, 0.0, -1.0, 6.123233995736766e-17, 0.0, 0.0, 0.0, 0.0, 1.0]
}
}
},
"streams": {
"tele15": {
"description": "lidar",
"uri": "",
"type": "lidar",
"stream_properties": {
"intrinsics_custom": {
"lidar_type": "Risley_prism",
"name": "tele"
}
}
},
"pandar64": {
"description": "lidar",
"uri": "",
"type": "lidar",
"stream_properties": {
"intrinsics_custom": {
"lidar_type": "Surround",
"name": "pandar64"
}
}
},
"camera": {
"description": "pinhole_camera",
"uri": "",
"type": "camera",
"stream_properties": {
"intrinsics_pinhole": {
"width_px": 1232,
"height_px": 1028,
"camera_matrix_3x4": [747.90462424, 0.0, 623.79287665, 0.0, 0.0, 794.63662007, 498.12602172, 0.0, 0.0, 0.0, 1.0, 0.0],
"distortion_coeffs_1xN": []
}
}
}
}
}
}
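The calibration in the appendix can be read back programmatically. A minimal sketch follows, with the relevant JSON subset inlined so it runs standalone; note that the camera's `pose_wrt_parent` is a pure rotation with zero translation, matching the point under discussion.

```python
# Parse the camera calibration out of the OpenLABEL snippet above and
# inspect the extrinsic. The JSON subset is embedded as a literal so the
# sketch is self-contained.
import json
import numpy as np

label = json.loads("""
{"openlabel": {
  "coordinate_systems": {"camera": {"pose_wrt_parent": {
    "matrix4x4": [6.123233995736766e-17, 6.123233995736766e-17, 1.0, 0.0,
                  -1.0, 3.749399456654644e-33, 6.123233995736766e-17, 0.0,
                  0.0, -1.0, 6.123233995736766e-17, 0.0,
                  0.0, 0.0, 0.0, 1.0]}}},
  "streams": {"camera": {"stream_properties": {"intrinsics_pinhole": {
    "camera_matrix_3x4": [747.90462424, 0.0, 623.79287665, 0.0,
                          0.0, 794.63662007, 498.12602172, 0.0,
                          0.0, 0.0, 1.0, 0.0]}}}}}}
""")

cs = label["openlabel"]["coordinate_systems"]["camera"]
T_cam = np.array(cs["pose_wrt_parent"]["matrix4x4"]).reshape(4, 4)
K = np.array(label["openlabel"]["streams"]["camera"]["stream_properties"]
             ["intrinsics_pinhole"]["camera_matrix_3x4"]).reshape(3, 4)

R, t = T_cam[:3, :3], T_cam[:3, 3]
print("rotation orthonormal:", np.allclose(R @ R.T, np.eye(3)))
print("translation:", t)   # zero: the camera sits at the lidar origin
```

The tiny `6.12e-17` entries are numerical noise for cos(90°), so the rotation is an exact axis permutation; the zero translation is consistent with all three sensors sharing the same position.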
For my object detection research, I dived into this dataset. Currently, I can successfully parse the label files in `json` format. However, when I tried to relate the point clouds to the images, I found that the coordinate-system information in the `json` file is inaccurate. This has heavily hindered my study, so I would greatly appreciate any advice. I put the whole Python script at the bottom; anyone can run it directly. The code depends on `Open3D`, `opencv-python`, and `pyntcloud`. These three main dependencies are easy to install through `pip install xxx`.
The problems are as follows:
1. The `cuboid` label is in the format `(x, y, z, rx, ry, rz, sx, sy, sz)`, as commented in the Python code below. However, it has a wrong pose when I draw it in the Open3D viewer.
2. When projecting the `bboxes_3d` into the image, there is a large deviation between the actual object and the projected `bbox_2d`.

So I want to know if the calibration matrix is accurate enough or if there is a bug in my Python code. The following are some samples:
In this sample, the rotation of the label (`bbox_3d`) is not only around the `z-axis`. It can also be seen that the position of this label is not accurate, because it does not contain the target point cloud.
In this sample, though the position is accurate in the point cloud, it is not accurate after projecting it onto the image.
This sample shows the same problems as the two above.
There are many samples like the above. Therefore, I guess there are two possibilities: either the labels have mistakes, or my code has a wrong implementation. I really hope someone can give me some hints or tips. Thank you very much.
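For reference, the projection pipeline described above can be sketched as follows. The `(x, y, z, rx, ry, rz, sx, sy, sz)` unpacking follows the comment in the issue, but the Euler-angle order and the direction of `pose_wrt_parent` (here taken as camera-to-ego, so its inverse moves ego points into the camera frame) are assumptions rather than confirmed dataset conventions; `cuboid_corners` and `project` are hypothetical helper names.

```python
# Sketch: turn an OpenLABEL cuboid into 8 corners and project them
# through a pinhole camera. Euler order "xyz" is a guess; verify it
# against the dataset before relying on it.
import numpy as np
from scipy.spatial.transform import Rotation

def cuboid_corners(cuboid):
    x, y, z, rx, ry, rz, sx, sy, sz = cuboid
    # 8 corners of a box of size (sx, sy, sz) centered at the origin
    signs = np.array(np.meshgrid([-1, 1], [-1, 1], [-1, 1])).T.reshape(-1, 3)
    corners = signs * np.array([sx, sy, sz]) / 2.0
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()  # assumed order
    return corners @ R.T + np.array([x, y, z])

def project(points_ego, T_cam_wrt_ego, K):
    # move ego-frame points into the camera frame, then apply the pinhole model
    T = np.linalg.inv(T_cam_wrt_ego)
    pts_h = np.hstack([points_ego, np.ones((len(points_ego), 1))])
    cam = (T @ pts_h.T).T[:, :3]
    uvw = (K[:, :3] @ cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]   # pixel coordinates

# toy check with an identity extrinsic and a simple intrinsic matrix
K = np.array([[700.0, 0.0, 600.0, 0.0],
              [0.0, 700.0, 500.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
corners = cuboid_corners([0, 0, 10, 0, 0, 0, 2, 2, 2])
uv = project(corners, np.eye(4), K)
print(uv)   # 8 pixel points centered on the principal point (600, 500)
```

A common source of the deviation described above is using `pose_wrt_parent` directly instead of its inverse, or assuming the wrong Euler order; both are worth checking first.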
Python code: