Hello, I am trying to perform a simple projection of the LIDAR points into the frame of the first camera (CAMERA_01) without relying on the dgp library. This should be a simple chain of transformations from world coordinates -> camera coordinates -> image coordinates; however, the projection is not working as expected, and the LIDAR points are misaligned with the image. I am sharing the code below - any help or guidance would be much appreciated, thanks!
parseCalib - parses the calibration files to get the transformation matrices
import json

import numpy as np
from pyquaternion import Quaternion

def parseCalib(calib_files):
    calib_logs = []
    world_to_camera_transforms = []
    camera_intrinsic_matrices = []
    focal_lengths = []
    cy_vals = []
    for calib_file in calib_files:
        # Get filepath to calibration file
        calib_filepath = str(calib_file)
        # Store the log associated with this calibration file
        calib_log = calib_filepath[26:-58]
        calib_logs.append(calib_log)
        # Read the calibration JSON
        with open(calib_filepath, 'r') as file:
            data = json.load(file)
        camera_rotation = data['extrinsics'][1]['rotation']
        camera_translation = data['extrinsics'][1]['translation']
        camera_intrinsic = data['intrinsics'][1]

        # Build the world-to-camera transformation matrix
        # Rotation (quaternion -> 3x3 rotation matrix)
        quat = Quaternion(camera_rotation['qw'], camera_rotation['qx'],
                          camera_rotation['qy'], camera_rotation['qz'])
        world_to_camera_rotation_matrix = quat.rotation_matrix
        # Translation
        tvec = np.float32([camera_translation['x'],
                           camera_translation['y'],
                           camera_translation['z']])
        # Full 4x4 rigid transform
        world_to_camera_projection_matrix = np.eye(4)
        world_to_camera_projection_matrix[:3, :3] = world_to_camera_rotation_matrix
        world_to_camera_projection_matrix[:3, 3] = tvec

        # Build the 3x3 camera-to-image (intrinsic) matrix
        camera_to_image_projection_matrix = np.eye(3)
        camera_to_image_projection_matrix[0, 0] = camera_intrinsic['fx']
        camera_to_image_projection_matrix[1, 1] = camera_intrinsic['fy']
        camera_to_image_projection_matrix[0, 2] = camera_intrinsic['cx']
        camera_to_image_projection_matrix[1, 2] = camera_intrinsic['cy']
        camera_to_image_projection_matrix[0, 1] = camera_intrinsic['skew']

        # Append to lists
        world_to_camera_transforms.append(world_to_camera_projection_matrix)
        camera_intrinsic_matrices.append(camera_to_image_projection_matrix)
        focal_lengths.append(camera_intrinsic['fy'])
        cy_vals.append(camera_intrinsic['cy'])
    return (calib_logs, world_to_camera_transforms, camera_intrinsic_matrices,
            focal_lengths, cy_vals)
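One thing worth checking here: if the calibration extrinsics actually store the camera pose (camera-to-world) rather than the world-to-camera transform, the 4x4 matrix built above must be inverted before projecting points. A small helper sketching the closed-form inverse of a rigid transform (invert_rigid_transform is a name introduced here for illustration, not part of the original code):

    import numpy as np

    def invert_rigid_transform(T):
        """Invert a 4x4 rigid transform [R | t; 0 0 0 1] in closed form."""
        R = T[:3, :3]
        t = T[:3, 3]
        T_inv = np.eye(4)
        T_inv[:3, :3] = R.T        # the inverse of a rotation is its transpose
        T_inv[:3, 3] = -R.T @ t    # translation maps back through -R^T t
        return T_inv

The closed form avoids a general matrix inverse and stays exactly rigid, which `np.linalg.inv` only approximates numerically.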
projectPointcloudToImage - projects a 3D world point in homogeneous coordinates to the image
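The body of this function is not shown; a minimal sketch of the intended world -> camera -> image chain, matching the call site below and assuming the 4x4 world-to-camera transform and 3x3 intrinsics built by parseCalib (the return of (u, v, depth) arrays is an assumption made here for illustration):

    import numpy as np

    def projectPointcloudToImage(image, pointcloud, world_to_camera,
                                 intrinsics, img_height, img_width):
        """Project Nx3 (or wider) world points to pixel coordinates.

        Returns (u, v, depth) for points that land inside the image with
        positive depth; drawing them over `image` is left to the caller.
        """
        # Homogeneous world points, shape (N, 4); extra columns (e.g.
        # intensity) in the .npz payload are dropped
        n = pointcloud.shape[0]
        points_h = np.hstack([pointcloud[:, :3], np.ones((n, 1))])
        # World -> camera
        cam = (world_to_camera @ points_h.T)[:3, :]
        # Keep only points in front of the camera
        in_front = cam[2, :] > 0
        cam = cam[:, in_front]
        # Camera -> image, then perspective divide
        pix = intrinsics @ cam
        u = pix[0, :] / pix[2, :]
        v = pix[1, :] / pix[2, :]
        depth = cam[2, :]
        # Keep only points inside the image bounds
        valid = (u >= 0) & (u < img_width) & (v >= 0) & (v < img_height)
        return u[valid], v[valid], depth[valid]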
The data is being read and processed as follows:

import pathlib

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Filepath for data loading
root_data_path = '/mnt/media/ddad_train_val/'

# Read dataset labels and images, sorting each returned list alphabetically
depth_maps = sorted(pathlib.Path(root_data_path).glob("*/point_cloud/LIDAR/*.npz"))
images = sorted(pathlib.Path(root_data_path).glob("*/rgb/CAMERA_01/*.png"))
calib_files = sorted(pathlib.Path(root_data_path).glob("*/calibration/*.json"))
calib_logs, world_to_camera_transforms, camera_intrinsic_matrices, focal_lengths, cy_vals = \
    parseCalib(calib_files)

# Loop through the data
for index in range(0, 1):
    # Image
    image = Image.open(str(images[index]))
    img_width, img_height = image.size
    # Pointcloud
    pointcloud = np.load(str(depth_maps[index]))['data']
    print(pointcloud)
    print(str(depth_maps[index]), str(images[index]))
    # Data log and its index
    data_log = str(images[index])[26:32]
    log_index = calib_logs.index(data_log)
    # Project the pointcloud points to image coordinates
    projectPointcloudToImage(image, pointcloud, world_to_camera_transforms[log_index],
                             camera_intrinsic_matrices[log_index], img_height, img_width)
    # Visualization
    plt.figure()
    plt.imshow(image)
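To judge the alignment visually, the projected points can be overlaid on the image as a scatter plot colored by depth. The snippet below is self-contained, with dummy stand-ins for the real image and for the (u, v, depth) values a projection function would return:

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")            # headless backend so the sketch runs anywhere
    import matplotlib.pyplot as plt

    # Dummy stand-ins for the real image and projected points
    img_height, img_width = 480, 640
    image = np.zeros((img_height, img_width, 3), dtype=np.uint8)
    rng = np.random.default_rng(0)
    u = rng.uniform(0, img_width, 500)
    v = rng.uniform(0, img_height, 500)
    depth = rng.uniform(1.0, 80.0, 500)

    plt.figure()
    plt.imshow(image)
    plt.scatter(u, v, c=depth, s=1, cmap='jet')   # color each point by its depth
    plt.colorbar(label='depth [m]')
    plt.savefig('overlay.png')

With real data, a systematic shift of the scatter relative to image features usually points at an extrinsics-direction or frame-convention mistake rather than at the intrinsics.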