We have been searching for a sophisticated registration method for point clouds, and we came across your solution. We need some help applying it to our case. Here is how we do registration and how we have tried to implement your solution.
We register point clouds captured with synchronized Azure Kinect cameras using a custom solution in which a calibration pattern seen by each camera pair is used. Since this procedure is time-consuming and requires a second person to adjust the pattern position during registration, we were looking for a better, more automated solution. After reading your paper and GitHub code, we felt your method could be applicable to our case. For training, we collected approximately 100 pairs of point clouds captured by the two cameras in 100 different positions. For each pair, we also computed the transformation matrix between the two clouds using our custom solution. Every point cloud contains only an isolated person, so each data point consists of one pair of point clouds and one transformation matrix. I trained on only this data with train_pointlk.py after making the necessary changes, but the result is not positive: the loss does not converge to zero.
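One thing worth double-checking before training is the convention of the ground-truth matrix: it must map the source cloud onto the target cloud in the direction the training code expects. Below is a minimal NumPy sketch of such a sanity check; the function name `apply_transform` and the synthetic data are our own illustration, not part of the repository.

```python
import numpy as np

def apply_transform(T, points):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    R, t = T[:3, :3], T[:3, 3]
    return points @ R.T + t

# Hypothetical check: if T is supposed to map the cam-2 cloud p1 onto the
# cam-1 cloud p0, then apply_transform(T, p1) should land (nearly) on p0.
rng = np.random.default_rng(0)
p0 = rng.uniform(-1.0, 1.0, size=(100, 3))   # stand-in for the cam-1 cloud
theta = np.deg2rad(30.0)
T = np.eye(4)
T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
             [np.sin(theta),  np.cos(theta), 0.0],
             [0.0,            0.0,           1.0]]
T[:3, 3] = [0.1, -0.2, 0.3]
p1 = apply_transform(np.linalg.inv(T), p0)   # simulate the cam-2 cloud
residual = np.abs(apply_transform(T, p1) - p0).max()
print(residual < 1e-9)
```

If this residual is not near zero on your real pairs (after accounting for sensor noise), the matrices may be inverted or use a different row/column convention than the training code assumes, which would keep the loss from converging.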
What we would like to learn from you is whether our approach is correct, and whether you have any other suggestions.
Cheers,
Hamit