Multiple octomap views generated and octomap flipped #20
Thanks for this write-up! I will review this and get back to you.
@AustinDeric Thanks a lot. Looking forward to your comments.
Are the redundant octomaps all created at the same time, or sequentially? If the pose of the TSDF reconstruction volume changes after mapping starts, then it behaves as if objects in the scene have moved and might start mapping new voxels as occupied. If they're being drawn simultaneously then it's very mysterious: I haven't seen that behavior before!

(re: 1) The origins of the rays are supposed to be placed in a spherical pattern around the center of the volume. You should be able to control where they're created by moving the volume, though based on your picture it looks like they might not be related to the pose of the volume correctly. (edit: or the center of the pattern is at a fixed offset from the corner of the volume closest to the origin)

(re: 2, 3, and possibly 4) The motion planning problem is rather oddly constrained, as a workaround to prevent the robot from pointing the camera away from the volume to be reconstructed while moving between poses. It creates a point some distance in front of the camera (about 50 cm, offhand), then tries to keep that point within the boundary of the volume. Since the motion planning uses an RRT strategy, at best it finds a solution after a fairly long time, and at worst it completely fails to plan the motion and throws errors like the first few shown, though I'm not 100% sure what's going on with the last two. It should iterate through candidate poses, in order of which ones expose the most unknown voxels in the volume, until it finds a motion it can accomplish.
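For illustration, here is a minimal sketch of the spherical ray-origin pattern described above. The function name, the sampling strategy, and all parameters are assumptions for the sake of the example, not the actual yak/nbv_planner code:

```python
# Illustrative sketch (not the actual yak/nbv_planner code): placing ray
# origins on a sphere around the TSDF volume center, each ray aimed back
# toward the center, as described in the comment above.
import numpy as np

def spherical_ray_origins(volume_center, radius, n_rays):
    """Sample n_rays origins on a sphere around volume_center."""
    origins, directions = [], []
    for _ in range(n_rays):
        # Uniform direction on the unit sphere via a normalized Gaussian draw.
        v = np.random.normal(size=3)
        v /= np.linalg.norm(v)
        origins.append(volume_center + radius * v)
        directions.append(-v)  # point the ray at the volume center
    return np.array(origins), np.array(directions)

# If the pattern appears offset from the volume (as in the screenshot),
# it may be centered on the volume's corner rather than its center,
# i.e. missing a +0.5 * volume_size offset somewhere.
```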
Hi @schornakj. Thanks for your response.
Yes, there is a 2-3 second pause before these are created, so I wouldn't say they are all created at the same time. Why would the TSDF reconstruction volume move, though? I was not moving the interactive marker in RViz manually during this example.
So would it be correct to conclude that, in all cases, the pattern formed by the rays resembles the geometry of the object placed in front of the camera? I have gone through the KinectFusion paper and the UW CSE lecture slides. Any other suggestions that would help me build a deeper intuition for this process?
This confirms that the behavior I was seeing wasn't abnormal. I was also having cases where the motion could not be planned. After a while it would pick up candidate poses, and in some cases it would state that
Thanks again!
EDIT: On further observation, a few more warning messages.
I get the above error for all 64 candidates. It seems similar to my previous issue, but adding a try-catch block as before does not solve this error as it did previously. Here is one more warning.
There isn't any feedback from the TSDF reconstruction to the placement of the candidate poses and the casting of the rays. As a simplification I had assumed that if the camera got a chance to look at all the different parts of the object, then the TSDF surface would be reasonably complete. Currently poses are selected by randomizing pitch and yaw relative to some nominal center of the object to be explored, and then randomly varying the camera distance within a range. A constant number of rays are cast from each pose, and the poses are ranked by how many rays hit "unknown" voxels in the octomap. Basically it's trying to generate a variety of poses that are likely to be reachable and likely to expose new regions of the surface, but without having much specific knowledge of the characteristics of the surface.

You could try removing some of the constraints on the motion planner. The constraint that keeps the camera pointed at the object was added to solve a problem with using Iterative Closest Point to find the pose of a real camera from the current depth image and the surface in the TSDF volume: ICP gets lost if it can't find any part of a previously-seen surface in the current camera image. If you're providing it with current poses from tf then it shouldn't be as much of an issue. ICP might give weird results for a smooth simulated surface, but it's not something I've tried personally.

I'll get back to you on the other topics shortly.
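A minimal sketch of the pose-selection strategy described above (randomized pitch/yaw around a nominal center, randomized distance, ranking by rays that hit unknown voxels). The function names and the `count_unknown_hits` stand-in are assumptions, not the real nbv_planner API:

```python
# Illustrative sketch of next-best-view candidate generation and ranking,
# under the assumptions stated above.
import numpy as np

def sample_candidate_position(center, d_min, d_max):
    """Random pitch/yaw around a nominal object center, random distance."""
    yaw = np.random.uniform(-np.pi, np.pi)
    pitch = np.random.uniform(-np.pi / 2, np.pi / 2)
    dist = np.random.uniform(d_min, d_max)
    offset = dist * np.array([np.cos(pitch) * np.cos(yaw),
                              np.cos(pitch) * np.sin(yaw),
                              np.sin(pitch)])
    return center + offset  # camera position; orientation looks at center

def rank_poses(poses, count_unknown_hits, n_rays=100):
    """Rank poses by how many of n_rays cast from each one hit 'unknown'
    octomap voxels; count_unknown_hits is a stand-in for an octomap
    ray-cast query."""
    scores = [count_unknown_hits(p, n_rays) for p in poses]
    ranked = sorted(zip(scores, poses), key=lambda sp: -sp[0])
    return [p for _, p in ranked]
```

Note that nothing here looks at the reconstructed surface itself, which is the decoupling between TSDF reconstruction and pose placement mentioned above.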
Thanks a lot, @schornakj. This makes a lot of sense, especially recognizing that the TSDF reconstruction and the placement of poses are decoupled.
I should have used a clearer word than "weird". ICP is responsible for calculating the pose of the camera relative to the observed surface, and it does this by trying to align the current depth image with previously observed ones. For this to work well, the surface should have variations like corners, edges, curves, and rough textures. In your simulated world the objects are a perfectly smooth sphere and plane. I was thinking about what would happen if the camera could only see part of the sphere or the plane. I would expect that ICP would have difficulty finding the correct camera pose, since the problem of fitting a section of a plane to a flat surface or a piece of a sphere to a sphere is very ambiguous.
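This ambiguity is easy to reproduce in a small standalone experiment with Open3D (not part of this project). The sketch below registers two patches of a perfectly flat plane with ICP; the fitness comes out high for many different in-plane translations, so the recovered pose is not trustworthy, which is exactly the failure mode described above:

```python
# Standalone experiment (assumes Open3D is installed): ICP between two
# patches of a featureless plane. Any in-plane translation aligns about
# equally well, so the recovered transformation is arbitrary.
import numpy as np
import open3d as o3d

def plane_patch(n=2000):
    pts = np.zeros((n, 3))
    pts[:, :2] = np.random.uniform(-0.5, 0.5, size=(n, 2))  # z = 0 plane
    pc = o3d.geometry.PointCloud()
    pc.points = o3d.utility.Vector3dVector(pts)
    return pc

source, target = plane_patch(), plane_patch()

init = np.eye(4)
init[0, 3] = 0.10  # initial guess: slide the source 10 cm along the plane

result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.fitness)         # high "quality of fit" despite the slide...
print(result.transformation)  # ...so the estimated pose is not meaningful
```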
That's true; I had similar concerns when I was starting out with Gazebo. This paper suggests using 3D CAD models as a reference to align real-time data via ICP for a similar 3D reconstruction task. However, the behavior of ICP with Gazebo models is definitely questionable.
While I was analysing further, I realised a few things.
So regarding the problem with multiple octomaps: I observed that as I shift the interactive volume marker left and right, only the fake octomap moves in the direction the marker box is moved, which leads to the following artifact. The actual octomap does not move at all. Any clues as to why this could be happening?
I switched the sphere model for a dumpster, just for the sake of having a few more edges, corners, and color differences (I don't know if it's exactly helpful, though). This is how the raycasting looks, with the rays pointed towards the point cloud published on the
Note: I created a new issue for the lack of robot motion, as there seems to be some weird behavior regarding TF.
Hi!
So after solving this issue, I was able to run all the launch files/nodes and started to analyze the KinectFusion process. However, I am not able to understand why multiple octomaps in different views are being generated, and why one view has an octomap which is flipped.
I am adding the different steps of the process and the views in Gazebo and RViz so you can get a better visualisation of the problem.
After launching Gazebo and moveit_planning_execution.launch
Note: The octomap is straight and properly generated with respect to the object placed in Gazebo. The first octomap isn't generated by the YAK package; it was created separately to check whether the Kinect camera is working properly in simulation.
After
roslaunch yak launch_gazebo_robot.launch
Note: Multiple octomaps are generated in all 3 directions, even though there is just one sphere in the view, as shown in the earlier picture. Could this be because the tracking isn't working?
After
roslaunch nbv_planner octomap_mapping.launch
After
rosrun nbv_planner exploration_controller_node
Note: exploration_controller_node gives the following warning:
[ WARN] [1530883644.819044064, 925.199000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero. Setting to 1.0.
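For reference, that warning is emitted when a moveit_msgs/PositionConstraint arrives with its weight field left at the message default of 0.0. A hedged sketch of building the constraint with an explicit nonzero weight; the link name is taken from the warning, while the frame and region dimensions are illustrative assumptions:

```python
# Sketch: constructing a MoveIt position constraint with a nonzero weight,
# which is what the warning above is complaining about.
from moveit_msgs.msg import Constraints, PositionConstraint
from shape_msgs.msg import SolidPrimitive
from geometry_msgs.msg import Pose

pc = PositionConstraint()
pc.header.frame_id = "world"  # assumed planning frame
pc.link_name = "camera_depth_optical_frame"

# Constraint region: a box the constrained point must stay inside.
region = SolidPrimitive(type=SolidPrimitive.BOX, dimensions=[0.5, 0.5, 0.5])
pc.constraint_region.primitives.append(region)
region_pose = Pose()
region_pose.orientation.w = 1.0  # identity orientation
pc.constraint_region.primitive_poses.append(region_pose)

pc.weight = 1.0  # leaving this at the default 0.0 triggers the warning

constraints = Constraints(position_constraints=[pc])
```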
I apologise for the long message; I just wanted to describe everything in detail, as I am not able to pinpoint what exactly is causing this error.
Thanks in advance,
Aaditya Saraiya
@Levi-Armstrong