Significant sensor data timestamp accuracy regression from Gazebo2 to Gazebo4 #1748
Comments
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher).
Original comment by Steve Peters (Bitbucket: Steven Peters, GitHub: scpeters). We should make a simple test case. @chapulina, how hard would it be to write a visual plugin that renders the current simulation time so that it would be visible from a Gazebo camera?
Original comment by Louise Poubel (Bitbucket: chapulina, GitHub: chapulina). We could probably reuse some code from the DRC Finals drone demo, where floating text with the time was always visible to the Oculus camera. Not sure where that code is, though. We should check whether the rendered time really corresponds to the timestamp in the camera image; I guess it could be a bit behind, since it needs to wait for an update message.
Original comment by Steve Peters (Bitbucket: Steven Peters, GitHub: scpeters). I think these are the files from the drone demo:
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). Here's a direct visualization of point cloud data coming from the RGB-D sensor with a 10 s decay time. Without the filtering through the grid map, the severity of the issue becomes clearer: 10 instances of the wall in front of the robot, spread over a range of ~30 degrees, are visible in the projected sensor data. A test could be rotating the sensor in front of a pole and checking the resulting sensor data for consistency (e.g. a single pole being visible in the projected sensor data, as opposed to multiple poles).
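As a rough illustration of why a stamp error smears one wall into many: if an image carries a timestamp that is off by dt while the sensor yaws at rate omega, every projected point ends up rotated by about omega * dt. A minimal sketch with assumed numbers (none of these rates are stated in the thread):

```cpp
// Back-of-the-envelope: how far does a wall appear to shift if the image
// timestamp is off by dt while the sensor yaws at omega?
// All numbers here are assumptions for illustration.
#include <cmath>
#include <cstdio>

int main()
{
  const double omega = 1.0;   // sensor yaw rate [rad/s] (assumed)
  const double dt    = 0.05;  // timestamp error [s] (assumed)
  const double range = 2.0;   // distance to the wall [m] (assumed)

  const double angErr = omega * dt;               // angular offset [rad]
  const double latErr = range * std::tan(angErr); // lateral shift at the wall [m]

  std::printf("angular offset: %.1f deg, lateral shift: %.2f m\n",
              angErr * 180.0 / 3.14159265358979, latErr);
  return 0;
}
```

A sequence of scans, each stamped with a slightly different error while the sensor rotates, would then produce exactly the kind of multiple wall copies described above.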
Original comment by Nate Koenig (Bitbucket: Nathan Koenig). How does this test case sound:
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). Sounds good to me; such a test should indeed prevent similar regressions in the future. I assume the issue is present for both simulated depth and RGB cameras, but I have only tested depth so far.
Original comment by Steve Peters (Bitbucket: Steven Peters, GitHub: scpeters). For debugging, we could also render the simulation time as a
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). I just ran a quick test with a Camera Display showing the RGB image and backprojecting objects in rviz. They also shift around, confirming that the issue is present for RGB sensors as well: video. For reference, my experiments use a custom servo plugin that also publishes joint states (nowadays we'd use ros_control for that, but it's relatively old), and the sensor is instantiated via this xacro macro.
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). There have been a few good ideas on how to track this down, but no activity for 3 months. This is a major hindrance to properly simulating some robot systems for us, so a comment on the plans going forward would be nice.
Original comment by Ahmad Seyfi (Bitbucket: aseyfi). Hi, I am currently working on a project and this is a major issue for us. I am new to Gazebo, but if somebody sends me enough details on how this problem can be fixed, I would be more than happy to contribute.
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). I know @nkoenig looked into this a few weeks ago (because I sat right beside him ;) ). Can we have an update on the state of affairs?
Original comment by Samuel Martín Martínez (Bitbucket: samuelmartinm). Hi, this problem has also affected my project. I am working with a Gazebo plugin called flyingKinect, which is basically a Kinect that can be teleoperated. The idea is to extract planes from RGB-D data while the sensor moves. This already works in the static case: moving the Kinect, waiting a second or so, activating the plane extraction, stopping it, moving the sensor again, waiting a second, activating it again, and so on. At first I thought it was a problem in the plugin, so I added timestamps to the functions Gazebo calls to update the position and the camera in the plugin. When the Kinect starts moving, the position is sometimes updated sooner than the image, with no way to filter it out since both carry exactly the same timestamp. For example, in this image we see a Kinect in the middle of a square room, viewed from the top. The planes shown are the result of extracting them while the sensor spins around:
Original comment by Nate Koenig (Bitbucket: Nathan Koenig). I believe the problem is here. The aforementioned line should be moved to after this line.
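The linked lines are not preserved in this archive, but the pattern being pointed at is stamping the measurement before the render completes. A toy, self-contained illustration of why that ordering yields stale stamps (all names and numbers are illustrative, not Gazebo source):

```cpp
// Toy illustration of the stamp-before-render pattern discussed above.
#include <cstdio>

int main()
{
  double simTime = 0.0;
  const double step = 0.001;         // physics step size [s] (assumed)
  const int stepsDuringRender = 20;  // sim steps elapsing during a render (assumed)

  // Problematic ordering: latch the measurement time, then render.
  const double stampedEarly = simTime;
  simTime += stepsDuringRender * step;  // world advances while the render runs
  const double renderFinished = simTime;

  // Proposed ordering: stamp after the render completes.
  const double stampedAfter = simTime;

  std::printf("stamp before render: %.3f s (image reflects %.3f s, error %.3f s)\n",
              stampedEarly, renderFinished, renderFinished - stampedEarly);
  std::printf("stamp after render:  %.3f s (consistent)\n", stampedAfter);
  return 0;
}
```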
Original comment by Samuel Martín Martínez (Bitbucket: samuelmartinm). I just tried it, but it did not make a difference. These are images of Gazebo output running the flyingKinect plugin with some traces, which you can see here (the plugin is, by the way, part of a project from my university called JdeRobot, a software development suite for robotics and computer vision applications). The camera sensor runs at 10 Hz and the real-time update rate is 10 Hz too (I have also tried running it much faster, and it does not matter; I still sometimes get the position before the image). The first image is the output before changing that line and the second one is the output after the change. Both images show a single small change in the position of the camera. As can be seen in the green selections, the change affects LastMeasurementTime as expected: in the second image it is always equal to LastUpdateTime. But, as can be seen in the red selections, the position is sometimes still updated sooner than the image. It is precisely at those moments that I get erroneous measurements.
Original comment by Samuel Martín Martínez (Bitbucket: samuelmartinm). I forgot to say that DepthCameraSensor was the file I had to modify, since that is the sensor I am using; my Gazebo version is 5.3.0.
Original comment by Ian Chen (Bitbucket: Ian Chen, GitHub: iche033). I looked into this a little and started by writing a test in the camera_sensor_timestamp branch. The test moves a tall thin box horizontally across the view of the camera, analytically calculates the box position in the image over time, and compares the result with the actual camera images. Unfortunately, I have not been able to reproduce the case where the camera sensor generates images with incorrect timestamps yet. The test currently passes. I played with a few different variables, e.g. box movement speed, image resolution, and camera framerate, and it seems like the position of the box in the camera sensor images is always within 1-2 pixels of the one computed by hand. Here is my understanding of the camera sensor implementation in Gazebo, which influenced the expectations I wrote in the test:
I'll keep trying. If people have ideas on how the test can be improved, let me know too.
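For reference, the analytic expectation in a test like this reduces to a pinhole projection. A self-contained sketch with assumed camera parameters (not the actual code from the camera_sensor_timestamp branch):

```cpp
// Expected pixel column of a box moving along world y, seen by a fixed
// camera looking down +x. All parameter values are assumptions for
// illustration, not values from the Gazebo test.
#include <cmath>
#include <cstdio>

int main()
{
  const double hfov    = 1.047;  // horizontal field of view [rad] (assumed, ~60 deg)
  const int    width   = 320;    // image width [px] (assumed)
  const double camDist = 5.0;    // camera distance to the box's path [m] (assumed)
  const double speed   = 1.0;    // box velocity along world y [m/s] (assumed)

  // Focal length in pixels, derived from the horizontal FOV.
  const double fx = (width / 2.0) / std::tan(hfov / 2.0);

  // A box at lateral offset y projects to column u = cx - fx * y / camDist
  // (columns increase to the right, +y is to the camera's left).
  for (double t = 0.0; t <= 2.0; t += 0.5)
  {
    const double y = speed * t;
    const double u = width / 2.0 - fx * y / camDist;
    std::printf("t = %.1f s -> expected box column: %.1f px\n", t, u);
  }
  return 0;
}
```

If the image were stamped with the wrong time t, the measured column would disagree with this expectation by fx * speed * (stamp error) / camDist pixels, which is what the test is trying to catch.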
Original comment by Nate Koenig (Bitbucket: Nathan Koenig). Ian, could you try your test with the introduction of a sleep? See issue #1966.
Original comment by Nate Koenig (Bitbucket: Nathan Koenig).
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). This is becoming a blocker for switching to Kinetic/Gazebo7 for us, as the issue prevents proper environment/obstacle modeling via RGB-D cameras in simulation. @ianchen I'll try to provide a (somewhat minimal) test case demonstrating the problem in the coming days.
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). @ianchen @nkoenig Here's the promised minimal test setup: https://github.com/skohlbr/gazebo_camera_timestamp_issue It's a self-contained ROS package that allows reproducing the issue easily, as described in the README. Porting it to a pure Gazebo implementation/a failing test case that can be added to the Gazebo suite of tests should be fairly doable. The check would be to look for point cloud data from the sensor that is closer than 1.2 m. If such data is found, compute the closest point to the camera, transform it into the model frame, and check that its y coordinate is close to 0. If it is, everything is well; if it's not, the issue persists. These two videos demonstrate this: Gazebo7: the pole is offset from the x axis due to the (timestamp?) issue:
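A minimal sketch of that check, with simplified stand-in types rather than the actual ROS/tf API (the 1.2 m range comes from the description above; the y tolerance value is an assumption):

```cpp
// Consistency check: among points closer than maxRange to the camera,
// take the closest, transform it into the model frame, require |y| small.
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double x, y, z; };     // point in the camera frame
struct Pose2D { double x, y, yaw; };  // planar camera pose in the model frame

// Transform a camera-frame point into the model frame (planar case).
Point toModelFrame(const Point &p, const Pose2D &cam)
{
  const double c = std::cos(cam.yaw), s = std::sin(cam.yaw);
  return { cam.x + c * p.x - s * p.y,
           cam.y + s * p.x + c * p.y,
           p.z };
}

bool cloudConsistent(const std::vector<Point> &cloud, const Pose2D &cam,
                     double maxRange = 1.2, double tolY = 0.05 /* assumed */)
{
  const Point *closest = nullptr;
  double best = maxRange;
  for (const auto &p : cloud)
  {
    const double r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (r < best) { best = r; closest = &p; }
  }
  if (!closest)
    return true;  // nothing within range, so nothing to check

  return std::fabs(toModelFrame(*closest, cam).y) < tolY;
}

int main()
{
  // A point 1 m straight ahead of an unrotated camera lies on the model's
  // x axis, so the check passes; a laterally offset pole would fail it.
  const std::vector<Point> cloud = { {1.0, 0.0, 0.0} };
  std::printf("consistent: %s\n", cloudConsistent(cloud, {0, 0, 0}) ? "yes" : "no");
  return 0;
}
```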
Original comment by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher). @ianchen @nkoenig It's been more than 3 months since I provided a minimal working test example for this bug (albeit using ROS). It would be good to get a follow-up on this, since this appears to have been broken in all Gazebo versions after Gazebo2, i.e., for about 3 years.
Original comment by Nate Koenig (Bitbucket: Nathan Koenig). I agree, and sorry for the delay. We'll try to get this resolved as soon as possible.
Original comment by Nate Koenig (Bitbucket: Nathan Koenig).
Original comment by Ian Chen (Bitbucket: Ian Chen, GitHub: iche033). Related: there is an open pull request for issue 408.
Original comment by Nate Koenig (Bitbucket: Nathan Koenig).
Original comment by Ian Chen (Bitbucket: Ian Chen, GitHub: iche033). @Stefan_Kohlbrecher I tried out your minimal example and was able to reproduce the problem. Going forward, we should try and merge ros-simulation/gazebo_ros_pkgs#410, and I'll make another pull request to fix the timestamp issue for other rendering cameras.
Original comment by Ian Chen (Bitbucket: Ian Chen, GitHub: iche033). Added a camera sensor timestamp integration test in pull request #2642.
Original report (archived issue) by Stefan Kohlbrecher (Bitbucket: Stefan_Kohlbrecher).
It appears there is a (fairly significant) regression in how (camera) sensor data are timestamped. With Gazebo2, things work as would be expected from a simulator that "perfectly" timestamps camera data. With Gazebo4, we observe lots of artifacts that can be explained by timestamps for sensor data being off.
Here are two videos of a robot using a simulated RGB-D sensor to map the environment:
using Gazebo2
using Gazebo4
There have been absolutely no changes to the software apart from installing Gazebo4 (from .debs) and recompiling custom Gazebo plugins accordingly. It can clearly be seen that the obstacle map generated with Gazebo4 contains artifacts that are explainable by inconsistent timestamps between camera sensor data and joint data. Here are two screenshots highlighting the difference:
with Gazebo2, everything looks as expected:
with Gazebo4, artifacts consistently appear that cause spurious obstacles (circled red using my mad Gimp skills):
This is a subtle bug that will not have a major effect as long as simulated cameras are not moving too fast, but once they do, it introduces erroneous measurements that users do not expect. In our use case, for instance, the robot will occasionally fail to explore the environment because of the phantom obstacles introduced by the bug.
I haven't tested this specifically with Gazebo6 (or newer versions) yet, but I seem to remember seeing the same issue with Gazebo6.