Image Delay when using Global Time Enabled #8003
Hi @r91andersson I looked at the details of your case very carefully. Could you please confirm whether you are using ROS, as the image of the dynamic_reconfigure settings interface above would suggest? If you are using ROS, the large timestamp drift when global_time is true would be consistent with a recent RealSense ROS case where this was found to be occurring. The fix was to try one of the actions in:
IntelRealSense/realsense-ros#1454 (comment) In regard to RGB blurring: could you please try setting RGB to 60 FPS instead of 30 and see whether it reduces the blurring.
@MartyG-RealSense Sorry, I totally forgot to mention that we're using ROS. So yes, I have been configuring the parameters with the command: I have been using initial_reset:=true in my launch file; however, global_time_enabled has always been left at its default (true). My requirement is that I need to know the exact time when the image was taken (in ROS time) to be able to do a lookup back in time and check the exact position where we were positioned. But won't we violate this requirement if we set global_time_enabled to false? I will re-run the test, but with the RGB stream set to 60 FPS.
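For reference, the parameter discussed above can also be toggled at runtime through dynamic_reconfigure. This is a hedged sketch, not taken from the thread: the `/camera` namespace and the sensor group names are assumptions based on a default realsense2_camera launch and may differ in your setup.

```shell
# Assumed default "camera" namespace; the per-sensor group names
# (rgb_camera, stereo_module) are typical but not guaranteed.
rosrun dynamic_reconfigure dynparam set /camera/rgb_camera global_time_enabled false
rosrun dynamic_reconfigure dynparam set /camera/stereo_module global_time_enabled false
```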
The global timestamp can provide correction of drift between the host computer and the camera. Doronhi, the RealSense ROS wrapper developer, provides advice about use of the global timestamp in ROS in the link below. IntelRealSense/realsense-ros#796 (comment) My understanding of Doronhi's advice is that, since the ROS wrapper provides its own correction for time drift, using the global timestamp may be less necessary in ROS than in librealsense. In regard to the blurring: in a past case, a RealSense team member suggested that 70 manual exposure and 6 FPS may reduce RGB blurring.
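At the librealsense level, disabling the global timestamp amounts to clearing a per-sensor option. The sketch below is ours, not from the thread: the helper is hypothetical and is shown with plain Python stand-ins (duck-typed like pyrealsense2 sensors, which expose `supports()` and `set_option()`) so it runs without a camera attached.

```python
def disable_global_time(sensors, option_id="global_time_enabled"):
    """Set the given option to 0 (off) on every sensor that supports it.

    `sensors` is any iterable of objects with pyrealsense2-style
    supports()/set_option() methods. Returns the names of changed sensors.
    """
    changed = []
    for sensor in sensors:
        if sensor.supports(option_id):
            sensor.set_option(option_id, 0)
            changed.append(sensor.name)
    return changed


class FakeSensor:
    """Minimal stand-in for a pyrealsense2 sensor, for illustration only."""
    def __init__(self, name, options):
        self.name = name
        self.options = dict(options)

    def supports(self, option_id):
        return option_id in self.options

    def set_option(self, option_id, value):
        self.options[option_id] = value


rgb = FakeSensor("RGB Camera", {"global_time_enabled": 1})
imu = FakeSensor("Motion Module", {})
print(disable_global_time([rgb, imu]))  # only the RGB sensor is changed
```

With a real device the same loop would iterate over `profile.get_device().query_sensors()` and pass `rs.option.global_time_enabled` as the option.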
Great! I guess you meant 70 manual exposure and 60 FPS (not 6?) to reduce blurring. Shall I open a new issue in the realsense-ros repo and reference this issue?
It was six FPS. :) The mathematics of FPS can be a bit complicated with manual exposure.
You can open a new ticket, though I will likely be handling it too, so handling it here should be fine.
Thanks! So when I set the manual exposure value to 250 (i.e. 25 ms), the theoretical maximum frame rate would be 1000 ms / 25 ms = 40 FPS. Since I've configured the camera's RGB stream to run at 30 FPS, the exposure time shouldn't be a problem. Ok, I just wanted to make sure that this issue got a ticket in the correct backlog! While waiting for this issue to be resolved, I will keep global_time_enabled set to false. Thanks again @MartyG-RealSense, I appreciate your customer service here on GitHub, always quick to reply and willing to help!!
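The exposure/FPS reasoning above can be checked with a couple of lines (assuming, as the comment implies, that an exposure setting of 250 corresponds to 25 ms of integration time):

```python
def max_fps_for_exposure(exposure_ms):
    """Upper bound on frame rate: the sensor cannot start a new frame
    before the previous exposure has finished."""
    return 1000.0 / exposure_ms

exposure_ms = 25.0  # exposure setting 250, read as 25 ms
theoretical_max = max_fps_for_exposure(exposure_ms)
configured_fps = 30
print(theoretical_max)                    # 40.0
print(configured_fps <= theoretical_max)  # True: 30 FPS fits within a 25 ms exposure
```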
You are very welcome. :) If you are satisfied with the outcome, feel free to close the issue with the Close Issue button under the comment box. Good luck with your project!
I can close this issue. However, I don't agree that we have solved it; rather, we have found a "quick fix". But as long as your intention is to solve the real issue behind this, I'm satisfied!
You can keep the case open if you are not satisfied yet. Could you please provide details about what you feel the continuing problem is?
I would like test run 1 and test run 2 (in the test results above) to show more or less exactly the same result. In my opinion, changing global_time_enabled between true and false shouldn't affect the result between the runs at all, since global_time_enabled is a function for keeping the host clock and the camera's HW clock in sync. It should definitely not add a ~200 ms delay to every image just because global_time_enabled is turned on.
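To put a number on why this matters for the rail test: a fixed timestamp delay turns into a position error that scales with rail speed. The sketch below uses the encoder calibration from the issue description; the 200 ms figure is the delay suspected above, not a measured constant.

```python
METERS_PER_TICK = 0.0000469398  # encoder calibration from the test setup

def position_error_m(speed_ticks_per_s, delay_s):
    """Position error caused by a constant timestamp offset at constant speed."""
    return speed_ticks_per_s * METERS_PER_TICK * delay_s

delay_s = 0.200  # ~200 ms suspected delay with global_time_enabled
print(round(position_error_m(1000, delay_s) * 1000, 2))  # ≈ 9.39 mm at speed 1000
print(round(position_error_m(2000, delay_s) * 1000, 2))  # ≈ 18.78 mm at speed 2000
```

Those millimetre-scale offsets are far larger than the worst-case 30 Hz sampling error, which is consistent with the pin being visibly displaced in the speed_1000 and speed_2000 images.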
If you would like to repeat the test yourself, I could give you the test script that we run, or you could just examine it.
I am not equipped to replicate your multi-cam test. I would recommend posting a question to the RealSense ROS GitHub after all, including the name-tag doronhi in the message to draw it to the attention of Doronhi, the RealSense ROS wrapper developer. https://github.com/IntelRealSense/realsense-ros/issues
I will close this specific case number as it is now being continued with Doronhi the RealSense ROS wrapper developer at IntelRealSense/realsense-ros#1581 - it will remain accessible for reading. |
Issue Description
When the parameter global_time_enabled is enabled, we experience a large delay in the retrieval of images. The timestamp, however, does not indicate that the image is as old as it actually is.
I've been testing and can confirm that the behavior is the same on both firmware 05.12.09.00 and 5.12.08.200.
Test Description:
We have a movable camera module (onto which the camera is attached) on a linear rail. The rail has a motor that can drive the whole camera module forward and backward at a constant speed, and a digital encoder mounted on the rail motor with a resolution of 0.0000469398 m/tick. In this way, we can save the absolute position of the camera module for each image taken.
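The encoder bookkeeping described above amounts to a simple linear conversion. This is a sketch; the function name is ours, not from the actual test script.

```python
METERS_PER_TICK = 0.0000469398  # encoder resolution of the rig

def ticks_to_meters(ticks):
    """Absolute rail position, in meters, for a raw encoder count."""
    return ticks * METERS_PER_TICK

# Each saved image is paired with the encoder count at capture time:
print(ticks_to_meters(6500))  # trigger position used in the test, ≈ 0.3051 m
```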
Test Procedure:
Step 1.
We drive the camera module to a known absolute encoder position equal to 6500. We refer to this position as encoder_trigger_image_position.
Step 2.
While the camera module is still at encoder_trigger_image_position, we take and save an image. This image will be referred to as our ground truth image.
Step 3.
We drive the camera module back to the initial position equal to 0.
Step 4.
We set the camera module in linear motion by setting the speed to 1000 ticks/s (= 0.0469398023 m/s).
Step 5.
Just as the camera module passes encoder_trigger_image_position, we grab the latest image and save it as speed_1000. We are aware that we can potentially miss one sample here, but since we are sampling the camera stream at 30 Hz, we shouldn't expect too much drift.
Step 6.
We repeat steps 3-5, but change the speed to 2000 ticks/s (= 0.0938796047 m/s).
Step 7
We do a visual inspection of the 3 images.
We want the other images to be as close as possible to the ground truth image.
We have a green pin sticking up from the soil, so we can easily see how close each image is to the ground truth by comparing the pin across the images.
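The speed figures in steps 4 and 6, and the worst-case sampling error at 30 Hz alluded to in step 5, work out as follows (a sketch that only restates the arithmetic already in the procedure):

```python
METERS_PER_TICK = 0.0000469398
FPS = 30.0

def speed_m_per_s(ticks_per_s):
    """Convert a commanded rail speed from ticks/s to m/s."""
    return ticks_per_s * METERS_PER_TICK

def worst_case_sampling_error_m(ticks_per_s):
    """If we grab the latest frame, the camera may have moved for up to
    one frame period (1/30 s) since that frame was actually captured."""
    return speed_m_per_s(ticks_per_s) / FPS

print(speed_m_per_s(1000))  # ≈ 0.0469398 m/s
print(speed_m_per_s(2000))  # ≈ 0.0938796 m/s
print(round(worst_case_sampling_error_m(1000) * 1000, 2))  # ≈ 1.56 mm per frame period
```

So even at speed 1000, missing a full frame period only moves the camera about 1.6 mm, which bounds the drift the procedure itself can introduce.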
Test Result:
Test run 1:
When we run the test with the parameter global_time_enabled enabled, we can see a huge drift in the images compared to the ground truth image.
Ground truth:
Speed 1000:
Speed 2000:
Test run 2:
When we run the test with the parameter global_time_enabled disabled, we see a much smaller drift in the images compared to the ground truth image.
Ground truth:
Speed 1000:
Speed 2000:
The difference is much more obvious if you download the images and flip through them quickly: look at the ground truth image, then switch to speed_1000, then finally to speed_2000.
The camera configuration we ran the setup with (the only difference between the runs is that we toggled Global Time Enabled):
One more thing: should the speed_2000 image really be that blurry? The exposure is set to 250, and we're driving at a speed of 0.0938796047 m/s.
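On the blurring question: the smear expected purely from motion during the exposure window can be estimated as speed times exposure time. This sketch assumes, as discussed earlier in the thread, that the 250 exposure setting corresponds to 25 ms.

```python
def motion_blur_m(speed_m_per_s, exposure_s):
    """Distance the camera travels while the shutter is open, which is
    roughly the length of the streak a stationary point leaves in the image."""
    return speed_m_per_s * exposure_s

exposure_s = 0.025  # exposure setting 250, read as 25 ms
print(round(motion_blur_m(0.0938796047, exposure_s) * 1000, 2))  # ≈ 2.35 mm at speed 2000
```

A couple of millimetres of travel during the exposure is small but, depending on the lens and working distance, could plausibly be visible on a thin target like the green pin.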