
Image Delay when using Global Time Enabled #8003

Closed
r91andersson opened this issue Dec 15, 2020 · 15 comments

r91andersson commented Dec 15, 2020

| Required Info | |
|---|---|
| Camera Model | D435i |
| Firmware Version | 05.12.09.00 and 5.12.08.200 |
| Operating System & Version | Ubuntu 18.04 |
| Kernel Version (Linux Only) | Linux 4.9.140-tegra |
| Platform | NVIDIA Xavier AGX |
| SDK Version | v2.40.0 |
| Language | C++ |
| Segment | Robot |
| JetPack Version | v4.4 |
| L4T Version | L4T-32.4.3 |
| Backend / Camera Driver installation | Native kernel with the patch applied as in the tutorial, RSUSB=False |

Issue Description

When the global_time_enabled parameter is enabled, we're experiencing a large delay in the retrieval of images. The timestamp, however, does not reflect how old the image actually is.
I've been testing and can confirm that the behaviour is the same for both firmware 05.12.09.00 and 5.12.08.200.

Test Description:

We have a movable camera module (to which the camera is attached) on a linear rail. The rail has a motor that drives the whole camera module forward and backward at a constant speed, and a digital encoder mounted on the rail motor with a resolution of 0.0000469398 m/tick. This way we can save the absolute position of the camera module for each image taken.
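
For clarity, this is roughly the tick-to-metre conversion involved (a minimal sketch using the constants above; not our actual test script):

```cpp
// Minimal sketch: convert encoder ticks to metres so each image can be
// tagged with an absolute rail position. Constants taken from the setup above.
#include <cstdint>
#include <iostream>

constexpr double METERS_PER_TICK = 0.0000469398;  // encoder resolution, m/tick

double ticksToMeters(std::int64_t ticks) {
    return ticks * METERS_PER_TICK;
}

int main() {
    std::cout << ticksToMeters(6500) << " m\n";   // trigger position: ~0.305 m along the rail
    std::cout << ticksToMeters(1000) << " m/s\n"; // speed at 1000 tick/s: ~0.0469 m/s
    std::cout << ticksToMeters(2000) << " m/s\n"; // speed at 2000 tick/s: ~0.0939 m/s
}
```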

Test Procedure:

Step 1.
We drive the camera module to a known absolute encoder position equal to 6500. We refer to this position as encoder_trigger_image_position.

Step 2.
While the camera module is still at encoder_trigger_image_position, we take and save an image. This image will be referred to as our ground truth image.

Step 3.
We drive the camera module back to the initial position equal to 0.

Step 4
We set the camera module in linear motion at a speed of 1000 tick/s (= 0.0469398023 m/s).

Step 5
Just as the camera module passes encoder_trigger_image_position, we grab the latest image and save it as speed_1000. We are aware that we can potentially miss one sample here, but since we're sampling the camera stream at 30 Hz we shouldn't expect too much drift. (A minimal sketch of this grab-on-trigger logic is shown after the procedure.)

Step 6
We repeat steps 3-5, but change the speed to 2000 tick/s (= 0.0938796047 m/s) and save the image as speed_2000.

Step 7
We do a visual inspection of the three images.
The ground truth image is the one we want the other images to match as closely as possible.
A green pin sticks up from the soil, so we can easily judge how close the images are by comparing the pin's position in each image against the ground truth image.
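
A minimal sketch of the grab-on-trigger logic from step 5 (the topic names, the encoder message type and the commented-out save helper are assumptions, not our actual test script):

```cpp
// Keep the newest colour frame and store it the moment the encoder passes
// the trigger position. Topic names and message types are assumed.
#include <cstdint>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <std_msgs/Int64.h>

sensor_msgs::ImageConstPtr latest_image;     // most recent frame from the 30 Hz stream
const std::int64_t trigger_ticks = 6500;     // encoder_trigger_image_position

void imageCb(const sensor_msgs::ImageConstPtr& msg) {
    latest_image = msg;                      // just remember the newest frame
}

void encoderCb(const std_msgs::Int64::ConstPtr& ticks) {
    static bool saved = false;
    if (!saved && ticks->data >= trigger_ticks && latest_image) {
        saved = true;
        ROS_INFO("Trigger at %ld ticks, grabbed image with stamp %.6f",
                 static_cast<long>(ticks->data),
                 latest_image->header.stamp.toSec());
        // saveImage(latest_image, "speed_1000");  // hypothetical helper that writes the file
    }
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "trigger_grab");
    ros::NodeHandle nh;
    ros::Subscriber img = nh.subscribe("/camera/color/image_raw", 1, imageCb);
    ros::Subscriber enc = nh.subscribe("/encoder/ticks", 10, encoderCb);
    ros::spin();
    return 0;
}
```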

Test Result:

Test run 1:
When we run the test with the parameter global_time_enabled enabled, we can see a huge drift in the images compared to the ground truth image.
Ground truth:
-6500_50_0_0 0318372
Speed 1000:
-6500_50_1000_0 0299593
Speed 2000:
-6501_50_2000_0 0320945

Test run 2:
When we run the test with the parameter global_time_enabled disabled, we see a much smaller drift in the images compared to the ground truth image.
Ground truth:
-6500_50_0_0 0318372
Speed 1000:
-6503_50_1000_0 039107
Speed 2000:
-6504_50_2000_0 0295636

The difference is much more obvious when you download the images and flip through them quickly: look at the ground truth image, then switch to speed_1000, and finally to speed_2000.

The camera config we ran the setup with (the only difference between runs is that we toggled Global Time Enabled):
Screenshot from 2020-12-15 14-55-10

One more thing: should the speed_2000 image really be that blurry? The exposure is set to 250, and we're moving at 0.0938796047 m/s.


MartyG-RealSense commented Dec 15, 2020

Hi @r91andersson, I looked at the details of your case very carefully. Could you please confirm whether you are using ROS, as the image of the dynamic_reconfigure settings interface above would suggest?

If you are using ROS, the large timestamp drift when global_time is true would be consistent with a recent RealSense ROS case where this was found to be occurring. The fix was to try one of the following actions:

  1. Set global_time_enabled to false; or
  2. Add initial_reset:=true to the end of the roslaunch instruction to perform a hardware reset of the camera at launch.

IntelRealSense/realsense-ros#1454 (comment)
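
For reference, if you ever drive the camera through librealsense directly rather than through the ROS wrapper, the equivalents are (to the best of my knowledge) the RS2_OPTION_GLOBAL_TIME_ENABLED option and rs2::device::hardware_reset(); a minimal sketch:

```cpp
// Minimal librealsense sketch: disable the global timestamp on every sensor
// that exposes the option; hardware_reset() is the equivalent of initial_reset:=true.
#include <librealsense2/rs.hpp>

int main() {
    rs2::context ctx;
    auto devices = ctx.query_devices();
    if (devices.size() == 0) return 1;        // no camera connected
    rs2::device dev = devices[0];

    auto sensors = dev.query_sensors();
    for (rs2::sensor& s : sensors) {
        if (s.supports(RS2_OPTION_GLOBAL_TIME_ENABLED))
            s.set_option(RS2_OPTION_GLOBAL_TIME_ENABLED, 0.f);  // global_time_enabled = false
    }

    // dev.hardware_reset();  // uncomment to reset; the device re-enumerates afterwards
    return 0;
}
```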

In regard to RGB blurring: could you please try setting RGB to 60 FPS instead of 30 and see whether it reduces the blurring?

@r91andersson (Author)

@MartyG-RealSense Sorry, I totally forgot to mention that we're using ROS. So yes, I have been configuring the parameters with the command: rosrun rqt_reconfigure rqt_reconfigure

I have been using initial_reset:=true in my launch file; however, global_time_enabled has always been left at its default (true).

My requirement is that I need to know the exact time when the image was taken (in ROS time) so that I can look back in time and check the exact position we were at. But won't we violate this requirement if we set global_time_enabled to false?
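
To make the requirement concrete, the lookup I have in mind is roughly the following (a minimal sketch; the struct, buffer and names here are placeholders, not our actual code):

```cpp
// Interpolate a buffer of stamped encoder readings at the image timestamp to
// recover where the rail was when the image was taken. Names are placeholders.
#include <ros/time.h>
#include <deque>

struct EncoderSample { ros::Time stamp; double position_m; };
std::deque<EncoderSample> history;   // filled from the encoder callback, oldest first

double positionAt(const ros::Time& image_stamp) {
    for (size_t i = 1; i < history.size(); ++i) {
        if (history[i].stamp >= image_stamp) {
            const EncoderSample& a = history[i - 1];
            const EncoderSample& b = history[i];
            double t = (image_stamp - a.stamp).toSec() / (b.stamp - a.stamp).toSec();
            return a.position_m + t * (b.position_m - a.position_m);
        }
    }
    // stamp is newer than anything in the buffer: fall back to the latest sample
    return history.empty() ? 0.0 : history.back().position_m;
}
```

This only makes sense if the image stamp and the encoder stamps are on the same clock, which is why the meaning of the image timestamp matters so much to us.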

I will re-run the test with the RGB stream set to 60 FPS.

@r91andersson (Author)

@MartyG-RealSense

When I set global_time_enabled to false, it seems to work much better. My question still stands though: what does that actually mean, and how is the timestamp affected? As I said before, what we require is to know the ROS timestamp of when the image was taken. Since we're moving the camera module at a constant speed, the capture timestamp tells us how far we have travelled between the moment the image was taken and the point when we receive it in ROS.
It does seem much better now with this parameter disabled.
My concern is whether this is reliable in the long term. Should we actually be using global_time_enabled set to true in our case? And if so, is there a bug in the firmware or backend driver related to this parameter, given the huge drift we see when using it?
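
For reference, one way to see what the flag actually changes at the librealsense level is to print the timestamp domain each frame reports; a minimal standalone sketch (plain librealsense, not our ROS pipeline) would be:

```cpp
// Print the timestamp and timestamp domain of incoming colour frames.
// Expectation (as I understand it): GLOBAL_TIME when global_time_enabled is
// true, HARDWARE_CLOCK (or SYSTEM_TIME without metadata support) when false.
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    pipe.start();                                // default config: depth + colour
    for (int i = 0; i < 30; ++i) {
        rs2::frameset frames = pipe.wait_for_frames();
        auto color = frames.get_color_frame();
        std::cout << "ts=" << color.get_timestamp() << " ms, domain="
                  << rs2_timestamp_domain_to_string(color.get_frame_timestamp_domain())
                  << std::endl;
    }
    return 0;
}
```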

I re-ran the test with the camera streaming at both 30 and 60 FPS, and with two different exposure values. You can see the results below. At an exposure value of 250 I can't see any difference in the blurring. At an exposure value of 50 the image became much less blurry, but there was still no change in blurriness between 30 and 60 FPS.

Exposure value = 250, Camera Frame Rate = 30 Hz, Speed=0.093m/s
250_3000_0 0317937_30Hz

Exposure value = 250, Camera Frame Rate = 60Hz, Speed=0.093m/s
250_3000_0 0686988_60Hz

Exposure value = 50, Camera Frame Rate = 30Hz, Speed=0.093m/s
50_3000_0 0483095_30Hz

Exposure value = 50, Camera Frame Rate = 60Hz, Speed=0.093m/s
50_3000_0 0366711_60Hz

Still, I feel that running the camera with an exposure value of 250 at 0.0938796047 m/s shouldn't produce images that blurry. I haven't mentioned it before, but we're using a resolution of 640x480 in all of the tests described in this issue.


MartyG-RealSense commented Dec 16, 2020

The global timestamp can correct for drift between the host computer's clock and the camera's clock. Doronhi, the RealSense ROS wrapper developer, provides advice about use of the global timestamp in ROS in the link below.

IntelRealSense/realsense-ros#796 (comment)

My understanding of Doronhi's advice is that, since the ROS wrapper already provides correction for time drift, using the global timestamp may be less necessary in ROS than in librealsense.

In regard to the blurring: in a past case, a RealSense team member suggested that 70 manual exposure and 6 FPS may reduce RGB blurring.

@r91andersson (Author)

Great! I guess you meant 70 manual exposure and 60 FPS (and not 6?) to reduce blurring.

Shall I open a new issue in realsense-ros repo, and reference this issue?

@MartyG-RealSense (Collaborator)

It was six FPS. :)

#2461 (comment)

The mathematics of FPS can be a bit complicated with manual exposure.

#1957 (comment)

@MartyG-RealSense (Collaborator)

You can open a new ticket, though I will likely be handling it too and so handling it here should be fine.

@r91andersson (Author)

Thanks!

So when I set the manual exposure value to 250, that's 25 ms per frame, and the theoretical maximum frame rate would be 1000 ms / 25 ms = 40 FPS. Since I've configured the camera RGB stream to run at 30 FPS, the exposure time shouldn't be a limiting factor.
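
The same back-of-the-envelope check in code form (assuming, as the 250 → 25 ms conversion above does, that the RGB exposure value is in units of 0.1 ms):

```cpp
// Theoretical frame-rate ceiling implied by the manual exposure setting.
#include <iostream>

int main() {
    double exposure_value = 250.0;                // value set in the config (0.1 ms units, assumed)
    double exposure_ms = exposure_value / 10.0;   // 25 ms per frame
    double max_fps = 1000.0 / exposure_ms;        // 40 FPS ceiling, above the 30 FPS stream rate
    std::cout << max_fps << " FPS\n";
}
```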

OK, I just wanted to make sure that this issue got a ticket in the correct backlog! While waiting for this issue to be resolved, I will use global_time_enabled set to false.

Thanks again @MartyG-RealSense, I appreciate your customer service here on GitHub, always quick to reply and willing to help!

@MartyG-RealSense (Collaborator)

You are very welcome. :) If you are satisfied with the outcome, feel free to close the issue with the Close Issue button under the comment box. Good luck with your project!

@r91andersson (Author)

I can close this issue. However, I don't agree that we have solved it; rather, we have found a "quick fix". But as long as your intention is to solve the real issue behind this, I'm satisfied!

@MartyG-RealSense (Collaborator)

You can keep the case open if you are not satisfied yet. Could you please provide details about what you feel the continuing problem is?


r91andersson commented Dec 16, 2020

I would expect test run 1 and test run 2 (in the test results above) to show more or less exactly the same result. Toggling global_time_enabled between true and false shouldn't affect the result between the runs at all, in my opinion, since global_time_enabled is a function for keeping the host clock and the camera's hardware clock in sync. It should definitely not add a 200 ms delay to all images simply by being enabled.
And since we're running a multi-camera setup with two different subsystems over ROS, synchronized via chronyc, we must ensure that the cameras are also synced against the ROS master clock. To ensure this, we want global_time_enabled set to true. But at the moment we can't have it set to true, because then we see this 200 ms delay, and we can't accept that. So a quick fix (for us at least) is to set global_time_enabled to false and trust that the images stay in sync. But we can't be sure they stay in sync without global_time_enabled set to true.
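
For completeness, one way to put a number on the delay I'm describing would be something like this (topic name assumed from the default wrapper launch; only meaningful when the image stamps and ros::Time::now() are on the same clock):

```cpp
// Log the difference between the image header stamp and the time the message
// is received; with global_time_enabled=true this gap is what grows to ~200 ms.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void imageCb(const sensor_msgs::ImageConstPtr& msg) {
    double age_ms = (ros::Time::now() - msg->header.stamp).toSec() * 1000.0;
    ROS_INFO("stamp-to-receive latency: %.1f ms", age_ms);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "latency_probe");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/color/image_raw", 1, imageCb);
    ros::spin();
    return 0;
}
```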

r91andersson reopened this Dec 16, 2020
@r91andersson (Author)

If you would like to repeat the test yourself, I could give you the test script that we run, or you could simply examine it.


MartyG-RealSense commented Dec 17, 2020

I am not equipped to replicate your multi-camera test. I would recommend posting a question to the RealSense ROS GitHub after all, including the name-tag doronhi in the message to draw it to the attention of Doronhi, the RealSense ROS wrapper developer.

https://github.com/IntelRealSense/realsense-ros/issues

@MartyG-RealSense (Collaborator)

I will close this specific case number as it is now being continued with Doronhi, the RealSense ROS wrapper developer, at IntelRealSense/realsense-ros#1581 - it will remain accessible for reading.
