Timing issue - HW timestamp on D455 is wrong #11873
Comments
As an aside, can anyone point me to a list of release notes for each firmware release? I'm coming up blank with Google. If any FW release affected timing recently, that's a good candidate.
Hi @nathanieltagg If you are using SDK version 2.53.1 then you should not downgrade the firmware, as using firmware older than 5.14.0.0 with 2.53.1 may cause errors.

Regarding where exposure begins: sensor_timestamp marks the middle of sensor exposure, but other types of timestamp behave differently. A RealSense team member describes these behaviours at #2188 (comment)

If your own choice of USB cable is longer than 5 meters then it would be preferable to use a cable with active repeater components inside it in order to boost the signal over distance. The company Newnex, who provide high-quality cables validated for use with RealSense, can supply these. https://www.newnex.com/realsense-3d-camera-connectivity.php

On RealSense 400 series cameras such as the D455, Global Time should be enabled by default, unless hardware metadata is not available, in which case Global Time will not be invoked. Hardware metadata support is enabled automatically when installing the SDK from packages, or when compiling it from source code with CMake with the -DFORCE_RSUSB_BACKEND=true build flag included. If the SDK is built from source code but the RSUSB build flag is not set to true, then a kernel patch script must be applied to the Linux kernel to add hardware metadata support. For a source code build on kernel 5.15, the patch script to apply is ./scripts/patch-realsense-ubuntu-lts-hwe.sh

In regard to firmware release notes, they are bundled inside the compressed zip files of the firmware versions on the firmware releases page. When the zip is extracted to obtain the firmware .bin file, you can obtain the PDF release notes too. https://dev.intelrealsense.com/docs/firmware-releases For your convenience, the link below has the firmware release notes for the current latest firmware download, 5.14.0.0.
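For readers who want to check these conditions programmatically, here is a minimal sketch (not from the original discussion) of how Global Time and hardware-metadata availability could be verified on the color stream using the standard librealsense2 C++ API; the resolution and FPS values are arbitrary placeholders:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
    rs2::pipeline_profile profile = pipe.start(cfg);

    // Global Time is a per-sensor option; on 400 series cameras it is expected to be on by default.
    rs2::sensor color_sensor = profile.get_device().first<rs2::color_sensor>();
    if (color_sensor.supports(RS2_OPTION_GLOBAL_TIME_ENABLED))
        std::cout << "Global Time enabled: "
                  << color_sensor.get_option(RS2_OPTION_GLOBAL_TIME_ENABLED) << "\n";

    rs2::frameset fs = pipe.wait_for_frames();
    rs2::video_frame color = fs.get_color_frame();

    // Without hardware metadata support these report false and the SDK
    // falls back to host-side timestamps.
    std::cout << "FRAME_TIMESTAMP supported:  "
              << color.supports_frame_metadata(RS2_FRAME_METADATA_FRAME_TIMESTAMP) << "\n"
              << "SENSOR_TIMESTAMP supported: "
              << color.supports_frame_metadata(RS2_FRAME_METADATA_SENSOR_TIMESTAMP) << "\n"
              << "Timestamp domain: "
              << rs2_timestamp_domain_to_string(color.get_frame_timestamp_domain()) << "\n";
    return 0;
}
```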
@MartyG-RealSense I experimented briefly with a couple of older firmware versions, and they behaved identically to the 5.14 release. Based on your post I'll stop playing with that; thank you for the helpful reply.

Two different machines are giving different timing offsets. Currently the best data we have comes from a laptop reading a D455 or D457 over a short USB cable. In that setup, I see a 15 ms error on the timestamp when running at an exposure setting of 400 (40 ms) and a 7.5 ms error when running at 20 ms. We see much larger errors in our other setup, which uses a Windows desktop machine running Ubuntu under WSL; there I suspect the usbipd bridge that we have to use to connect through the Windows host. Those errors are much larger, on the order of 25-40 ms, but they also scale directly with exposure time, which seems indicative of a camera-level issue. The two cameras I've tried give the same results.

It's hard to read the tea leaves to figure out what's going on under the hood, but it seems to me that there is a latency correction that works at short exposure times but fails at longer exposure times on the RGB camera specifically. I'm guessing that most of the effort in getting timing right was focused on depth data. Any further advice is welcome, but I'm running out of things to try on my end.
If you are using a manual RGB exposure value then values in a certain range can affect the FPS speed. For example, an RGB exposure of 400 at 30 FPS could result in the FPS becoming 25 FPS, as advised by a RealSense team member at #1957 (comment)

Whilst RealSense cameras can be used with WSL2, as it can simulate USB controllers, I recommend using a dedicated Linux PC or a dual-boot computer (Windows and Linux) if it is possible to do so.
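As an illustration of the exposure/FPS interaction described above, the sketch below sets a fixed RGB exposure through the option API. It assumes the color exposure option uses the same units as the figures quoted in this thread (400 ≈ 40 ms, 200 ≈ 20 ms); the stream parameters are placeholders.

```cpp
#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
    auto profile = pipe.start(cfg);

    auto color_sensor = profile.get_device().first<rs2::color_sensor>();
    color_sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f); // disable auto-exposure first
    // 200 -> ~20 ms (assuming the units quoted in this thread), which still fits
    // inside the ~33 ms frame period at 30 FPS. A value of 400 (~40 ms) would not,
    // and the stream may then drop below 30 FPS as described above.
    color_sensor.set_option(RS2_OPTION_EXPOSURE, 200.f);

    for (int i = 0; i < 100; ++i)
        pipe.wait_for_frames();  // stream with the fixed exposure applied
    return 0;
}
```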
Hi @nathanieltagg Do you require further assistance with this case, please? Thanks!
Using a direct short cable to a Linux computer, and using all the settings you recommend, the problem persists. There is a measurable time offset of approximately 15-20 ms on the RGB camera timing. For the moment, we are simply subtracting a fixed offset, but without understanding the cause this is not very satisfactory. It creates a problem during sensor fusion steps, since the camera images lag.
Would you describe your problem as being that you are receiving old timestamps - 'stale frames' - as discussed at the link below?
No, this is quite different. The effect size is less than 1 frame in duration. As I said, the size of the effect is approximately one-half to one frame exposure (17-40 ms). This could be caused, for example, by an incorrect timing correction for the mid-exposure point on the RGB camera.
OK, a new development. I came back to this, and discovered a brand new offset of something like 40-50 ms, which IS consistent with a missed frame. But note that I'm not counting frames; I'm looking at the timestamp on a particular image in the frameset.
A bit more: here's a set of experiments I just did.

None of my measurements are super accurate, so I'm having trouble making sense of these numbers, but I'm not seeing clear patterns. The offset tends to change over time, but not in predictable ways...
What is the offset like if you only enable the left IR stream? If it has a low offset like when only RGB is enabled then it might suggest that the computer is lagging when more than one stream is enabled.
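One possible way to run that single-stream test is sketched below, under the assumption that infrared stream index 1 is the left imager and that, with Global Time on, get_timestamp() is on the same host clock as the time_of_arrival metadata; the resolution and frame count are arbitrary:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    // Index 1 selects the left imager on D400-series cameras.
    cfg.enable_stream(RS2_STREAM_INFRARED, 1, 848, 480, RS2_FORMAT_Y8, 30);
    pipe.start(cfg);

    for (int i = 0; i < 300; ++i) {
        rs2::video_frame ir = pipe.wait_for_frames().get_infrared_frame(1);
        double device_ms  = ir.get_timestamp();  // device/global timestamp, in ms
        double arrival_ms = static_cast<double>(
            ir.get_frame_metadata(RS2_FRAME_METADATA_TIME_OF_ARRIVAL));  // host receive time, in ms
        std::cout << "arrival - timestamp = " << (arrival_ms - device_ms) << " ms\n";
    }
    return 0;
}
```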
OK, here is a somewhat more extensive list.

Using vanilla build:

Using FORCE_RSUSB_BACKEND build:

My measurement error is probably +/- 5 ms or so, so the first three results are consistent with a constant offset. I believe that although the values look fairly repeatable in the short term, they tend to change over the course of hours or days, meaning that applying simple offsets on the backend (my hack fix) doesn't work all the time. So:
By default the RealSense SDK has a frame queue capacity value of '1'. If more than one stream is enabled though then Intel suggest changing the frame queue's capacity to '2' in order to minimize latency.
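A sketch of that suggestion, assuming the common librealsense2 pattern of passing a rs2::frame_queue as the pipeline callback and draining it instead of calling wait_for_frames(); the stream settings are illustrative:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::frame_queue queue(2);   // capacity 2, per the suggestion above for multi-stream use
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
    cfg.enable_stream(RS2_STREAM_INFRARED, 1, 848, 480, RS2_FORMAT_Y8, 30);
    pipe.start(cfg, queue);      // frames are delivered straight into the queue

    for (int i = 0; i < 300; ++i) {
        rs2::frame f = queue.wait_for_frame();   // blocks until the next frame/frameset
        std::cout << f.get_timestamp() << "\n";  // process f here instead of just printing
    }
    return 0;
}
```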
I am using rs2::pipeline to get the data, since I sometimes do want IR or depth frames as well. Is there a relevant way to handle that case? Or is using the pipeline not recommended for getting accurate timings?
I am not aware of a significant difference in timings between the pipeline and syncer approaches to handling frames. Pipeline is typically the most user-friendly way to program RealSense applications. At #1238 a RealSense team member states that "when you configure multiple streams, pipeline will delay the faster stream until all frames of the frameset are ready".
That is fine, provided the frame time is accurate. However, we're seeing that it isn't: the reported time is larger than what we think the correct timing is by 20-50 ms. The magnitude seems to be affected by a lot of things, including the exposure time, which to me indicates that the exposure-time correction is probably not correct.
Let me try to guess what's going on under the hood.
One possibility is that the correction to the middle of the frame is not correct for the color camera (although I suspect it's been thoroughly tested for depth), maybe the wrong units or something. However, this is not consistent with the other changes we see: turning the IR camera on or off, for example, changes the offset. The things I've seen:
Maybe the color camera exposure correction is using the IR camera exposure time and not the color exposure time, or something nutty like that? I don't know, it's like reading tea leaves through fog. We may attempt a simpler measurement technique to show what we're seeing more accurately, but it is definitely a real problem with either the camera or the library.
The readout of FRAME_TIMESTAMP begins after exposure is completed - see #4525 (comment) - whilst SENSOR_TIMESTAMP (also known as the optical timestamp) marks the middle of sensor exposure. It is more common for RealSense users to retrieve FRAME_TIMESTAMP to use as their timestamp than it is for SENSOR_TIMESTAMP to be used. Do you see a significant offset if you use FRAME_TIMESTAMP for your measurement instead of SENSOR_TIMESTAMP?

FRAME_TIMESTAMP and SENSOR_TIMESTAMP both require hardware metadata support to be enabled in the SDK. If installing from packages then metadata support is included in the packages. If building the SDK from source code with the native method then metadata support is included if kernel patching is applied. In regard to building from source code with the RSUSB method, metadata support is automatically included in an RSUSB = true build of the SDK and a kernel patch does not need to be applied.

Metadata is a complex and specialized subject, so your patience is very much appreciated!
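To compare the two hardware timestamps side by side, something like the following helper could be used (a sketch, not from the thread; the units of the raw metadata counters are not asserted here and should be checked against the metadata documentation):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

// Call on frames obtained from wait_for_frames(). The raw metadata values are
// hardware-clock counters, while get_timestamp() is milliseconds in the frame's
// timestamp domain.
void print_timestamps(const rs2::frame& f) {
    if (f.supports_frame_metadata(RS2_FRAME_METADATA_FRAME_TIMESTAMP))
        std::cout << "FRAME_TIMESTAMP  (readout, after exposure): "
                  << f.get_frame_metadata(RS2_FRAME_METADATA_FRAME_TIMESTAMP) << "\n";
    if (f.supports_frame_metadata(RS2_FRAME_METADATA_SENSOR_TIMESTAMP))
        std::cout << "SENSOR_TIMESTAMP (middle of exposure):      "
                  << f.get_frame_metadata(RS2_FRAME_METADATA_SENSOR_TIMESTAMP) << "\n";
    std::cout << "get_timestamp(): " << f.get_timestamp() << " ms, domain: "
              << rs2_timestamp_domain_to_string(f.get_frame_timestamp_domain()) << "\n";
}

int main() {
    rs2::pipeline pipe;
    pipe.start();
    for (int i = 0; i < 30; ++i)
        print_timestamps(pipe.wait_for_frames().get_color_frame());
    return 0;
}
```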
What is the method to tell librealsense to use FRAME_TIMESTAMP instead of SENSOR_TIMESTAMP, if both are available?
Thanks so much for the very detailed test feedback. Regarding no longer getting the SENSOR_TIMESTAMP if building from source code but not using RSUSB: the hardware metadata should be enabled and provide this timestamp if a kernel patch script has been applied to the kernel. For kernel 5.15 on Ubuntu 20.04, the patch script that should be applied is: ./scripts/patch-realsense-ubuntu-lts-hwe.sh
I see. But it’s clear that the finer timestamp is worse for us, because it is not predictable. You hinted it was possible to turn off sensor_timestamp with a setting?
sensor_timestamp should not be generated if metadata support is not enabled in your librealsense installation.

If get_timestamp is used and hardware metadata is supported, then frame_timestamp is returned by that instruction rather than sensor_timestamp. If metadata support is not enabled then time_of_arrival is returned by get_timestamp instead.

If Global Time is enabled then you get a host-calculated hardware timestamp instead of an unmodified one. Global Time is enabled by default on 400 Series cameras.

The above information is referenced in the SDK documentation link here:
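The behaviour described above could be probed with a sketch like the one below, which prints each frame's timestamp domain and optionally switches Global Time off per sensor to expose the raw hardware clock (this is an illustration, not an SDK-provided tool; it does not disable metadata support itself):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    auto profile = pipe.start();

    // Optional: turn Global Time off on every sensor that supports it, to see
    // raw hardware-clock timestamps instead of host-mapped ones.
    for (rs2::sensor s : profile.get_device().query_sensors())
        if (s.supports(RS2_OPTION_GLOBAL_TIME_ENABLED))
            s.set_option(RS2_OPTION_GLOBAL_TIME_ENABLED, 0.f);

    rs2::frameset fs = pipe.wait_for_frames();
    for (rs2::frame f : fs)
        std::cout << f.get_profile().stream_name() << ": "
                  << f.get_timestamp() << " ms, domain = "
                  << rs2_timestamp_domain_to_string(f.get_frame_timestamp_domain()) << "\n";
    return 0;
}
```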
Thanks, but that didn't exactly answer my question. The only way I know to turn off metadata support is to use the vanilla install (without kernel patch). Is there a way to programmatically change timing modes? If not, then I think I'm doing it the most reasonable way.
No, there is not a switch to turn off metadata support. Building from source code and not kernel patching is the only method I know of to exclude metadata support from the SDK build.
Hi @nathanieltagg Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
I have gone through several GitHub issues opened since 2018 regarding multi-camera syncing, or syncing a camera with the host clock, with software and hardware synchronisation approaches. My understanding is that obtaining the global timestamp is the only way to get the frame timestamps synced to the host clock, so that they can be used to sync with other devices (not necessarily RealSense). Am I correct about this?
Hi @MaheshAbnave #3909 describes how the global timestamp generates a common timestamp for all streams, even for multiple cameras, by comparing device time to computer time. Also, if the wait_for_frames() instruction is used in a script then the RealSense SDK will attempt to find the best match between the timestamps of different streams.

It is possible to sync data between a RealSense camera and a non-RealSense camera by using camera metadata, as described at #2186

Another common way to sync multiple cameras is to use hardware sync, where you can sync the cameras either by generating a trigger signal from one of the cameras that syncs other 'slave' cameras to it, or by generating the trigger signal with an external signal generator device and having all cameras be slaves that sync to that external trigger.

If you plan to have a master camera and a slave camera and both are RealSense cameras then the guide at the link below is the best reference. https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration

If you plan to use an externally generated trigger signal and a non-RealSense camera then genlock hardware sync is better suited. Intel no longer support this sync method and removed its online documentation, but it is still accessible in the SDK and you can find an archived PDF version of its documentation at the link below.
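As a rough illustration of using the global timestamp against the host clock (for syncing with a non-RealSense device), the sketch below compares get_timestamp() with the host system clock. It assumes that, with Global Time enabled, the timestamp is expressed in milliseconds on the same epoch as std::chrono::system_clock; verify this via the reported timestamp domain before relying on it:

```cpp
#include <librealsense2/rs.hpp>
#include <chrono>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    pipe.start();

    rs2::frame color = pipe.wait_for_frames().get_color_frame();

    double cam_ms  = color.get_timestamp();  // expected domain: GLOBAL_TIME
    double host_ms = std::chrono::duration<double, std::milli>(
        std::chrono::system_clock::now().time_since_epoch()).count();

    std::cout << "domain: "
              << rs2_timestamp_domain_to_string(color.get_frame_timestamp_domain())
              << ", host-minus-camera: " << (host_ms - cam_ms) << " ms\n";
    return 0;
}
```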
Issue Description
I've been using a D455 in combination with a motion capture system, and discovered through a series of experiments that the timestamp coming from the D455 RGB camera is incorrect. I have verified, using a different camera, that the error is likely coming from the RealSense timing. I've seen this in two configurations: using a Linux laptop with a direct USB connection, and also using WSL with a usbipd connection and a long powered USB cable (our default setup).
Depending on the setup, the timestamp is wrong by between -20 and +40 ms. (Positive values indicate that the timestamp is too large in value.) One thing that seems to affect the latency error is the exposure time: using a manual exposure of 200 (20 ms) gives a latency approximately 20 ms different from that of an exposure time of 400 (40 ms), and autoexposure (which seems to be around 30 ms) gives an intermediate answer.
I am using video_frame->get_timestamp() to get the time. Global Time is enabled (although changing this does not significantly affect the result). I have verified that the HW timestamp per-frame metadata can be read out, and if I understand the docs right, that means we SHOULD be reading out the middle of the exposure time. I suspect something like the exposure-time correction being applied backwards.
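For reference, here is a sketch of the kind of measurement loop described here, sweeping two manual exposure values and averaging the gap between the host arrival time and the reported timestamp (the exposure units and stream settings are assumptions based on the values quoted in this issue):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
    auto profile = pipe.start(cfg);
    auto color = profile.get_device().first<rs2::color_sensor>();
    color.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f);

    for (float exposure : {200.f, 400.f}) {   // ~20 ms and ~40 ms in the units quoted above
        color.set_option(RS2_OPTION_EXPOSURE, exposure);
        double sum = 0.0;
        const int n = 100;
        for (int i = 0; i < n; ++i) {
            rs2::video_frame f = pipe.wait_for_frames().get_color_frame();
            double arrival = static_cast<double>(
                f.get_frame_metadata(RS2_FRAME_METADATA_TIME_OF_ARRIVAL));
            sum += arrival - f.get_timestamp();   // host receive time minus reported timestamp
        }
        std::cout << "exposure " << exposure << ": mean arrival-minus-timestamp = "
                  << sum / n << " ms\n";
    }
    return 0;
}
```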
Because these experiments are time-consuming and difficult to make with my equipment, I've not been able to explore a lot of variations yet. Resetting the camera may or may not change the offset, but I've not caught a definitive instance of it yet.
Also on my to-do list is to try a substitute camera, or to downgrade the firmware.
If the offset were predictable, that would be acceptable for us, but a large offset like this, as large as a whole frame, is problematic.
Any advice for how to isolate or remove the problem would be welcome.