2.34.0 435i libuvc python: frame latency/drops/incorrect emitter state #6202

Closed
dkumor opened this issue Apr 5, 2020 · 10 comments


dkumor commented Apr 5, 2020

Required Info

Camera Model: D435i
Firmware Version: 05.12.03.00
Operating System & Version: Arch Linux & Raspbian Buster
Kernel Version (Linux Only): 5.5.13 and 4.19.97
Platform: PC and Raspberry Pi 4
SDK Version: 2.34.0
Language: python
Segment: ML

Issue Description

After upgrading from 2.33.1 to 2.34.0, I am seeing issues with frame latency, emitter state, and dropped data when using the Python wrapper. The same issues manifest on both Arch Linux and a Raspberry Pi 4, both using libuvc. I don't know whether they share an underlying cause, so I have included all the symptoms here. Note that none of the issues are present on 2.33.1.

Symptom 1: 1-2 second video delay on simple example

In previous versions, using pipeline.wait_for_frames() with depth and color streams showed video with negligible delay in the default configuration. Now there is a 1-2 second delay. The issue also manifests in the default examples, such as the opencv_viewer example:
https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py

If this is a change in default configuration, and not a bug, how would I go back to the previous behavior?

Symptom 2: Emitter on/off frames giving weird results

I am using emitter on/off at 60 fps to get a 30 fps stream of depth data and a 30 fps stream of IR camera data. However, the resulting frames have inconsistent/incorrect values for their laser state: the IR frames sometimes show the laser and sometimes don't, and likewise for the depth frames (all in the same run, meaning that the value of df.get_frame_metadata(rs.frame_metadata_value.frame_laser_power_mode) seems to switch meaning every couple of frames). This might be related to #6191.

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 60)
config.enable_stream(rs.stream.infrared, 640, 480, rs.format.y8, 60)

# Start streaming
selection = pipeline.start(config)
for s in selection.get_device().query_sensors():
    if s.is_depth_sensor():
        d = s.as_depth_sensor()
        d.set_option(rs.option.emitter_enabled, 1)
        d.set_option(rs.option.emitter_on_off, 1)

try:
    loopnum = 0
    depth_image = None
    ir_image = None

    while True:
        frames = pipeline.wait_for_frames()
        df = frames.get_depth_frame()
        laser_on = df.get_frame_metadata(rs.frame_metadata_value.frame_laser_power_mode)
        # 0 here, because of issue 6191 (0 means laser is on)
        if laser_on == 0:
            depth_image = np.asanyarray(df.get_data())
        else:
            for frame in frames:
                ftype = frame.profile.stream_type()
                if ftype == rs.stream.infrared:
                    ir_image = np.asanyarray(frame.get_data())

        # Only display once both images have been captured
        if loopnum % 2 == 0 and depth_image is not None and ir_image is not None:
            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(
                cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET
            )

            ir_color = cv2.cvtColor(ir_image, cv2.COLOR_GRAY2BGR)

            # Stack both images horizontally
            images = np.hstack((ir_color, depth_colormap))

            # Show images
            cv2.namedWindow("RealSense", cv2.WINDOW_AUTOSIZE)
            cv2.imshow("RealSense", images)
            cv2.waitKey(1)
        loopnum += 1
finally:
    pipeline.stop()

Symptom 3: Lots of dropped frames

The above symptoms raise the question of whether all the frames are even being received. The following code gets frames in a callback, and it looks like a huge amount of data is being dropped:

import pyrealsense2 as rs
import numpy as np
import time

numgyro = 0
numaccel = 0
numdepth = 0
numvid = 0
totframes = 0


def frame_callback(frame):
    global numgyro, numaccel, numdepth, numvid, totframes
    try:
        totframes += 1
        frame_type = frame.profile.stream_type()
        if frame_type == rs.stream.accel:
            numaccel += 1
        elif frame_type == rs.stream.gyro:
            numgyro += 1
        else:
            # It returns IR & depth as a frameset
            frameset = frame.as_frameset()
            for frame in frameset:
                frame_type = frame.profile.stream_type()
                if frame_type == rs.stream.infrared:
                    numvid += 1
                elif frame_type == rs.stream.depth:
                    numdepth += 1
    except Exception as e:
        print("FAIL", e)


pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 480, 270, rs.format.z16, 60)
config.enable_stream(rs.stream.infrared, 480, 270, rs.format.y8, 60)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)

pipeline.start(config, frame_callback)

total_time = 30
time.sleep(total_time)


pipeline.stop()
print(
    "gyro=%.2f,accel=%.2f,depth=%.2f,ir=%.2f,tot=%.2f"
    % (
        numgyro / total_time,
        numaccel / total_time,
        numdepth / total_time,
        numvid / total_time,
        totframes / total_time,
    )
)

Running the above code gives

gyro=4.93,accel=8.27,depth=25.53,ir=25.53,tot=38.7

Note that the correct values would be:

gyro=200,accel=250,depth=60,ir=60,tot=510

Indeed, on 2.33.1, the reported values are close to the correct ones.

Conclusion

The issues are present on two separate systems (both using libuvc) and are absent on an older version of librealsense. I hope that the code examples above are sufficient to reproduce the problems, since I would love to upgrade to the new librealsense version.

dkumor changed the title from "2.34.0 435i python frame latency/drops/incorrect emitter state" to "2.34.0 435i libuvc python: frame latency/drops/incorrect emitter state" Apr 5, 2020
ev-mp (Collaborator) commented Apr 6, 2020

@dkumor hello, thank you for the feedback.

For the first point mentioned:
a) Does the frame delay affect Python only, or is it something you also observe in realsense-viewer / rs-capture?
b) Please confirm that you built the Python wrapper/SDK in Release mode.

#2 - discussed in #6191 as noted. Please pay attention that the following code can make things rather worse: it assumes a priori that the metadata value is coherent with the laser state, while the shift occurs some time into the streaming:

# 0 here, because of issue 6191 (0 means laser is on)
if laser_on == 0:

#3 - You should not assume that the frame type determines the underlying frame container structure seen in the callback:

def frame_callback(frame):
    global numgyro, numaccel, numdepth, numvid, totframes
    try:
        totframes += 1
        frame_type = frame.profile.stream_type()
        if frame_type == rs.stream.accel:
            numaccel += 1
        elif frame_type == rs.stream.gyro:
            numgyro += 1
        else:
            # It returns IR & depth as a frameset
            frameset = frame.as_frameset()
            for frame in frameset:
                frame_type = frame.profile.stream_type()
                if frame_type == rs.stream.infrared:
                    numvid += 1
                elif frame_type == rs.stream.depth:
                    numdepth += 1
    except Exception as e:
        print("FAIL", e)

This assumption will cause frame drops.
Instead, you have to discover it dynamically:

  1. Establish whether the rs2::frame is an rs2::frameset.
  2. If yes, iterate over all the frames it includes.
  3. If no, parse it as a single frame.
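
In pyrealsense2 this maps to is_frameset() / as_frameset(). A minimal sketch of that dynamic check (the counter bookkeeping is just illustrative):

import pyrealsense2 as rs

counters = {}  # stream type -> frames seen

def count_frame(frame):
    # 1. Establish whether the incoming frame is a frameset
    if frame.is_frameset():
        # 2. If yes, iterate over all the frames it contains
        for f in frame.as_frameset():
            count_frame(f)
    else:
        # 3. If no, handle it as a single frame
        stream = frame.profile.stream_type()
        counters[stream] = counters.get(stream, 0) + 1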

dkumor (Author) commented Apr 6, 2020

Hi, thanks for the quick response! Assuming that symptom 2 is a known problem that will be fixed in firmware, I guess all that is left is 1 and 3, great!

1 - python video delay

realsense-viewer does not show any delays and works fine; it is only the Python script that fails.
Note that when trying it today, I was only able to reproduce the issue on the PC, not the Raspberry Pi. I tried rebuilding librealsense, without any change.

3 - dropped frames

Thanks! I hadn't noticed the is_frameset method before. Unfortunately, after changing to the following, I still get the same results on the PC:

def frame_callback(frame):
    global numgyro, numaccel, numdepth, numvid, totframes
    try:
        if frame.is_frameset():
            for f in frame.as_frameset():
                frame_callback(f)
        else:
            totframes += 1
            frame_type = frame.profile.stream_type()
            if frame_type == rs.stream.accel:
                numaccel += 1
            elif frame_type == rs.stream.gyro:
                numgyro += 1
            elif frame_type == rs.stream.infrared:
                numvid += 1
            elif frame_type == rs.stream.depth:
                numdepth += 1
    except Exception as e:
        print("FAIL", e)
gyro=4.37,accel=8.93,depth=25.80,ir=25.80,tot=64.90

What's weirder, on the Raspberry Pi, I get:

gyro=186.17,accel=235.83,depth=119.60,ir=119.60,tot=661.20

It looks like the IMU data is OK on the Pi, but I doubt the validity of the 120 fps. What's weird is that I'm getting ~30 fps on the PC and ~120 fps on the Pi, whereas the configuration asked for 60.

It looks like the Pi and PC might be exhibiting entirely different issues...

ev-mp (Collaborator) commented Apr 7, 2020

@dkumor , actually your findings in 3 are consistent with 1: in both cases the system runs properly on the RPi but lags on the PC, so the issue is isolated and reproduced on that particular PC only.
Can you post the CMake parameters that you invoke when building the Python wrapper on that PC?

As for the measurements you make: they do not take into account that each sensor has a different startup interval (i.e. time to first frame), which affects the calculations.
To measure the actual FPS precisely you need to manage a start point and an end point for each sensor separately: initialize the start with the timestamp of the first frame of that particular stream, and mark the end with the timestamp of the last frame of that type.
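
A minimal sketch of that per-stream bookkeeping, assuming frame.get_timestamp() (in milliseconds) as the clock:

first_ts = {}  # stream type -> timestamp of first frame (ms)
last_ts = {}   # stream type -> timestamp of latest frame (ms)
counts = {}    # stream type -> frames seen

def record(frame):
    stream = frame.profile.stream_type()
    ts = frame.get_timestamp()  # milliseconds
    first_ts.setdefault(stream, ts)
    last_ts[stream] = ts
    counts[stream] = counts.get(stream, 0) + 1

def fps(stream):
    # FPS over the span between the first and last frame of this stream,
    # so differing sensor startup times no longer skew the result
    span = (last_ts[stream] - first_ts[stream]) / 1000.0
    return (counts[stream] - 1) / span if span > 0 else 0.0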

dkumor (Author) commented Apr 7, 2020

Regarding the Raspberry Pi: the script above gathers data for 30 seconds and results in an average of 120 frames per second when configured for 60. While the inexact timestamps certainly affect the frame counts, could such a thing cause a doubling of the frame count over nearly half a minute of data?

As for the PC: this is an Arch system with librealsense-git installed from the AUR, using the default system Python 3.8. The librealsense build uses the following command:

cmake .. \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DCMAKE_INSTALL_LIBDIR=lib \
    -DCMAKE_INSTALL_SBINDIR=bin \
    -DCMAKE_INSTALL_SYSCONFDIR=/etc \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=on \
    -DBUILD_WITH_STATIC_CRT=off \
    -DBUILD_WITH_OPENMP=on \
    -DBUILD_WITH_TM2=true \
    -DBUILD_EXAMPLES=true \
    -DFORCE_LIBUVC=true \
    -DBUILD_PYTHON_BINDINGS=true

ev-mp (Collaborator) commented Apr 7, 2020

@dkumor , pipeline callbacks are different from sensor callbacks in the sense that they provide synchronized framesets. During synchronization it is inevitable that some frames will be transmitted more than once, as in the case of pipe.wait_for_frames().
I'll try to demonstrate the anticipated result of a pipeline callback invoked on a Depth+RGB config:

Timeline:            0..1..2............................................................................       
Depth Frm arrived:   1                2                    3                   
Color Frm arrived:           1                  2                      3
Callback Provides:  (1)     (1,1)    (2,1)     (2,2)      (3,2)       (3,3)  

This should explain the doubled frame rate in the above calculation for depth and RGB.
So for statistics you have to count only the unique frame numbers.
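
A minimal sketch of such deduplication, assuming frame.get_frame_number() as the per-stream id:

seen = {}  # stream type -> frame numbers already counted

def is_new_frame(frame):
    # True only the first time a (stream, frame number) pair appears, so
    # frames re-delivered by the syncer are not double-counted
    stream = frame.profile.stream_type()
    number = frame.get_frame_number()
    numbers = seen.setdefault(stream, set())
    if number in numbers:
        return False
    numbers.add(number)
    return True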

As for the Arch Linux distro: unfortunately I'm not familiar with the librealsense-git package. Still, in case you observe the low-fps behavior with the Python wrapper but not in the other demos, I would recommend focusing on the environment, starting by rebuilding the wrapper with Python 3.5 or 3.6 for comparison.

dkumor (Author) commented Apr 7, 2020

Thanks for the explanation! I had assumed that only wait_for_frames had the property you described, and that a callback is called the moment a frame is received, with only the received data; apparently that isn't the case.

I looked at the rs-callback C++ example (https://github.com/IntelRealSense/librealsense/tree/master/examples/callback), which works without issue on the Pi, confirming that my Pi numbers were indeed inflated by duplicate frame ids.

However, when trying rs-callback on my PC (which has the dropped-frames issue), I get the following:

RealSense callback sample

Depth[0]: 108 [frames] || Color[3]: 108 [frames] || Accel[5]: 30 [frames] || Gyro[6]: 59 [frames] ||
Depth[0]: 125 [frames] || Color[3]: 125 [frames] || Accel[5]: 32 [frames] || Gyro[6]: 70 [frames] ||
Depth[0]: 144 [frames] || Color[3]: 144 [frames] || Accel[5]: 35 [frames] || Gyro[6]: 80 [frames] ||
Depth[0]: 164 [frames] || Color[3]: 164 [frames] || Accel[5]: 38 [frames] || Gyro[6]: 90 [frames] ||
Depth[0]: 181 [frames] || Color[3]: 181 [frames] || Accel[5]: 42 [frames] || Gyro[6]: 100 [frames] ||
Depth[0]: 201 [frames] || Color[3]: 201 [frames] || Accel[5]: 44 [frames] || Gyro[6]: 110 [frames] ||
Depth[0]: 218 [frames] || Color[3]: 218 [frames] || Accel[5]: 49 [frames] || Gyro[6]: 118 [frames] ||
Depth[0]: 238 [frames] || Color[3]: 238 [frames] || Accel[5]: 52 [frames] || Gyro[6]: 128 [frames] ||
Depth[0]: 256 [frames] || Color[3]: 256 [frames] || Accel[5]: 57 [frames] || Gyro[6]: 137 [frames] ||
Depth[0]: 274 [frames] || Color[3]: 274 [frames] || Accel[5]: 61 [frames] || Gyro[6]: 146 [frames] ||
Depth[0]: 294 [frames] || Color[3]: 294 [frames] || Accel[5]: 67 [frames] || Gyro[6]: 153 [frames] ||

This means that the issue is present in the rs-callback example; I am not sure why it apparently does not show up in realsense-viewer. While my realsense-viewer stream does not have a huge delay, I am getting the get_device_time_ms error shown in #6189, so my problems might be fixed by the upcoming patch. I will try again once the fix is merged.

I really appreciate your help and patience!

ev-mp (Collaborator) commented Apr 7, 2020

Depending on the use case that you work on, I would recommend one of the following (see the sketch after this list):

  1. Split the flow into two separate pipelines: one for UVC and another one for IMU data, and handle each one with a different callback. This will prevent the syncer from filtering out most of the IMU readings.
     Trying to synchronize IMU and UVC sensors with data rates that differ by an order of magnitude is challenging and, more importantly, is rarely utilized in real-life scenarios.
  2. Switch to the sensor API with callbacks to unleash the full SDK throughput, minimize latency, and possibly eliminate the frame drops. This, however, may backfire in case you need to perform RGB-depth alignment or textured pointcloud generation.
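
A minimal sketch of option 1, reusing the stream parameters from your earlier snippet (the callback bodies are placeholders):

import pyrealsense2 as rs
import time

def uvc_callback(frame):
    pass  # handle synchronized depth/IR framesets here

def imu_callback(frame):
    pass  # handle individual gyro/accel samples here

# Pipeline 1: UVC streams only (depth + IR)
uvc_pipe = rs.pipeline()
uvc_cfg = rs.config()
uvc_cfg.enable_stream(rs.stream.depth, 480, 270, rs.format.z16, 60)
uvc_cfg.enable_stream(rs.stream.infrared, 480, 270, rs.format.y8, 60)
uvc_pipe.start(uvc_cfg, uvc_callback)

# Pipeline 2: IMU streams only, so the syncer never has to pair
# 200/250 Hz motion samples with 60 fps video
imu_pipe = rs.pipeline()
imu_cfg = rs.config()
imu_cfg.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
imu_cfg.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)
imu_pipe.start(imu_cfg, imu_callback)

time.sleep(30)
imu_pipe.stop()
uvc_pipe.stop()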

It is still not clear whether the IMU rates you posted are due to a HW, SW, or configuration issue. Therefore, please run the rs-data-collect tool with the default configuration for the D435i to verify that the IMU is working properly (follow the tool's page for details).

RealSenseCustomerSupport (Collaborator) commented

@dkumor Do you have any other questions following ev-mp's reply? Looking forward to your update. Thanks!

dkumor (Author) commented Apr 22, 2020

I will close this issue for now. These problems are very similar to issues others are having, so I will wait for the next version of the software/firmware in the hope that their fixes will also solve mine. Unfortunately, I don't currently have time to debug the issues myself.

dkumor closed this as completed Apr 22, 2020
anandhakrishnantecholution commented

Hey @dkumor , forgive me if this is out of the blue, but in one of your code snippets you mentioned finding whether the IR emitter was on or off for every frame:

laser_on = df.get_frame_metadata(rs.frame_metadata_value.frame_laser_power_mode)

You are in fact one of the few people I've found who used this feature. I was trying out the same thing and got an error:

RuntimeError: Frame does not support this type of metadata

Do you have any idea what this might mean?
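
For now I can guard the call with supports_frame_metadata() so it doesn't raise (a sketch below), but I'd still like to know why the metadata is reported as unsupported:

import pyrealsense2 as rs

def laser_mode_or_none(frame):
    # Metadata availability depends on firmware/platform support; check
    # before reading to avoid "Frame does not support this type of metadata"
    key = rs.frame_metadata_value.frame_laser_power_mode
    if frame.supports_frame_metadata(key):
        return frame.get_frame_metadata(key)
    return None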
