How to stream a stable 300 fps using d435i? #6578

Closed
Wjordan2020 opened this issue Jun 13, 2020 · 13 comments

Comments

@Wjordan2020


Required Info
Camera Model: D435
Firmware Version: 05.12.05.00
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.33.1
Language: Python

How can I stream stable 300 fps depth values using the D435i? My application requires such a high frame rate. Currently, I am saving the depth values using "get_distance" into a list, and at the end of processing I convert it to a CSV file. This helps me avoid the memory issues I encountered while trying to save a bag file.
When I look at the CSV file, the number of captured values is very different from the number of recorded frames. This means that I am missing many frames and the capture rate is well below 300 fps.
Could you please help me? What are the best sensor settings (e.g. exposure, queue size) to achieve a stable 300 fps? I tried different computers with different USB 3.0 ports and got the same result.
I am recording in a sunlight environment.

Thanks

@MartyG-RealSense
Collaborator

I wonder if by saving into a list, you might be experiencing a Python limitation described in the link below.

#946 (comment)

@Wjordan2020
Author

Thank you Marty. I don't think that is the issue, as I was able to capture 300 fps earlier for a minute without any problem. I believe it is a consistency issue. Is there any way you can help me? I would like to capture 300 fps every time I run my code.

Thanks

@MartyG-RealSense
Collaborator

Are you using RGB at the same time as depth, please?

@Wjordan2020
Author

No, I am using the infrared stream as recommended in the white paper: https://dev.intelrealsense.com/docs/high-speed-capture-mode-of-intel-realsense-depth-camera-d435

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 13, 2020

Okay, that is the reason I was asking; I did not know whether you had seen the paper and its Section 2.2, which advises against using RGB and depth together. Thanks!

Guidance on setting up exposure in a way that enforces a constant FPS is also provided in the link below.

#1957 (comment)

Dorodnic has also recommended not changing the frame queue size on the sensor object.

#5041 (comment)

If you are recording in strong sunlight and you are using manual exposure, then you could set the exposure to around 1 ms, which also helps to avoid some motion artifacts. This is suggested by the guidance in Intel's camera tuning best-practices guide for the 400 Series cameras.

https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/BKMs_Tuning_RealSense_D4xx_Cam.pdf

[screenshot: exposure recommendation from the tuning guide]
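As a rough sketch of what that could look like in pyrealsense2 (not tested here, and assuming the stereo module's exposure option is expressed in microseconds, so 1 ms = 1000):

import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]
depth_sensor = dev.first_depth_sensor()

# Turn auto exposure off first, otherwise the manual value below would be overridden
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)

# Roughly 1 ms of exposure (assuming the option is in microseconds)
depth_sensor.set_option(rs.option.exposure, 1000)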

@Wjordan2020
Author

Hi
I tried all the options mentioned earlier, but nothing worked for me. It feels like the camera is capturing at around 100 fps. Please help me.

Thanks

@MartyG-RealSense
Collaborator

Did you install librealsense for Windows using the pre-built .exe file on the Releases page, or did you compile it from source code on Windows, please? If you compiled from source, you may get a performance boost by including -DCMAKE_BUILD_TYPE=Release in your CMake build statement if you have not done so already. This builds the SDK as a 'Release' version instead of a version designed for debugging.

Are you actively monitoring memory whilst your application is running to ensure that you do not have a memory leak that slows the program down over time as memory is consumed and not released?

An alternative approach to saving depth values into a list may be to use ROS. rosbag record has a --split option that splits the bag when a maximum duration or file size is reached. That may help avoid the memory issues that you experienced when saving a large bag file. Example commands are:

$ rosbag record --split --size=1024 /chatter
$ rosbag record --split --duration=30 /chatter
$ rosbag record --split --duration=5m /chatter
$ rosbag record --split --duration=2h /chatter

http://wiki.ros.org/rosbag/Commandline#record

The link below gives some guidance about how --split may be configured in a ROS launch file instead of on the command line.

https://answers.ros.org/question/333704/using-the-split-option-in-a-launch-file-for-rosbag-recording/

@Wjordan2020
Author

Do you have sample code for doing this rosbag recording with the D435i using Python?
Thanks

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 14, 2020

It's a somewhat complicated situation. Whilst you may be able to record a bag in ROS and use the --split command with it, the recorded bag file may be unreadable by the RealSense SDK's bag playback.

You may be able to record a bag by recording the output of a ROS camera topic with a command such as this:

rosbag record -O whatever --lz4 /joy /camera/depth/image_rect_raw

#3020

But you may need to use pure ROS utilities (not RealSense ROS tools) such as bag_tools to edit the recording.

http://wiki.ros.org/bag_tools

If Python is important to your project then it may be best to focus on the Python script that you have been working on instead of considering ROS.

@Wjordan2020
Author

Wjordan2020 commented Jun 14, 2020

But that will not solve the 300 fps issue I'm facing. Here is part of my code; could you please see if I'm doing something wrong:

import pyrealsense2 as rs
import numpy as np
import cv2
import pandas as pd
import os


i = 0
left = []
right = []
framek = []
tstamp = []


pipeline = rs.pipeline()
config = rs.config()
# 848x100 @ 300 fps is the high-speed capture mode from the white paper
config.enable_stream(rs.stream.depth, 0, 848, 100, rs.format.z16, 300)
config.enable_stream(rs.stream.infrared, 1, 848, 100, rs.format.y8, 300)

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    sensors = dev.query_sensors()  # keep the sensors of the last enumerated device

# exp = sensors[0].get_option(rs.option.exposure)           # Get exposure
# fr = sensors[0].get_option(rs.option.frames_queue_size)   # Get frames queue size
# gn = sensors[0].get_option(rs.option.gain)                # Get gain
# emit = sensors[0].get_option(rs.option.emitter_enabled)   # Get emitter status

sensors[0].set_option(rs.option.exposure, 1.0)                   # Change the exposure value
sensors[0].set_option(rs.option.enable_auto_exposure, True)     # Enable auto exposure (note: this overrides the manual exposure set on the line above)
sensors[1].set_option(rs.option.auto_exposure_priority, False)  # Disable the auto exposure priority


def stream():

    # Start streaming
    pipeline.start(config)

    try:
        while True:

            # Wait for a coherent pair of frames: depth and infrared
            frames = pipeline.wait_for_frames()
            depth_frame = frames.get_depth_frame()
            infrared_frame = frames.get_infrared_frame()

            if not depth_frame:
                continue

            # Estimate depth at two fixed pixels and keep it for the CSV file
            n = depth_frame.get_frame_number()
            t = depth_frame.timestamp
            zDepth_L = depth_frame.get_distance(150, 50)
            zDepth_R = depth_frame.get_distance(650, 50)

            framek.append(n)
            left.append(zDepth_L)
            right.append(zDepth_R)
            tstamp.append(t)

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data())
            infrared_image = np.asanyarray(infrared_frame.get_data())

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

            # Stack both images horizontally
            # images = np.hstack((infrared_image, depth_colormap))

            # Show images (note: rendering two windows every iteration may itself limit the loop rate)
            # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            cv2.imshow('RealSense1', infrared_image)
            cv2.imshow('RealSense2', depth_colormap)
            key = cv2.waitKey(1)
            # Press Esc or 'q' to close the image windows
            if key & 0xFF == ord('q') or key == 27:
                cv2.destroyAllWindows()
                break

    finally:
        # Stop streaming and write the collected values to a CSV file
        pipeline.stop()
        data = pd.DataFrame(columns=['Left', 'Right', 'Frame', 'Time_Stamp'])
        data['Left'] = pd.Series(left)
        data['Right'] = pd.Series(right)
        data['Frame'] = pd.Series(framek)
        data['Time_Stamp'] = pd.Series(tstamp)
        data.to_csv(r'test_New.csv')


stream()

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 14, 2020

I note that although you are correctly using infrared in place of RGB, your script is only accessing the left IR stream (index 1), rather than both the left and right IR streams (index 1 and 2) as the example in Section 2.2 of the white paper does.

https://dev.intelrealsense.com/docs/high-speed-capture-mode-of-intel-realsense-depth-camera-d435#section-2-2-enabling-infrared-stream-for-monochrome-image-capture
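As a rough sketch based on your existing config calls (untested here), enabling and retrieving both imagers would look something like:

config.enable_stream(rs.stream.depth, 0, 848, 100, rs.format.z16, 300)
config.enable_stream(rs.stream.infrared, 1, 848, 100, rs.format.y8, 300)  # left imager
config.enable_stream(rs.stream.infrared, 2, 848, 100, rs.format.y8, 300)  # right imager

# then, per frameset:
ir_left = frames.get_infrared_frame(1)
ir_right = frames.get_infrared_frame(2)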

Also, I researched the issue further and learned of an alternative Python method developed in 2019 called save_single_frameset, where every frame is saved as a separate bag file with a different file name.

#2588 (comment)

Given that bags are the most efficient way to record data to file in RealSense, it may give sufficiently fast performance.
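As a rough, untested sketch of how that processing block might be used from Python (the filename prefix here is just an example; to my understanding each frameset is written to its own .bag named from the prefix plus the frame number):

saver = rs.save_single_frameset("high_speed_")  # example prefix, pick your own

pipeline = rs.pipeline()
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames()
        saver.process(frames)  # write this frameset out as its own bag file
finally:
    pipeline.stop()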

@MartyG-RealSense
Collaborator

Hi @Wjordan2020 Do you still require assistance with this case please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
