
cannot stream depth, colour, gyro and accel all at once using the python3 wrapper #6031

Closed
BetterRobotics opened this issue Mar 12, 2020 · 20 comments


@BetterRobotics

BetterRobotics commented Mar 12, 2020

| Camera Model | D400 |
| Firmware Version | 05.12.03.00 |
| Operating System & Version | Ubuntu 18.04 |
| Kernel Version (Linux Only) | 4.9.140-tegra |
| Platform | Jetson Nano, JetPack 4.3 |
| SDK Version | 2.33.1 |
| Language | python |
| Segment | Robot |

I am trying to gather the depth, colour, accel and gyro data from the D435i, but it's throwing a timeout error. I have read through the existing posts and cannot seem to find a solid answer on how this should be accomplished. Any help would be greatly appreciated.

Here is my code. It's not actually trying to read the gyro data yet, just setting up the camera streams to run alongside each other, but no luck.

from threading import Thread
import pyrealsense2 as rs
import numpy as np
import cv2, sys, time

class VideoStream:
    def __init__(self, resolution=(640, 480), framerate=15):
        self.FOV = 100.4 

        # point conversion
        self.num_sectors = 21 # number of sections
        self.pixel_group = resolution[0] / self.num_sectors
        self.distance_array = [0]*self.num_sectors
        self.depth_index = []  # populated by the update thread

        self.color_image = np.zeros((resolution[1], resolution[0], 3), dtype=np.uint8)  # frames arrive as (rows, cols)
        self.depth_image = np.zeros((resolution[1], resolution[0]), dtype=np.uint16)
        self.camera_stopped = False
        self.resolution = resolution
        self.framerate = framerate
        self.pipeline = rs.pipeline()
        self.config = rs.config()
        self.config.enable_stream(rs.stream.depth, resolution[0], resolution[1], rs.format.z16, framerate)
        self.config.enable_stream(rs.stream.color, resolution[0], resolution[1], rs.format.bgr8, framerate)
        self.config.enable_stream(rs.stream.gyro)        

    def start_camera(self):
        # start the thread to read frames from the video stream
        
        self.pipeline.start(self.config)
        Thread(target=self.update).start()

    def update(self):
        try:
            while not self.camera_stopped:
                # Wait for a coherent pair of frames: depth and color
                frames = self.pipeline.wait_for_frames()
                depth_frame = frames.get_depth_frame()
                color_frame = frames.get_color_frame()
                if not depth_frame or not color_frame:
                    continue

                # Convert images to numpy arrays
                self.depth_image = np.asanyarray(depth_frame.get_data())
                self.color_image = np.asanyarray(color_frame.get_data())
                
                # Treat zero (no data) as far away, then bin pixels and find local minima
                self.depth_image[self.depth_image == 0] = 3000
                depth_index = []
                for idx in range(self.num_sectors):
                    a = int(idx*self.pixel_group)
                    b = int((idx+1)*self.pixel_group)
                    depth_index = np.append(depth_index, np.min(self.depth_image[:, a:b]))

                self.depth_index = depth_index 


        except:
            # report the error; the pipeline is stopped once, in `finally`
            print("Error in Vision", sys.exc_info())

        finally:
            self.pipeline.stop()


    def read(self):
        # return the frame most recently read
        return self.color_image

    def stop_camera(self):
        # signal the update thread to exit; the pipeline is stopped in its `finally`
        self.camera_stopped = True

if __name__ == '__main__':

    vs = VideoStream()
    vs.start_camera()
    time.sleep(5)
    while True:
        cv2.imshow("image", vs.color_image)
        print ("array", vs.depth_index)
        cv2.waitKey(1)

This is the error I get when I leave self.config.enable_stream(rs.stream.gyro) uncommented

Error in Vision (<class 'RuntimeError'>, RuntimeError("Frame didn't arrived within 5000",), <traceback object at 0x7f96f6e688>)

@MartyG-RealSense
Collaborator

I hope that the link below will be helpful to you.

#2945 (comment)

@BetterRobotics
Author

> I hope that the link below will be helpful to you.
>
> #2945 (comment)

Issue #2945 is unrelated to my issue; I feel the SDK is broken, not my code. I cannot stream IMU and video data at the same time with Python 3 on a Jetson Nano running 18.04, with pyrealsense2 built from source using the JetsonHacks script.

@BetterRobotics BetterRobotics changed the title cannot stream depth, colour, gyro and accel all at once. cannot stream depth, colour, gyro and accel all at once using the python3 wrapper Mar 12, 2020
@Ezward

Ezward commented Mar 13, 2020

This is a regression. 2.31.0 had a problem streaming IMU with RGB and depth; this was resolved in 2.32.1. I am now having the same issue with 2.33.1. I had this bug (see #5628) and upgraded to 2.32.1, which solved it (along with giving the IMU its own pipeline, see below). I've just built a new car that is identical to the other car, but it is using 2.33.1 and cannot stream IMU data along with RGB and depth. If I try to stream all 3, I get a timeout trying to read from the pipeline. If I turn off IMU streaming, I can reliably get RGB and depth together. I can also stream the IMU by itself. The other car, running 2.32.1, works great streaming all 3 simultaneously.

@BetterRobotics I found that a separate pipeline for the IMU is required in any case. You can see my code referenced in that bug; what is required is to give the IMU its own pipeline, while RGB and depth can share one. I see in your code that all 3 share a pipeline; I don't think that can work. So I would use 2.32.1 and give the IMU its own pipeline.

Here is the final code that I have working with 2.32.1: https://github.com/autorope/donkeycar/blob/dev/donkeycar/parts/realsense435i.py
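
For readers landing here, a minimal sketch of the two-pipeline split described above (this is not the linked donkeycar code; the resolutions and frame rates are placeholder assumptions, adjust them to your device):

import pyrealsense2 as rs

# Video pipeline: depth and RGB share one pipeline.
vid_pipe = rs.pipeline()
vid_cfg = rs.config()
vid_cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 15)
vid_cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 15)

# IMU pipeline: gyro and accel get a pipeline of their own.
imu_pipe = rs.pipeline()
imu_cfg = rs.config()
imu_cfg.enable_stream(rs.stream.gyro)
imu_cfg.enable_stream(rs.stream.accel)

imu_pipe.start(imu_cfg)  # later comments in this thread suggest starting the IMU first
vid_pipe.start(vid_cfg)

try:
    for _ in range(300):
        vid_frames = vid_pipe.wait_for_frames()
        imu_frames = imu_pipe.wait_for_frames()
        depth = vid_frames.get_depth_frame()
        color = vid_frames.get_color_frame()
        gyro = imu_frames.first_or_default(rs.stream.gyro)
        accel = imu_frames.first_or_default(rs.stream.accel)
        if gyro and accel:
            print(gyro.as_motion_frame().get_motion_data(),
                  accel.as_motion_frame().get_motion_data())
finally:
    vid_pipe.stop()
    imu_pipe.stop()

In practice each pipeline would be polled from its own thread, as in the donkeycar code linked above, so a slow video frame never blocks an IMU read.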

@MartyG-RealSense
Collaborator

@Ezward Thanks so much for your input and your code contribution!

@BetterRobotics Does the advice and code kindly provided by Ezward help with your problem, please?

@BetterRobotics
Author

@Ezward Thanks for your reply. I believe your solution will bring in the needed data; I will try tomorrow (EST) and report back.

@MartyG-RealSense this may solve my issue, but I feel it should be better documented in an example. Happy to test functionality on my end. However, given this information, shouldn't this be marked as a BUG? There is an incongruity between 2.32 and 2.33.

Cheers

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 13, 2020

@BetterRobotics If you could please post a separate case explaining what you would like documented better then I can give it a 'Documentation' label so that it can be tracked.

In the meantime, given that there is more than one report of potential issues with the IMU in the recent SDK versions, I will add the 'Bug' label to this particular case so it can be tracked and investigated.

@Ezward

Ezward commented Mar 13, 2020

I can confirm that re-installing 2.32.1 on my Nvidia Jetson Nano fixed the issue of streaming RGB, depth and IMU simultaneously (using the code referenced in my previous comment). I have a colleague who had the identical issue and also resolved it by reverting to 2.32.1, so it is pretty clear 2.33.1 has issues, at least in the Python code. PS: we both used the JetsonHacks installRealSenseSDK project to build and deploy the code on the Jetson Nano, specifically the buildLibrealsense.sh script: https://github.com/jetsonhacks/installRealSenseSDK

@BetterRobotics
Author

@MartyG-RealSense I can confirm the solution is running with 2.32.1, using the below code:

from threading import Thread
import pyrealsense2 as rs
import numpy as np
import cv2, sys, time


class VideoStream:
    def __init__(self, resolution=(640, 480), framerate=15):
        self.FOV = 100.4

        # point conversion
        self.num_sectors = 21 # number of sections
        self.pixel_group = resolution[0] / self.num_sectors
        self.distance_array = [0]*self.num_sectors
        self.depth_index = []

        self.color_image = np.zeros((resolution[1], resolution[0], 3), dtype=np.uint8)  # frames arrive as (rows, cols)
        self.depth_image = np.zeros((resolution[1], resolution[0]), dtype=np.uint16)
        self.camera_stopped = False
        self.resolution = resolution
        self.framerate = framerate
        
        self.imu_pipe = rs.pipeline()
        self.imu_config = rs.config()
        self.imu_config.enable_stream(rs.stream.gyro)
        self.imu_config.enable_stream(rs.stream.accel)
        self.acc = []
        self.gyro = []
        
        self.vid_pipe = rs.pipeline()
        self.config = rs.config()
        self.config.enable_stream(rs.stream.depth, resolution[0], resolution[1], rs.format.z16, framerate)
        self.config.enable_stream(rs.stream.color, resolution[0], resolution[1], rs.format.bgr8, framerate)
        
        
    def start_camera(self):
        # start the thread to read frames from the video stream
        self.vid_pipe.start(self.config)
        Thread(target=self.update_cam).start()       

    def start_imu(self):
        # start the thread to read motion frames from the IMU stream
        self.imu_pipe.start(self.imu_config)
        Thread(target=self.update_imu).start()

    def update_cam(self):
        try:
            print("got to Aa")
            while True:
                # Wait for a coherent pair of frames: depth and color
                vid_frames = self.vid_pipe.wait_for_frames()
                depth_frame = vid_frames.get_depth_frame()
                color_frame = vid_frames.get_color_frame()
                if not depth_frame or not color_frame:
                    continue

                # Convert images to numpy arrays
                self.depth_image = np.asanyarray(depth_frame.get_data())
                self.color_image = np.asanyarray(color_frame.get_data())
                
                # Treat zero (no data) as far away, then bin pixels and find local minima
                self.depth_image[self.depth_image == 0] = 3000
                depth_index = []
                for idx in range(self.num_sectors):
                    a = int(idx*self.pixel_group)
                    b = int((idx+1)*self.pixel_group)
                    depth_index = np.append(depth_index, np.min(self.depth_image[:, a:b]))
                    
                self.depth_index = depth_index 
        except:
            # report the error; the pipeline is stopped once, in `finally`
            print("Error in Vision", sys.exc_info())

        finally:
            self.vid_pipe.stop()


    def update_imu(self):
        try:
            print("got to Ab")
            while True:
                # Wait for the next motion frameset: accel and gyro
                mot_frames = self.imu_pipe.wait_for_frames()
                # NOTE: indexing assumes the frameset ordering observed here;
                # first_or_default(rs.stream.accel / rs.stream.gyro) is more robust
                self.acc = mot_frames[0].as_motion_frame().get_motion_data()
                self.gyro = mot_frames[1].as_motion_frame().get_motion_data()

        except:
            print("Error in IMU", sys.exc_info())

        finally:
            self.imu_pipe.stop()



if __name__ == '__main__':

    vs = VideoStream()
    vs.start_imu()
    vs.start_camera()
    
    
    time.sleep(5)
    while True:
        cv2.imshow("image", vs.color_image)
        print("acc, gyro", vs.acc, vs.gyro)
        print ("array", vs.depth_index)
        cv2.waitKey(1)

Thanks to @Ezward for the info; you were spot on.

@MartyG-RealSense
Collaborator

Thanks so much @BetterRobotics and @Ezward

@BetterRobotics Is there anything else you need assistance with?

@Ezward

Ezward commented Mar 14, 2020

@MartyG-RealSense I think there should still be a bug logged against 2.33.1 for this issue. What I describe is a workaround, not a fix. Most users will simply install the latest and find their software does not work.

@MartyG-RealSense
Collaborator

@Ezward okay thanks, this case will be left open with the "Bug" label for the RealSense team to consider.

@juliussin

Hi! I'd like to join this discussion, since I'm also using a Jetson Nano and I also need to stream depth and color (aligned) plus IMU. I'm very satisfied with @Ezward's explanation that the IMU needs its own pipeline.

  1. I need to ask about the SDK: which version do you think works best given all of these problems? With the newest version I had a timestamp error, and I want to stream all three of depth, color and IMU.

  2. I found that (using SDK 2.31.0) if I don't align the color and the depth, I'm able to stream all three with one pipeline (haven't tried two pipelines). If I align the color and the depth, I can't get the IMU values. What is your suggestion? (I tried @Ezward's code and it didn't work on my device; it timed out.)

  3. I still don't really understand pipelines. Can I run my processing on each piece of data as it arrives? For example, if the gyro rate is 200 fps and the accel rate is 250 fps, I want to process the data each time new data arrives, which is unsynchronized (gyro every 1/200 s and accel every 1/250 s).
    Or, let's say in one pipeline you can only get a frameset: with those IMU settings, how many fps do I get, 200 or 250? (One possible approach is sketched after this list.)

  4. If I have 2 pipelines, does it mean I can process whichever frame arrives first? For example, an IMU frame comes every 1/200 s and a depth+color frame comes every 1/60 s. Please correct me if I'm wrong.

Thank you!
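
On question 3, one way to process each IMU sample the moment it arrives, at its own rate, is the callback form of pipeline.start. A minimal sketch, assuming your pyrealsense2 build exposes this overload and using illustrative rates and resolutions:

import pyrealsense2 as rs

def imu_callback(frame):
    # Called once per incoming frame: gyro at ~200 Hz, accel at ~250 Hz,
    # unsynchronized, exactly as the device produces them.
    motion = frame.as_motion_frame()
    if not motion:
        return
    stream = motion.get_profile().stream_type()
    data = motion.get_motion_data()
    if stream == rs.stream.gyro:
        pass   # handle gyro sample (rad/s) here
    elif stream == rs.stream.accel:
        pass   # handle accel sample (m/s^2) here

imu_pipe = rs.pipeline()
imu_cfg = rs.config()
imu_cfg.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
imu_cfg.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)
imu_pipe.start(imu_cfg, imu_callback)  # callback overload of start()

# Alignment is applied only on the video pipeline, so it cannot
# interfere with the IMU stream (relevant to question 2).
vid_pipe = rs.pipeline()
vid_cfg = rs.config()
vid_cfg.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 30)
vid_cfg.enable_stream(rs.stream.color, 640, 360, rs.format.bgr8, 30)
vid_pipe.start(vid_cfg)
align = rs.align(rs.stream.color)

try:
    while True:
        frames = vid_pipe.wait_for_frames()
        aligned = align.process(frames)
        depth = aligned.get_depth_frame()
        color = aligned.get_color_frame()
        # ... process aligned depth/color at the video rate ...
finally:
    vid_pipe.stop()
    imu_pipe.stop()

And on question 4: yes, with two pipelines the reads are independent, so each consumer (thread or callback) handles its data as soon as its own pipeline produces it.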

@BetterRobotics
Author

BetterRobotics commented May 7, 2020

> @BetterRobotics Is there anything else you need assistance with?

@MartyG-RealSense No thanks, appreciate the help.

@juliussin make sure you have 2.32.1. Also, I found you need to set up the IMU first. Using the code I provided above, I successfully read all three streams simultaneously using two pipelines, one for video and one for the IMU.

@juliussin

@BetterRobotics Thank you for sharing your code! I've upgraded my SDK to 2.32.1 and tried your code. However, I can't get the IMU data: it fails to read the IMU and shows acc, gyro [] []. I wonder what's wrong with my device? It's also lagging around 100 ms per frame for depth and color (I use 30 fps). I also found this version gives me lower fps than my previous version (2.31.0) with the same code I wrote earlier.

@BetterRobotics
Author

> @BetterRobotics Thank you for sharing your code! I've upgraded my SDK to 2.32.1 and tried your code. However, I can't get the IMU data: it fails to read the IMU and shows acc, gyro [] []. I wonder what's wrong with my device? It's also lagging around 100 ms per frame for depth and color (I use 30 fps). I also found this version gives me lower fps than my previous version (2.31.0) with the same code I wrote earlier.

OK, so it seems the code is not the issue. Have you updated your firmware? I'm running 05.12.03.00; what version are you on? Does the IMU data stream in realsense-viewer?

@juliussin

> > @BetterRobotics Thank you for sharing your code! I've upgraded my SDK to 2.32.1 and tried your code. However, I can't get the IMU data: it fails to read the IMU and shows acc, gyro [] []. I wonder what's wrong with my device? It's also lagging around 100 ms per frame for depth and color (I use 30 fps). I also found this version gives me lower fps than my previous version (2.31.0) with the same code I wrote earlier.
>
> OK, so it seems the code is not the issue. Have you updated your firmware? I'm running 05.12.03.00; what version are you on? Does the IMU data stream in realsense-viewer?

I've upgraded my firmware with the recommended firmware option (which is 5.12.02.100) and yes, the IMU data streams in realsense-viewer. Here is my previous code to check:

import pyrealsense2 as rs
import numpy as np
import cv2
import time

print(rs.__version__)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 60)
config.enable_stream(rs.stream.color, 640, 360, rs.format.bgr8, 60)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)

pipeline.start(config)

try:
    while True:
        time1 = time.time()

        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        accel_frame = frames.first_or_default(rs.stream.accel).as_motion_frame()
        gyros_frame = frames.first_or_default(rs.stream.gyro).as_motion_frame()

        if not depth_frame or not color_frame or not accel_frame or not gyros_frame:
            continue

        if accel_frame:
            accel_sample = accel_frame.get_motion_data()
            print("Accel: ", accel_sample)
        if gyros_frame:
            gyros_sample = gyros_frame.get_motion_data()
            print("Gyros: ", gyros_sample)

        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        cv2.imshow("Color", color_image)
        cv2.imshow("Depth Colormap", depth_colormap)
        key = cv2.waitKey(1)
        if key == 27:
            break
        print('Done 1 Frame! Time: {0:0.3f}ms & FPS: {1:0.2f}'.format(
            (time.time()-time1)*1000,
            1/(time.time()-time1)
        ))

finally:
    pipeline.stop()
    cv2.destroyAllWindows()

I know using only 1 pipeline is not a good idea, but this was just for testing purposes (before I read this issue), and I can get the IMU data with this code (but fail to read the IMU with the code you provided above), though now with an extra bit of delay in the depth and color streams. Did I do something wrong?

@BetterRobotics
Author

@juliussin I'm not sure how the SDK is set up; I'm unclear on multiple data sample rates in the same pipe. I let the pipe set the defaults for the IMU; if it can do that, great. Try calibrating your IMU/device; that might be a starting point. Also, try reversing the order in which you set up the video and IMU pipes; for some reason, my code would give the [] [] response when starting a depth or color stream first.

@juliussin

> @juliussin I'm not sure how the SDK is set up; I'm unclear on multiple data sample rates in the same pipe. I let the pipe set the defaults for the IMU; if it can do that, great. Try calibrating your IMU/device; that might be a starting point. Also, try reversing the order in which you set up the video and IMU pipes; for some reason, my code would give the [] [] response when starting a depth or color stream first.

This is interesting. I tried to calibrate my IMU with rs-imu-calibration.py but got an error:

Writing calibration to device.

Done. failed to set power state
Segmentation fault (core dumped)

I also got the same error if I start the video pipeline first, which gave me:

Traceback (most recent call last):
  File "/home/.../test.py", line 99, in <module>
    vs.start_imu()
  File "/home/.../test.py", line 43, in start_imu
    self.imu_pipe.start(self.imu_config)
RuntimeError: failed to set power state

Should I open a new issue for this?

@BetterRobotics
Author

BetterRobotics commented May 8, 2020

> Should I open a new issue for this?

@juliussin I feel this is the same issue, with more detail as to why it wouldn't stream, but it may warrant a new tag for more attention. Also, try updating to 05.12.03.00, then calibrate and try again.
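
As an aside on the "failed to set power state" error: it can appear when a previous process left the device claimed, and a software power-cycle sometimes clears it. A hedged sketch using pyrealsense2's hardware_reset, worth trying before re-running rs-imu-calibration.py:

import time
import pyrealsense2 as rs

# Power-cycle every connected RealSense device over USB.
ctx = rs.context()
for dev in ctx.query_devices():
    print("resetting", dev.get_info(rs.camera_info.name))
    dev.hardware_reset()
time.sleep(5)  # allow the device to re-enumerate before opening pipelines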

@MartyG-RealSense
Collaborator

Case closed due to potential solution being present in #6370
