
starting and stopping streams independently in runtime #1984

Closed
wongfei opened this issue Jul 1, 2018 · 8 comments
wongfei commented Jul 1, 2018

Looking for a good way to start and stop different streams in runtime (after pipeline started). In examples the configuration is filled once before starting the pipeline and then just waiting for frames. Does it means I should create different pipeline for each stream; or completely restart/reconfigure the shared pipeline in case something changed?

@RealSense-Customer-Engineering

[Realsense Customer Engineering Team Comment]
@wongfei
Yes, you can do that via the "sensor" object; the code is attached below for your reference.

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include "example.hpp"          // Include short list of convenience functions for rendering
#include <vector>
#include <iostream>             // std::cerr in the catch blocks
#include <stdexcept>            // std::runtime_error in getStreamProfile

using namespace std;
using namespace rs2;

enum SENSOR {
    DEPTH,
    COLOR
};

// Find a stream profile on the given sensor matching the requested resolution, frame rate and format
stream_profile getStreamProfile(sensor s, int w, int h, int fps, rs2_format format)
{
    for (auto p : s.get_stream_profiles())
    {
        auto vp = p.as<video_stream_profile>();
        if (vp && vp.width() == w && vp.height() == h && p.fps() == fps && p.format() == format)
            return p;
    }
    throw std::runtime_error("No matching stream profile found");
}

// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
    // Create a simple OpenGL window for rendering:
    window app(1280, 720, "RealSense Capture Example");
    // Declare two textures on the GPU, one for color and one for depth
    texture depth_image, color_image;

    // Declare depth colorizer for pretty visualization of depth data
    rs2::colorizer color_map;
    vector<stream_profile> sps;

    rs2::context ctx;
    syncer sync;
    device dev = ctx.query_devices()[0];
    vector<rs2::sensor> sensors = dev.query_sensors();

    sps.push_back(getStreamProfile(sensors[DEPTH], 1280, 720, 30, RS2_FORMAT_Z16));
    sps.push_back(getStreamProfile(sensors[COLOR], 1280, 720, 30, RS2_FORMAT_RGB8));
    sensors[DEPTH].open(sps[DEPTH]);
    sensors[COLOR].open(sps[COLOR]);

    sensors[DEPTH].start(sync);
    sensors[COLOR].start(sync);

    int count = 0;
    while (app && count++ < 200)
    {
        rs2::frameset data = sync.wait_for_frames(); // Wait for next set of frames from the camera

        rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
        rs2::frame color = data.get_color_frame();            // Find the color data

        // Render depth on to the first half of the screen and color on to the second
        depth_image.render(depth, { 0,              0, app.width() / 2, app.height() });
        color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
    }

    // Stop the depth sensor at runtime; the color sensor keeps streaming
    sensors[DEPTH].stop();

    count = 0;
    while (app && count++ < 200)
    {
        rs2::frameset data = sync.wait_for_frames(); // Wait for next set of frames from the camera

        //rs2::frame depth = color_map(data.get_depth_frame()); // Depth is stopped, so no depth data here
        rs2::frame color = data.get_color_frame();              // Find the color data

        // Render only the color stream while depth is stopped
        //depth_image.render(depth, { 0,              0, app.width() / 2, app.height() });
        color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
    }

    // Restart the depth sensor and render both streams again
    sensors[DEPTH].start(sync);
    count = 0;
    while (app && count++ < 200)
    {
        rs2::frameset data = sync.wait_for_frames(); // Wait for next set of frames from the camera

        rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
        rs2::frame color = data.get_color_frame();            // Find the color data

        // Render depth on to the first half of the screen and color on to the second
        depth_image.render(depth, { 0,              0, app.width() / 2, app.height() });
        color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
    }
    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}


wongfei commented Jul 9, 2018

Nice example, thanks!


wernerdd commented Apr 14, 2023

In the example code here, the different streams (Depth, Color) are stopped and started at runtime. But my tests using the tool usbtop show that the traffic stays unchanged (about 40 / 20,000 kb/s to / from the device).
Is there a way to make the corresponding USB traffic change as well when the number of streams changes?


MartyG-RealSense commented Apr 15, 2023

Hi @wernerdd, the script starts the Depth and Color sensors.

 sensors[DEPTH].start(sync);
 sensors[COLOR].start(sync);

It only stops the Depth sensor though, so the Color sensor is still active and likely continues to transmit data over USB after that Depth sensor stop instruction.

sensors[DEPTH].stop();
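If the aim is to free a stream's USB bandwidth entirely, a minimal sketch (not part of the original example) would be to stop and also close() each sensor, then re-open it before restarting; this assumes the sensors and sps vectors from the code above are in scope.

// Sketch only: stop both sensors and release their opened profiles.
sensors[DEPTH].stop();    // stop delivering depth frames to the syncer
sensors[DEPTH].close();   // close the opened profile, releasing its USB bandwidth
sensors[COLOR].stop();
sensors[COLOR].close();

// To resume a stream later, open its profile again before starting:
sensors[DEPTH].open(sps[DEPTH]);
sensors[DEPTH].start(sync);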


wernerdd commented Apr 17, 2023

Thanks for the info. So it was a misunderstanding on my part: it is not the sensor stream in the camera that is stopped and started, only the local internal stream and its display.
Meanwhile I found this interesting discussion of yours in the Intel RealSense community: D415 External-Triggering - Single frames.
It would be good to have such a triggering mode (software and/or hardware) for the D415 camera too, ideally for both sensors and separately. Shouldn't that be possible? It would solve my goal as well.
The goal is to get further frame(s) very quickly when wanted, without unneeded traffic on the USB bus all the time (wasting transfer capacity and perhaps CPU load too).
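A purely software-side sketch of that idea (assuming the latency of opening and starting a sensor per capture is acceptable, and that the first frames may need auto-exposure to settle; this is not a real hardware trigger) could fetch a single frame on demand and close the sensor again so the USB link stays idle between captures. The helper name below is illustrative, not an SDK function.

// Illustrative helper (hypothetical name): grab one frame from a sensor on demand.
rs2::frame capture_single_frame(rs2::sensor& s, const rs2::stream_profile& profile)
{
    rs2::frame_queue queue(1);              // hold at most one frame
    s.open(profile);                        // claim the stream; USB traffic starts here
    s.start(queue);                         // deliver incoming frames into the queue
    rs2::frame f = queue.wait_for_frame();  // block until the first frame arrives
    s.stop();                               // stop streaming again
    s.close();                              // release the profile and its USB bandwidth
    return f;
}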

@MartyG-RealSense

If your goal was - like in the linked discussion - to only capture when a trigger signal is received, then external triggering could do that for the depth stream but not for the color stream.

Instead of using hardware sync, an alternative solution may be a C++ script at #2219 (comment) that captures a .png image simultaneously from all attached cameras when the script is run.
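The linked #2219 script itself is not reproduced here, but a rough sketch of the same pattern (one rs2::pipeline per attached device, with the .png writing left out) could look like this:

// Sketch: grab one frameset from every attached RealSense device.
rs2::context ctx;
std::vector<rs2::pipeline> pipes;
for (auto&& dev : ctx.query_devices())
{
    rs2::config cfg;
    cfg.enable_device(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER)); // bind config to this device
    rs2::pipeline pipe(ctx);
    pipe.start(cfg);
    pipes.push_back(pipe);
}
for (auto& pipe : pipes)
{
    rs2::frameset fs = pipe.wait_for_frames(); // one set of frames per camera
    // ... write fs to .png here, as the linked script does ...
    pipe.stop();
}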


wernerdd commented Feb 7, 2024

BTW: just by switching the SDK version from 2.54.1 to 2.54.2 (and the corresponding librealsense), the processor load on my UP2/iNUC is reduced drastically. With SDK 2.54.1 it is about 40% of one CPU all the time; with SDK 2.54.2 it goes down to about 4%, and about 15% while triggering and getting an image via USB-IF only (4x D415 cameras connected and running at the same time). This is independent of the D415 camera firmware used (5.14.0.0 or 5.15.1.0).

@MartyG-RealSense

Thanks very much @wernerdd for sharing your experience of processor load reduction with the 2.54.2 SDK version!
