Access & control any devices in a multicam setup #2219
An interesting information source is a script posted a week ago by the user Westa. He describes it as follows: "It explains how the RealSense context works - and how to iterate over all the current devices. It also shows a mechanism for resetting a found Intel device - and a mechanism to wait for the hardware to finish resetting before continuing" |
First of all, thanks for your idea and this new issue. After reading all of this, I understand that you want to capture and save frames simultaneously from both the D415 and a webcam. In the program below we can see that the condition works with a D400 series camera: auto dev = devices[0];
So, as we see, I think that if you want to detect the connectivity of your webcam you should create another variable ( auto dev2 = devices[1] ) and make the appropriate condition to verify it. My supposition is that the OpenCV function ( VideoCapture cap(0) ) will handle this nicely. NB: The program posted by WESTA sweeps all connected devices, but only RealSense ones, so I don't think the webcam will appear in the scan. I hope that helps with something; keep us in touch for any further questions. |
Thank you @MartyG-RealSense, I took a look at this post but unfortunately, I don't think that my difficulties are related to hardware reset (I'll keep the code anyway, it might come in handy someday !). I managed to go through the different devices with this code (which is working) :
But as soon as I start a pipeline, only the first device is selected... I went through the rs-multicam code (again) that I mentioned earlier, and I noticed something. In this bit :
It seems that the devices are selected with their respective serial numbers, is that correct ? So far, I have based my algorithm on this code and tried to iterate through
|
Thank you for your interest @TAREK-ELOUARET ! |
My intention was that the 'iteration through all devices' part might be the main part that was of potential use. Hardware reset just happened to be part of that script. *smiles* I have seen examples where the camera is selected either through a device number - e.g. (0) or (1) - or by providing the camera's full serial number. Both approaches are likely valid. For example, the multiple connection aspect of the unofficial SDK 2.0 Unity wrapper extension takes the approach of asking for the serial number to be provided. https://user-images.githubusercontent.com/20081122/38487218-6ab82dce-3c1a-11e8-9cd7-31d742cb5bba.png |
I agree with that ! The bit you are talking about :
is based on And you are right ! It is indeed working when I want to display the different devices information (see my 2nd post).
I hope it'll work with the serial numbers, because it looks like I don't know how to use the device number ! Anyway, I think my best chance at the moment is to modify the rs-multicam example, because it is also useful for accessing cameras simultaneously (I want to do that too). |
I've got it to work with the serial number selection (finally), so I can now select the camera I want (yay), but now I'm struggling with the "simultaneously" part. So my question now is: what is the best way to do the captures simultaneously ? By that I mean that I want to save to disk 2 different pointclouds/images from 2 different cameras corresponding to the same instant (more or less, but the closer the captures are to each other, the better). Any idea is welcome ! For those who want to know, some bits of code (yes, it could be way better):
(part of) The main :
Please tell me if it can be improved, I'm here to learn. |
I was sure someone had asked recently about saving both streams separately, and after much searching I found that it was you, at the start of this discussion. Shows the importance of going back to the start and re-reading to refresh the memory when a discussion thread gets long. :) The Software Support Model doc page provides some useful insights about how Librealsense2 handles multi streaming, in the section headed "Multi-Streaming Model". https://github.com/IntelRealSense/librealsense/blob/master/doc/rs400_support.md |
Ok, so from what I read, the rs400_support.md says that multi-streaming is possible (and easier than before), which is not surprising given the rs-multicam example.
I took a quick look at the SDK documentation but I still can't find those functions. Your document says it's possible, but not how :( (unless I missed something ?). Some bits of code are provided but, if I'm not wrong, they are only useful for sensor (not device) control. And yes, I try to describe the difficulties I'm facing in as much detail as I can, so it is quite likely that these threads will become unreasonably long (!). |
Nothing unreasonable about being long. It takes as long as it takes. I've had cases go to 6 or 7 pages sometimes, so don't worry! It's helpful to be given as much detail as possible, so thanks muchly for that. |
@AntoineWefit Thanks for sharing your experience with us : ) . For the simultaneity problem, I think you should use multithreading in C++: put each capture process on its own thread in order to get the good, fast results you want. |
Hi @TAREK-ELOUARET @MartyG-RealSense @AntoineWefit The basic
The following snippet shows how to control individual sensors:

```cpp
#include <librealsense2/rs.hpp>
#include <thread>
#include <vector>
#include <chrono>
#include <map>
#include <iostream>

int main()
{
    using namespace rs2;
    using namespace std;
    using namespace std::chrono;

    // Using context.query_devices():
    context ctx;
    std::vector<sensor> sensors;
    for (auto dev : ctx.query_devices())
    {
        for (auto sensor : dev.query_sensors())
        {
            sensor.open(sensor.get_stream_profiles().front()); // TODO: Find the correct profile
            sensor.start([sensor](frame f){
                std::cout << "New frame from " << sensor.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER)
                          << " of type " << f.get_profile().stream_name() << std::endl;
            });
            sensors.push_back(sensor); // Save & keep-alive for later clean-up
        }
    }
    this_thread::sleep_for(chrono::seconds(3));
    for (auto sensor : sensors) {
        sensor.stop();
        sensor.close();
    }
}
```

I'm trying to follow the post to understand how best to help.
The |
Thanks for coming in to help at the weekend, Dorodnic. :) The aspect of the camera that @AntoineWefit was looking for documentation clarification on was how to stream multiple cameras within a single process (something that the Software Support Model documentation page said was possible). Thanks muchly for providing the script above. |
I must be missing something, isn't that what We should probably collect all these snippets into the wiki / some other document. Happy to help. Here's the same idea with

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>
#include <map>
#include <chrono>
#include <thread>
#include <string>

int main()
{
    using namespace rs2;
    using namespace std;
    using namespace chrono;

    context ctx;
    map<string, pipeline> pipes;
    for (auto dev : ctx.query_devices())
    {
        auto serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        config cfg;
        cfg.enable_device(serial);
        pipeline p;
        p.start(cfg);
        pipes[serial] = p;
    }
    auto start = steady_clock::now();
    while (duration_cast<seconds>(steady_clock::now() - start).count() < 3)
    {
        for (auto kvp : pipes)
        {
            frameset fs;
            if (kvp.second.poll_for_frames(&fs))
            {
                std::cout << fs.size() << " synchronized frames from camera " << kvp.first << std::endl;
            }
        }
        this_thread::sleep_for(milliseconds(1)); // Otherwise we get 100% CPU
    }
}
```
|
In regard to documentation additions, top of my own personal wish-list would be a doc on performing hardware sync with D435s using an external signal generator (a similar setup to the January 2018 volumetric capture with four D435s at the Sundance tech shack), or alternatively a D415 master to provide the sync pulse and all-D435 slaves. Thanks again for your very hard work. Dorodnic. |
I see, we will try to add more information on that. The multi-camera white paper is constantly being updated, but I agree that it talks less about the software side. Here is another snippet, this time with

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>
#include <map>
#include <chrono>
#include <thread>
#include <string>
#include <vector>

int main()
{
    using namespace rs2;
    using namespace std;
    using namespace chrono;

    context ctx;
    vector<thread> threads;
    for (auto dev : ctx.query_devices())
    {
        string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        config cfg;
        cfg.enable_device(serial);
        pipeline p;
        p.start(cfg);
        threads.emplace_back([p, serial](){
            auto start = steady_clock::now();
            while (duration_cast<seconds>(steady_clock::now() - start).count() < 3)
            {
                frameset fs = p.wait_for_frames();
                std::cout << fs.size() << " synchronized frames from camera " << serial << std::endl;
            }
        });
    }
    for (auto& t : threads) t.join(); // Must join / detach all threads
}
```
|
I'll post a link to this new material over on the Intel Support RealSense site. Cheers. ^_^ |
Thank you for your help @dorodnic @MartyG-RealSense @TAREK-ELOUARET ! I appreciate it.
Yes, I think so ! Due to my lack of knowledge in coding, I understood how to start multiple streams (kind of), but I had difficulties handling them correctly : selecting the right camera / saving data to disk / selecting the right pipeline, for example... I'm, in fact, trying to combine different examples from the documentation; I managed to make them work separately, but all together is another story ! Out of curiosity, I saw that you used
As I said in my first and second posts, I tried it but without success... |
[EDIT] : Ok, so now, with your "pipeline & threads" snippet @dorodnic I managed to start both cameras simultaneously. More questions :
bit is doing ? The code I tried really quickly, please feel free to suggest improvements (fyi I have 2 cameras, which explains the
I basically tried to differentiate the 2 pipelines by giving them an index, and then use this index to save the data from each camera once both of them have started to stream. It kind of worked, except that it seems like I can't access the pipelines with this method, as the program is saving 2 files with the same name (which means only 1 pipeline is selected). I have difficulties understanding "where" the data actually is when you start a pipeline, and how to access it. For my case, do I have to retrieve it from the
Please pardon my poor coding skills/knowledge of the SDK. |
Trying to conserve lines of code I missed

```cpp
// Since we want to use wait_for_frames (which waits until frames from a specific camera are ready)
// but we don't want the two cameras to block each other, we will run both pipelines in parallel.
// We define a new function (it can be a function outside main, but it is convenient to do it in place).
// This function will be executed for every pipeline.
auto what_to_do = [p, serial](){ // the [p, serial] part is the list of variables we want to share with the function
    // We need to define some "exit" condition, otherwise the threads will keep waiting for frames.
    // This specific condition will exit after 3 seconds.
    // Alternatively, you can create a std::atomic_bool to_exit variable in your main function,
    // initialise it to false, run while(!to_exit), and before thread::join set it to true.
    // You can also just run the content of the while loop once.
    auto start = steady_clock::now();
    while (duration_cast<seconds>(steady_clock::now() - start).count() < 3)
    {
        // Block this specific thread until frames for this camera are ready
        frameset fs = p.wait_for_frames();
        // Here you can put any processing you want to do
        // For example:
        rs2::pointcloud pc;
        rs2::points points;
        auto depth = fs.get_depth_frame();
        auto color = fs.get_color_frame();
        pc.map_to(color);
        points = pc.calculate(depth);
        string filename = serial + ".ply";
        cout << "Saving 3D model to " << filename << endl;
        points.export_to_ply(filename, color);
    }
};
// Once this function is defined, we can create a thread to run it in parallel to everything else.
// This will start executing the function right away.
std::thread t(what_to_do);
// If we don't save the thread object somewhere, it will be destroyed instantly, so we add it
// to the threads collection. Note: std::thread is not copyable, so it has to be moved in.
threads.push_back(std::move(t));
```

Hope this helps |
@dorodnic Yes, quite a bit ! If I'm not mistaken, you suggest defining a function that'll do the capture, and running that function twice in parallel. With your help, I think I'm really close to my goal. However, I'm still stuck with the error I mentioned earlier, which might be triggered by the
correctly. Can't start to use the second camera (the blurred out text is the path to the .exe) : The code :
|
Well well well, looks like I can answer myself this time ! Thanks to @dorodnic and his examples I managed to make it work. I have to admit that the I'm pretty happy with the result, as the code is much cleaner than before and is, of course, working (!) For anyone interested, the following code will capture and save a picture for each connected D415 simultaneously. (One can easily replace the picture capture with a pointcloud capture, for example.) Thank you @MartyG-RealSense @TAREK-ELOUARET and of course @dorodnic for your help ! How I checked for simultaneity of the (2) captures : The code :
|
Issue Description
What is the best way to access and control (a) device(s) in a multicam setup ?
I wrote some code based on the examples in the SDK, which is supposed to take control of the cameras in a multiple-D415 setup. "Supposed", because right now only one of the cameras is selected to capture data.
To be clear, the problem is clearly my code and not the hardware or the OS, because I'm learning as I go.
The idea is simple : I would like to write a function which selects one camera after another to capture a pointcloud for each camera (with texture, and save it to disk), and ideally another one which selects both cameras to capture an image for each (and save it to disk).
What I've got so far is a program doing the different captures, but without the camera selection, and it is far from selecting both cameras simultaneously.
I reviewed the rs-multicam example, which should have helped me, but I don't understand how to select the cameras as I want. I'm also guessing that the sensor-control example provides some relevant information. Unfortunately, I can't provide the code I wrote (which doesn't work anyway).
Naïvely, I thought that replacing `auto dev = devices[0];` with `auto dev = devices[1];` in this example would select the second device, but it's not working...
[EDIT]
It is working for other cameras (platform camera) : with two D415s and a webcam, I can select the webcam with `auto dev = devices[2];` but both `auto dev = devices[0];` and `auto dev = devices[1];` select the same D415...
I don't really care whether the cameras are synchronized or not, because I don't want to stream and I want all the data to be captured independently.
Any help is appreciated !