Multicamera setup with D415 & L515: How to avoid interference? #8511
This is an interesting setup and configuration. The two different camera technologies handle syncing very differently, as outlined in the whitepapers that discuss syncing. Having an actual sync cable or signal that feeds into both camera technologies is something that we haven't investigated on our side. The D400 has a master/slave type of syncing mechanism; with the L515, the sync signal is basically turning off the laser, in simple terms. And ramping up the laser takes time once the sync signal is sent to the camera.
We already have a lot of depth cameras and would like to test the lidars as well, and hoped it would be easier… We are familiar with the sync pulse of the D415; sadly this camera only syncs to pulses which are close to the camera frequency. This means we cannot use the sync pulses to turn the D415 on/off. The Raspberry Pis can be used to create pulses of the right frequency; however, each camera has its own Raspberry Pi and we fail to sync them correctly at the sub-second level. For the lidars the sync pulses might work (it's no problem to wait a few seconds so the laser can power up), but syncing them with the D415s is difficult (without sending all the sync pulses from one Raspberry Pi). Since we don't have any speed requirements, we prefer to turn the cameras on/off through software, for example by opening/closing the stream. This works on Windows, but throws errors on a Raspberry Pi, as if some buffers overflow or are not closed…
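A minimal Python sketch of that software-only on/off approach, assuming pyrealsense2 is installed; the serial numbers, warm-up frame count and timeout below are placeholder assumptions, not values from this thread:

```python
# Sequential "open, capture, fully close" per camera, so only one laser
# source is active at a time. Hardware calls are guarded so the pure
# ordering helper also works without the SDK present.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # allows capture_order() to be used without the SDK

def capture_order(d415_serials, l515_serials):
    """D415s first, then the lidars one by one (the order the issue describes)."""
    return list(d415_serials) + list(l515_serials)

def capture_one(serial, warmup_frames=15, timeout_ms=10000):
    """Open a single camera, warm it up, grab one frameset, and close it."""
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device(serial)
    pipeline.start(config)
    try:
        for _ in range(warmup_frames):       # let auto-exposure settle
            pipeline.wait_for_frames(timeout_ms)
        frames = pipeline.wait_for_frames(timeout_ms)
        return frames.get_depth_frame(), frames.get_color_frame()
    finally:
        pipeline.stop()                      # release the device explicitly

# Usage on the capture host, with real serial numbers substituted:
# for serial in capture_order(D415_SERIALS, L515_SERIALS):
#     depth, color = capture_one(serial)
```

The `finally` block matters here: it guarantees `pipeline.stop()` runs even when a frame times out, which is one plausible cause of the buffer-like errors described above.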
Glad to hear that you're able to determine that just opening and closing the stream is good enough for your setup and configuration, and that things are working on Windows. If it's not working on Raspberry Pi then maybe it's more of a setup/installation issue with the SDK on the system than it is the cameras themselves. Maybe the installation docs within the doc folder may help: https://github.com/IntelRealSense/librealsense/tree/master/doc
Thank you for the reply. We followed those instructions; sadly it did not help. Please let me know if you have any other tips/suggestions… We invested a lot of time and effort into building this setup, and really don't want to fail on these small issues (bugs?)
Hi @autimator Do you require further assistance with this case, please? The case will be closed after 7 days from the time of writing this if we do not hear from you. Thanks!
Yes @MartyG-RealSense, the problem still persists. My solution right now is to reboot the Raspberry Pi when this happens, but that's far from ideal. The new updates of the library & lidar have not fixed the issue. Please let me know if you can think of any other solutions/things to try.
I recall a past plant-analysis project with 400 Series cameras where an image was being captured periodically every 10-15 minutes, though in their case they used a single camera and had the plants on a conveyor belt. It's an interesting case to read. IntelRealSense/realsense-ros#1354

In regard to the syncing issues described for the 400 Series cameras in your original message: whilst in the hardware sync system described in the original 400 Series multicam white paper the cameras only waited for a trigger pulse for a limited period of time before capturing, and were also difficult to trigger with signal-generator equipment, the later multicam white paper External Synchronization addressed these issues. Under this expanded system, which added inter-cam sync modes 3 and 4+, cameras can wait indefinitely for a trigger when in the new 'genlock' mode, and it is also easier to sync triggers. https://dev.intelrealsense.com/docs/external-synchronization-of-intel-realsense-depth-cameras

However, the External Synchronization system is suited to cameras that have a fast global shutter on their sensors, like the D435 / D435i and D455, so it is not likely to be suited to your D415s, unfortunately.

In regard to the problem of selecting a series of cameras one by one, a RealSense user once created a script to do this with D415 cameras. Their goal was actually to have all captures initiated simultaneously for all cameras, which they eventually succeeded in. Mid-way through their project, though, they had an intermediary C++ script where the list of cameras was initiated one by one. This script has a section of code that only allows D415 cameras to activate, so this would have to be edited / removed if you wanted to capture with different models. If you are able to use C++, it may be worth trying that script in place of your own to see whether the errors still occur.

It may also be worth trying the 'all cameras capture simultaneously' final version of the script to see whether taking an instant snapshot from all cameras avoids the usual problems caused by interference between L515s. If you needed a Python-based multicam reference, a RealSense user recently shared a Python multiple-camera script.
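As a rough illustration of the 'all cameras capture simultaneously' idea, a software-only Python sketch (assuming pyrealsense2; the camera-opening step is injected as a callable so the grab/cleanup logic can be exercised without hardware) might start every pipeline first, then collect one frameset from each:

```python
# Start every camera before grabbing any frames, so exposure windows
# overlap as closely as software-only triggering allows.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # snapshot_all() itself has no hard SDK dependency

def snapshot_all(serials, open_camera, timeout_ms=10000):
    """open_camera(serial) must return a started, pipeline-like object
    exposing wait_for_frames(timeout_ms) and stop(). Returns one
    frameset per serial; always stops every pipeline it opened."""
    pipelines = []
    try:
        for s in serials:                  # phase 1: start everything
            pipelines.append(open_camera(s))
        return {s: p.wait_for_frames(timeout_ms)   # phase 2: grab once each
                for s, p in zip(serials, pipelines)}
    finally:
        for p in pipelines:
            p.stop()

def open_realsense(serial):
    """Real opener for a host with cameras attached (requires pyrealsense2)."""
    cfg = rs.config()
    cfg.enable_device(serial)
    p = rs.pipeline()
    p.start(cfg)
    return p

# Usage with real hardware (left commented out here):
# framesets = snapshot_all(["D415_SERIAL_1", "L515_SERIAL_1"], open_realsense)
```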
Hi @autimator Do you require further assistance with this case, please? Thanks!
Hello @MartyG-RealSense, thinking in a different direction: would it maybe be possible to turn only the lasers on/off while the camera stays on? That might also solve the interference problems (if that does not crash the lidar like restarting does).

As for the synchronization, the cameras are all controlled by their own Raspberry Pi and we don't have the option to lay additional cables for the hardware triggering. This leads to some small time differences between images, but we don't have any problems with those as the object remains stationary (it moves centimeters over an entire day).
The IR emitter of the D415, D435/i and D455 cameras is on a separate projector component and can operate independently of the imaging sensors. So you can enable and disable the laser on those camera models whilst the camera remains constantly enabled. Turning off the laser can reduce the detail of the depth image. You can compensate for this if the scene that the cameras are in is well lit, as the 400 Series cameras can alternatively use ambient light to analyze surfaces in the observed scene for depth detail.
That sounds promising. I also saw that the laser settings can be controlled through the Python wrapper: #1258 Sadly I don't have any L515s lying around at the moment to test with. Do you know if turning the laser power down and up again can be used as a trick to avoid interference between the lidars?
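For reference, a minimal sketch of toggling the emitter through the Python wrapper while the stream stays up, assuming pyrealsense2 is installed; option availability varies per camera model, so `supports()` is checked before every `set_option()`:

```python
# Toggle the projector/laser without stopping the stream. The SDK import
# is guarded so the pure laser_target() helper works without hardware.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def laser_target(on, power_min, power_max):
    """Target laser_power value: maximum when enabling, minimum when disabling."""
    return power_max if on else power_min

def set_laser(depth_sensor, on):
    """Prefer the emitter_enabled switch; fall back to driving laser_power
    across its supported range if that option is not exposed."""
    if depth_sensor.supports(rs.option.emitter_enabled):
        depth_sensor.set_option(rs.option.emitter_enabled, 1.0 if on else 0.0)
    elif depth_sensor.supports(rs.option.laser_power):
        rng = depth_sensor.get_option_range(rs.option.laser_power)
        depth_sensor.set_option(rs.option.laser_power,
                                laser_target(on, rng.min, rng.max))

# Usage on a host with a camera attached (left commented out here):
# dev = rs.context().query_devices()[0]
# set_laser(dev.first_depth_sensor(), False)   # laser off, stream stays on
```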
Logically, if a disruptive infrared source in the vicinity of the L515 is removed then it should improve the L515's performance. I don't have a practical example to quote though, as typically it is recommended that different models of RealSense camera are not mixed together in a multi-camera setup (e.g. having all D415 or all L515).
Hello @MartyG-RealSense, that's a whole new level. Right now we just have 10 D415 cameras and 4 lidars pointed at different parts of the same object and are trying to take images without interference, and I made this GitHub issue to find a way to set this up… As each camera is controlled by a different computer, the hardware triggers were not an option for us. But maybe just turning the laser power down to zero can also be used as a trick to avoid interference from other lidar cameras?
As long as the scene that the cameras are in is well lit, the D415 cameras should be able to use ambient light in the scene to analyze surfaces for depth detail in the absence of the IR dot projection.

The L515 cameras may not necessarily all need to be on the same computer in order to take advantage of being able to deal with interference from multiple L515s grouped together (described in the L515 hardware sync white-paper document). In the L515 multicam system, all the L515s are slaves and the master sync signal is transmitted by an external trigger device. So in theory (I do not have a practical example to quote to confirm it), all L515s should follow the timing of the external triggering device even if they are not on the same computer, so long as the program on a particular computer has placed the L515 camera attached to that computer in Slave mode via the INTER_CAM_SYNC_MODE instruction, and the master sync signal is coming from the same external trigger device for all computers.

So you could perhaps arrange the L515s so that their field of view only points directly towards another L515 opposite it, and use hardware sync to negate the interference. You may still have infrared projections from D415s crossing the L515 fields of view if the D415 lasers are not toggled off, though it may have less impact crossing the L515 FOV side-on instead of head-on.

There was an interesting video recently posted on YouTube about external triggering with L515.
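A hedged sketch of that per-computer slave-mode setup, assuming pyrealsense2; note that `L515_SLAVE_MODE` below is an assumed value and should be verified against the L515 hardware-sync white paper before relying on it:

```python
# Run on each Raspberry Pi so its attached L515 follows the shared
# external trigger. The sync option id is passed in as a parameter so
# the function can be exercised without the SDK or hardware.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

L515_SLAVE_MODE = 1.0  # assumption: verify against the L515 sync white paper

def make_slave(depth_sensor, sync_option, mode=L515_SLAVE_MODE):
    """Set the inter-cam sync mode if the sensor exposes the option.
    On real hardware sync_option should be rs.option.inter_cam_sync_mode.
    Returns True if the option was applied, False otherwise."""
    if depth_sensor.supports(sync_option):
        depth_sensor.set_option(sync_option, mode)
        return True
    return False

# Usage, once per computer (left commented out here):
# for dev in rs.context().query_devices():
#     make_slave(dev.first_depth_sensor(), rs.option.inter_cam_sync_mode)
```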
Hi @autimator Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
@ttsesm You posted a message on this discussion (below) but it seems to have been deleted. Do you still require assistance with this, please?

I have a similar setup where I want to obtain the RGB images from 2 D415 and one L515 RealSense cameras. I am interested only in the RGB output and I do not really mind, at least for now, about the depth. I found and tried to use the demo from this example https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py which is related to multi-camera setups, and I've managed to partially make it work. I say partially because it works fine if I load only the two D415 cameras or the single L515, but when I try to load all three together I am getting the following error when it tries to start the pipeline for the L515 camera:

3 devices have been found

My modifications are: the DeviceManager is called as follows:

def run_demo():
...
...

Thanks.
Hi @MartyG-RealSense, thanks for the prompt response. No, finally I've managed to resolve it. It seems that it was a cable issue; once I changed the USB-C cable I was able to get all three cameras' output ;-). Thanks a lot again :-).
That's great to hear. Thanks for the confirmation that you're okay now! |
Hello,
We want to create 3D models of plant rows using 10 D415 cameras and 4 lidars.
To avoid USB cable-length issues, each camera is controlled by its own Raspberry Pi; this works fine for the depth cameras. The lidars are a bit harder, as they interfere with each other.
The external triggers are normally used to solve this:
Lidar multicamera configuration
But we cannot trigger the D415s like this; they don't wait for a pulse (after a few seconds they start generating their own synchronization pulse again).
As the plants don't move and we only want one or two images an hour, my current plan is to turn the cameras off/on in order: first the depth cameras, then the lidars one by one. But I am having trouble getting this to work.
In my first attempt I created the pipeline as a variable, then started & stopped it. This works for the D415s, but the lidars sometimes give: 'Frame didn't arrive within 5000'.
In an attempt to solve this issue I put the camera controls in a separate function, which ensures that the pipeline is completely cleaned up every time. This works fine for a few hours and then gives:
‘Backend terminated or disconnected. Use 'Stop/Restart' to restart.’
In the pages-long debug log I don't see any errors or different messages before the crash… My suspicion is a memory leak somewhere.
Any advice on how to get this setup working is welcome. (We use the depth cameras a lot, but this is the first time we are trying to deploy the lidars…)
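One way to make the separate-function approach more robust is to bound each capture with explicit cleanup and retries. This is a sketch under the assumption that pyrealsense2 is installed; the retry count and backoff schedule are illustrative choices, not RealSense requirements:

```python
# Function-scoped pipeline with guaranteed stop() and bounded retries,
# so a timed-out frame cannot leave a pipeline (and its buffers) alive.
import time
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # the backoff helper below still works without the SDK

def backoff_delays(attempts, base=2.0):
    """Delays (seconds) before each retry: 2, 4, 8, ... capped at 30."""
    return [min(base * 2 ** i, 30.0) for i in range(attempts)]

def grab_frames(serial, timeout_ms=10000, attempts=3):
    """Capture one frameset from one camera, rebuilding the pipeline from
    scratch on each attempt and always releasing it before retrying."""
    last_error = None
    for delay in [0.0] + backoff_delays(attempts - 1):
        time.sleep(delay)
        pipeline = rs.pipeline()
        config = rs.config()
        config.enable_device(serial)
        try:
            pipeline.start(config)
            return pipeline.wait_for_frames(timeout_ms)
        except RuntimeError as e:     # e.g. "Frame didn't arrive within 5000"
            last_error = e
        finally:
            try:
                pipeline.stop()
            except RuntimeError:
                pass                  # stop() raises if start() never succeeded
            del pipeline, config      # drop references before the next attempt
    raise last_error
```

If the Raspberry Pi crash really is a slow resource leak, rebuilding the pipeline per attempt and dropping references inside `finally` is the kind of discipline that makes the leak either disappear or become visible in the debug log much sooner.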