"Resource temporarily unavailable" on NVIDIA Jetson TX2 #6780
Comments
Hi @rcroset You can add a hardware_reset() routine to your script to perform an automated hardware reset of the camera, which has a similar effect to physically unplugging and re-plugging the camera in the USB port. The link below has example scripting for doing so with Python and multiple cameras using the camera serial numbers: If you are able to check the CPU usage of your TX2, what percentage of the CPU is being used when your project is running, please? Is it "maxing out" at or near 100% usage? If you are using poll_for_frames() in your project (as is recommended for multicam applications) then it is important to control when the CPU is put to sleep and for how long, otherwise the CPU can max out its processing. More information about this can be found in the link below. |
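A minimal sketch of the hardware_reset() approach described above, assuming pyrealsense2 is installed. The serial numbers are placeholder values for illustration only; substitute the serials of your own cameras.

```python
import pyrealsense2 as rs

# Hypothetical serial numbers - replace with your cameras' serials.
TARGET_SERIALS = {"012345678901", "109876543210"}

ctx = rs.context()
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    if serial in TARGET_SERIALS:
        print(f"Resetting camera {serial}")
        # Same effect as unplugging and re-plugging the camera's USB cable.
        dev.hardware_reset()
```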
I unfortunately can't perform the hardware_reset() routine because I cannot access the device. When I want to query a device from the context, I get the error (see below), so I cannot call any routine on any device:
The CPU usage of the TX2 is quite high (but still not near 100%) when running my project, but even when running the snippet above the error occurs. |
@tispratik had a similar situation with this "set_xu(ctrl=1) failed! Last Error: Resource temporarily unavailable" error when using Ubuntu and Python, and doing a hardware_reset() did not correct the problem for them either. |
Thank you for pointing out this link. Unfortunately, it doesn't seem to provide a solution. Rebooting the system is not enough in our case. |
In past cases where doing a hardware_reset() on the camera has not been practical due to detection problems and you cannot reset the computer, there has been the possibility of achieving the same effect by using a USB port reset script with Ubuntu. Please google for "ubuntu usb port reset script" for more details. |
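For reference, a common form of such a script on Linux sends the USBDEVFS_RESET ioctl to the device node under /dev/bus/usb. This is a generic sketch rather than a RealSense-specific tool; the bus and device numbers are placeholders (find the real ones with lsusb), and it normally has to be run as root.

```python
import fcntl
import os

USBDEVFS_RESET = 21780  # _IO('U', 20) from linux/usbdevice_fs.h

def reset_usb_device(bus: str, device: str) -> None:
    """Issue a port-level reset to /dev/bus/usb/<bus>/<device>."""
    path = f"/dev/bus/usb/{bus}/{device}"
    fd = os.open(path, os.O_WRONLY)
    try:
        fcntl.ioctl(fd, USBDEVFS_RESET, 0)
    finally:
        os.close(fd)

# Hypothetical bus/device numbers taken from `lsusb` output.
reset_usb_device("001", "004")
```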
I've already tried that kind of script, unsuccessfully. It appears that only a manual reconnection works. |
Thanks for your patience. I have been considering further possibilities. To aid that analysis, could you tell me please if the project is being used outdoors, with the cameras exposed to sunlight? |
The project is used indoors. There is a small window providing sunlight but the cameras are not directly exposed (i.e. they are on the same wall as the window). |
Okay, thank you very much. My reading of your situation is that 3 cameras are attached to one TX2 board. And when one camera stops responding, the other two remain accessible. Is that correct, please? What FPS speed are the cameras running at? |
The 3 cameras are actually connected to a USB3 hub connected to the TX2 board. When a camera stops responding, it also shuts down the cameras plugged in below it on the hub (e.g. when the camera plugged into USB slot 2 of the hub stops responding, it also affects the one plugged into slot 3). So when the first camera stops responding, it blocks all the cameras. The cameras are running at 6 FPS with maximum resolution for the depth and color streams. |
6 FPS is a speed that is prone to errors that disappear at 15 FPS or higher. Would it be possible to try 15 FPS and see if you continue to experience the problems, please? |
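A minimal sketch of requesting 15 FPS with pyrealsense2, assuming one pipeline per camera and the same maximum-resolution depth and color streams mentioned above. The serial number is a placeholder.

```python
import pyrealsense2 as rs

def start_pipeline(serial: str) -> rs.pipeline:
    """Start one camera at 15 FPS instead of 6 FPS."""
    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_device(serial)  # bind this pipeline to one specific camera
    cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 15)
    cfg.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 15)
    pipe.start(cfg)
    return pipe

pipe = start_pipeline("012345678901")  # placeholder serial number
```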
OK, I'll try this and I'll get back to you in a few days to let you know if the problem comes back. How is it possible that a low FPS can introduce errors that disappear at a higher framerate? |
I believe it is because when the frames are updating at a slow rate, it can lead to timeouts while waiting for frames to arrive. |
We've been running at 15 FPS for almost one week and this problem hasn't occurred (yet). But a new problem appeared: the depth frames seem not to update anymore. They update every minute or so. I've tried using |
As you are using Python, how you are storing the frames could be a factor. Like in the case below: |
Thanks for this link. However, I am not storing the frames in an array. All post-processing is done on a copy of the frame, and the variable containing the original frame is overwritten at each iteration of the loop. |
My understanding is that when a frame is modified (for example, by a post-processing filter), the original is not destroyed but instead a copy of the frame is automatically created. Every frame has a counter stating how many copies of the same frame are being held in the frameset (which is a collection of frame objects). Old frames get pushed out when a new frame enters the frame queue. wait_for_frames() pulls a frame from the queue. |
Indeed, but the original is destroyed and overwritten by the return value of Is it possible to have an example of how to properly use |
Remember from earlier in this case that for multicam projects, it is recommended that poll_for_frames() is used instead of wait_for_frames(). This includes programs that do not use multicam hardware sync, such as rs-multicam. I could not find a short and neat example of multicam use of poll_for_frames() in Python, though a device manager script in the multi-camera box_dimensioner_multicam example program makes use of it: You can read the frame counter from metadata with Python, though I'm not sure that is what you were asking for. |
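A rough sketch of a poll_for_frames() loop over several pipelines, with a short sleep so the CPU is not pinned. The pipelines dictionary (serial number to already-started rs.pipeline) and the sleep interval are assumptions to adapt to your project.

```python
import time
import pyrealsense2 as rs

def poll_loop(pipelines: dict) -> None:
    """Poll each camera's pipeline in turn; poll_for_frames() never blocks."""
    while True:
        for serial, pipe in pipelines.items():
            frames = pipe.poll_for_frames()  # may return an empty frameset
            if frames.size() > 0:
                depth = frames.get_depth_frame()
                color = frames.get_color_frame()
                # ... process depth/color for this camera here ...
        time.sleep(0.01)  # yield the CPU between polling passes
```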
I remember, thanks :) |
I apologise for the limitations of my Python programming knowledge, which may be slowing this diagnostic process. Some Python users who have multiple streams active have found that it can help to separate the stream types into different pipelines. The Python script in the link below separates RGB / depth in one pipeline and IMU in the other. Whilst you are not using IMU, conceivably you could put RGB in one pipeline and depth in the other pipeline. |
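A hedged sketch of the two-pipeline idea for a single camera: depth in one pipeline and color in the other, both bound to the same serial number. Whether this actually improves stability on your setup is something to test; the serial number and stream modes are placeholders.

```python
import pyrealsense2 as rs

serial = "012345678901"  # placeholder serial number

# Pipeline 1: depth only.
depth_pipe = rs.pipeline()
depth_cfg = rs.config()
depth_cfg.enable_device(serial)
depth_cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 15)
depth_pipe.start(depth_cfg)

# Pipeline 2: color only, on the same physical camera.
color_pipe = rs.pipeline()
color_cfg = rs.config()
color_cfg.enable_device(serial)
color_cfg.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 15)
color_pipe.start(color_cfg)
```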
Thanks for the link and the idea but I'm not sure this will suit our project. We need to gather color frames and the corresponding depth frames to be able to look at them at the same time. Will those two pipelines be perfectly synchronous with each other? |
I would think so. Librealsense can freely pass data between threads for purposes such as having a different processing pipeline for each stream type. https://dev.intelrealsense.com/docs/frame-management#section-frames-and-threads |
Thanks, I will try this. Do you recommend continuing to use pipelines or switching to something else like a syncer? |
My understanding is that the syncer is useful for synchronizing between different streams and between different devices (e.g. all sensors across all cameras). So it may be appropriate for your three-camera project. |
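A very rough sketch of driving sensors through rs.syncer instead of a pipeline: the sensors are opened directly and started with the syncer as their callback, and time-matched framesets are then pulled from the syncer. The device and profile choices below are placeholders and would need to be narrowed to the modes you actually want.

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]    # first detected camera (placeholder choice)
sync = rs.syncer()

for sensor in dev.query_sensors():
    profile = sensor.get_stream_profiles()[0]  # placeholder profile choice
    sensor.open(profile)
    sensor.start(sync)           # the syncer receives this sensor's frames

frames = sync.wait_for_frames()  # a time-matched frameset across the sensors
```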
Hi @rcroset Do you have an update that you can provide about this case, please? Thanks! |
Hi @MartyG-RealSense we still haven't tried your idea, and we have issues with the cameras disconnecting by themselves, forcing us to reboot the platform every hour, so things are moving slowly. I'll come back to you as soon as possible. |
Thanks @rcroset for the update - good luck with your work. |
Hi @MartyG-RealSense! We still haven't tried your idea (increasing the frame queue size) but we noticed something strange during our investigations... Just before the cameras die and crash the whole USB system (which forces us to reboot to regain access to the cameras), it seems that librealsense calls |
Does your program perform checking of events during the long-run? The RSUSB method has the potential to miss events because it only checks for device changes every 5 seconds by default. |
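A hedged sketch of watching for connect/disconnect events through librealsense rather than relying only on the periodic scan: register a devices-changed callback on the context and log removals and arrivals as they are reported. Treat this as a starting point under the assumption that the event object exposes was_removed() and get_new_devices().

```python
import pyrealsense2 as rs

ctx = rs.context()
connected = {dev.get_info(rs.camera_info.serial_number): dev
             for dev in ctx.query_devices()}

def on_devices_changed(event):
    # Report cameras that have gone away...
    for serial, dev in list(connected.items()):
        if event.was_removed(dev):
            print(f"Camera {serial} was removed")
            del connected[serial]
    # ...and cameras that have (re)appeared.
    for dev in event.get_new_devices():
        serial = dev.get_info(rs.camera_info.serial_number)
        print(f"Camera {serial} was added")
        connected[serial] = dev

ctx.set_devices_changed_callback(on_devices_changed)
```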
Hi @MartyG-RealSense! Sorry for the long delay in answering. We don't perform checking of events. We'll soon try what was suggested in the issue you pointed to. |
Hi @rcroset What some projects do to keep the cameras running in 'low power' mode when capture is not currently required is to set the Laser Power value to zero. Doing so turns off the projector but keeps the pipeline active. When a capture needs to be performed, the Laser Power value is increased above zero, activating the projector. When Laser Power is minimized, the depth image will be sparse in detail. When the capture is completed, Laser Power is set to zero again. A way to have fine control over timing and camera triggering that is compatible with D435 is external synchronization (genlock). https://www.intelrealsense.com/depth-camera-external-sync-trigger/ https://dev.intelrealsense.com/docs/external-synchronization-of-intel-realsense-depth-cameras |
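A minimal sketch of the laser power idea described above, assuming pyrealsense2 and a running pipeline: drop laser_power to zero between captures and restore it just before a capture. The pipeline keeps streaming the whole time; only the projector is switched.

```python
import pyrealsense2 as rs

pipe = rs.pipeline()
profile = pipe.start()
depth_sensor = profile.get_device().first_depth_sensor()
max_power = depth_sensor.get_option_range(rs.option.laser_power).max

# Idle period: projector off, pipeline still active.
depth_sensor.set_option(rs.option.laser_power, 0)

# ... later, just before a capture is needed ...
depth_sensor.set_option(rs.option.laser_power, max_power)
```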
Thanks for your answer. |
The projector is a component that is separate from the imagers. It can enhance the image by providing light and a dot-pattern projection that the camera can use as a texture source to perform depth analysis of surfaces that have low texture or no texture (doors, walls, desks, etc). The streams will still be active if the projector is turned off. By having Laser Power minimized, you can reduce the camera's power draw and lower the operating temperature during periods of low / no activity between capture periods. |
Hi @rcroset Do you still require assistance with this case, please? Thanks! |
Hi @MartyG-RealSense! Yes, we still have issues with the cameras disconnecting for no reason and killing the whole USB bus. We need to fix this before going any further with this issue. I'll come back to you as soon as we have some results. By the way, if you have any hints about those camera disconnections, we'll be happy to read them ;) |
SDK 2.35.2 was the version where improvements to the handling of multicam were introduced. This was during a period where there were a number of cases where Jetson boards were having non-detection problems with more than one camera (with the "failed to set power state" error particularly), so this SDK version is a good choice for Jetson multicam. The improvements mainly address problems related to rs2::pipeline though, and Jetson issues related to specific models of USB hub may still occur. A brand of mains-powered USB 3 hub that Intel have successfully tested with when developing their multiple camera white-paper document is AmazonBasics. I have one myself on my workstation and have no problems with it. |
Great, thanks for your answer! We'll try to upgrade the SDK version as soon as possible. I'll keep you posted. |
Hi @rcroset Do you have an update for us please? Thanks! |
Hi @MartyG-RealSense Not yet, we still have some other issues to solve before going any further |
Okay, thanks very much @rcroset for the update. I will keep this case open for a further time period. |
Adding a note to keep this case open for a further time period. |
Adding a note to keep this case open for a further period. |
Hi @rcroset Do you have an update about whether you are ready to proceed with the subject on this case, please? Thanks! |
Hi @MartyG-RealSense. Not yet, sorry. But we have observed that the cameras crash less often when the infrared stream is also enabled. Unfortunately, we cannot investigate further yet as we have some other things to take care of first. Sorry for the long delay. |
No problem @rcroset I totally understand - thanks for the continued updates. |
Adding a note to keep this case open for a further time period. |
Hi @rcroset Do you have an update that you can provide, please? Thanks! |
Case closed due to no further comments received. |
Issue Description
I'm currently working on a system involving 3 D435 cameras connected to an NVIDIA Jetson TX2 platform via a USB3 hub. Quite often when trying to access the cameras, I get a "Resource temporarily unavailable" error from one of the cameras, and I can't access the device that throws this error, even in the RealSense Viewer. It happens quite randomly, on any of the cameras. To get things back on track, I have to shut down the TX2 platform and reconnect the USB hub manually. Is there any way to do this (programmatically or otherwise) without having to manually reconnect things? Since this system will soon go to production, it won't be easy to manually fix things up. Any help or suggestion on how to get rid of this issue? Many thanks :)