Pi camera not functional on Bullseye #1434
I noticed that motioneye has instructions to use the pi_buster 4.3.2 build on Bullseye.
I'd add that the camera does not even function using ffmpeg directly, so using it via v4l2 fails as well.
A workaround has been found: disable libcamera entirely and revert to the previous camera stack, as described in this link.
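For reference, the legacy-stack revert discussed here boils down to enabling the Legacy Camera option (via raspi-config, as noted later in this thread) plus a `/boot/config.txt` change along these lines. The values are taken from later comments in this thread; the exact lines may differ by Pi model:

```
# /boot/config.txt -- revert to the legacy (MMAL) camera stack
start_x=1
gpu_mem=128

# remove or comment out libcamera auto-detection:
#camera_auto_detect=1
```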
I would also note that to get my Pi Zero camera to work I had to change the memory option from 128 to 256, so other users may also need to make this adjustment.
It took me a week to figure this out, as described by Mr-Dave.
Notes about deprecating the old camera API: … No promises, but I may take a crack at writing support for libcamera in the next few days.
A couple of things to keep in mind:

For fitting into … Also, the … But first, getting something working...
I want to make sure I'm clear on this so there are no surprises. The next maintainer of Motion can make a different decision, but for me, I'm not pulling C++ into the Motion application. If it is C++, it goes into the MotionPlus application.
Oh, I get it now. I hadn't seen MotionPlus before. Maybe this issue of "add libcamera support" moves over to MotionPlus and becomes a "WONT FIX" over here?
The solution for Motion will be to use the video_device (aka v4l2) or netcam_url. Neither of those options works right now. The v4l2 path can be hacked into Motion quickly by changing … The other possible option to "fix" on the Motion side is the netcam_url option. This essentially means figuring out whether, and with what options, the camera will work using ffmpeg. IMHO, the real issue seems to be upstream, with the interface between the Pi camera and v4l2. If they had just implemented more mainstream pixel formats rather than picking the first one alphabetically, there wouldn't be an issue. The Bayer format seems to date back to the time of 5 1/4-inch floppies.
@Mr-Dave, thanks for summarizing the technical details here. I spent much of yesterday sorting through a Pi setup on Bullseye, and I was doing some serious Google-Fu to unwind what I was seeing. So... what's your recommendation in the short term for getting an RPi camera up and running under Motion? Is MotionPlus stable enough (very happy to see this getting spun up in C++)? Or should we be falling back from Bullseye to Buster? I should add that I only really care about image detect/capture (for my use cases, video capture is less interesting). BTW, I've been using Motion in various projects for many years (https://github.com/richbl/go-distributed-motion-s3), and digging it.
When I checked last, the option to revert to the legacy camera stack had been added into raspi-config.
Confirmed. While a clean install of Bullseye (image as of this morning) doesn't offer the option for a legacy install of the Pi camera stack in the raspi-config tool, after updating raspi-config itself to version 20220112, the option is then made available.
I use two MMAL cameras (Waveshare) with Motion 4.4.0 on Bullseye, one on a Pi Zero 2 W and the other on a Pi 3A+. Both are working fine so far. Only one thing: there is a lag when motion is triggered; it loses 2-3 s of video.
It sounds like this should be marked as wont-fix, as we can reasonably assume no one is rewriting libcamera in C. The bottom line is that there are two camera interfaces for a Raspberry Pi: one is deprecated by the Raspberry Pi Foundation; the other requires libraries that Motion won't use for the foreseeable future.

There are two workarounds I can see. One is to use raspi-config to re-enable the deprecated camera interface. The main drawback I've found with this is that it doesn't quite support some of the modes that should be supported at the correct framerate (I can get 1296x972 @ 30 FPS on libcamera with the camera I'm using, but on the legacy camera I'm capped to 960x720 if I want 30 FPS).

The other option, until such time as MotionPlus gets libcamera support, is to convert the camera interface into something Motion can use: `libcamera-vid --width 1296 --height 972 -t 0 --listen -o tcp://127.0.0.1:8888 --codec mjpeg` and then set the corresponding netcam_url in motion.conf. This allows you to use the standard Raspberry Pi new-interface tools to open up an MJPEG stream, and then have Motion connect to that stream. When I tested this latter option, I ran into some issues with this particular setup (like libcamera crashing when the stream got a bad connection, or 200% CPU usage), though other conceptually similar setups probably exist (having libcamera write to a socket and having Motion read from it, for example).
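One plausible way to pair the two halves of this workaround is sketched below. The exact motion.conf line from the original comment was lost in extraction; the `mjpeg://` URL scheme here is an assumption, so check the netcam documentation for your Motion version:

```
# shell: expose the Pi camera as an MJPEG stream over TCP
libcamera-vid --width 1296 --height 972 -t 0 --listen -o tcp://127.0.0.1:8888 --codec mjpeg

# motion.conf: point Motion at that stream (URL scheme is an assumption)
netcam_url mjpeg://127.0.0.1:8888
```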
I would disagree, and invite you to re-read the comments in this issue. Motion supports the use of ffmpeg and v4l2 devices. Once libcamera matures to the level of having an actual release or version number, perhaps at that time it will have resolved all the issues and errors that currently prevent the use of these mainstream methods.
Having just been pointed at this issue, I'd like to clarify a few points:
libcamera provides the format you request. What you are seeing is the raw format from the sensor. You should not convert this manually in software, the HW ISP is supposed to deal with that, ... and that is the job that libcamera performs for you.
As far as I am aware, no one has added libcamera support to ffmpeg, no. Gstreamer support is being worked on, but doesn't have enough people working on it.
Yes, libcamera is written in C++, C bindings could be written if someone requires it and is willing to work on or sponsor the work. Otherwise, the gstreamer API is C. Rust/Python/Go/Every other language binding would be nice to haves too.
Sounds like you are trying to access the V4L2 devices that represent the raw sensor. If you access this device, you need to get the frames from the sensor, and run them through the ISP, and manage all algorithms for auto-exposure/white balance etc. ... which is why we have written libcamera to do that part for you.
Your conversion from Bayer to RGB/YUV was probably fine, but you would not have been running any auto-exposure or white balance or any other image enhancement that would be required to run through the ISP.
Definitely don't do this ... it won't work ... as above.
... I'm not sure how to comment there. There's no 'picking the first alphabetical format' going on. The Bayer format is the real format produced by the sensor. You need to use the ISP to convert it to something you desire. The Pi is using standard V4L2 interfaces, but the issue is that cameras are /complex/ ... to support the ISP and different hardware, it's not possible to use a single V4L2 device interface. libcamera's goal is to manage as much of that for you as possible, so that you don't have to manage the ISP for every non-UVC camera. Up until now, all of this processing was handled by the GPU in closed-source firmware, which exposed a single device as /dev/video0, so it conveniently 'looked' like a webcam-type device. That's not going to be the case moving forwards.
It sounds like most of the discussion above about not using libcamera is because of the C++... please note that accessing /dev/video0 (without wrapping with our v4l2-compatibility layer) is /not/ libcamera. You are talking to a raw Bayer sensor. It produces Bayer formats and requires auto white-balance and auto-exposure algorithms to be run to get a usable image (or manually set values, of course).
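To illustrate the kind of algorithm meant by "auto white-balance" above, here is a minimal gray-world sketch in C. This is a hypothetical helper, not part of libcamera or Motion: it scales the red and blue channels so their means match the green mean, which is only one of the many corrections a real ISP pipeline applies.

```c
#include <stdint.h>
#include <stddef.h>

/* Gray-world auto white balance sketch (hypothetical helper).
 * Assumes packed RGB24 pixels. Scales R and B so that their channel
 * means match the G mean, using 8.8 fixed-point gains. */
static void grayworld_awb(uint8_t *rgb, size_t npix)
{
    unsigned long sum[3] = {0, 0, 0};

    for (size_t i = 0; i < npix; i++)
        for (int c = 0; c < 3; c++)
            sum[c] += rgb[i * 3 + c];

    if (sum[0] == 0 || sum[2] == 0)
        return; /* avoid division by zero on degenerate frames */

    unsigned long rgain = (sum[1] << 8) / sum[0]; /* 8.8 fixed point */
    unsigned long bgain = (sum[1] << 8) / sum[2];

    for (size_t i = 0; i < npix; i++) {
        unsigned long r = (rgb[i * 3 + 0] * rgain) >> 8;
        unsigned long b = (rgb[i * 3 + 2] * bgain) >> 8;
        rgb[i * 3 + 0] = (uint8_t)(r > 255 ? 255 : r);
        rgb[i * 3 + 2] = (uint8_t)(b > 255 ? 255 : b);
    }
}
```

A real pipeline would also run auto-exposure, lens shading and colour-matrix corrections, which is exactly the work libcamera coordinates with the hardware ISP.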
Thank you for your comments and for clarifying my understanding. I had incorrectly believed that the … With this clarification, I do not see a path forward for Motion support of the Pi camera on systems with the libcamera stack. Motion needs a /dev/videoX device, or ffmpeg, to support it.
Adding libcamera support to ffmpeg would be an interesting project for someone, indeed. You can also try using the libcamera v4l2-compatibility layer as mentioned above. There is a script called 'libcamerify' which sets up the LD_PRELOAD. Try something like: `libcamerify motion`
Yeah, so the lack of documentation/examples for using the libcamera compatibility layer is really one of the big problems here. I'm just a finance executive and not an embedded IT developer, so when I read the libcamera documentation on the compatibility layer I assumed that it created the /dev/videoX devices. Now, the indication that … does not really resonate with what to do, how to get it to work, or what the resulting end-state device would be. I did look for the … So, the revised "Help wanted" request on this issue would be to determine and document how to use the libcamera compatibility layer, add it to the Motion documentation, and provide a generic version of that documentation to the libcamera project for other users.
We added the libcamerify script at the beginning of February; it looks like it only got updated by the RPi packaging on the 5th of March. libcamerify is in the package 'libcamera-tools' by the looks of it.
This does not appear to work. The wrapper says …

The motion log says …

Having spent an unreasonable amount of time on this, the only reliable workaround so far is to: enable the legacy camera stack, remove camera_auto_detect, set start_x=1, and set gpu_mem=128.
Works for me, on a Raspberry Pi 3 with the HQ camera.
@michielhermes is that definitely with the new libcamera stack enabled on Bullseye, and not the legacy video stack?
So I can confirm that "libcamerify motion -b" streams a video to a browser on the LAN at :8081. |
I know @Mr-Dave said above that there will be no C++ in this repository, but Arducam have added libcamera support to mjpg-streamer over in ArduCAM/mjpg-streamer@e503302, so it can serve as an example of how to wrap the C++ library in a C interface. I see Motion-Project/motionplus#13 too, so perhaps the main libcamera support will happen there.
I do not know how to check the nodes used. By using the custom thread names I tracked down the additional threads as being created by the following line in Motion.
The … I am thinking that this will just be the normal CPU usage for libcamera sending images to v4l2.
I'll see if we can get the libcamera Thread class to set custom names to help identify them in the future.
For the Raspberry Pi, I finally got libcamera to stream MJPG directly over HTTP without needing to catch it with vlc, ffplay, or anything but a browser. It is even picked up by Motion/MotionEye. I use the Pi-attached camera(s) to stream over HTTP; those streams are picked up by motion(eye) running on a "hub". I used mjpg-streamer from jacksonliam as modified by ArduCAM: `apt install git gcc g++ cmake libjpeg9-dev` … Run with …
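A hedged sketch of the full sequence described above. The clone URL and plugin names are assumptions based on the ArduCAM fork referenced earlier in this thread; check that repository's README for the authoritative commands:

```
sudo apt install git gcc g++ cmake libjpeg9-dev
git clone https://github.com/ArduCAM/mjpg-streamer.git
cd mjpg-streamer/mjpg-streamer-experimental
make && sudo make install

# run: serve the libcamera input over HTTP (plugin names are assumptions)
mjpg_streamer -i "input_libcamera.so" -o "output_http.so -p 8080"
```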
@kbingham unfortunately I'm getting this error when running motion with libcamerify: `libcamerify -d motion -b` reports `ERROR: ld.so: object '/usr/lib/aarch64-linux-gnu/v4l2-compat.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS64): ignored.` Any ideas? Maybe I'm doing something wrong ... Thanks!
Wrong ELF class sounds like a mismatch in the architecture or compilation of the libraries, or some such.
Thanks for the quick response! I have just installed the script via the libcamera-tools package. I'm running Raspberry Pi OS 64-bit on an 8GB Raspberry Pi 4.
Is motion also compiled for 64-bit? The 'file' command should tell you the architecture of the binaries, e.g. `file $(which motion)`.
Oh! That's it! Thanks! Are there any Raspberry Pi 64-bit releases? The one I've installed is 4.4.0-1 armhf, and it's 32-bit.
You would have to build from source.
@stewartoallen I tried your workaround technique and it didn't work for me. The motion log shows the same "V4L2 device failed to open" message. However, my situation might be a little unique. I'm trying to replicate a prior working project which used a Pi Zero W and an Arducam IMX477 HQ Pi camera with a switchable IR-cut filter running on the Buster OS. I'm trying to upgrade my project with a Pi Zero 2 W running the Bullseye OS using the latest version of their camera (https://www.uctronics.com/camera-modules/camera-for-raspberry-pi/high-quality-camera-raspberry-pi-12mp-imx477/arducam-12mp-imx477-ir-cut-filter-auto-switch-camera-for-raspberry-pi-b0270.html), following all the same steps that I used to successfully set up my original working project (except installing Bullseye and Motion 4.4.0). However, the supplier states that this version of their camera is ONLY compatible with the Bullseye OS, but does not provide any further specifics. I have inquired as to whether they still sell the legacy version of this product, but the only answer I got back was that the current product is only compatible with Bullseye (so I guess that's a "no"). I can get the new camera to respond properly to libcamera-hello, but only if I set camera_auto_detect=1 and dtoverlay=imx477 in /boot/config.txt. I have tried your workaround steps (i.e. enable legacy, remove camera_auto_detect, set start_x=1, and set gpu_mem=128), but when I run motion the V4L2 device fails to open. Below is what I get from the motion log: [0:motion] [NTC] [ALL] conf_load: Processing thread 0 - config file ./motion.conf I have even tried the libcamerify script to no avail. Can you offer any suggestions on getting this to work with Bullseye? I might try going back to the Pi Zero W to see if I get different results. I can't go to a Pi 3 or Pi 4 because those exceed the size and power requirements for my project. 
I could revert to using Buster, but I can't find any camera with the equivalent features (HQ with a switchable IR-cut filter). If push comes to shove, I could cannibalize the older version of the camera from my existing project, but I'd rather avoid that if at all possible.
I'm not sure why …
@Mr-Dave That did it! Thanks! BTW, it's a Pi Zero 2 W. The "default" configuration for this particular Pi HQ camera requires camera_auto_detect=0 and dtoverlay=imx477 in /boot/config.txt.
For me too.
Updated instructions in commit ae63db8
In my previous project on the Pi Zero W with Buster, motion worked with the height and width set to 800x480. In my current project on a Pi Zero 2 W with Bullseye, motion fails with a segmentation fault if I use 800x480. I have tried various combinations; the only ones that have worked so far are ones that preserve the 4:3 aspect ratio (e.g. 640x480, 960x720). What are the valid combinations of Motion width and height on Bullseye?
This is a different issue and something upstream. Change the palette being used in Motion and it will work.
Thanks. Closer inspection of the motion output revealed the supported formats for the IMX477 camera (`[1:ml1] [NTC] [VID] v4l2_pixfmt_select: Supported palettes: …`). If I omit video_params from my configuration and use dimensions of 800x480 pixels, Motion defaults to YU12 and then crashes with a segmentation fault. I then included the video_params palette setting in my configuration and tested each of the 20 palettes. What's interesting is that if the configured palette does not match one in the supported palettes list, Motion uses the UYVY palette (not the YU12 palette) and does not crash. Only the YU12 palette (either by default or by intentional selection) causes a segmentation fault. The other two palettes common to both lists, RGB3 and YUYV, can be used without Motion crashing. So I have a total of three palettes that I can use. Again, thanks for the quick response.
The issue has been identified. For this camera, pixel format and resolution, the camera is providing a different stride (bytes per line) than the width. Specify the width as the bytes per line as reported in the debug Motion log.
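The stride mismatch described above can be sketched in C. This is a hypothetical helper for illustration, not the actual Motion patch: when the driver pads each line to a stride (bytesperline) larger than the image row, every row must be read at the stride offset, not at the row width.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Compact a strided frame into a packed buffer (hypothetical helper).
 * src      : driver buffer where each line occupies `stride` bytes
 * stride   : bytes per line as reported by the driver (>= row_bytes)
 * dst      : packed output buffer of row_bytes * height bytes
 * row_bytes: bytes of real image data per line (width * bytes per pixel) */
static void compact_strided_frame(const uint8_t *src, size_t stride,
                                  uint8_t *dst, size_t row_bytes,
                                  size_t height)
{
    for (size_t y = 0; y < height; y++)
        memcpy(dst + y * row_bytes, src + y * stride, row_bytes);
}
```

Reading the buffer as if stride equalled width instead shifts each successive row, which is one way such a mismatch shows up as corrupted frames or crashes.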
Resolved in commit d859a3e. Motion will now automatically change the configured width value to match the bytes per line.
I can follow this discussion, but one thing I do not understand: what does ISP mean?
Sorry, your stride patch does not work for my USB capture card with an analog camera, set to 512x384. `v4l2-ctl --get-fmt-video` reports "Format Video Capture: …" and `v4l2-ctl --list-formats-ext` reports "ioctl: VIDIOC_ENUM_FMT …".
I've tested my settings. For native motion they are not OK; with `libcamerify motion` the settings seem to be correct.
ISP stands for Image Signal Processor. Google definition: …
I get a lot of distortion in the movies: broad pink stripes across the screen, and black & white distortion.
02/20/2023 Edit: Since this issue continues to get views years after the original posting, I'm adding a header to this issue.

TLDR: Use `libcamerify motion` for the Motion application, as documented here, or use the MotionPlus application.

MMAL and V4L2 processing for the Pi camera was converted to libcamera in the Bullseye version of Raspberry Pi OS. As a result of this change, the Pi camera is no longer functional in Motion.
EDIT: For those who wish to assist, we need a C routine to convert one of the new Bayer formats libcamera is providing for the Pi camera into RGB24.
Users wishing to use the Pi camera and Motion should continue with the Buster version of Raspberry Pi OS.
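The conversion requested in the edit above could be sketched along these lines. This is a hedged illustration, not Motion code: it assumes a plain 8-bit RGGB Bayer layout (the Pi camera actually delivers other, often packed, Bayer variants), and as the libcamera developers note in the comments, a raw demosaic like this performs none of the white balance, exposure or lens-shading work the ISP would do, so the image quality would be poor.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal nearest-neighbor demosaic sketch for an 8-bit RGGB Bayer frame.
 * Each 2x2 Bayer block (R G / G B) produces one RGB triple that is written
 * to all four corresponding output pixels. Width and height are assumed
 * even, and the Bayer buffer is assumed packed (stride == width). */
static void bayer_rggb8_to_rgb24(const uint8_t *bayer, uint8_t *rgb,
                                 size_t width, size_t height)
{
    for (size_t y = 0; y < height; y += 2) {
        for (size_t x = 0; x < width; x += 2) {
            uint8_t r  = bayer[y * width + x];             /* R site  */
            uint8_t g1 = bayer[y * width + x + 1];         /* G site  */
            uint8_t g2 = bayer[(y + 1) * width + x];       /* G site  */
            uint8_t b  = bayer[(y + 1) * width + x + 1];   /* B site  */
            uint8_t g  = (uint8_t)(((unsigned)g1 + g2) / 2);

            for (size_t dy = 0; dy < 2; dy++) {
                for (size_t dx = 0; dx < 2; dx++) {
                    uint8_t *p = &rgb[((y + dy) * width + (x + dx)) * 3];
                    p[0] = r;
                    p[1] = g;
                    p[2] = b;
                }
            }
        }
    }
}
```

This block-replication approach halves the effective resolution; real demosaic algorithms interpolate per pixel, and the ISP additionally applies the calibration and control loops discussed above.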