Conversation


@avolmat-st avolmat-st commented Oct 12, 2025

Allow creating a pipeline as follows:
camera receiver -> encoder -> uvc

I am posting this PR while there are still some points to improve (cf. the hardcoded items detailed below) in order to get initial feedback. Since this depends on the UVC PR (PR #93192) and the DCMIPP UVC PR (PR #94562), there are lots of commits in this PR. However, only the LAST 8 COMMITS are relevant for this PR.

If the chosen zephyr,videoenc is available, the sample will pipe the camera receiver to the encoder and then to the UVC device, instead of piping the camera receiver directly to the UVC.

To make the change as simple as possible, the source device of the UVC device is renamed from video_dev to uvc_src_dev since, depending on the configuration, the UVC source might be either video_dev or encoder_dev.
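A minimal sketch of that indirection, with simplified types (only the uvc_src_dev / video_dev / encoder_dev naming comes from this PR; the helper below is hypothetical):

```c
#include <stddef.h>

/* Simplified stand-in for the Zephyr device structure. */
struct device {
	const char *name;
};

/* The UVC source is the encoder when one is configured, otherwise the
 * camera receiver itself. */
static const struct device *select_uvc_src_dev(const struct device *video_dev,
					       const struct device *encoder_dev)
{
	return (encoder_dev != NULL) ? encoder_dev : video_dev;
}
```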

The current implementation has several points hardcoded for the time being:
1. The intermediate pixel format between the camera receiver and the encoder
is set to NV12. This is temporary until a proper analysis of the video_dev
caps and the encoder caps is done, allowing a format common to the two
devices to be selected.
2. The encoder device is assumed NOT to perform any resolution change, so
the encoder output resolution is directly based on the camera receiver
resolution. Thanks to this, the UVC exposed formats are the encoder
output pixel format combined with the camera receiver resolutions.
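Point 2 can be sketched as follows (the types and helper name here are illustrative, not the sample's actual code):

```c
#include <stdint.h>

/* Simplified video format: fourcc pixel format plus frame size. */
struct fmt {
	uint32_t pixelformat;
	uint32_t width;
	uint32_t height;
};

/* Since the encoder is assumed to perform no resolution change, the
 * UVC-exposed format combines the encoder output pixel format with the
 * camera receiver resolution. */
static struct fmt uvc_exposed_format(const struct fmt *receiver_fmt,
				     uint32_t encoder_pixelformat)
{
	struct fmt out = {
		.pixelformat = encoder_pixelformat,
		.width = receiver_fmt->width,
		.height = receiver_fmt->height,
	};

	return out;
}
```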

This has been tested using the STM32N6-DK and the JPEG codec, leading to the following pipeline:
IMX335 -> CSI/DCMIPP -> JPEG -> UVC

compiled via:

west build -p -b stm32n6570_dk//sb --shield st_b_cams_imx_mb1854 samples/subsys/usb/uvc/ -DCONFIG_USE_STM32_MW_ISP=y -DFILE_SUFFIX=jpeg

And also with the VENC codec, leading to the following pipeline:
IMX335 -> CSI/DCMIPP -> VENC -> UVC

compiled via:

west build -p -b stm32n6570_dk//sb --shield st_b_cams_imx_mb1854 samples/subsys/usb/uvc/ -DCONFIG_USE_STM32_MW_ISP=y -DFILE_SUFFIX=venc
The stream can be displayed on a Linux machine via gst-launch:
gst-launch-1.0 v4l2src device=/dev/videoXX ! 'video/x-h264,width=1920,height=1080' ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

@erwango
Member

erwango commented Oct 13, 2025

Moving assignee to the Video subsystem maintainer, as it seems more appropriate given the content of the RFC.

@josuah josuah added the DNM This PR should not be merged (Do Not Merge) label Oct 14, 2025
@josuah
Contributor

josuah commented Oct 14, 2025

Adding a DNM flag for the PR dependencies.
Please, anyone, feel free to remove it once the dependencies are merged and this PR is rebased.

josuah previously approved these changes Oct 14, 2025

@josuah josuah left a comment

Thanks for this submission. It makes sense to me as it is, and as far as I can tell the FIXMEs are about planning future infrastructure of the video area, not about this particular video sample.

@erwango erwango added this to the v4.3.0 milestone Oct 15, 2025
@avolmat-st avolmat-st force-pushed the pr-uvc-videoenc-support branch from 75eec12 to 3edf0b3 Compare October 19, 2025 15:47
@zephyrbot zephyrbot requested a review from ngphibang October 23, 2025 21:54
@avolmat-st
Author

Had to push a new version to correct an issue in the app_add_filtered_formats function when iterating over the caps of the encoder and the camera receiver. The code is slightly reworked to avoid a potential access to a wrong index of the caps table while still looking properly at the resolutions exposed by the camera receiver. Thank you @josuah for finding the issue and working together to figure out an acceptable solution.
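A hypothetical illustration of the reworked iteration: each caps table is walked with its own index up to its terminating entry, so neither can be read out of bounds. The zero-terminated arrays mirror how Zephyr video caps tables end with a zeroed entry; the types and helper are simplified stand-ins for the sample's code.

```c
#include <stdint.h>

/* Simplified caps entry; the table ends with a zeroed entry. */
struct cap {
	uint32_t pixelformat;
	uint32_t width;
	uint32_t height;
};

/* Walk encoder caps and camera receiver caps with independent indices so
 * that neither table is read past its terminating entry, while the
 * resolutions still come from the camera receiver entries. */
static int count_common_pixelformats(const struct cap *enc_caps,
				     const struct cap *cam_caps)
{
	int n = 0;

	for (int i = 0; enc_caps[i].pixelformat != 0; i++) {
		for (int j = 0; cam_caps[j].pixelformat != 0; j++) {
			if (enc_caps[i].pixelformat == cam_caps[j].pixelformat) {
				n++;
			}
		}
	}

	return n;
}
```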

josuah previously approved these changes Oct 23, 2025

@josuah josuah left a comment

Thanks for finding a simple solution that works for all currently supported hardware!

This looks like nothing, but it saves a lot of variable definitions in the samples 👍

uint32_t dwFrameInterval[CONFIG_USBD_VIDEO_MAX_FRMIVAL];
} __packed;

struct uvc_frame_based_continuous_descriptor {

@josuah IMO uvc_frame_based_continuous_descriptor and uvc_frame_based_discrete_descriptor are a bit too long as names.

Maybe the _based_ part can be left out? uvc_frame_continuous_descriptor + uvc_frame_discrete_descriptor should be unambiguous:

It could fit well in the list of descriptors:

	struct uvc_format_uncomp_descriptor fmt_uncomp;
	struct uvc_format_mjpeg_descriptor fmt_mjpeg;
	struct uvc_format_frame_descriptor fmt_frame;
	struct uvc_frame_descriptor frm;

Alain Volmat added 2 commits October 24, 2025 09:24
Replace the video_dequeue / video_enqueue buffer exchange code with
the video_transfer_buffer helper function.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
In preparation for the introduction of video encoder support,
add an indirection for handling the buffers of the UVC
source device. Currently this is only video_dev; however, it
can also be an encoder device once an encoder is introduced
between the video capture device and the UVC device.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
erwango
erwango previously approved these changes Oct 24, 2025
Alain Volmat added 8 commits October 24, 2025 11:18
Allow creating a pipeline as follows:
   camera receiver -> encoder -> uvc

If the chosen zephyr,videoenc is available, the sample will pipe
the camera receiver to the encoder and then the UVC device instead
of directly the camera receiver to the UVC.

The current implementation has several points hardcoded for the time
being:
1. the intermediate pixel format between the camera receiver and encoder
   is set to NV12. This shouldn't be hardcoded and should instead be
   discovered as a format commonly supported by the encoder and video dev
2. the encoder device is assumed not to perform any resolution change,
   so the encoder output resolution is directly based on the camera
   receiver resolution. Thanks to this, the UVC exposed formats are the
   encoder output pixel format & camera receiver resolutions.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Add a rough estimate of a worst-case H264 output size.

The video_estimate_fmt_size function would need more information,
such as quality and profile, in order to give a better estimate
for each format, so for the time being just stick to a 16 bpp
based size, the same as for JPEG.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
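The 16 bpp heuristic boils down to the following (a sketch; the helper name is hypothetical, the actual logic lives in video_estimate_fmt_size):

```c
#include <stdint.h>

/* Rough worst-case output size for H.264 (and JPEG): 16 bits per pixel,
 * i.e. 2 bytes per pixel, until quality/profile information can refine
 * the estimate. */
static uint32_t estimate_worst_case_size(uint32_t width, uint32_t height)
{
	return width * height * 2U;
}
```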
This commit prepares the introduction of the UVC Frame Based support by
using struct uvc_frame_descriptor as the parameter of most of the UVC
functions. struct uvc_frame_descriptor contains the fields common to
all supported frame types; depending on the bDescriptorSubtype, the
pointer is then cast to the correct struct definition.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
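The dispatch-by-subtype pattern can be sketched like this (the subtype constants follow the UVC class specification; the struct layouts are simplified placeholders, not the PR's exact definitions):

```c
#include <stdint.h>

/* Frame descriptor subtypes, per the UVC class specification. */
#define UVC_VS_FRAME_UNCOMPRESSED 0x05
#define UVC_VS_FRAME_FRAME_BASED  0x11

/* Common fields shared by every frame descriptor variant. */
struct uvc_frame_descriptor {
	uint8_t bLength;
	uint8_t bDescriptorType;
	uint8_t bDescriptorSubtype;
	/* ...further common fields elided... */
};

/* The discrete (uncompressed/MJPEG) variant carries a max buffer size. */
struct uvc_frame_discrete_descriptor {
	struct uvc_frame_descriptor hdr;
	uint32_t dwMaxVideoFrameBufferSize;
};

/* Dispatch on bDescriptorSubtype, then cast to the concrete layout. */
static uint32_t frame_buffer_size(const struct uvc_frame_descriptor *desc)
{
	if (desc->bDescriptorSubtype == UVC_VS_FRAME_FRAME_BASED) {
		/* Frame-based descriptors have no dwMaxVideoFrameBufferSize. */
		return 0;
	}

	const struct uvc_frame_discrete_descriptor *d =
		(const struct uvc_frame_discrete_descriptor *)desc;

	return d->dwMaxVideoFrameBufferSize;
}
```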
The frame-based descriptors differ from the frame descriptors
in that there is no dwMaxVideoFrameBufferSize field.
To handle this, add a new uvc_frame_based_discrete_descriptor
structure used to fill the proper information into the
frame descriptor. In addition, a new format descriptor
is also added for frame-based transfer.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
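Per the UVC 1.5 Frame Based Payload specification, such a discrete frame descriptor looks roughly like this (a sketch; the PR's actual definition may differ, and MAX_FRMIVAL stands in for CONFIG_USBD_VIDEO_MAX_FRMIVAL):

```c
#include <stdint.h>

#define MAX_FRMIVAL 8 /* stand-in for CONFIG_USBD_VIDEO_MAX_FRMIVAL */

/* Frame-based discrete frame descriptor: note dwBytesPerLine where the
 * uncompressed/MJPEG variant has dwMaxVideoFrameBufferSize. */
struct uvc_frame_based_discrete_descriptor {
	uint8_t bLength;
	uint8_t bDescriptorType;
	uint8_t bDescriptorSubtype;
	uint8_t bFrameIndex;
	uint8_t bmCapabilities;
	uint16_t wWidth;
	uint16_t wHeight;
	uint32_t dwMinBitRate;
	uint32_t dwMaxBitRate;
	uint32_t dwDefaultFrameInterval;
	uint8_t bFrameIntervalType;
	uint32_t dwBytesPerLine;
	uint32_t dwFrameInterval[MAX_FRMIVAL];
} __attribute__((packed));
```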
Add a proper check of the return values of video_enqueue / video_dequeue.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Add overlay files to enable usage of the encoder in the
UVC sample.
This works with platforms defining the node labels
    zephyr_jpegenc
    zephyr_h264enc

The mode can be selected by using -DFILE_SUFFIX="jpegenc" or
-DFILE_SUFFIX="h264enc" when building the sample, while also
adding -DCONFIG_VIDEO_ENCODER_JPEG or -DCONFIG_VIDEO_ENCODER_H264
on the command line.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Add zephyr_h264enc and zephyr_jpegenc labels on nodes in order to
be able to use the VENC and JPEG codecs from samples.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Add entries in sample.yaml to enable the h264enc / jpegenc
UVC-based tests on the stm32n6570_dk/stm32n657xx/sb platform.

Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
@avolmat-st avolmat-st force-pushed the pr-uvc-videoenc-support branch from d3a72d3 to 431233f Compare October 24, 2025 09:30
@zephyrbot zephyrbot requested a review from jfischer-no October 24, 2025 09:32

@cfriedt cfriedt merged commit 0f6a0d9 into zephyrproject-rtos:main Oct 24, 2025
26 checks passed

Labels

area: Boards/SoCs · area: Samples · area: USB (Universal Serial Bus) · area: Video (Video subsystem) · platform: STM32 (ST Micro STM32) · Release Notes (To be mentioned in the release notes)

8 participants