This is a server program that takes a video stream from another server, transforms it, and serves it to clients.
This is my graduate project.
Tested with:
- My Raspberry Pi RTSP Server as a source video server
- VLC 3.0.12 as an HLS client
What it does:
- Receives MJPEG-encoded video over the RTSP/RTP protocol
- Transcodes it to the H.264 codec
- Packs it into an MPEG2-TS container
- Sends the final video to clients via the HLS protocol
First of all, this is a learning project, so there are a few limitations:
- No audio support
- No RTCP support
- No config support
- No file logging support
- No authorization support
- No encryption support
All of these limitations could be fixed, but that is out of scope for this project.
The code of this project is written to allow easy extensibility.
Every module that works with media data is an `Observer` and/or a `Provider`, parameterized with a concrete data type; Observers subscribe to Providers of the same data type.
For example, `rtsp::Client` provides MJPEG-encoded frames, so it inherits `Provider<MjpegFrame>`. At the same time, the `MjpegToH264` converter class inherits from both `Observer<MjpegFrame>` and `Provider<H264Frame>` and is subscribed to `rtsp::Client`.
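The `Observer`/`Provider` templates themselves might be shaped roughly like this minimal sketch (the method names `onData`, `subscribe`, and `publish` are assumptions for illustration, not the project's actual API):

```cpp
#include <memory>
#include <vector>

// An Observer receives data of one concrete type.
template <typename Data>
class Observer {
public:
    virtual ~Observer() = default;
    // Called by a Provider whenever new data is available.
    virtual void onData(const Data& data) = 0;
};

// A Provider publishes data of one concrete type to its subscribers.
template <typename Data>
class Provider {
public:
    virtual ~Provider() = default;

    void subscribe(std::shared_ptr<Observer<Data>> observer) {
        observers_.push_back(std::move(observer));
    }

protected:
    // Derived classes call this to push new data to every subscriber.
    void publish(const Data& data) {
        for (auto& observer : observers_) {
            observer->onData(data);
        }
    }

private:
    std::vector<std::shared_ptr<Observer<Data>>> observers_;
};
```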
So any media data flow can easily be extended by adding a new element to the chain. For example, if you want a class that overlays some text on the H264-encoded video stream before passing it on to HLS, you can inherit it from `Observer<H264Frame>` and `Provider<H264Frame>`, subscribe it to the `MjpegToH264` class, and subscribe `Mpeg2TsPackager` to it, as sketched below.
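Building on the sketch above, such a pass-through element might look roughly like this (the class `TextOverlayFilter` and the wiring function are hypothetical; only `H264Frame`, `MjpegToH264`, and `Mpeg2TsPackager` are names from the project):

```cpp
// Hypothetical filter that sits between the converter and the packager.
class TextOverlayFilter : public Observer<H264Frame>,
                          public Provider<H264Frame> {
public:
    void onData(const H264Frame& frame) override {
        // Overlay text on the frame here, then forward the result
        // to every subscriber further down the chain.
        publish(frame);
    }
};

// Wiring: MjpegToH264 -> TextOverlayFilter -> Mpeg2TsPackager.
// Assumes Mpeg2TsPackager inherits Observer<H264Frame>.
void insertFilter(MjpegToH264& converter,
                  std::shared_ptr<Mpeg2TsPackager> packager) {
    auto filter = std::make_shared<TextOverlayFilter>();
    converter.subscribe(filter);   // filter now receives H264 frames
    filter->subscribe(packager);   // packager now receives filtered frames
}
```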
You may want to add a new protocol for communicating with clients, for example MPEG-DASH, and you might need a dedicated port for it. All you have to do is create a new `PortHandler` instance and register it in the `PortHandlerManager`; your port is then watched.
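In code, that might look roughly like this (the `PortHandler` constructor argument, the `registerPortHandler` method name, and the port number are illustrative guesses, not the project's documented API):

```cpp
// Hypothetical sketch; the real PortHandler/PortHandlerManager API may differ.
void addDashPort(PortHandlerManager& manager) {
    auto dashHandler = std::make_shared<PortHandler>(8081);  // arbitrary example port
    manager.registerPortHandler(dashHandler);                // the port is now watched
}
```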
The next thing is the servlet: a class derived from the `Servlet` base class that produces a response for a given request. You just need to write such a servlet and register it in your `PortHandler` instance on a specified path; that path is then handled by your servlet. You can register many different servlets on different paths, and all request dispatching is already done.
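A sketch of such a servlet, under the assumption of a simple request/response interface (the `Request`/`Response` types, the `handle` signature, and `registerServlet` are illustrative guesses, not the project's documented API):

```cpp
// Hypothetical servlet; the real Servlet interface may differ.
class ManifestServlet : public Servlet {
public:
    Response handle(const Request& request) override {
        // Build and return an MPEG-DASH manifest for this request.
        return Response::ok(buildManifest(request));
    }

private:
    std::string buildManifest(const Request& request);  // implementation omitted
};
```

Registered on the handler created above, e.g. `dashHandler->registerServlet("/manifest.mpd", std::make_shared<ManifestServlet>())`, every request to that path would then be dispatched to `ManifestServlet::handle`.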
This project uses FFmpeg to transcode and pack video, so to build and run it manually you have to install some packages:
- On Ubuntu: `sudo apt install libavcodec-dev libavformat-dev libavutil-dev libswscale-dev`
- On Arch: `pacman -S ffmpeg`
Alternatively, you can build and run everything with Docker:

```
docker compose -f prod-docker-compose.yml build
docker compose -f prod-docker-compose.yml up
```
For a manual build, make sure you have installed all dependencies 👆
All commands below are run from the project root directory.
Build:

```
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . -j 8
```

Run:

```
bin/Release/media-server <rtsp-stream-url>
```
To test it, you can simply open http://yourip:8080/playlist.m3u in the VLC player.