Change savant version to 0.2.2 (#212) #215
Closed
#20 (Keep the original frame resolution/aspect ratio on pipeline output): add video converter to scale frame to the original resolution
SavantBoost library added as part of the entire framework
Simplified docker for jetson deepstream 6.1
#25 (Fix memory leak when using a frame access on Jetson device): draw on frames after nvstreamdemux
#25 don't retrieve frame image in NvDsPyFuncPlugin
#25 build pyds with unmap_nvds_buf_surface binding
#25 use context manager to automatically unmap NvDsBufSurface
#25 remove drawbin gst element
#25 update PyFunc documentation
#37 use hardware jpeg encoder when it's available
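The "use context manager to automatically unmap NvDsBufSurface" item above is the kind of change a short sketch makes concrete. A minimal illustration, assuming the standard pyds calls get_nvds_buf_surface/unmap_nvds_buf_surface; the helper name is hypothetical, not Savant's actual code:

```python
from contextlib import contextmanager

import pyds  # DeepStream Python bindings

@contextmanager
def nvds_buf_surface(gst_buffer, frame_meta):
    """Map the frame surface as a numpy array and guarantee it is unmapped.

    Sketch of the #25 idea: the unmap must run even if user code raises,
    otherwise the mapped surface leaks on Jetson.
    """
    frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    try:
        yield frame
    finally:
        # unmap_nvds_buf_surface is the binding added in this change set
        pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

# usage inside a pyfunc:
# with nvds_buf_surface(buffer, frame_meta) as frame:
#     frame[...] = 0  # safe in-place access; the surface is unmapped on exit
```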
added element name for pyfunc and extended drawbin
fix bug with division by zero
fixed bug with adding rbbox to frame meta
reformat
extended comment for rendered_objects
fixed bug after merge
#43 (Access to frames from PyFunc as OpenCV GpuMat): move building pyds out of separate dockerfile
#43 add opencv module "savant"
#43 fix mapping to EGL in DSCudaMemory
#43 add python bindings for DSCudaMemory in savantboost
#43 add helpers for cv2.cuda.GpuMat
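For the cv2.cuda.GpuMat helpers mentioned in #43, here is a rough picture of working with a frame as a GpuMat. The to_gpu_mat helper is a hypothetical stand-in, not Savant's API, and a CUDA-enabled OpenCV build is assumed:

```python
import cv2
import numpy as np

def to_gpu_mat(frame: np.ndarray) -> "cv2.cuda_GpuMat":
    """Upload a host frame to GPU memory (illustrative helper, not Savant code)."""
    gpu = cv2.cuda_GpuMat()
    gpu.upload(frame)          # host -> device copy
    return gpu

frame = np.zeros((720, 1280, 4), dtype=np.uint8)   # RGBA frame stand-in
gpu_frame = to_gpu_mat(frame)
small = cv2.cuda.resize(gpu_frame, (640, 360))     # processing stays on the GPU
result = small.download()                          # device -> host only when needed
```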
Update README.md
Update architecture.md
added support for an MPEG stream demuxer (#54)
added support for an MPEG stream demuxer
Fixed a grammatical mistake
#48 (Extend in-GPU image dimensions to add spare space for cropped and exogenous elements placement): add FrameParameters class for processing frame config
#48 respect rows alignment in CUDA memory
#48 move drawing on frames before demuxer
#48 add padding to frame parameters
#48 fix scaling/shifting bboxes
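The #48 padding and bbox scaling/shifting entries boil down to simple coordinate arithmetic. An illustrative-only sketch; the class and function names below are made up and are not Savant's FrameParameters API:

```python
from dataclasses import dataclass

@dataclass
class FramePadding:
    left: int
    top: int

def bbox_to_original(x, y, w, h, padding: FramePadding, scale_x: float, scale_y: float):
    """Map a bbox from the padded, scaled processing frame back to source pixels."""
    return (
        (x - padding.left) / scale_x,
        (y - padding.top) / scale_y,
        w / scale_x,
        h / scale_y,
    )

print(bbox_to_original(200, 140, 100, 60, FramePadding(left=40, top=20), 0.5, 0.5))
# (320.0, 240.0, 200.0, 120.0)
```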
Add files via upload
Update README.md
Update README.md
#53 move to DS 6.2 (#64)
Update DS to 6.2 and Savant to 0.2.0.
Update README.md
#43 implement benchmarks for drawing on frames
implement draw element artist using OpenCV GpuMat (#68)
added gpumat based artist, added bbox drawing
added full implementation for opencv artist
fixed pyfunc config crash
added cpu blur, fixed roi for gpu blur
Removed Cairo artist code, added docs
removed extra reference
fixed import name
file rename
fixed alphacomp mode, fixing overlay+padding wip
fixed corner cases in add_overlay()
change back to ghcr registry
removed outdated build arg
Frame RoI change (#69)
Refactor nvds utils.
Support frame roi property.
quality and bitrate configuration for savant output frame (#70)
fixed typo in debug logs
added docs for output_frame module parameter
added encoder elements properties lists
Removed mention of gstreamer encoder elements
NvDsFrameMeta is extended and returns frame tags (#62)
NvDsFrameMeta has been extended to include frame tags and other video frame metadata. The pipeline metadata now includes the source metadata, and the source video adapter reads and adds frame metadata to the sent frames.
Always-On Low Latency Streaming Sink (RTSP) (#74)
Move draw func before output meta preparation. (#81)
#57 add an option to avoid scaling the frames back at the end of the pipeline (#78)
NvDsFrameMeta is extended and returns frame tags
Taking savant_frame_meta only when accessing tags
Refactoring: all incoming meta information is transferred to the DeepStream meta.
Source metadata is added to the pipeline metadata, and the source video adapter reads and adds frame metadata to the sent frame.
fixed bugs with scaling
fixed convert to srt
Input and output metadata coordinates are absolute
#59 (Pub/Sub topic filtering): filter ZeroMQ messages by source ID
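Filtering ZeroMQ messages by source ID (#59) follows the standard Pub/Sub topic-filtering pattern. A minimal pyzmq sketch with a placeholder endpoint and source ID, not Savant's configuration:

```python
import zmq

# A SUB socket only receives messages whose first frame (the topic, here a
# source ID) matches its subscription.
ctx = zmq.Context.instance()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")
sub.setsockopt_string(zmq.SUBSCRIBE, "camera-1")  # only messages from camera-1 arrive

source_id, payload = sub.recv_multipart()         # blocks until a matching message
print(source_id.decode(), len(payload))
```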
Update README.md
Update architecture.md
Update architecture.md
Update architecture.md
Update README.md
Update README.md
Update architecture.md
Update publications-samples.md
Optimize the default selector with numba (#82)
Fix numpy data types in model postprocessing.
Add numba.
Finalize Pub/Sub, Req/Rep, Dealer/Router configurations (#79)
#51 add dealer/router zeromq sockets
#51 set default zeromq sockets for source to dealer/router
#51 add docstring for RoutingIdFilter
#76 (Change transport protocol to optionally transfer the multimedia object outside of the AVRO message): transfer multimedia object outside the avro message (#83)
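The #76 change (carrying the frame payload outside of the Avro message) can be pictured as a multipart ZeroMQ send: a small metadata part plus a separate binary part. A sketch with JSON standing in for Avro; the endpoint and field names are illustrative only:

```python
import json

import zmq

ctx = zmq.Context.instance()
push = ctx.socket(zmq.PUSH)
push.bind("ipc:///tmp/example-video.ipc")

# Keep the metadata message small; ship the encoded frame as a separate part.
meta = {"source_id": "camera-1", "pts": 40_000_000, "codec": "h264", "external": True}
frame_bytes = b"\x00" * 4096  # placeholder for an encoded frame

push.send_multipart([json.dumps(meta).encode(), frame_bytes])
```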
Fix numpy data types in rapid converter.
Disable nvjpegdec on pipeline level.
Support source EOS event callback in pyfunc.
Fix EOS event propagation in pyfunc.
Fixed JSON serialization in console sink. (#92)
technical preview release demo pipeline (#94)
Fix RTSP source adapter (#98)
Filter caps on RTSP source adapters
Filter out non-IDR frames at the start of the stream
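Dropping non-IDR frames at the start of a stream is typically done with a pad probe that discards delta frames until the first keyframe. A GStreamer-Python sketch of that idea, not the adapter's actual code:

```python
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

def drop_until_keyframe(pad, info, state={"seen_keyframe": False}):
    """Drop delta frames until the first keyframe so a decoder never starts mid-GOP.

    The mutable default dict is just a simple state holder for the sketch.
    """
    buf = info.get_buffer()
    if not state["seen_keyframe"]:
        if buf.has_flags(Gst.BufferFlags.DELTA_UNIT):
            return Gst.PadProbeReturn.DROP   # still waiting for an IDR frame
        state["seen_keyframe"] = True        # keyframe seen, let everything through
    return Gst.PadProbeReturn.OK

# usage: some_pad.add_probe(Gst.PadProbeType.BUFFER, drop_until_keyframe)
```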
Drop EOS on nvstreammux when all sources finished (#97)
Support rounded rect in artist (#101)
Support rounded rect in artist.
Review fix.
Remove Jetson Nano support. (#102)
Update README.md
#107 describe provided adapters (#111)
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update README.md
Update architecture.md
Create README.md
Update README.md
Update README.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update README.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Fix adapters parameters.
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update README.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update adapters.md
Update architecture.md
Update architecture.md
Update architecture.md
Update architecture.md
Add files via upload
Update README.md
Delete peoplenet-blur-demo.webp
Add files via upload
Delete peoplenet-blur-demo.webp
Add files via upload
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update GitHub workflows (#114)
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Updated demo pipeline configuration (#121)
Changed live demo to output RGBA frames, added demo_performance pipeline
Add 0.2.0 environment compatibility test script (#124)
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Create docker compose demo run (#125)
Add compose file with dGPU images
Add compose file with jetson images
Add start delay for rtsp source
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Add files via upload
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Add test image removal in env check (#129)
Create runtime-configuration.md
Update runtime-configuration.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update README.md
Update runtime-configuration.md
Fix input objects missing parents (#134)
Add docker compose config for nvidia_car_classification (#137)
Rename deepstream_test2 to nvidia_car_classification
Track jpegs in LFS
Add peoplenet demo stub image to LFS
Move stub img
Add stub image for 720p
Add docker compose configs for car classification
Add draw func for car classification
Change default frame output to raw-rgba
Add README entry, preview file
#130 demonstrate MOG2 background removal with OpenCV CUDA (#141)
added background remover sample based on MOG2
Mog2 publishing (#144): Update docs
Update person_face_matching.py (#148)
Update person_face_matching.py
Add parent assignment on pyds meta level (#140)
Update nvidia-car-classification (#138)
Add fullscale webp for nvidia-car-classification
Update README
#115 (Optional EOS for file-based sources): configure sending EOS at the end of each file (#152)
#136 (Video loop source adapter): implement video loop source adapter (#143)
#126 (RTSP Source Adapter doesn't support streams with b-frames): calculate DTS for encoded frames in RTSP source (#159)
#156 (Add 'restart:always' to source/sink/module in all demo-related compose files): fixed restart argument to conform to the ticket (#163)
ok
Update README.md
#128 (Encode ZMQ socket type and source information into socket URL): encode ZeroMQ socket type and bind/connection to endpoint (#162)
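The #128 change packs the socket type and bind/connect mode into the endpoint URL. A purely illustrative parser for one possible format such as dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc; the exact scheme Savant settled on may differ:

```python
import re

# Illustrative parser only; not Savant's actual endpoint handling.
ENDPOINT_RE = re.compile(
    r"^(?P<type>pub|sub|push|pull|dealer|router|req|rep)"
    r"\+(?P<mode>bind|connect):(?P<address>.+)$"
)

def parse_endpoint(endpoint: str):
    match = ENDPOINT_RE.match(endpoint)
    if not match:
        raise ValueError(f"unsupported endpoint format: {endpoint}")
    return match["type"], match["mode"], match["address"]

print(parse_endpoint("dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc"))
# -> ('dealer', 'connect', 'ipc:///tmp/zmq-sockets/input-video.ipc')
```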
#151 (Modify AA-RTSP Sink to support embedded MediaMTX): embed mediamtx into always-on-rtsp sink (#167)
changed DEALER/ROUTER to PUB/SUB (#170)
Add line crossing demo module (#157)
Add line crossing module wip
Add conditional inference skip, reformat
Add Graphite + Grafana stats
Remove track id from graphite metrics
Update to count stats
Update README, docs, stale tracks removing
Remove dependency on savant-samples image
Change ROUTER/DEALER to PUB/SUB
Fix preview file link
Update samples/line_crossing/README.md
Changed main cfg version to be Savant-flavor for car sample (#174)
#178 (Module bug 'nvidia-car-classification-demo'): skip non-keyframes after each generated EOS in avro_video_demux (#180)
Update README.md
Update README.md
Literal fixes to demo (#183)
Add yolov8 detector to line crossing demo (#186)
Add model builder patch
Add x86 yolov8 module
Fix base savant image
Add env file with detector choice
Update Jetson dockerfile
Fix line cross demo direction bug (#190)
Fix direction bug
Add obj class label to metric name scheme
Make ExternalFrame.type a string rather than enum (#191)
Deploy savant as package (#187)
Fix workflow.
Fix build-docker workflow.
0.2.1 release fixes (#192)
Add validation for custom format model (#195)
custom_lib_path should be a file
engine_create_func_name should be set
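The two checks listed for #195 are easy to picture as a small config validation step. A toy sketch whose keys mirror the commit wording rather than the real Savant configuration schema:

```python
from pathlib import Path

def validate_custom_model(config: dict) -> None:
    """Toy validation mirroring the #195 checks; keys are illustrative."""
    lib = Path(config.get("custom_lib_path", ""))
    if not lib.is_file():
        raise ValueError("custom_lib_path should point to an existing file")
    if not config.get("engine_create_func_name"):
        raise ValueError("engine_create_func_name should be set")

# Using this script's own path as a stand-in for an existing library file:
validate_custom_model({
    "custom_lib_path": __file__,
    "engine_create_func_name": "create_engine",
})
```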
Fix config checker for inference element with engine file specified (#196)
Add implicit setup of engine options
Add skip of calib file check for built engine
Update car classification sample config
Download file before starting the stream in video-loop-source (#197)
Update demos to 0.2.1 (#194)
Fix traffic meter config error (#199)
Fix nvinfer config bug.
Change default detector to peoplenet
WIP: 0.2.1 Documentation (#153)
Initial documentation
Update README.md
Fix build docs: lfs=true.
Fix build docs: install git-lfs before checkout.
Update README.md
Add per-batch cuda stream completion wait (#205)
Add per-batch cuda stream completion wait
Make artist stream argument mandatory
Add per-batch stream completion wait pyfunc
Update Artist usage in bg_remover
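The per-batch CUDA stream completion wait (#205) can be pictured with OpenCV's CUDA streams: queue a batch worth of GPU work on one stream, then wait once. A sketch assuming a CUDA-enabled OpenCV build, not Savant's actual pipeline code:

```python
import cv2
import numpy as np

# Queue GPU work for a whole batch on one CUDA stream and wait once per batch
# instead of synchronizing after every operation.
stream = cv2.cuda.Stream()

resized = []
for _ in range(4):                                  # stand-in batch of 4 frames
    gpu = cv2.cuda_GpuMat()
    gpu.upload(np.zeros((720, 1280, 3), np.uint8), stream)
    resized.append(cv2.cuda.resize(gpu, (640, 360), stream=stream))

stream.waitForCompletion()                          # single per-batch wait
frames = [g.download() for g in resized]
print(len(frames), frames[0].shape)
```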
Add prepare release script
Add opencv deb packages build file
Add installing OpenCV package from savant-data
Separate release and latest build workflows
Change savant version to 0.2.2
Co-authored-by: Pavel A. Tomskikh <tomskih_pa@bw-sw.com>
Co-authored-by: Pavel Tomskikh <tomskikh@users.noreply.github.com>
Co-authored-by: bogoslovskiy_nn <bogoslovskiy_nn@bw-sw.com>
Co-authored-by: Nikolay Bogoslovskiy <bogoslovskii@gmail.com>
Co-authored-by: Bitworks LLC <bwsw@users.noreply.github.com>
Co-authored-by: Denis Medyantsev <44088010+denisvmedyantsev@users.noreply.github.com>
Co-authored-by: Denis Medyantsev <denisvmedyantsev@gmail.com>
Co-authored-by: bwsw <bitworks@bw-sw.com>