diff --git a/README.md b/README.md
index 7f03763..be83c31 100644
--- a/README.md
+++ b/README.md
@@ -1,17 +1,82 @@
-# [vidformer](https://ixlab.github.io/vidformer/) - A data-systems focused library for declarative video synthesis
+# vidformer - Video Data Transformation
 
-A research project providing the infrastructure for future video interfaces.
+[![Test](https://github.com/ixlab/vidformer/actions/workflows/test.yml/badge.svg)](https://github.com/ixlab/vidformer/actions/workflows/test.yml)
+[![PyPI version](https://img.shields.io/pypi/v/vidformer.svg)](https://pypi.org/project/vidformer/)
+![Crates.io Version](https://img.shields.io/crates/v/vidformer)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/ixlab/vidformer/blob/main/LICENSE)
+
+
+A research project providing infrastructure for video interfaces and pipelines.
 Developed by the OSU Interactive Data Systems Lab.
-Open source under Apache-2.0.
+
+## 🎯 Why vidformer
+
+Vidformer efficiently transforms video data, enabling faster annotation, editing, and processing, without having to focus on performance.
+
+It uses a declarative specification format to represent transformations. This enables:
+
+* **⚡ Transparent Optimization:** Vidformer optimizes the execution of declarative specifications just like a relational database optimizes relational queries.
+
+* **⏳ Lazy/Deferred Execution:** Video results can be retrieved on demand, allowing for practically instantaneous playback of results.
+
+* **🔄 Transpilation:** Vidformer specifications can be created from existing code (like `cv2`).
+
+## 🚀 Quick Start
+
+The easiest way to get started is using vidformer's `cv2` frontend, which allows most Python OpenCV visualization scripts to work by simply replacing `import cv2` with `import vidformer.cv2 as cv2`:
+
+```python
+import vidformer.cv2 as cv2
+
+cap = cv2.VideoCapture("my_input.mp4")
+fps = cap.get(cv2.CAP_PROP_FPS)
+width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
+height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
+
+out = cv2.VideoWriter("my_output.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
+                      fps, (width, height))
+while True:
+    ret, frame = cap.read()
+    if not ret:
+        break
+
+    cv2.putText(frame, "Hello, World!", (100, 100), cv2.FONT_HERSHEY_SIMPLEX,
+                1, (255, 0, 0), 1)
+    out.write(frame)
+
+cap.release()
+out.release()
+```
+
+You can find details on this in our [Getting Started Guide](https://ixlab.github.io/vidformer/getting-started.html).
+
+## 📘 Documentation
+
+* [🌐 Website](https://ixlab.github.io/vidformer/)
+* [🚦 Getting Started](https://ixlab.github.io/vidformer/getting-started.html)
+* [🐍 vidformer-py](https://ixlab.github.io/vidformer/vidformer-py/)
+* [🛠️ vidformer core](https://ixlab.github.io/vidformer/vidformer/)
+
+## 🔍 About the project
+
+Vidformer is a highly modular suite of tools that work together; these are detailed [here](https://ixlab.github.io/vidformer/tools.html).
+
+❌ vidformer is ***NOT***:
+* A conventional video editor (like Premiere Pro or Final Cut)
+* A video database/VDBMS
+* A natural language query interface for video
+* A computer vision library (like OpenCV)
+* A computer vision AI model (like CLIP or YOLO)
+
+However, vidformer is highly complementary to each of these.
+If you're working on any of the latter four, vidformer may be for you.
+
+**License:** Vidformer is open source under [Apache-2.0](./LICENSE). Contributions welcome.
 **File Layout**:
-- [*vidformer*](./vidformer/): The core synthesis library, written in Rust 🦀
-  - [Docs](https://ixlab.github.io/vidformer/vidformer/)
-- [*vidformer-py*](./vidformer-py/): A Python 🐍 video editing client
-  - [Docs](https://ixlab.github.io/vidformer/vidformer-py/)
-- [*vidformer-cli*](./vidformer-cli/): A command-line interface for vidformer
-- [*snake-pit*](./snake-pit/): The main vidformer test suite
-- [*docs*](./docs/): The vidformer website
-  - [Website](https://ixlab.github.io/vidformer/)
+- [*./vidformer*](./vidformer/): The core transformation library
+- [*./vidformer-py*](./vidformer-py/): A Python video editing client
+- [*./vidformer-cli*](./vidformer-cli/): A command-line interface for vidformer servers
+- [*./snake-pit*](./snake-pit/): The main vidformer test suite
+- [*./docs*](./docs/): The [vidformer website](https://ixlab.github.io/vidformer/)
diff --git a/docs/introduction.md b/docs/introduction.md
new file mode 100644
index 0000000..e10b99d
--- /dev/null
+++ b/docs/introduction.md
@@ -0,0 +1 @@
+# Introduction
diff --git a/docs/src/faq.md b/docs/src/faq.md
index ddc3a76..4e8c4db 100644
--- a/docs/src/faq.md
+++ b/docs/src/faq.md
@@ -28,4 +28,4 @@ vidformer uses the [FFmpeg/libav*](https://ffmpeg.org/) libraries internally, so
 
 ### How does vidformer compare to OpenCV/cv2?
 vidformer orchestrates data movement in video synthesis tasks, but does not implement image processing directly.
-Most use cases will still use OpenCV for this
+Most use cases will still use OpenCV for this.
diff --git a/docs/src/introduction.md b/docs/src/introduction.md
deleted file mode 100644
index f9bb077..0000000
--- a/docs/src/introduction.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Introduction
-
-*vidformer* enables people to create videos without worrying about performance or efficiency.
-vidformer does this via its novel data-oriented declarative video editor.
-
-We target these use cases:
-* Creating videos as results to queries (e.g., "show me every time zebras fight, and overlay the time, date, location, and animal IDs")
-* Visualization of computer vision models (e.g., Drawing bounding over objects in videos in a python notebook)
-
-For these use cases, vidformer strives to:
-* **Create the resulting videos instantly and efficiently**
-* Allow easy and idiomatic editing of videos combined with data
-* Interact with existing technologies, data, and workloads
-* Serve both exploratory ad hoc use cases *and* embedded use in production web applications, VDBMSs, and IaaS deployments.
-
-vidformer is ***NOT***:
-* A conventional video editor (like Premiere Pro or Final Cut)
-* A video database/VDBMS
-* A natural language query interface for video
-* A computer vision library (like OpenCV)
-* A computer vision AI model (like CLIP or Yolo)
-
-However, vidformer is highly complementary to each of these.
-If you're working on any of the later four, vidformer may be for you.
-
-
-
-## Quick Start
-
-See [Getting Started](./getting-started.md)
-
-## Next Steps
-
-vidformer is a highly modular suite of tools that work together.
-The [tools overview](./tools.md) provides an overview and guidance on the full vidformer architecture.
diff --git a/docs/src/introduction.md b/docs/src/introduction.md
new file mode 120000
index 0000000..fe84005
--- /dev/null
+++ b/docs/src/introduction.md
@@ -0,0 +1 @@
+../../README.md
\ No newline at end of file
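The lazy/deferred execution that the new README advertises can be illustrated with a minimal conceptual sketch. This is plain Python and emphatically *not* the vidformer API: the `Spec` class, `put_text`, and `render_frame` names are all hypothetical. It only shows the general idea that a declarative specification records operations as data, deferring all work until a specific frame is requested:

```python
class Spec:
    """A toy declarative spec: operations are recorded as data, not executed.

    Hypothetical illustration of lazy/deferred execution; not vidformer's API.
    """

    def __init__(self):
        self.ops = []  # ordered list of (operation, arguments) tuples

    def put_text(self, text, pos):
        # Building the spec does no pixel work; it only appends a description.
        self.ops.append(("put_text", text, pos))
        return self

    def render_frame(self, frame_index):
        # Work happens only here, and only for the single frame requested,
        # which is what makes on-demand playback of results feel instantaneous.
        return {"frame": frame_index, "ops": list(self.ops)}


spec = Spec().put_text("Hello, World!", (100, 100))
# Nothing has rendered yet; frames materialize lazily, on demand:
frame0 = spec.render_frame(0)
```

Because the spec is just data, an engine is free to reorder, batch, or cache the recorded operations before any frame is produced, analogous to how a relational database optimizes a declarative query before executing it.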