
Releases: roboflow/inference

v0.9.10

13 Feb 18:41

🚀 Added

inference Benchmarking 🏃‍♂️

A new command has been added to inference-cli for benchmarking performance. Now you can test inference in different environments, with different configurations, and measure its performance. Watch us test the speed and scalability of hosted inference on the Roboflow platform 🤯

scaling_of_hosted_roboflow_platform.mov

Run your own benchmark with a simple command:

inference benchmark python-package-speed -m coco/3 

See the docs for more details.

🌱 Changed

  • Improved serialisation logic for requests and responses, which helps the Roboflow platform improve model monitoring

🔨 Fixed

  • Bug #260 causing inference API instability in multi-worker setups and when swapping a large number of models - from now on, the API container should not raise unexpected HTTP 5xx errors due to model management
  • Faulty logic for getting request_id that caused errors in the parallel-http container

🏆 Contributors

@paulguerrie (Paul Guerrie), @SolomonLake (Solomon Lake), @robiscoding (Rob Miller), @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.9...v0.9.10

v0.9.10rc3

12 Feb 19:42
Pre-release

This is a pre-release version that mainly addresses some instabilities in the model manager.

What's Changed

Full Changelog: v0.9.9...v0.9.10rc3

v0.9.9

07 Feb 17:22

🚀 Added

Roboflow workflows 🤖

A new way to create ML pipelines without writing code. Declare a sequence of models and intermediate processing steps in a JSON config and execute it using the inference container (or the hosted Roboflow platform). No Python code needed! 🤯 Just watch our feature preview

workflows_feature_preview.mp4

Want to experiment more?

pip install inference-cli

inference server start --dev

Open http://127.0.0.1:9001 in your browser, then click the Jump Into an Inference Enabled Notebook → button and open the notebook named workflows.ipynb.

We encourage you to explore our documentation 📖 to discover the full potential of Roboflow workflows.
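
To give a flavour of the format, a minimal specification is sketched below as a Python dict (the JSON config has the same shape). This is illustrative only - the step type, selector syntax, and field names here are assumptions, so treat the workflows documentation 📖 as the source of truth.

workflow_specification = {
    "specification": {
        "version": "1.0",
        "inputs": [
            {"type": "InferenceImage", "name": "image"},  # image to process
        ],
        "steps": [
            {
                "type": "ObjectDetectionModel",  # hypothetical step type name
                "name": "detection",
                "image": "$inputs.image",        # wire the input image into the step
                "model_id": "coco/3",
            },
        ],
        "outputs": [
            {"type": "JsonField", "name": "predictions", "selector": "$steps.detection.predictions"},
        ],
    }
}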

This feature is still under heavy development. Your feedback is needed to make it better!

Take inference to the cloud with one command 🚀

Yes, you got it right. The inference-cli package now provides a set of inference cloud commands to deploy the required infrastructure with minimal effort.

Just:

pip install inference-cli

Then, depending on your needs, use:

inference cloud deploy --provider aws --compute-type gpu
# or
inference cloud deploy --provider gcp --compute-type cpu

With the examples posted here, we are just scratching the surface - visit our docs 📖 for more examples.

🔥 YOLO-NAS is coming!

  • We plan to onboard YOLO-NAS to the Roboflow platform. This release introduces the foundational work to make that happen. Stay tuned!

supervision 🤝 inference

We've extended the capabilities of the inference infer command in the inference-cli package. It can now run inference against images, directories of images, and videos, visualise predictions using supervision, and save them to a location of your choice.

What does it take to get your predictions?

pip install inference-cli

# start the server
inference server start 

# run inference
inference infer -i {PATH_TO_VIDEO} -m coco/3 -c bounding_boxes_tracing -o {OUTPUT_DIRECTORY} -D

There are plenty of configuration options that can alter the visualisation. You can use predefined configs (example: -c bounding_boxes_tracing) or create your own. See our docs 📖 to discover all options.

🌱 Changed

  • breaking: Pydantic 2: Inference now depends on pydantic>=2.
  • breaking: Default values of parameters (like confidence, iou_threshold etc.) used by newer parts of inference (including inference HTTP container endpoints) have been aligned with the more reasonable defaults used by the hosted Roboflow platform. This makes the inference experience consistent with the Roboflow platform, but it will alter the behaviour of the package for clients that do not specify their own parameter values when making predictions. Summary: confidence now defaults to 0.4 and iou_threshold to 0.3. We encourage clients using self-hosted containers to evaluate results on their end (see the sketch after this list for pinning thresholds explicitly). Changes to be inspected here.
  • API calls to HTTP endpoints with Roboflow models now accept a disable_active_learning flag that prevents Active Learning from being applied to a specific request
  • The documentation 📖 has been refreshed. The redesign is intended to make the content easier to comprehend. We would love your feedback 🙏
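
If you prefer to keep your previous thresholds rather than rely on the new defaults, you can set them explicitly. Below is a minimal sketch using the inference-sdk HTTP client against a local container; the API key and image path are placeholders, and the InferenceConfiguration field names are assumed to match the current inference-sdk release.

from inference_sdk import InferenceHTTPClient, InferenceConfiguration

# pin thresholds explicitly instead of relying on the new defaults (0.4 / 0.3)
configuration = InferenceConfiguration(confidence_threshold=0.5, iou_threshold=0.5)

client = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001",  # local inference container
    api_key="<YOUR_API_KEY>",         # placeholder
)
client.configure(configuration)

predictions = client.infer("path/to/image.jpg", model_id="coco/3")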

🔨 Fixed

  • breaking: Fixed issue #260 - a bug introduced in v0.9.3 that caused classification models with 10 or more classes to assign the wrong class name to predictions (despite maintaining correct class ids). Clients relying on class name instead of class_id of predictions were affected.
  • breaking: Fixed the typo coglvm -> cogvlm in the inference-sdk HTTP client method name prompt_cogvlm(...)

Full Changelog: v0.9.8...v0.9.9

Release candidate of v0.9.9

07 Feb 12:14
Pre-release

This is a draft release of v0.9.9.

v0.9.8

29 Dec 19:37

What's Changed

Highlights

Grounding DINO

Support for a new core model, Grounding DINO, has been added. Grounding DINO is a zero-shot object detection model that you can use to identify objects in images and videos using arbitrary text prompts.

Inference SDK For Core Models

You can now use the Inference SDK with core models (like CLIP). No more complicated request and payload formatting. See the docs here.
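
As an illustration, comparing an image against a set of text prompts with CLIP could look roughly like the snippet below. This is a sketch, not authoritative usage: the clip_compare method name and its parameters are assumptions based on the inference-sdk client, and the API key is a placeholder.

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # local inference server
    api_key="<YOUR_API_KEY>",         # placeholder
)

# compare one image (subject) against several text prompts using CLIP
result = client.clip_compare(
    subject="path/to/image.jpg",
    prompt=["a cat", "a dog"],
)
print(result)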

Built In Jupyter Notebook

Roboflow Inference Server containers now include a built-in Jupyter notebook for development and testing. This notebook can be accessed via the inference server landing page. To use it, go to localhost:9001 in your browser after starting an inference server, then select "Jump Into An Inference Enabled Notebook". This will open a new tab with a JupyterLab session, preloaded with example notebooks and all of the inference dependencies.

New Contributors

Full Changelog: v0.9.7...v0.9.8

v0.9.7

20 Dec 21:13

What's Changed

Highlights

Stream Management API (Enterprise)

The Stream Management API is designed for users who need to run inference with Roboflow object-detection models, particularly against online video streams. It enhances the functionality of the familiar inference.Stream() and InferencePipeline() interfaces found in the open-source version of the library by introducing a management layer: additional capabilities that let users remotely manage the state of inference pipelines through the HTTP management interface integrated into this package. More info.

Model Aliases

Some common public models now have convenient aliases! With this release, the COCO base weights for YOLOv8 models can be accessed with user-friendly model IDs like yolov8n-640. See all available model aliases here.
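
For illustration, loading aliased weights in Python might look like the sketch below; get_roboflow_model is the loader shown elsewhere in these notes, the import path is assumed, and the API key and image path are placeholders.

import cv2
from inference.models.utils import get_roboflow_model

# "yolov8n-640" is an alias resolving to the YOLOv8n COCO base weights (640x640 input)
model = get_roboflow_model(model_id="yolov8n-640", api_key="<YOUR_API_KEY>")

image = cv2.imread("path/to/image.jpg")
results = model.infer(image)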

Other Improvements

  • Improved inference CLI commands
  • Unified batching APIs so that all model types can accept batch requests
  • Speed improvements for HTTP interface

New Contributors

Full Changelog: v0.9.6...v0.9.7

v0.9.7rc2 - Test release for fix with CLI run problem

18 Dec 15:21

Fix the Makefile so that ONNX Runtime is installed

v0.9.7rc1 - Test release for fix with CLI run problem

18 Dec 15:10

Fix a problem with the device request not being a list

v0.9.6

13 Dec 18:12

What's Changed

Highlights

CogVLM

Inference server users can now run CogVLM for a fully self-hosted, multimodal LLM. See the example here.

Slim Docker Images

For use cases that do not need Core Model functionality (e.g. CLIP), there are -slim docker images available which include fewer dependencies and are much smaller.

  • roboflow/roboflow-inference-server-cpu-slim
  • roboflow/roboflow-inference-server-gpu-slim
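
For example, starting the CPU slim container locally could look like this (a standard docker invocation; port 9001 matches the default used elsewhere in these notes):

docker pull roboflow/roboflow-inference-server-cpu-slim:latest
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu-slim:latest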

Breaking Changes

Infer API Update

The infer() method of Roboflow models now returns an InferenceResponse object instead of raw model output. This means that using models in application logic should feel similar to using models via the HTTP interface. In practice, programs that used the following pattern

...
model = get_roboflow_model(...)
results = model.infer(...)
results = model.make_response(...)
...

should be updated to

...
model = get_roboflow_model(...)
results = model.infer(...)
...

New Contributors

Full Changelog: v0.9.5...v0.9.6

v0.9.5

05 Dec 16:07


Features, Fixes, and Improvements

Full Changelog: v0.9.3...v0.9.5.rc2

New inference.Stream interface

We are excited to introduce the upgraded version of our stream interface: InferencePipeline. Additionally, the WebcamStream class has evolved into a more versatile VideoSource.

This new abstraction is not only faster and more stable but also provides more granular control over the entire inference process.

Can I still use inference.Stream?

Absolutely! The old components remain unchanged for now. However, be aware that this abstraction is slated for deprecation over time. We encourage you to explore the new InferencePipeline interface and take advantage of its benefits.

What has been improved?

  • Performance: Experience a significant boost in throughput (up to 5x) and improved latency for online inference on video streams using the YOLOv8n model.
  • Stability: InferencePipeline can now automatically re-establish a connection for online video streams if a connection is lost.
  • Prediction Sinks: Introducing prediction sinks, simplifying the utilization of predictions without the need for custom code.
  • Control Over Inference Process: InferencePipeline intelligently adapts to the type of video source, whether a file or stream. Video files are processed frame by frame, while online streams prioritize real-time processing, dropping non-real-time frames.
  • Observability: Gain insights into the processing state through events exposed by InferencePipeline. Reference implementations letting you monitor processing are also available.

How to Migrate to the new Inference Stream interface?

You only need to change a few lines of code to migrate to the new Inference stream interface.

Below is an example that shows the old interface:

import inference

def on_prediction(predictions, image):
    pass

inference.Stream(
    source="webcam", # or "rstp://0.0.0.0:8000/password" for RTSP stream, or "file.mp4" for video
    model="rock-paper-scissors-sxsw/11", # from Universe
    output_channel_order="BGR",
    use_main_thread=True, # for opencv display
    on_prediction=on_prediction, 
)

Here is the same code expressed in the new interface:

from inference.core.interfaces.stream.inference_pipeline import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="rock-paper-scissors-sxsw/11",
    video_reference=0,
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()

Note the slight change in the on_prediction handler, from:

def on_prediction(predictions: dict, image: np.ndarray) -> None:
    pass

Into:

from inference.core.interfaces.camera.entities import VideoFrame

def on_prediction(predictions: dict, video_frame: VideoFrame) -> None:
    pass
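
For example, a custom sink that just logs how many objects were detected in each frame might look like the sketch below (assuming the predictions dict follows the standard inference response format, with detections under the "predictions" key):

from inference.core.interfaces.camera.entities import VideoFrame

def count_detections(predictions: dict, video_frame: VideoFrame) -> None:
    # predictions follows the standard inference response format,
    # so detected objects live under the "predictions" key
    detections = predictions.get("predictions", [])
    print(f"frame {video_frame.frame_id}: {len(detections)} detections")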

Want to know more?

Here are useful references:

Parallel Roboflow Inference Server

The Roboflow Inference Server supports concurrent processing. This version of the server accepts and processes requests asynchronously, running the web server, preprocessing, auto batching, inference, and post processing all in separate threads to increase server FPS throughput. Separate requests to the same model will be batched on the fly as allowed by $MAX_BATCH_SIZE, and then response handling will occur independently. Images are passed via Python's SharedMemory module to maximize throughput.

These changes result in as much as a 76% speedup on one measured workload.

Note

Currently, only Object Detection, Instance Segmentation, and Classification models are supported by this module. Core models are not enabled.

Important

We require a Roboflow Enterprise License to use this in production. See inference/enterprise/LICENSE.txt for details.

How To Use Concurrent Processing

You can build the server using ./inference/enterprise/parallel/build.sh and run it using ./inference/enterprise/parallel/run.sh.

We provide a container at Docker Hub that you can pull using docker pull roboflow/roboflow-inference-server-gpu-parallel:latest. If you are pulling a pinned tag, be sure to change the $TAG variable in run.sh.

This is a drop-in replacement for the old server, so you can send requests using the same API calls you were using previously.

Performance

We measure and report performance across a variety of different task types by selecting random models found on Roboflow Universe.

Methodology

The following metrics were taken on a machine with eight cores and one GPU. The FPS metrics reflect the best of three trials. The column labeled 0.9.5.parallel reflects the latest concurrent FPS metrics. Instance segmentation metrics are calculated using "mask_decode_mode": "fast" in the request body. Requests are posted concurrently with a parallelism of 1000.
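
For reference, an instance segmentation request of that shape might look roughly like the snippet below. This is a sketch: the /infer/instance_segmentation route and the body fields are assumptions based on the inference HTTP API, and the model ID, API key, and image URL are placeholders.

import requests

payload = {
    "model_id": "<PROJECT>/<VERSION>",  # placeholder Roboflow model ID
    "api_key": "<YOUR_API_KEY>",        # placeholder
    "image": {"type": "url", "value": "https://example.com/image.jpg"},
    "mask_decode_mode": "fast",         # the mode used for the benchmarks below
}

response = requests.post("http://localhost:9001/infer/instance_segmentation", json=payload)
print(response.json())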

Results

| Workspace | Model | Model Type | Split | 0.9.5.rc FPS | 0.9.5.parallel FPS |
|---|---|---|---|---|---|
| senior-design-project-j9gpp | nbafootage/3 | object-detection | train | 30.2 fps | 44.03 fps |
| niklas-bommersbach-jyjff | dart-scorer/8 | object-detection | train | 26.6 fps | 47.0 fps |
| geonu | water-08xpr/1 | instance-segmentation | valid | 4.7 fps | 6.1 fps |
| university-of-bradford | detecting-drusen_1/2 | instance-segmentation | train | 6.2 fps | 7.2 fps |
| fy-project-y9ecd | cataract-detection-viwsu/2 | classification | train | 48.5 fps | 65.4 fps |
| hesunyu | playing-cards-ir0wr/1 | classification | train | 44.6 fps | 57.7 fps |