[DOCS] Updating references to docs #2503

Closed
README.md: 40 changes (20 additions & 20 deletions)
@@ -1,6 +1,6 @@
# OpenVINO™ Model Server

-Model Server hosts models and makes them accessible to software components over standard network protocols: a client sends a request to the model server, which performs model inference and sends a response back to the client. Model Server offers many advantages for efficient model deployment:
+Model Server hosts models and makes them accessible to software components over standard network protocols: a client sends a request to the model server, which performs model inference and sends a response back to the client. Model Server offers many advantages for efficient model deployment:
- Remote inference enables using lightweight clients with only the necessary functions to perform API calls to edge or cloud deployments.
- Applications are independent of the model framework, hardware device, and infrastructure.
- Client applications in any programming language that supports REST or gRPC calls can be used to run inference remotely on the model server.
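To make the request/response flow described above concrete, here is a minimal sketch of a remote inference call over the TensorFlow Serving-style REST API that OVMS exposes. The host, port (8000), model name (`resnet`), and input shape are placeholder assumptions, not values taken from this PR.

```python
import requests

# Dummy zero-valued 224x224x3 input; the real shape and dtype depend on the served model.
instance = [[[0.0] * 3] * 224] * 224
payload = {"instances": [instance]}

# Placeholder endpoint: adjust host, port, and model name to your deployment.
response = requests.post(
    "http://localhost:8000/v1/models/resnet:predict",
    json=payload,
    timeout=10,
)
response.raise_for_status()
print(response.json()["predictions"])
```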
@@ -15,21 +15,21 @@ OpenVINO™ Model Server (OVMS) is a high-performance system for serving mod

![OVMS picture](docs/ovms_high_level.png)

-The models used by the server need to be stored locally or hosted remotely by object storage services. For more details, refer to [Preparing Model Repository](https://docs.openvino.ai/nightly/ovms_docs_models_repository.html) documentation. Model server works inside [Docker containers](https://docs.openvino.ai/nightly/ovms_docs_deploying_server.html#deploying-model-server-in-docker-container), on [Bare Metal](https://docs.openvino.ai/nightly/ovms_docs_deploying_server.html#deploying-model-server-on-baremetal-without-container), and in [Kubernetes environment](https://docs.openvino.ai/nightly/ovms_docs_deploying_server.html#deploying-model-server-in-kubernetes).
-Start using OpenVINO Model Server with a fast-forward serving example from the [Quickstart guide](https://docs.openvino.ai/nightly/ovms_docs_quick_start_guide.html) or explore [Model Server features](https://docs.openvino.ai/nightly/ovms_docs_features.html).
+The models used by the server need to be stored locally or hosted remotely by object storage services. For more details, refer to [Preparing Model Repository](https://docs.openvino.ai/2024/ovms_docs_models_repository.html) documentation. Model server works inside [Docker containers](https://docs.openvino.ai/2024/ovms_docs_deploying_server.html#deploying-model-server-in-docker-container), on [Bare Metal](https://docs.openvino.ai/2024/ovms_docs_deploying_server.html#deploying-model-server-on-baremetal-without-container), and in [Kubernetes environment](https://docs.openvino.ai/2024/ovms_docs_deploying_server.html#deploying-model-server-in-kubernetes).
+Start using OpenVINO Model Server with a fast-forward serving example from the [Quickstart guide](https://docs.openvino.ai/2024/ovms_docs_quick_start_guide.html) or explore [Model Server features](https://docs.openvino.ai/2024/ovms_docs_features.html).

Read [release notes](https://github.com/openvinotoolkit/model_server/releases) to find out what’s new.

### Key features:
-- **[NEW]** [Python code execution](https://docs.openvino.ai/nightly/ovms_docs_python_support_reference.html)
-- **[NEW]** [gRPC streaming](https://docs.openvino.ai/nightly/ovms_docs_streaming_endpoints.html)
-- [MediaPipe graphs serving](https://docs.openvino.ai/nightly/ovms_docs_mediapipe.html)
-- Model management - including [model versioning](https://docs.openvino.ai/nightly/ovms_docs_model_version_policy.html) and [model updates in runtime](https://docs.openvino.ai/nightly/ovms_docs_online_config_changes.html)
-- [Dynamic model inputs](https://docs.openvino.ai/nightly/ovms_docs_shape_batch_layout.html)
-- [Directed Acyclic Graph Scheduler](https://docs.openvino.ai/nightly/ovms_docs_dag.html) along with [custom nodes in DAG pipelines](https://docs.openvino.ai/nightly/ovms_docs_custom_node_development.html)
-- [Metrics](https://docs.openvino.ai/nightly/ovms_docs_metrics.html) - metrics compatible with Prometheus standard
+- **[NEW]** [Python code execution](https://docs.openvino.ai/2024/ovms_docs_python_support_reference.html)
+- **[NEW]** [gRPC streaming](https://docs.openvino.ai/2024/ovms_docs_streaming_endpoints.html)
+- [MediaPipe graphs serving](https://docs.openvino.ai/2024/ovms_docs_mediapipe.html)
+- Model management - including [model versioning](https://docs.openvino.ai/2024/ovms_docs_model_version_policy.html) and [model updates in runtime](https://docs.openvino.ai/2024/ovms_docs_online_config_changes.html)
+- [Dynamic model inputs](https://docs.openvino.ai/2024/ovms_docs_shape_batch_layout.html)
+- [Directed Acyclic Graph Scheduler](https://docs.openvino.ai/2024/ovms_docs_dag.html) along with [custom nodes in DAG pipelines](https://docs.openvino.ai/2024/ovms_docs_custom_node_development.html)
+- [Metrics](https://docs.openvino.ai/2024/ovms_docs_metrics.html) - metrics compatible with Prometheus standard
- Support for multiple frameworks, such as TensorFlow, PaddlePaddle and ONNX
-- Support for [AI accelerators](https://docs.openvino.ai/nightly/about-openvino/compatibility-and-support/supported-devices.html)
+- Support for [AI accelerators](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)
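As a quick illustration of the Prometheus-compatible metrics feature listed above, a plain HTTP GET is enough to inspect the samples. A minimal sketch, assuming the REST interface listens on localhost:8000 and exposes the standard `/metrics` path:

```python
import requests

# Fetch the Prometheus-style metrics text (host and port are placeholders).
metrics = requests.get("http://localhost:8000/metrics", timeout=5).text

# Print only the samples, skipping "# HELP" and "# TYPE" comment lines.
for line in metrics.splitlines():
    if line and not line.startswith("#"):
        print(line)
```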

**Note:** OVMS has been tested on RedHat and Ubuntu. The latest publicly released Docker images are based on Ubuntu and UBI.
They are stored in:
@@ -39,26 +39,26 @@ They are stored in:

## Run OpenVINO Model Server

-A demonstration on how to use OpenVINO Model Server can be found in [our quick-start guide](https://docs.openvino.ai/nightly/ovms_docs_quick_start_guide.html).
+A demonstration on how to use OpenVINO Model Server can be found in [our quick-start guide](https://docs.openvino.ai/2024/ovms_docs_quick_start_guide.html).
For more information on using Model Server in various scenarios, you can check the following guides:

-* [Model repository configuration](https://docs.openvino.ai/nightly/ovms_docs_models_repository.html)
+* [Model repository configuration](https://docs.openvino.ai/2024/ovms_docs_models_repository.html)

-* [Deployment options](https://docs.openvino.ai/nightly/ovms_docs_deploying_server.html)
+* [Deployment options](https://docs.openvino.ai/2024/ovms_docs_deploying_server.html)

-* [Performance tuning](https://docs.openvino.ai/nightly/ovms_docs_performance_tuning.html)
+* [Performance tuning](https://docs.openvino.ai/2024/ovms_docs_performance_tuning.html)

-* [Directed Acyclic Graph Scheduler](https://docs.openvino.ai/nightly/ovms_docs_dag.html)
+* [Directed Acyclic Graph Scheduler](https://docs.openvino.ai/2024/ovms_docs_dag.html)

-* [Custom nodes development](https://docs.openvino.ai/nightly/ovms_docs_custom_node_development.html)
+* [Custom nodes development](https://docs.openvino.ai/2024/ovms_docs_custom_node_development.html)

-* [Serving stateful models](https://docs.openvino.ai/nightly/ovms_docs_stateful_models.html)
+* [Serving stateful models](https://docs.openvino.ai/2024/ovms_docs_stateful_models.html)

* [Deploy using a Kubernetes Helm Chart](https://github.com/openvinotoolkit/operator/tree/main/helm-charts/ovms)

* [Deployment using Kubernetes Operator](https://operatorhub.io/operator/ovms-operator)

-* [Using binary input data](https://docs.openvino.ai/nightly/ovms_docs_binary_input.html)
+* [Using binary input data](https://docs.openvino.ai/2024/ovms_docs_binary_input.html)
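For the binary input guide linked in the last item, the idea is to send the encoded image bytes instead of a numeric tensor; with the REST API this is done via a base64-wrapped `b64` field. A sketch under assumed names (model `resnet`, port 8000, local file `zebra.jpeg`):

```python
import base64
import requests

# Base64-encode the raw JPEG bytes; the server decodes them when the model
# input accepts binary data (file path is a placeholder).
with open("zebra.jpeg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

payload = {"instances": [{"b64": encoded}]}
response = requests.post(
    "http://localhost:8000/v1/models/resnet:predict",  # placeholder endpoint
    json=payload,
    timeout=10,
)
print(response.json())
```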



@@ -72,7 +72,7 @@ For more information on using Model Server in various scenarios you can check th

* [RESTful API](https://restfulapi.net/)

-* [Benchmarking results](https://docs.openvino.ai/nightly/openvino_docs_performance_benchmarks.html)
+* [Benchmarking results](https://docs.openvino.ai/2024/openvino_docs_performance_benchmarks.html)

* [Speed and Scale AI Inference Operations Across Multiple Architectures](https://techdecoded.intel.io/essentials/speed-and-scale-ai-inference-operations-across-multiple-architectures/?elq_cid=3646480_ts1607680426276&erpm_id=6470692_ts1607680426276) - webinar recording

demos/python_demos/clip_image_classification/README.md: 2 changes (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
# CLIP image classification {#ovms_demo_clip_image_classification}

-Image classification demo using multi-modal CLIP model for inference and [Python code](https://docs.openvino.ai/nightly/ovms_docs_python_support_reference.html) for pre and postprocessing.
+Image classification demo using multi-modal CLIP model for inference and [Python code](https://docs.openvino.ai/2024/ovms_docs_python_support_reference.html) for pre and postprocessing.
The client sends a request with an image and input labels to the graph and receives the label with the highest probability. The preprocessing Python node is executed first and prepares the input vector based on the user inputs from the request. The inputs are then used to obtain a similarity matrix from inference on the CLIP model. Finally, the postprocessing Python node extracts the label with the highest score among the input labels and sends it back to the user.

The demo is based on this [CLIP notebook](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/228-clip-zero-shot-image-classification/228-clip-zero-shot-classification.ipynb)
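A rough sketch of what a client for this graph could look like, using the KServe gRPC API through `tritonclient`. The address, graph name (`python_model`), and tensor names (`image`, `input_labels`, `output_label`) are illustrative guesses, not taken from the demo's configuration:

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Placeholder address; the demo's actual gRPC port may differ.
client = grpcclient.InferenceServerClient("localhost:9000")

# Raw image bytes and candidate labels, packed as BYTES tensors.
with open("cat.jpeg", "rb") as f:
    image = np.array([f.read()], dtype=np.object_)
labels = np.array([b"cat", b"dog", b"zebra"], dtype=np.object_)

# Tensor names are hypothetical -- check the demo's graph config.
inputs = [
    grpcclient.InferInput("image", [1], "BYTES"),
    grpcclient.InferInput("input_labels", [3], "BYTES"),
]
inputs[0].set_data_from_numpy(image)
inputs[1].set_data_from_numpy(labels)

result = client.infer("python_model", inputs)
print(result.as_numpy("output_label"))
```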