CVS-46868 documentation fixes (#466)
dtrawins committed Jan 21, 2021
1 parent a5887f5 commit e098dae
Showing 4 changed files with 11 additions and 12 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -28,7 +28,7 @@ A few key features:
- [Model reshaping](docs/shape_and_batch_size.md). The server supports reshaping models in runtime.
- [Directed Acyclic Graph Scheduler](docs/dag_scheduler.md). Connect multiple models to deploy complex processing solutions and reduce the overhead of sending data back and forth.

-**Note:** OVMS has been tested on CentOS* and Ubuntu*. Publically released docker images are based on CentOS.
+**Note:** OVMS has been tested on CentOS* and Ubuntu*. Publicly released docker images are based on CentOS.


## Run OpenVINO Model Server
@@ -78,7 +78,7 @@ Learn more about tests in the [developer guide](docs/developer_guide.md)

* All contributed code must be compatible with the [Apache 2](https://www.apache.org/licenses/LICENSE-2.0) license.

-* All changes needs to have pass linter, unit and functional tests.
+* All changes have to pass style, unit and functional tests.

* All new features need to be covered by tests.

15 changes: 7 additions & 8 deletions deploy/README.md
@@ -10,12 +10,12 @@ inference requests to the running server.

## Installing Helm

-Please refer to: https://helm.sh/docs/intro/install for Helm installation.
+Please refer to the [Helm installation](https://helm.sh/docs/intro/install) guide.

## Model Repository

If you already have a model repository you may use that with this helm chart. If you don't, you can use any model
-from https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/.
+from the [models zoo](https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/).

Model Server requires a repository of models to execute inference requests. For example, you can
use a Google Cloud Storage (GCS) bucket:
@@ -39,9 +39,8 @@ are needed and you can proceed to _Deploy the Model Server_ section.
Bucket permissions can be set with the _GOOGLE_APPLICATION_CREDENTIALS_ environment variable. Please follow the steps below:

* Generate Google service account JSON file with permissions: _Storage Legacy Bucket Reader_, _Storage Legacy Object Reader_, _Storage Object Viewer_. Name a file for example: _gcp-creds.json_
-(you can follow these instructions to create a Service Account and download JSON:
-https://cloud.google.com/docs/authentication/getting-started#creating_a_service_account)
-* Create a Kubernetes secret from this JSON file:
+(you can follow these [instructions](https://cloud.google.com/docs/authentication/getting-started#creating_a_service_account) to create a Service Account and download a JSON file)
+* Create a Kubernetes secret from the JSON file:

$ kubectl create secret generic gcpcreds --from-file gcp-creds.json
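As an illustrative sketch (not part of the chart, with placeholder credential content), this is essentially the manifest that the `kubectl create secret` command above builds: each `--from-file` entry is stored base64-encoded under its filename.

```python
import base64
import json

# Placeholder content standing in for a real gcp-creds.json; an actual
# service-account file contains more fields (private_key, client_email, ...).
creds = json.dumps({"type": "service_account", "project_id": "example-project"})

# Rough equivalent of:
#   kubectl create secret generic gcpcreds --from-file gcp-creds.json
secret_manifest = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "gcpcreds"},
    "type": "Opaque",
    # Each --from-file entry is base64-encoded under its filename.
    "data": {"gcp-creds.json": base64.b64encode(creds.encode()).decode()},
}

# Decoding the stored value recovers the original file content.
assert base64.b64decode(secret_manifest["data"]["gcp-creds.json"]).decode() == creds
```

The same manifest can be rendered for inspection with `kubectl create secret generic gcpcreds --from-file gcp-creds.json --dry-run=client -o yaml` before applying it.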

@@ -86,8 +85,8 @@ $ helm install ovms ovms --set model_name=resnet50-binary-0001,model_path=gs://m

## Deploy Model Server with a Configuration File

-To serve multiple models you can run Model Server with a configuration file as described here:
-https://github.com/openvinotoolkit/model_server/blob/master/docs/docker_container.md#starting-docker-container-with-a-configuration-file
+To serve multiple models you can run Model Server with a
+[configuration file](https://github.com/openvinotoolkit/model_server/blob/master/docs/docker_container.md#starting-docker-container-with-a-configuration-file).
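As a hedged illustration of the multi-model layout (model names and `gs://` paths below are placeholders; the linked document holds the authoritative schema), such a configuration file could look like:

```json
{
  "model_config_list": [
    {
      "config": {
        "name": "resnet",
        "base_path": "gs://models-repository/resnet50"
      }
    },
    {
      "config": {
        "name": "face-detection",
        "base_path": "gs://models-repository/face-detection"
      }
    }
  ]
}
```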

To deploy with a config file:
* create a configuration file named _config.json_ and fill it with proper information
@@ -113,7 +112,7 @@ openvino-model-server LoadBalancer 10.121.14.253 1.2.3.4 8080:3004

The server exposes a gRPC endpoint on port 8080 and a REST endpoint on port 8081.
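The REST endpoint follows the TensorFlow Serving-compatible REST API, so a `GET /v1/models/<name>` request on the REST port reports model status. A small sketch of parsing such a reply (the sample response below is illustrative, not captured from a live server):

```python
import json

# Illustrative status reply in the TensorFlow Serving-compatible format
# returned by e.g. GET http://1.2.3.4:8081/v1/models/resnet (sample data).
SAMPLE_STATUS = """
{
  "model_version_status": [
    {"version": "1", "state": "AVAILABLE",
     "status": {"error_code": "OK", "error_message": ""}}
  ]
}
"""

def served_versions(status_json: str) -> list:
    """Return the model versions reported as AVAILABLE in a status reply."""
    body = json.loads(status_json)
    return [v["version"] for v in body.get("model_version_status", [])
            if v.get("state") == "AVAILABLE"]

# served_versions(SAMPLE_STATUS) returns ["1"]
```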

-Follow the instructions here: https://github.com/openvinotoolkit/model_server/tree/master/example_client#submitting-grpc-requests-based-on-a-dataset-from-a-list-of-jpeg-files
+Follow the [instructions](https://github.com/openvinotoolkit/model_server/tree/master/example_client#submitting-grpc-requests-based-on-a-dataset-from-a-list-of-jpeg-files)
to create an image classification client that can be used to perform inference with models being exposed by the server. For example:
```shell script
$ python jpeg_classification.py --grpc_port 8080 --grpc_address 1.2.3.4 --input_name data --output_name prob
```
2 changes: 1 addition & 1 deletion docs/architecture.md
@@ -14,7 +14,7 @@

**<div align="center">Figure 1: Docker Container (VM or Bare Metal Host)</div>**

-- OpenVINO&trade; Model Server requires the models to be present in the local file system or they could be hosted remotely on object storage services. Both Google Cloud Storage and S3 compatible storage are supported. Refer to [Preparing the Models Repository](./models_repository.md) for more details.
+- OpenVINO&trade; Model Server requires the models to be present in the local file system or they could be hosted remotely on object storage services. Google Cloud, S3 and Azure compatible storage is supported. Refer to [Preparing the Models Repository](./models_repository.md) for more details.

- OpenVINO&trade; Model Server is well suited for deployment in a Kubernetes environment. It can also be hosted on a bare metal server, virtual machine or inside a docker container.

2 changes: 1 addition & 1 deletion docs/ovms_quickstart.md
@@ -1,6 +1,6 @@
# OpenVINO&trade; Model Server Quickstart

-The OpenVINO Model Server requires a trained model in Intermediate Representation (IR) format on which it performs inference. Options to download appropriate models include:
+The OpenVINO Model Server requires a trained model in Intermediate Representation (IR) or ONNX format on which it performs inference. Options to download appropriate models include:

- Downloading models from the [Open Model Zoo](https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/)
- Using the [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert models to the IR format from formats like TensorFlow*, ONNX*, Caffe*, MXNet* or Kaldi*.
