Update README.md - mivisionx_inference_analyzer (#501)
* Update README.md

* Readme Updates - Codacy fix

Co-authored-by: Kiriti Gowda <kiriti.nageshgowda@amd.com>
LakshmiKumar23 and kiritigowda authored May 11, 2021
1 parent a991813 commit 2e2d756
apps/mivisionx_inference_analyzer/README.md · 57 additions, 51 deletions
MIVisionX provides developers with [docker images](https://hub.docker.com/u/mivisionx).

* Start docker with display

```
% sudo docker pull mivisionx/ubuntu-16.04:latest
% xhost +local:root
% sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host --env DISPLAY=unix$DISPLAY --privileged --volume $XAUTH:/root/.Xauthority --volume /tmp/.X11-unix/:/tmp/.X11-unix mivisionx/ubuntu-16.04:latest
```

* Test display with MIVisionX sample

```
% export PATH=$PATH:/opt/rocm/mivisionx/bin
% export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/mivisionx/lib
% runvx /opt/rocm/mivisionx/samples/gdf/canny.gdf
```

* Run [Samples](#samples)

### Command Line Interface (CLI)

```
usage: python3 mivisionx_inference_analyzer.py [-h]
--model_format MODEL_FORMAT
--model_name MODEL_NAME
--model MODEL
```
### Graphical User Interface (GUI)

```
usage: python3 mivisionx_inference_analyzer.py
```

<p align="center"><img width="75%" src="../../docs/images/analyzer-4.png" /></p>

* **Step 1:** Clone MIVisionX Inference Analyzer Project

```
% cd && mkdir sample-1 && cd sample-1
% git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
% cd MIVisionX/apps/mivisionx_inference_analyzer/
```

**Note:**

+ MIVisionX needs to be pre-installed
+ MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
+ ONNX model conversion requires ONNX install using `pip install onnx`

* **Step 2:** Download pre-trained SqueezeNet ONNX model from [ONNX Model Zoo](https://github.com/onnx/models#open-neural-network-exchange-onnx-model-zoo) - [SqueezeNet Model](https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz)

```
% wget https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
% tar -xvf squeezenet.tar.gz
```

**Note:** pre-trained model - `squeezenet/model.onnx`


+ View inference analyzer usage

```
% cd ~/sample-1/MIVisionX/apps/mivisionx_inference_analyzer/
% python3 mivisionx_inference_analyzer.py -h
```
+ Run SqueezeNet Inference Analyzer
```
% python3 mivisionx_inference_analyzer.py --model_format onnx --model_name SqueezeNet --model ~/sample-1/squeezenet/model.onnx --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-1/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
```
<p align="center"><img width="100%" src="../../docs/images/sample-1-4.png" /></p>
* **Step 1:** Clone MIVisionX Inference Analyzer Project
```
% cd && mkdir sample-2 && cd sample-2
% git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
% cd MIVisionX/apps/mivisionx_inference_analyzer/
```
**Note:**
+ MIVisionX needs to be pre-installed
+ MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
* **Step 2:** Download pre-trained VGG 16 caffe model - [VGG_ILSVRC_16_layers.caffemodel](http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel)
```
% wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
```
* **Step 3:** Use the command below to run the inference analyzer
+ View inference analyzer usage
```
% cd ~/sample-2/MIVisionX/apps/mivisionx_inference_analyzer/
% python3 mivisionx_inference_analyzer.py -h
```
+ Run VGGNet-16 Inference Analyzer
```
% python3 mivisionx_inference_analyzer.py --model_format caffe --model_name VggNet-16-Caffe --model ~/sample-2/VGG_ILSVRC_16_layers.caffemodel --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-2/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
```
<p align="center"><img width="100%" src="../../docs/images/sample-2-2.png" /></p>
* **Step 1:** Clone MIVisionX Inference Analyzer Project
```
% cd && mkdir sample-3 && cd sample-3
% git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
% cd MIVisionX/apps/mivisionx_inference_analyzer/
```
**Note:**
+ MIVisionX needs to be pre-installed
+ MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
+ NNEF model conversion requires [NNEF python parser](https://github.com/KhronosGroup/NNEF-Tools/tree/master/parser#nnef-parser-project) installed
* **Step 2:** Download pre-trained VGG 16 NNEF model
```
% mkdir ~/sample-3/vgg16
% cd ~/sample-3/vgg16
% wget https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz
% tar -xvf vgg16.onnx.nnef.tgz
```
* **Step 3:** Use the command below to run the inference analyzer
+ View inference analyzer usage
```
% cd ~/sample-3/MIVisionX/apps/mivisionx_inference_analyzer/
% python3 mivisionx_inference_analyzer.py -h
```
+ Run VGGNet-16 Inference Analyzer
```
% python3 mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
```
* **Preprocessing the model:** Use the `--add`/`--multiply` options to preprocess the input images
```
% python3 mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes --add [-2.1179,-2.0357,-1.8044] --multiply [0.0171,0.0175,0.0174]
```
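The `--add`/`--multiply` values in the command above can be sketched as the standard ImageNet normalization `(x/255 - mean) / std` rewritten as `x * multiply + add` (assuming the tool applies the multiply before the add; the mean/std constants below are the usual ImageNet ones, not taken from this README):

```python
# Derive the per-channel --multiply and --add values from the standard
# ImageNet mean/std, assuming inputs are 8-bit pixels in [0, 255]:
#   (x/255 - mean) / std  ==  x * (1 / (255 * std)) + (-mean / std)
IMAGENET_MEAN = [0.485, 0.456, 0.406]  # assumed constants, per channel (R, G, B)
IMAGENET_STD = [0.229, 0.224, 0.225]

multiply = [round(1 / (255 * s), 4) for s in IMAGENET_STD]
add = [round(-m / s, 4) for m, s in zip(IMAGENET_MEAN, IMAGENET_STD)]

print(multiply)  # [0.0171, 0.0175, 0.0174]
print(add)       # [-2.1179, -2.0357, -1.8044]
```

These round to exactly the `--multiply [0.0171,0.0175,0.0174]` and `--add [-2.1179,-2.0357,-1.8044]` values passed on the command line.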
