H/graph (#11)
* nnir_to_openvx print (ROCm#103)

* Ubuntu 18.04 warnings fix (ROCm#104)

* Support for ONNX V1.3 (ROCm#101)

ONNX 1.3 support for ResNet50

* adding grouped convolution and updating readme (ROCm#105)

* Update README.md

* Update convolution_layer.cpp

* Loom Sample & Readme Updates (ROCm#106)

* Loom Readme updates

* Loom Samples Added

* Samples Readme Updates

* ROCm Version Updated

* Readme updates (ROCm#107)

* loom logo added

* logo updates for loom

* Samples readme updates

* Main readme updates

* WinML Readme updates

* OpenVX Readme updates

* loom readme updates

* Apps readme updates

* updates (ROCm#108)

* Extensions readme updates

* main readme update

* loom readme updates

* loom link updates

* updates to loom

* updates

* Set up script updates

* YoloV2 Fix (ROCm#109)

* cloud inference updates (ROCm#111)

* client app images added

* Readme for Cloud Application

* Cloud Inference Fix (ROCm#112)

* Server Rename fix

* Readme Link Fix

* Client Readme updates

* Server usage help added

* Model Compiler Path Set

* Model Compiler Scripts updated

* Default Model Compiler Path Added

* Server Readme fix

* Cloud Inference readme fix

* Server/Client bug fix

* Cloud Inference Fix (ROCm#113)

* Cloud Inference - Server Help (ROCm#114)

* Cloud Inference Application - Graphs & Enhancements (ROCm#115)

* graph added/polished

* Update annInferenceApp.pro

* Update inference_receiver.h

* Update inference_viewer.cpp

* Update inference_viewer.cpp

* Update perf_graph.ui

* Update inference_receiver.cpp

* Title update

* Update inference_receiver.cpp

* onnx_to_nnir help text fix (ROCm#118)

* bug fix

* bug fix

* bug fix

* Update perf_chart.cpp

* bug fix
hansely123 authored and japarada committed May 14, 2019
1 parent b59a87d commit ed44b88
Showing 46 changed files with 581 additions and 152 deletions.
10 changes: 5 additions & 5 deletions MIVisionX-setup.py
@@ -1,7 +1,7 @@
 __author__ = "Kiriti Nagesh Gowda"
 __copyright__ = "Copyright 2018, AMD Radeon MIVisionX setup"
 __license__ = "MIT"
-__version__ = "0.9.93"
+__version__ = "1.0.0"
 __maintainer__ = "Kiriti Nagesh Gowda"
 __email__ = "Kiriti.NageshGowda@amd.com"
 __status__ = "beta"
@@ -29,9 +29,9 @@
 status, userName = commands.getstatusoutput("whoami")

 if setupDir == '':
-    setupDir_deps = '~/deps'
+    setupDir_deps = '~/mivisionx-deps'
 else:
-    setupDir_deps = setupDir+'/deps'
+    setupDir_deps = setupDir+'/mivisionx-deps'

 # setup for CentOS or Ubuntu
 linuxSystemInstall_check = '--nogpgcheck'
@@ -62,8 +62,8 @@
 print("\nMIVisionX Dependencies Installation\n")
 os.system('sudo -v')
 os.system('sudo '+linuxFlag+' '+linuxSystemInstall+' -y '+linuxSystemInstall_check+' install cmake git wget unzip')
-os.system('(cd '+setupDir+'; mkdir deps)')
-os.system('(cd '+setupDir+'; mkdir deps)')
+os.system('(cd '+setupDir+'; mkdir mivisionx-deps)')
+os.system('(cd '+setupDir+'; mkdir mivisionx-deps)')
 os.system('(cd '+deps_dir+'; git clone https://github.com/RadeonOpenCompute/rocm-cmake.git )')
 os.system('(cd '+deps_dir+'; git clone https://github.com/ROCmSoftwarePlatform/MIOpenGEMM.git )')
 os.system('(cd '+deps_dir+'; wget https://github.com/ROCmSoftwarePlatform/MIOpen/archive/'+MIOpenVersion+'.zip )')
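The hunk above renames the dependency folder from `deps` to `mivisionx-deps`, so the install location no longer collides with a generic `deps` folder from other projects. A minimal sketch of the directory-selection logic (the helper name `deps_dir` is hypothetical, not part of the script):

```python
import os

def deps_dir(setup_dir: str = "") -> str:
    # Mirror the setup script's behavior: default to the home
    # directory when no directory option is given, otherwise
    # place the folder under the user-specified path.
    if setup_dir == "":
        return os.path.expanduser("~/mivisionx-deps")
    return setup_dir + "/mivisionx-deps"

print(deps_dir("/opt/build"))  # → /opt/build/mivisionx-deps
```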
15 changes: 8 additions & 7 deletions README.md
@@ -7,7 +7,7 @@ MIVisionX toolkit is a comprehensive computer vision and machine intelligence li

 * [AMD OpenVX](#amd-openvx)
 * [AMD OpenVX Extensions](#amd-openvx-extensions)
-  * [Loom 360 Video Stitch Library](amd_openvx_extensions/amd_loomsl#radeon-loom-stitching-library-vx_loomsl)
+  * [Loom 360 Video Stitch Library](amd_openvx_extensions/amd_loomsl)
   * [Neural Net Library](amd_openvx_extensions/amd_nn#openvx-neural-network-extension-library-vx_nn)
   * [OpenCV Extension](amd_openvx_extensions/amd_opencv#amd-opencv-extension)
   * [WinML Extension](amd_openvx_extensions/amd_winml#amd-winml-extension)
@@ -32,7 +32,7 @@ AMD OpenVX ([amd_openvx](amd_openvx#amd-openvx-amd_openvx)) is a highly optimize
 ## AMD OpenVX Extensions
 The OpenVX framework provides a mechanism to add new vision functions to OpenVX by 3rd party vendors. This project has below OpenVX [modules](amd_openvx_extensions#amd-openvx-extensions-amd_openvx_extensions) and utilities to extend [amd_openvx](amd_openvx#amd-openvx-amd_openvx) project, which contains the AMD OpenVX Core Engine.

-* [amd_loomsl](amd_openvx_extensions/amd_loomsl#radeon-loom-stitching-library-vx_loomsl): AMD Radeon LOOM stitching library for live 360 degree video applications
+* [amd_loomsl](amd_openvx_extensions/amd_loomsl): AMD Radeon Loom stitching library for live 360 degree video applications
 * [amd_nn](amd_openvx_extensions/amd_nn#openvx-neural-network-extension-library-vx_nn): OpenVX neural network module
 * [amd_opencv](amd_openvx_extensions/amd_opencv#amd-opencv-extension): OpenVX module that implements a mechanism to access OpenCV functionality as OpenVX kernels
 * [amd_winml](amd_openvx_extensions/amd_winml#amd-winml-extension): WinML extension will allow developers to import a pre-trained ONNX model into an OpenVX graph and add hundreds of different pre & post processing `vision`/`generic`/`user-defined` functions, available in OpenVX and OpenCV interop, to the input and output of the neural net model. This will allow developers to build an end to end application for inference.
@@ -55,7 +55,7 @@ Neural Net Model Compiler & Optimizer ([model_compiler](model_compiler#neural-ne

 ## Toolkit

-[MIVisionX Toolkit](toolkit#mivisionx-toolkit), is a comprehensive set of help tools for neural net creation, development, training and deployment. The Toolkit provides you with help tools to design, develop, quantize, prune, retrain, and infer your neural network work in any framework. The Toolkit is designed to help you deploy your work to any AMD or 3rd party hardware, from embedded to servers.
+[MIVisionX Toolkit](toolkit#mivisionx-toolkit), is a comprehensive set of help tools for neural net creation, development, training, and deployment. The Toolkit provides you with helpful tools to design, develop, quantize, prune, retrain, and infer your neural network work in any framework. The Toolkit is designed to help you deploy your work to any AMD or 3rd party hardware, from embedded to servers.

 MIVisionX provides you with tools for accomplishing your tasks throughout the whole neural net life-cycle, from creating a model to deploying them for your target platforms.

@@ -70,7 +70,7 @@ MIVisionX provides you with tools for accomplishing your tasks throughout the wh
 * GPU: [GFX7 or above](https://rocm.github.io/hardware.html) [optional]
 * APU: Carrizo or above [optional]

-**Note:** Some modules in MIVisionX can be build for CPU only. To take advantage of advanced features and modules we recommend using AMD GPUs or AMD APUs.
+**Note:** Some modules in MIVisionX can be built for CPU only. To take advantage of advanced features and modules we recommend using AMD GPUs or AMD APUs.

 ### Windows
 * Windows 10
@@ -93,7 +93,7 @@ MIVisionX provides you with tools for accomplishing your tasks throughout the wh

 #### Prerequisites setup script for Linux - `MIVisionX-setup.py`

-For convenience of the developer, we here provide the setup script which will install all the dependencies required by this project.
+For the convenience of the developer, we here provide the setup script which will install all the dependencies required by this project.

 **MIVisionX-setup.py** builds all the prerequisites required by MIVisionX. The setup script creates a deps folder and installs all the prerequisites, this script only needs to be executed once. If directory option is not given, the script will install deps folder in the home directory(~/) by default, else in the user specified location.

@@ -153,6 +153,7 @@ sudo yum install mivisionx
 * executables placed in `/opt/rocm/mivisionx/bin` and libraries in `/opt/rocm/mivisionx/lib`
 * OpenVX and module header files into `/opt/rocm/mivisionx/include`
 * model compiler, toolkit, & samples placed in `/opt/rocm/mivisionx`
+* Package (.deb & .rpm) install requires OpenCV v3.4.0 to execute AMD OpenCV extensions

 #### Using `MIVisionX-setup.py` and `CMake` on Linux (Ubuntu `16.04`/`18.04` or CentOS `7.5`/`7.6`) with ROCm
 * Install [ROCm](https://rocm.github.io/ROCmInstall.html)
@@ -175,7 +176,7 @@ make -j8
 sudo make install
 ````
 **Note:**
-* vx_winml is not supported on linux
+* vx_winml is not supported on Linux
 * the installer will copy all executables into `/opt/rocm/mivisionx/bin` and libraries into `/opt/rocm/mivisionx/lib`
 * the installer also copies all the OpenVX and module header files into `/opt/rocm/mivisionx/include` folder

@@ -284,7 +285,7 @@ sudo docker run -it -v /home/:/root/hostDrive/ --device=/dev/kfd --device=/dev/d
 ### Tested configurations
 * Windows 10
 * Linux: Ubuntu - `16.04`/`18.04` & CentOS - `7.5`/`7.6`
-* ROCm: rocm-dkms - `2.2.31`
+* ROCm: rocm-dkms - `2.3.14`
 * rocm-cmake - [github master:ac45c6e](https://github.com/RadeonOpenCompute/rocm-cmake/tree/master)
 * MIOpenGEMM - [1.1.5](https://github.com/ROCmSoftwarePlatform/MIOpenGEMM/releases/tag/1.1.5)
 * MIOpen - [1.7.1](https://github.com/ROCmSoftwarePlatform/MIOpen/releases/tag/1.7.1)
2 changes: 1 addition & 1 deletion amd_openvx/README.md
@@ -10,7 +10,7 @@ The OpenVX framework provides a mechanism to add new vision functions to OpenVX
 * **vx_nn**: OpenVX neural network module that was built on top of [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen)
 * **vx_opencv**: OpenVX module that implemented a mechanism to access OpenCV functionality as OpenVX kernels

-This software is provided under a MIT-style license, see the file COPYRIGHT.txt for details.
+This software is provided under an MIT-style license, see the file COPYRIGHT.txt for details.

 ## Features
 * The code is highly optimized for both x86 CPU and OpenCL for GPU
1 change: 1 addition & 0 deletions amd_openvx/openvx/include/VX/vx_khr_nn.h
@@ -186,6 +186,7 @@ typedef struct _vx_nn_convolution_params_t
     vx_enum down_scale_size_rounding; /*!< \brief Rounding method for calculating output dimensions. See <tt>\ref vx_nn_rounding_type_e</tt> */
     vx_size dilation_x; /*!< \brief “inflate” the kernel by inserting zeros between the kernel elements in the x direction. The value is the number of zeros to insert.*/
     vx_size dilation_y; /*!< \brief “inflate” the kernel by inserting zeros between the kernel elements in the y direction. The value is the number of zeros to insert.*/
+    vx_size group; /*!< \brief Count for grouped Convolution. */
 } vx_nn_convolution_params_t;
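The header above defines dilation as the number of zeros inserted between kernel elements, whereas MIOpen takes a dilation factor (hence the `params.dilation_y + 1` translation in this commit's `convolution_layer.cpp` changes). A quick sanity check of those semantics (helper names are illustrative, not part of the header):

```python
def effective_kernel_extent(k: int, zeros: int) -> int:
    # A kernel of size k with `zeros` zeros inserted between
    # elements spans k + (k - 1) * zeros input samples.
    return k + (k - 1) * zeros

def miopen_dilation(zeros: int) -> int:
    # MIOpen's dilation factor is the OpenVX zero count plus one.
    return zeros + 1

assert effective_kernel_extent(3, 0) == 3   # ordinary 3x3 kernel
assert effective_kernel_extent(3, 1) == 5   # one zero between elements
assert miopen_dilation(1) == 2
```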
15 changes: 12 additions & 3 deletions amd_openvx_extensions/README.md
@@ -1,8 +1,17 @@
-# AMD OpenVX Extensions (amd_openvx_extensions)
+# AMD OpenVX Extensions

 The OpenVX framework provides a mechanism to add new vision functions to OpenVX by 3rd party vendors. This project has below OpenVX modules and utilities to extend [AMD OpenVX](../amd_openvx#amd-openvx-amd_openvx) (amd_openvx) project, which contains the AMD OpenVX Core Engine.

-* [amd_loomsl](amd_loomsl#radeon-loom-stitching-library-vx_loomsl): AMD Radeon LOOM stitching library for live 360 degree video applications
-* [amd_nn](amd_nn#openvx-neural-network-extension-library-vx_nn): OpenVX neural network module
+* [amd_loomsl](amd_loomsl): AMD Radeon LOOM stitching library for live 360 degree video applications
+
+<p align="center"><img width="80%" src="../docs/images/loom-2.jpg" /></p>
+
+* [amd_nn](amd_nn#openvx-neural-network-extension-library-vx_nn): OpenVX neural network module. Learn more about neural net workflow in [Neural Net Model Compiler & Optimizer](../model_compiler#neural-net-model-compiler--optimizer)
+
+<p align="center"><img width="80%" src="../docs/images/modelCompilerWorkflow.png" /></p>
+
 * [amd_opencv](amd_opencv#amd-opencv-extension): OpenVX module that implements a mechanism to access OpenCV functionality as OpenVX kernels
+
 * [amd_winml](amd_winml#amd-winml-extension): WinML extension will allow developers to import a pre-trained ONNX model into an OpenVX graph and add hundreds of different pre & post processing `vision`/`generic`/`user-defined` functions, available in OpenVX and OpenCV interop, to the input and output of the neural net model. This will allow developers to build an end to end application for inference.
+
+<p align="center"><img width="80%" src="../docs/images/winmlFrameWorks.png" /></p>
35 changes: 26 additions & 9 deletions amd_openvx_extensions/amd_loomsl/README.md
@@ -1,21 +1,38 @@
-# Radeon Loom Stitching Library (vx_loomsl)
-Radeon Loom Stitching Library (beta preview) is a highly optimized library for 360 degree video stitching applications. This library consists of:
-* *Live Stitch API*: stitching framework built on top of OpenVX kernels (see [live_stitch_api.h](live_stitch_api.h) for API)
-* *OpenVX module* [***vx_loomsl***]: additional OpenVX kernels needed for 360 degree video stitching
+<p align="center"><img src="../../docs/images/LOOM_LOGO_250X125.png" /></p>
+
+# Radeon Loom Stitching Library
+Radeon Loom Stitching Library (beta preview) is a highly optimized library for 360-degree video stitching applications. This library consists of:
+* **Live Stitch API**: stitching framework built on top of OpenVX kernels. Look into [live_stitch_api.h](live_stitch_api.h) for information on the API.
+* **vx_loomsl**: additional OpenVX kernels needed for 360 degree video stitching

 The [loom_shell](../../utilities/loom_shell/README.md) command-line tool can be used to build your application quickly. It provides direct access to Live Stitch API by encapsulating the calls to enable rapid prototyping.

-This software is provided under a MIT-style license, see the file COPYRIGHT.txt for details.
+This software is provided under an MIT-style license, see the file COPYRIGHT.txt for details.

 [![Loom Stitch](../../docs/images/loom-4.png)](https://youtu.be/E8pPU04iZjw)

 ## Features
-* Real-time live 360 degree video stitching optimized for Radeon Pro Graphics
-* Upto 31 cameras
-* Upto 7680x3840 output resolution
+
+<p align="center"><img width="80%" src="../../docs/images/loom-2.jpg" /></p>
+
+* Real-time live 360-degree video stitching optimized for Radeon Pro Graphics
+* Up to 31 cameras
+* Up to 7680x3840 output resolution
 * RGB and YUV 4:2:2 image formats
-* Overlay other videos on top of stitched video
+* Overlay other videos on top of the stitched video
 * Support for 3rd party *LoomIO* plug-ins for camera capture and stitched output
 * Support PtGui project export/import for camera calibration

+## Samples
+
+[Samples](../../samples#loom-360-stitch---radeon-loom-360-stitch-samples) to run 360 stitch on calibrated images is provided in the samples folder. The samples use [Loom Shell](../../utilities/loom_shell#radeon-loomshell), an interpreter that enables stitching 360-degree videos using a script. It provides direct access to Live Stitch API by encapsulating the calls to enable rapid prototyping.
+
+* [Sample - 1](../../samples#sample---1)
+* [Sample - 2](../../samples#sample---2)
+* [Sample - 3](../../samples#sample---3)
+
+**Note:** The output stitched image is saved as LoomOutputStitch.bmp
+
 ## Live Stitch API: Simple Example
 Let's consider a 360 rig that has 3 1080p cameras with Circular FishEye lenses.
 The below example demonstrates how to stitch images from these cameras into a 4K Equirectangular buffer.
2 changes: 2 additions & 0 deletions amd_openvx_extensions/amd_nn/README.md
@@ -14,7 +14,9 @@ vx_nn is an OpenVX Neural Network extension module. This implementation supports
 | Deconvolution|vxDeconvolutionLayer|org.khronos.nn_extension.deconvolution_layer |
 | Fully Connected|vxFullyConnectedLayer|org.khronos.nn_extension.fully_connected_layer |
 | Local Response Normalization|vxNormalizationLayer|org.khronos.nn_extension.normalization_layer |
+| Permute|vxPermuteLayer|com.amd.nn_extension.permute_layer |
 | Pooling|vxPoolingLayer|org.khronos.nn_extension.pooling_layer |
+| Prior Box|vxPriorBoxLayer|com.amd.nn_extension.prior_box_layer|
 | ROI Pooling|vxROIPoolingLayer|org.khronos.nn_extension.roi_pooling_layer |
 | Scale|vxScaleLayer|com.amd.nn_extension.scale_layer |
 | Slice|vxSliceLayer|com.amd.nn_extension.slice_layer |
26 changes: 23 additions & 3 deletions amd_openvx_extensions/amd_nn/src/convolution_layer.cpp
@@ -107,7 +107,7 @@ static vx_status VX_CALLBACK validateConvolutionLayer(vx_node node, const vx_ref
     if((type != VX_TYPE_FLOAT32) && (type != VX_TYPE_FLOAT16)) return ERRMSG(VX_ERROR_INVALID_TYPE, "validate: conv: #4 type=%d (must be float/float16)\n", type);
     ERROR_CHECK_STATUS(vxQueryTensor((vx_tensor)parameters[4], VX_TENSOR_DIMS, output_dims, sizeof(output_dims)));

-    if(output_dims[3] != input_dims[3] || input_dims[2] != weights_dims[2] || output_dims[2] != weights_dims[3])
+    if(output_dims[3] != input_dims[3] || output_dims[2] != weights_dims[3])
         return ERRMSG(VX_ERROR_INVALID_DIMENSION, "validate: conv: input[%ldx%ldx%ldx%ld] weights[%ldx%ldx%ldx%ld] output[%ldx%ldx%ldx%ld]\n",
             input_dims[3], input_dims[2], input_dims[1], input_dims[0],
             weights_dims[3], weights_dims[2], weights_dims[1], weights_dims[0],
@@ -173,15 +173,26 @@ static vx_status VX_CALLBACK initializeConvolutionLayer(vx_node node, const vx_r

     vx_size pad_h, pad_w;
     vx_size dilation_w, dilation_h;
+    vx_size groupCount;
     vx_enum downscale_size_rounding, overflow_policy, rounding_policy;


     pad_h = params.padding_y; pad_w = params.padding_x;
     downscale_size_rounding = params.down_scale_size_rounding;
     overflow_policy = params.overflow_policy;
     rounding_policy = params.rounding_policy;
     dilation_h = params.dilation_y + 1;
     dilation_w = params.dilation_x + 1;
-    miopenConvolutionMode_t mode = miopenConvolution;
+    groupCount = params.group;
+    miopenConvolutionMode_t mode;
+    if(groupCount == 1)
+    {
+        mode = miopenConvolution;
+    }
+    else
+    {
+        mode = miopenGroupConv;
+    }

     // override default cbr_mode by NN_MIOPEN_CBR_MODE environment variable.
     vx_int32 nn_cbr_mode = getEnvironmentVariable("NN_MIOPEN_CBR_MODE");
@@ -203,7 +214,13 @@ static vx_status VX_CALLBACK initializeConvolutionLayer(vx_node node, const vx_r
         vx_size num_dims;
         ERROR_CHECK_STATUS(vxQueryTensor((vx_tensor)parameters[2], VX_TENSOR_NUMBER_OF_DIMS, &num_dims, sizeof(vx_size)));
         ERROR_CHECK_STATUS(vxQueryTensor((vx_tensor)parameters[2], VX_TENSOR_DIMS, bias_dims, num_dims * sizeof(vx_size)));
-    }
+    }
+
+    if(input_dims[2] != (weights_dims[2] * groupCount))
+        return ERRMSG(VX_ERROR_INVALID_DIMENSION, "initialize: conv: input[%ldx%ldx%ldx%ld] weights[%ldx%ldx%ldx%ld] output[%ldx%ldx%ldx%ld]\n",
+            input_dims[3], input_dims[2], input_dims[1], input_dims[0],
+            weights_dims[3], weights_dims[2], weights_dims[1], weights_dims[0],
+            output_dims[3], output_dims[2], output_dims[1], output_dims[0]);

     vx_size stride_h, stride_w;
     vx_size kernel_h, kernel_w;
@@ -251,6 +268,9 @@ static vx_status VX_CALLBACK initializeConvolutionLayer(vx_node node, const vx_r
     ERROR_CHECK_MIOPEN_STATUS(miopenCreateConvolutionDescriptor(&data->conv_desc));
     ERROR_CHECK_MIOPEN_STATUS(miopenInitConvolutionDescriptor(data->conv_desc, mode, pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w));

+    //Grouped Convolution
+    ERROR_CHECK_MIOPEN_STATUS(miopenSetConvolutionGroupCount(data->conv_desc, groupCount));
+
     //Memory Declaration.
     ERROR_CHECK_STATUS(vxQueryTensor((vx_tensor)parameters[0], VX_TENSOR_BUFFER_OPENCL, &data->input_mem, sizeof(data->input_mem)));
    ERROR_CHECK_STATUS(vxQueryTensor((vx_tensor)parameters[4], VX_TENSOR_BUFFER_OPENCL, &data->output_mem, sizeof(data->output_mem)));
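The grouped-convolution changes in `convolution_layer.cpp` boil down to two rules on NCHW tensors (`dims[2]` is the channel count): the weight tensor carries only `C / group` channels per filter, so the old `input_dims[2] != weights_dims[2]` check had to be relaxed to `input_dims[2] != weights_dims[2] * groupCount`, and the MIOpen mode switches to grouped convolution whenever `group > 1`. A small sketch of the same checks, with hypothetical helper names (this is not the extension's API):

```python
def select_mode_and_validate(input_c, weights_kcyx, group):
    """Mimic the initialize-time checks added in this commit.

    weights_kcyx = (K, C_per_group, Y, X); input_c is the input
    tensor's channel count (NCHW dims[2]).
    """
    _, c_per_group, _, _ = weights_kcyx
    if input_c != c_per_group * group:
        raise ValueError("input channels must equal weight channels * group")
    # mirrors the miopenConvolution vs miopenGroupConv selection
    return "convolution" if group == 1 else "group_convolution"

# 64 input channels split into 8 groups: each group convolves 8 channels
assert select_mode_and_validate(64, (128, 8, 3, 3), 8) == "group_convolution"
assert select_mode_and_validate(64, (128, 64, 3, 3), 1) == "convolution"
```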
2 changes: 1 addition & 1 deletion amd_openvx_extensions/amd_winml/README.md
@@ -3,7 +3,7 @@ The AMD WinML (vx_winml) is an OpenVX module that implements a mechanism to acce

 <p align="center"><img width="80%" src="../../docs/images/winmlFrameWorks.png" /></p>

-WinML extension will allow developers to import a pre-trained ONNX model into an OpenVX graph and add hundreds of different pre & post processing `vision`/`generic`/`user-defined` functions, available in OpenVX and OpenCV interop, to the input and output of the neural net model. This will allow developers to build an end to end application for inference.
+The WinML extension will allow developers to import a pre-trained ONNX model into an OpenVX graph and add hundreds of different pre & post processing `vision`/`generic`/`user-defined` functions, available in OpenVX and OpenCV interop, to the input and output of the neural net model. This will allow developers to build an end to end application for inference.

 <p align="center"><img width="100%" src="../../docs/images/winmlRuntime.png" /></p>
2 changes: 1 addition & 1 deletion apps/README.md
@@ -35,4 +35,4 @@ This sample [application](./mivisionx_winml_yolov2#yolov2-using-amd-winml-extens

 * [MIVisionX-Classifier](https://github.com/kiritigowda/MIVisionX-Classifier) - This application runs know CNN image classifiers on live/pre-recorded video stream.
 * [YOLOv2](https://github.com/kiritigowda/YoloV2NCS) - Run tiny yolov2 (20 classes) with AMD's MIVisionX
-* [Traffic Vision](https://github.com/srohit0/trafficVision#traffic-vision) - This app detects cars/buses in a live traffic at a phenomenal 50 frames/sec with HD resolution (1920x1080) using deep learning network Yolo-V2. The model used in the app is optimized for inferencing performnce on AMD-GPUs using MIVisionX toolkit.
+* [Traffic Vision](https://github.com/srohit0/trafficVision#traffic-vision) - This app detects cars/buses in live traffic at a phenomenal 50 frames/sec with HD resolution (1920x1080) using deep learning network Yolo-V2. The model used in the app is optimized for inferencing performance on AMD-GPUs using MIVisionX toolkit.