Release 1.10.0 cherry pick round 1 (#9886)
* Fix memset size (#9840)

(cherry picked from commit d012d9f)

* [js/web] do not use nodejs type 'Buffer' in web (#9839)

* [js/web] do not use nodejs type 'Buffer' in web

* resolve comments and validate tests

* remove 'Buffer' in test

(cherry picked from commit a3ebc5e)

* Fix potential data race with OrtValue usage in Python (#9841)

(cherry picked from commit 18fd2cf)
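
For context, a hedged sketch of typical OrtValue usage from the Python API (the model path and the input/output names are placeholders, not taken from this PR); the fix above concerns thread-safety of such values, not the API shape:

import numpy as np
import onnxruntime as ort

# Wrap a numpy array in an OrtValue (CPU-backed by default).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
x_ort = ort.OrtValue.ortvalue_from_numpy(x)

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Mirrors run(): output names first, then the input feed; keeps data as OrtValues
# on both ends to avoid extra copies (availability may vary by ORT version).
outputs = sess.run_with_ort_values(["output"], {"input": x_ort})
print(outputs[0].numpy().shape)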

* [OpenVINO-EP] V3.4 Release with OpenVINO 2021.4.2 LTS Release (#9848)

* Changes to ensure the OpenVINO build goes through on Windows

* Modified Hetero plugin Logic

Modified the Hetero feature logic: with HETERO, for an operator to be marked
as supported in GetCapability(), it must be supported by at least one of the
devices specified in the HETERO device_type.

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* OV updated to 2021.4.2 version

* OV updated to 2021.4.2 version

* Updated OV to version 2021.4.2, the mono download link, and the dotnet version

* Copying Managed nugets in the OpenVINO C# dockerfile

Copying the Managed nuget to the nuget-artifacts directory

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
(cherry picked from commit 0ae0f29)

* no fallback when enforcing explicit EP registration. (#9863)

* no fallback when enforcing explicit EP registration.

* add explicit EP registrations for Python.

(cherry picked from commit 1e9e57d)
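
For reference, a minimal sketch of explicit EP registration from the Python API (the model path is a placeholder; the no-fallback behaviour is paraphrased from the commit title, not from code shown here):

import onnxruntime as ort

# Register execution providers explicitly, in priority order, with CPU as the last resort.
# The commit above removes the implicit fallback when explicit registration is enforced.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
sess = ort.InferenceSession("model.onnx", providers=providers)

# Shows which providers were actually registered for this session.
print(sess.get_providers())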

* LayerNorm: throw an error if the input has no data (#9837)

(cherry picked from commit bf716e6)

* [js/node] npm audit fix (#9861)

(cherry picked from commit 27e337e)

* [python manylinux package] emit a warning if a missing CUDA/TensorRT dependency causes ld_preload to fail and the user tries to register either the CUDA or TensorRT EP (#9872)

* add warning if ld_preload fails for CUDA or TRT when trying to register either provider

* refactor

* change wording from register to create

(cherry picked from commit ec9b0ed)
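
A hedged sketch of the situation this warning targets, using the public Python API (the model path is a placeholder): the manylinux wheel may advertise the CUDA/TensorRT EPs even when ld_preload of their native dependencies failed, and with this change creating such an EP should log a warning that points at the missing dependency.

import onnxruntime as ort

# EPs the installed wheel was built with (build-time capability, not a runtime guarantee).
print(ort.get_available_providers())

# If the CUDA/cuDNN/TensorRT libraries could not be preloaded, EP creation here is
# expected to emit the new warning about the missing dependency.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())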

* QDQ tool modification part2 (#9720)

* Add finetuned qdq options

* Add description

* Add unit tests

* Modify for channel axis

* Remove an overly specific feature; move this implementation to the e2e example

* Add OpTypesSupportPerChannelQuantization

* fix bug for unit test

* Keep flags OpTypesSupportPerChannelQuantization and QDQChannelAxis for internal use

A follow-up PR will fine-tune the code.

* remove unnecessary warning

Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
(cherry picked from commit 0baf687)
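
For orientation, a hedged sketch of driving QDQ quantization from the Python tooling this PR touches (model paths, the input name/shape, and the calibration data are placeholders; the two flags named above are kept for internal use per the commit message, so they appear only as commented-out illustrations):

import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, quantize_static

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a fixed number of random batches; replace with real calibration data."""
    def __init__(self, num_batches=10):
        self._data = (
            {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(num_batches)
        )

    def get_next(self):
        return next(self._data, None)

quantize_static(
    "model.onnx",
    "model.qdq.onnx",
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,  # insert QuantizeLinear/DequantizeLinear node pairs
    per_channel=True,              # per-channel weight quantization where supported
    # extra_options={
    #     "OpTypesSupportPerChannelQuantization": [...],  # internal-use flag per this PR
    #     "QDQChannelAxis": 0,                            # internal-use flag per this PR
    # },
)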

* Cancel transpose optimizer for resize (#9870)

* cancel transpose optimizer for resize

* add UT

* addressing comments

* fix build error

(cherry picked from commit 16bfd3c)

* Add build option to enable cuda profiling (#9875)

(cherry picked from commit 9345894)
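
Assuming a build where the new flag is enabled (for example by passing --cmake_extra_defines onnxruntime_ENABLE_CUDA_PROFILING=ON to the build script; an assumption, not something this commit message states), CUDA kernel timing is expected to surface through the existing profiling API. A minimal sketch with a placeholder model path and input name:

import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_profiling = True  # standard ORT profiler; CUDA kernel events are the new addition

sess = ort.InferenceSession(
    "model.onnx", so, providers=["CUDAExecutionProvider", "CPUExecutionProvider"]
)
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
sess.run(None, {"input": x})  # placeholder input name/shape

trace_path = sess.end_profiling()  # writes a chrome-trace style JSON file
print("profile written to", trace_path)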

Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Maajid khan <n.maajidkhan@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Ye Wang <52801275+wangyems@users.noreply.github.com>
Co-authored-by: Chi Lo <54722500+chilo-ms@users.noreply.github.com>
Co-authored-by: RandySheriffH <48490400+RandySheriffH@users.noreply.github.com>
9 people authored Nov 30, 2021
1 parent 8afd969 commit 31a4742
Showing 42 changed files with 806 additions and 762 deletions.
cmake/CMakeLists.txt (12 changes: 11 additions & 1 deletion)
@@ -175,6 +175,8 @@ option(onnxruntime_PREBUILT_PYTORCH_PATH "Path to pytorch installation dir")
# external transformer src path
option(onnxruntime_EXTERNAL_TRANSFORMER_SRC_PATH "Path to external transformer src dir")

+option(onnxruntime_ENABLE_CUDA_PROFILING "Enable CUDA kernel profiling" OFF)

if (onnxruntime_USE_CUDA)
set(onnxruntime_DISABLE_RTTI OFF)
endif()
@@ -960,7 +962,11 @@ if (WIN32)
# issued by thrust nonstandard extension used: nameless struct/union
list(APPEND ORT_WARNING_FLAGS "/wd4201")
# warning C4800: Implicit conversion from 'X' to bool. Possible information loss
-list(APPEND ORT_WARNING_FLAGS "/w34800")
+if (onnxruntime_USE_OPENVINO)
+  list(APPEND ORT_WARNING_FLAGS "/wd4800")
+else()
+  list(APPEND ORT_WARNING_FLAGS "/w34800")
+endif()
if (onnxruntime_USE_OPENMP)
list(APPEND ORT_WARNING_FLAGS "/wd6993") # Code analysis ignores OpenMP constructs
endif()
@@ -1696,6 +1702,10 @@ if (onnxruntime_ENABLE_TRAINING_OPS)
add_compile_definitions(ENABLE_TRAINING_OPS)
endif()

+if (onnxruntime_ENABLE_CUDA_PROFILING)
+  add_compile_definitions(ENABLE_CUDA_PROFILING)
+endif()

if (onnxruntime_ENABLE_TRAINING)
add_compile_definitions(ENABLE_TRAINING)
add_compile_definitions(ENABLE_TRAINING_OPS)
cmake/onnxruntime_providers.cmake (11 changes: 8 additions & 3 deletions)
@@ -353,13 +353,18 @@ if (onnxruntime_USE_CUDA)
endif()

add_dependencies(onnxruntime_providers_cuda onnxruntime_providers_shared ${onnxruntime_EXTERNAL_DEPENDENCIES} ${onnxruntime_tvm_dependencies})
-target_link_directories(onnxruntime_providers_cuda PRIVATE ${onnxruntime_CUDA_HOME}/extras/CUPTI/lib64)
-target_link_libraries(onnxruntime_providers_cuda PRIVATE cublas cudnn curand cufft cupti ${ONNXRUNTIME_PROVIDERS_SHARED})
-target_include_directories(onnxruntime_providers_cuda PRIVATE ${ONNXRUNTIME_ROOT} ${CMAKE_CURRENT_BINARY_DIR} ${onnxruntime_CUDNN_HOME}/include ${eigen_INCLUDE_DIRS} ${TVM_INCLUDES} PUBLIC ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES} ${onnxruntime_CUDA_HOME}/extras/CUPTI/include)
+target_link_libraries(onnxruntime_providers_cuda PRIVATE cublas cudnn curand cufft ${ONNXRUNTIME_PROVIDERS_SHARED})
+target_include_directories(onnxruntime_providers_cuda PRIVATE ${ONNXRUNTIME_ROOT} ${CMAKE_CURRENT_BINARY_DIR} ${onnxruntime_CUDNN_HOME}/include ${eigen_INCLUDE_DIRS} ${TVM_INCLUDES} PUBLIC ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES})
# ${CMAKE_CURRENT_BINARY_DIR} is so that #include "onnxruntime_config.h" inside tensor_shape.h is found
set_target_properties(onnxruntime_providers_cuda PROPERTIES LINKER_LANGUAGE CUDA)
set_target_properties(onnxruntime_providers_cuda PROPERTIES FOLDER "ONNXRuntime")

+if (onnxruntime_ENABLE_CUDA_PROFILING) # configure cupti for cuda profiling
+  target_include_directories(onnxruntime_providers_cuda PRIVATE ${onnxruntime_CUDA_HOME}/extras/CUPTI/include)
+  target_link_directories(onnxruntime_providers_cuda PRIVATE ${onnxruntime_CUDA_HOME}/extras/CUPTI/lib64)
+  target_link_libraries(onnxruntime_providers_cuda PRIVATE cupti)
+endif()

if (onnxruntime_ENABLE_NVTX_PROFILE)
target_link_libraries(onnxruntime_providers_cuda PRIVATE nvToolsExt)
endif()
dockerfiles/Dockerfile.openvino (2 changes: 1 addition & 1 deletion)
@@ -3,7 +3,7 @@
# SPDX-License-Identifier: MIT
#--------------------------------------------------------------------------

-ARG OPENVINO_VERSION=2021.4.1
+ARG OPENVINO_VERSION=2021.4.2


# Build stage
dockerfiles/Dockerfile.openvino-centos7 (6 changes: 3 additions & 3 deletions)
@@ -8,12 +8,12 @@ FROM centos:7.8.2003
WORKDIR /code

ARG MY_ROOT=/code
-ARG YUM_OV_PACKAGE=intel-openvino-runtime-centos7-2021.4.689.x86_64
+ARG YUM_OV_PACKAGE=intel-openvino-runtime-centos7-2021.4.752.x86_64
ARG DEVICE=CPU_FP32
ARG ONNXRUNTIME_REPO=https://github.com/microsoft/onnxruntime
ARG ONNXRUNTIME_BRANCH=master

-ENV INTEL_OPENVINO_DIR=/opt/intel/openvino_2021.4.689
+ENV INTEL_OPENVINO_DIR=/opt/intel/openvino_2021.4.752
ENV InferenceEngine_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/share
ENV IE_PLUGINS_PATH=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64
ENV ngraph_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/ngraph/cmake
@@ -58,7 +58,7 @@ RUN yum update -y && \
yum update -y && yum list intel-openvino* && \
yum install -y $YUM_OV_PACKAGE && \
cd ${INTEL_OPENVINO_DIR}/install_dependencies/ && ./install_openvino_dependencies.sh -y && \
-printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino_2021.4.689/bin/setupvars.sh && \
+printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino_2021.4.752/bin/setupvars.sh && \
cd /opt/libusb-1.0.22 && \
/usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
dockerfiles/Dockerfile.openvino-csharp (9 changes: 5 additions & 4 deletions)
@@ -15,7 +15,7 @@ ARG MY_ROOT=/code
ENV PATH /opt/miniconda/bin:/code/cmake-3.21.0-linux-x86_64/bin:$PATH
ENV LD_LIBRARY_PATH=/opt/miniconda/lib:/usr/lib:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH

-ENV INTEL_OPENVINO_DIR=/opt/intel/openvino_2021.4.689
+ENV INTEL_OPENVINO_DIR=/opt/intel/openvino_2021.4.752
ENV InferenceEngine_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/share
ENV IE_PLUGINS_PATH=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64
ENV LD_LIBRARY_PATH=/opt/intel/opencl:${INTEL_OPENVINO_DIR}/inference_engine/external/gna/lib:${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/external/mkltiny_lnx/lib:$INTEL_OPENVINO_DIR/deployment_tools/ngraph/lib:${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/external/omp/lib:${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/external/tbb/lib:${IE_PLUGINS_PATH}:${LD_LIBRARY_PATH}
@@ -54,7 +54,7 @@ RUN apt update -y && \
cd /etc/apt/sources.list.d && \
echo "deb https://apt.repos.intel.com/openvino/2021 all main">intel-openvino-2021.list && \
apt update -y && \
-apt -y install intel-openvino-dev-ubuntu18-2021.4.689 && \
+apt -y install intel-openvino-dev-ubuntu18-2021.4.752 && \
cd ${INTEL_OPENVINO_DIR}/install_dependencies && ./install_openvino_dependencies.sh -y && \
cd ${INTEL_OPENVINO_DIR} && rm -rf documentation data_processing && \
cd deployment_tools/ && rm -rf model_optimizer open_model_zoo demo tools && \
@@ -82,7 +82,7 @@ RUN apt update -y && \
cd ${MY_ROOT} && \
apt install -y gnupg ca-certificates && \
#apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF && \
-curl http://download.mono-project.com/repo/xamarin.gpg | apt-key add - && \
+curl https://download.mono-project.com/repo/xamarin.gpg | apt-key add - && \
echo "deb https://download.mono-project.com/repo/ubuntu stable-bionic main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list && \
apt update -y && \
apt install -y mono-devel && \
@@ -97,13 +97,14 @@ RUN apt update -y && \
apt-get update -y &&\
apt-get install -y apt-transport-https && \
apt-get update -y && \
-apt-get install -y dotnet-sdk-3.1 && \
+apt-get install -y dotnet-sdk-5.0 && \
# Download and build ONNX Runtime
cd ${MY_ROOT} && \
git clone --recursive -b ${ONNXRUNTIME_BRANCH} ${ONNXRUNTIME_REPO} && \
/bin/sh onnxruntime/dockerfiles/scripts/install_common_deps.sh && \
pip install onnx==1.9 && \
cd ${MY_ROOT}/onnxruntime && ./build.sh --config Release --update --build --parallel --use_openvino ${DEVICE} --build_nuget --build_shared_lib && \
+cp ${MY_ROOT}/onnxruntime/build/Linux/Release/Microsoft.ML.OnnxRuntime.Managed* ${MY_ROOT}/onnxruntime/build/Linux/Release/nuget-artifacts && \
mv ${MY_ROOT}/onnxruntime/build/Linux/Release/nuget-artifacts ${MY_ROOT} && \
# Clean-up unnecessary files
rm -rf ${MY_ROOT}/cmake* /opt/cmake ${MY_ROOT}/onnxruntime && \