
update #17

Merged — 70 commits, Jun 22, 2021
Changes from all commits (70 commits)
ff82523
[NPU] use SparseSoftmaxCrossEntropyWithLogits in npu kernel of softma…
zhiqiu Jun 15, 2021
e47c3f0
[XPU] Update cmake options for xpu. (#33450)
jiweibo Jun 15, 2021
b7a54fc
support convert core.Tensor to paddle.Tensor (#33430)
zhwesky2010 Jun 15, 2021
ec6d5ef
enhance the attribute constraint for pass,test=develop (#33568)
winter-wang Jun 16, 2021
969ad85
fix the error in batch_norm.pbtxt, test=develop (#33572)
winter-wang Jun 16, 2021
07197fb
add_op_extra: elementwise_add, mul (#33491)
Wangzheee Jun 16, 2021
294dfd2
[HybridParallel]Add SharedLayerDesc for PipelineParallel (#33578)
ForFishes Jun 16, 2021
67a4de6
Add return when input is tensor (#33570)
zhiqiu Jun 16, 2021
e6c5282
fix used before assign (#33519)
Jiangxinz Jun 16, 2021
78260ff
fix output_padding in conv (#33585)
jerrywgz Jun 16, 2021
32e3353
[Dy2Stat] Fix always copy by paddle.to_tensor from PR #33335 (#33590)
Aurelius84 Jun 16, 2021
72d3697
[Feature] add paddle.trunc (#33371)
zhangbo9674 Jun 16, 2021
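The `paddle.trunc` commit (#33371) adds element-wise truncation toward zero. A minimal sketch of the expected semantics using only the Python standard library — an illustration of the operation, not Paddle's implementation:

```python
import math

# Truncation drops the fractional part, rounding toward zero --
# the element-wise behavior a trunc op is expected to apply.
values = [2.7, -2.7, 0.5, -0.5]
truncated = [math.trunc(v) for v in values]
print(truncated)  # [2, -2, 0, 0]
```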
a50d129
add delta score, scale show (#33492)
Thunderbrook Jun 16, 2021
ecc0537
Add bitwise_and/or/xor/not OP/API and unittest (#33524)
zhwesky2010 Jun 16, 2021
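The new bitwise_and/or/xor/not ops (#33524) follow standard integer bitwise semantics, applied element-wise over tensors. The scalar behavior can be sketched with plain Python integers (note that `not` on a fixed-width unsigned type is the complement within that width):

```python
a, b = 0b1100, 0b1010
assert a & b == 0b1000            # bitwise_and
assert a | b == 0b1110            # bitwise_or
assert a ^ b == 0b0110            # bitwise_xor
# bitwise_not within a 4-bit width: complement masked to the width
assert (~a) & 0b1111 == 0b0011
```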
b4f8287
modify reviewer, test=document_fix (#33593)
iducn Jun 16, 2021
78a9870
fix bad super call (#33533)
Jiangxinz Jun 16, 2021
0b4a7f1
del python2 code (#33556)
tianshuo78520a Jun 16, 2021
16099ab
fix new ci check errors (#33561)
zhiboniu Jun 16, 2021
debae94
update, test=develop (#33537)
Jun 16, 2021
a327369
bug fix, test=develop (#33594)
Jun 16, 2021
34b79d9
pass enhance: fix the sequence_conv.pbtxt error, test=develop (#33603)
winter-wang Jun 16, 2021
4ddd595
add compat check for skip_layernorm (#33505)
jiweibo Jun 16, 2021
f9ce1b1
[oneDNN] Further ops refactoring of oneDNN cache access (#33515)
jczaja Jun 16, 2021
9d6c8bd
Add lookup_table_v2 BF16 op (#33172)
wozna Jun 16, 2021
63b03cf
[Dy2Stat]Support non-tensor type in `input_spec` (#33464)
Aurelius84 Jun 17, 2021
bb1216f
fix trt convert fc_op'oss (#33566)
Wangzheee Jun 17, 2021
b0984c7
Fix the timeout problem of test_multi_precision_fp16_train UT. (#33596)
wzzju Jun 17, 2021
918aeb7
Add atan2 op and test (#33067)
ronny1996 Jun 17, 2021
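The atan2 op (#33067) differs from plain atan in that it uses the signs of both arguments to recover the correct quadrant. A stdlib illustration of that distinction:

```python
import math

# atan2(y, x) keeps quadrant information that atan(y/x) loses:
assert math.isclose(math.atan2(1.0, 1.0), math.pi / 4)         # 1st quadrant
assert math.isclose(math.atan2(1.0, -1.0), 3 * math.pi / 4)    # 2nd quadrant
assert math.isclose(math.atan2(-1.0, -1.0), -3 * math.pi / 4)  # 3rd quadrant
# atan(y/x) alone cannot distinguish the 1st and 3rd quadrants:
assert math.atan(1.0 / 1.0) == math.atan(-1.0 / -1.0)
```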
3af1629
fix the error of qat unit test (#33574)
juncaipeng Jun 17, 2021
832a014
Add bf16 support for save and load ops (#33173)
wozna Jun 17, 2021
527c46a
update readme, test=document_fix (#33618)
CheQiXiao Jun 17, 2021
67bec55
[Inference Tensorrt] Add attr for trt engine and handle the input seq…
jiweibo Jun 17, 2021
a138b6c
fix import paddle error in windows for python3.8 and python3.9, test=…
wanghuancoder Jun 17, 2021
d9941c8
test=document_fix (#33623)
tianshuo78520a Jun 17, 2021
ab0272e
Relax the constraint of installed openblas from version==0.3.7 to >=0…
zhiqiu Jun 17, 2021
c7e3c91
[Inference] Update go inference api based on new capi. (#33113)
jiweibo Jun 17, 2021
4bf9e11
add compat precondition for matmul_transpose_reshape_fuse_pass, test=…
winter-wang Jun 17, 2021
6cacd63
fix image dataset bug (#33630)
kuizhiqing Jun 17, 2021
2800897
add compat precondition for cpu_quantize_squash_pass, test=develop (#…
winter-wang Jun 18, 2021
d3a2ba0
test=develop (#33576)
yingyibiao Jun 18, 2021
cca44c1
[XPU] Add xpu include and so into inference third_party (#33641)
jiweibo Jun 18, 2021
c3008e7
Add seqconv pass enhance (#33455)
tink2123 Jun 18, 2021
1e4e6a3
update py head file (#33653)
DannyIsFunny Jun 18, 2021
478ea78
add layernorm (#33610)
ceci3 Jun 18, 2021
6da6ff6
SimplifyWithBasicOpsPass (#33637)
b3602sss Jun 18, 2021
34c95ea
batch_norm_act_fuse_pass_init (#33636)
Jun 18, 2021
39556a4
polish windows ci (#32964)
zhwesky2010 Jun 18, 2021
930ca3f
pass enhance (#33661)
Wangzheee Jun 18, 2021
fc7e3e9
fix sgd unittest timeout (#33665)
wangxicoding Jun 21, 2021
0011450
fix the bug that concat op can't support uint8 (#33666)
youth123 Jun 21, 2021
a6ba016
fix unexpected keyword arg (#33569)
Jiangxinz Jun 21, 2021
fa821ef
fix lack of self arg (#33598)
Jiangxinz Jun 21, 2021
c269a16
[NPU] flatten params and grads, fuse grad_clip and optimizer op (#33461)
zhiqiu Jun 21, 2021
4b9430a
fix undef val (#33562)
Jiangxinz Jun 21, 2021
79cbc8e
ELASTIC 1 : fault tolerance (#33369)
kuizhiqing Jun 21, 2021
0f7187a
Del six.PY code2 (#33607)
tianshuo78520a Jun 21, 2021
0905dee
test=pretest (#33573)
lelelelelez Jun 21, 2021
f88af20
Combine amp and qat (#33484)
juncaipeng Jun 21, 2021
f91dfe1
[NPU] optimize mul op, use BatchMatMul to realize (#33616)
pangyoki Jun 21, 2021
1681a2d
update fp16 gray_list for tensor parallel (#33660)
wangxicoding Jun 21, 2021
50f885f
add new api ci check file (#33609)
zhiboniu Jun 21, 2021
2d7ef7a
update trt version from major to full (#33690)
jiweibo Jun 21, 2021
e0e0c0f
add sync calc stream and add ut for fuse on gpu (#33580)
FeixLiu Jun 21, 2021
773aabc
Add AXPY oneDNN handler (#33632)
lidanqing-intel Jun 21, 2021
1b0c5ef
fix emb_eltwise_ln gpu_id bug (#33701)
cryoco Jun 21, 2021
2b6fc10
Dygraph post trainging quantization (#33445)
juncaipeng Jun 22, 2021
1828426
solve ANSI escape sequences print error in cmd and powershell (#33689)
thisjiang Jun 22, 2021
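Commit #33689 deals with ANSI escape sequences printing incorrectly in cmd and PowerShell. One common fallback for consoles that cannot render them — shown here purely as an illustration of the problem class, not necessarily the approach this PR takes — is stripping the SGR sequences before printing:

```python
import re

# Matches SGR (color/style) escape sequences such as "\x1b[31m" and "\x1b[0m"
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text):
    return ANSI_RE.sub("", text)

print(strip_ansi("\x1b[31merror\x1b[0m: failed"))  # error: failed
```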
cf3ddd3
Pass compat of conv_transpose_bias_mkldnn_fuse_pass (#33708)
TeslaZhao Jun 22, 2021
20eafd7
Add squared_mat_sub_fuse_pass (#33597)
Jun 22, 2021
480b284
modified reduce_max, reduce_min, reduce_prod to higher_performance im…
AnnaTrainingG Jun 22, 2021
2 changes: 1 addition & 1 deletion CMakeLists.txt
Original file line number Diff line number Diff line change
@@ -216,7 +216,7 @@ option(WITH_STRIP "Strip so files of Whl packages" OFF)

# PY_VERSION
if(NOT PY_VERSION)
set(PY_VERSION 2.7)
set(PY_VERSION 3.6)
endif()
set(PYBIND11_PYTHON_VERSION ${PY_VERSION})
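The hunk above flips the default build Python from 2.7 to 3.6, applied only when the user has not passed their own `PY_VERSION`. The guard's logic, mirrored in Python for illustration (the function name is ours, not part of the build):

```python
def resolve_py_version(user_value=None):
    # Mirrors the CMake guard: keep the caller's -DPY_VERSION if given,
    # otherwise default to 3.6 (previously 2.7).
    return user_value if user_value else "3.6"

print(resolve_py_version())       # 3.6
print(resolve_py_version("3.8"))  # 3.8
```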

7 changes: 3 additions & 4 deletions README.md
@@ -1,4 +1,4 @@
<p align="center">
<p align="center">
<img align="center" src="doc/imgs/logo.png", width=1600>
<p>

@@ -50,10 +50,9 @@ Now our developers can acquire Tesla V100 online computing resources for free. I
[Click here to learn more](https://github.com/PaddlePaddle/Fleet)


- **Accelerated High-Performance Inference over Ubiquitous Deployments**
- **High-Performance Inference Engines for Comprehensive Deployment Environments**

PaddlePaddle is not only compatible with other open-source frameworks for models training, but also works well on the ubiquitous developments, varying from platforms to devices. More specifically, PaddlePaddle accelerates the inference procedure with the fastest speed-up. Note that, a recent breakthrough of inference speed has been made by PaddlePaddle on Huawei's Kirin NPU, through the hardware/software co-optimization.
[Click here to learn more](https://github.com/PaddlePaddle/Paddle-Lite)
PaddlePaddle is not only compatible with models trained in 3rd party open-source frameworks, but also offers complete inference products for various production scenarios. Our inference product line includes [Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/product_introduction/summary.html): native inference library for high-performance server and cloud inference; [Paddle Serving](https://github.com/PaddlePaddle/Serving): a service-oriented framework suitable for distributed and pipeline productions; [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite): ultra-lightweight inference engine for mobile and IoT environments; [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs): a frontend inference engine for browser and mini apps. Furthermore, through extensive optimization with leading hardware in each scenario, Paddle inference engines outperform most of the other mainstream frameworks.


- **Industry-Oriented Models and Libraries with Open Source Repositories**
7 changes: 3 additions & 4 deletions README_cn.md
@@ -1,4 +1,4 @@


<p align="center">
<img align="center" src="doc/imgs/logo.png", width=1600>
<p>
@@ -47,10 +47,9 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**,训练模型
[查看详情](https://github.com/PaddlePaddle/Fleet)


- **多端多平台部署的高性能推理引擎**
- **支持多端多平台的高性能推理部署工具**

飞桨不仅兼容其他开源框架训练的模型,还可以轻松地部署到不同架构的平台设备上。同时,飞桨的推理速度也是全面领先的。尤其经过了跟华为麒麟NPU的软硬一体优化,使得飞桨在NPU上的推理速度进一步突破。
[查看详情](https://github.com/PaddlePaddle/Paddle-Lite)
飞桨不仅广泛兼容第三方开源框架训练的模型部署,并且为不同的场景的生产环境提供了完备的推理引擎,包括适用于高性能服务器及云端推理的原生推理库 [Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/product_introduction/summary.html),面向分布式、流水线生产环境下自动上云、A/B测试等高阶功能的服务化推理框架 [Paddle Serving](https://github.com/PaddlePaddle/Serving),针对于移动端、物联网场景的轻量化推理引擎 [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite),以及在浏览器、小程序等环境下使用的前端推理引擎 [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs)。同时,透过与不同场景下的主流硬件高度适配优化及异构计算的支持, 飞桨的推理性能也领先绝大部分的主流实现。


- **面向产业应用,开源开放覆盖多领域的工业级模型库。**
2 changes: 1 addition & 1 deletion cmake/cblas.cmake
@@ -73,7 +73,7 @@ if(NOT DEFINED CBLAS_PROVIDER)
string(REGEX MATCH "OpenBLAS ([0-9]+\.[0-9]+\.[0-9]+)" tmp ${config_file})
string(REGEX MATCH "([0-9]+\.[0-9]+\.[0-9]+)" ver ${tmp})

if (${ver} VERSION_EQUAL "0.3.7")
if (${ver} VERSION_GREATER_EQUAL "0.3.7")
set(CBLAS_PROVIDER OPENBLAS)
set(CBLAS_INC_DIR ${OPENBLAS_INC_DIR} ${OPENBLAS_LAPACKE_INC_DIR})
set(CBLAS_LIBRARIES ${OPENBLAS_LIB})
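The cblas.cmake change relaxes the installed-OpenBLAS constraint from an exact `0.3.7` match to `>= 0.3.7`. The extract-then-compare logic can be mirrored in Python (function name and sample strings are ours, for illustration; CMake's `VERSION_GREATER_EQUAL` also compares component-wise, which tuple comparison reproduces):

```python
import re

def openblas_at_least(config_text, minimum=(0, 3, 7)):
    # Pull "OpenBLAS x.y.z" out of the config header, then accept any
    # version >= 0.3.7 (the old check required an exact match).
    m = re.search(r"OpenBLAS (\d+)\.(\d+)\.(\d+)", config_text)
    if not m:
        return False
    return tuple(int(p) for p in m.groups()) >= minimum

print(openblas_at_least("OpenBLAS 0.3.13 DYNAMIC_ARCH"))  # True
print(openblas_at_least("OpenBLAS 0.2.20 DYNAMIC_ARCH"))  # False
```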
30 changes: 22 additions & 8 deletions cmake/external/lite.cmake
@@ -18,13 +18,21 @@ if(NOT LINUX)
return()
endif()

if(XPU_SDK_ROOT)
set(LITE_WITH_XPU ON)
include_directories("${XPU_SDK_ROOT}/XTDK/include")
include_directories("${XPU_SDK_ROOT}/XTCL/include")
if (LITE_WITH_XPU)
add_definitions(-DLITE_SUBGRAPH_WITH_XPU)
LINK_DIRECTORIES("${XPU_SDK_ROOT}/XTDK/shlib/")
LINK_DIRECTORIES("${XPU_SDK_ROOT}/XTDK/runtime/shlib/")
IF(WITH_AARCH64)
SET(XPU_SDK_ENV "kylin_aarch64")
ELSEIF(WITH_SUNWAY)
SET(XPU_SDK_ENV "deepin_sw6_64")
ELSEIF(WITH_BDCENTOS)
SET(XPU_SDK_ENV "bdcentos_x86_64")
ELSEIF(WITH_UBUNTU)
SET(XPU_SDK_ENV "ubuntu_x86_64")
ELSEIF(WITH_CENTOS)
SET(XPU_SDK_ENV "centos7_x86_64")
ELSE ()
SET(XPU_SDK_ENV "ubuntu_x86_64")
ENDIF()
endif()
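The new branch above selects an XPU SDK environment name from the target-platform flag, falling back to `ubuntu_x86_64`. A hypothetical Python mirror of that ELSEIF chain (the function and flag names are ours; only the environment strings come from the diff):

```python
def xpu_sdk_env(flags):
    # Same precedence as the CMake ELSEIF chain in cmake/external/lite.cmake,
    # with ubuntu_x86_64 as the default when no flag matches.
    order = [("aarch64", "kylin_aarch64"),
             ("sunway", "deepin_sw6_64"),
             ("bdcentos", "bdcentos_x86_64"),
             ("ubuntu", "ubuntu_x86_64"),
             ("centos", "centos7_x86_64")]
    for flag, env in order:
        if flag in flags:
            return env
    return "ubuntu_x86_64"

print(xpu_sdk_env({"bdcentos"}))  # bdcentos_x86_64
print(xpu_sdk_env(set()))         # ubuntu_x86_64
```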

if (NOT LITE_SOURCE_DIR OR NOT LITE_BINARY_DIR)
@@ -57,7 +65,8 @@ if (NOT LITE_SOURCE_DIR OR NOT LITE_BINARY_DIR)
-DWITH_TESTING=OFF
-DLITE_BUILD_EXTRA=ON
-DLITE_WITH_XPU=${LITE_WITH_XPU}
-DXPU_SDK_ROOT=${XPU_SDK_ROOT}
-DXPU_SDK_URL=${XPU_BASE_URL}
-DXPU_SDK_ENV=${XPU_SDK_ENV}
-DLITE_WITH_CODE_META_INFO=OFF
-DLITE_WITH_ARM=ON)
ExternalProject_Add(
@@ -99,7 +108,8 @@ if (NOT LITE_SOURCE_DIR OR NOT LITE_BINARY_DIR)
-DLITE_WITH_STATIC_CUDA=OFF
-DCUDA_ARCH_NAME=${CUDA_ARCH_NAME}
-DLITE_WITH_XPU=${LITE_WITH_XPU}
-DXPU_SDK_ROOT=${XPU_SDK_ROOT}
-DXPU_SDK_URL=${XPU_BASE_URL}
-DXPU_SDK_ENV=${XPU_SDK_ENV}
-DLITE_WITH_CODE_META_INFO=OFF
-DLITE_WITH_ARM=OFF)

@@ -147,6 +157,10 @@ message(STATUS "Paddle-lite BINARY_DIR: ${LITE_BINARY_DIR}")
message(STATUS "Paddle-lite SOURCE_DIR: ${LITE_SOURCE_DIR}")
include_directories(${LITE_SOURCE_DIR})
include_directories(${LITE_BINARY_DIR})
if(LITE_WITH_XPU)
include_directories(${LITE_BINARY_DIR}/third_party/install/xpu/xdnn/include/)
include_directories(${LITE_BINARY_DIR}/third_party/install/xpu/xre/include/)
endif()

function(external_lite_libs alias path)
add_library(${alias} SHARED IMPORTED GLOBAL)
5 changes: 4 additions & 1 deletion cmake/external/mkldnn.cmake
@@ -101,8 +101,11 @@ ADD_DEPENDENCIES(mkldnn ${MKLDNN_PROJECT})
# it can be directly contained in wheel or capi
if(WIN32)
SET(MKLDNN_SHARED_LIB ${MKLDNN_INSTALL_DIR}/bin/mkldnn.dll)

file(TO_NATIVE_PATH ${MKLDNN_INSTALL_DIR} NATIVE_MKLDNN_INSTALL_DIR)
file(TO_NATIVE_PATH ${MKLDNN_SHARED_LIB} NATIVE_MKLDNN_SHARED_LIB)
ADD_CUSTOM_COMMAND(TARGET ${MKLDNN_PROJECT} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_INSTALL_DIR}/bin/dnnl.dll ${MKLDNN_SHARED_LIB})
COMMAND (copy ${NATIVE_MKLDNN_INSTALL_DIR}\\bin\\dnnl.dll ${NATIVE_MKLDNN_SHARED_LIB} /Y))
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
COMMAND dumpbin /exports ${MKLDNN_INSTALL_DIR}/bin/mkldnn.dll > ${MKLDNN_INSTALL_DIR}/bin/exports.txt)
add_custom_command(TARGET ${MKLDNN_PROJECT} POST_BUILD VERBATIM
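The mkldnn.cmake hunk converts the install paths to native (backslash) form before invoking Windows `copy` to publish `dnnl.dll` under the `mkldnn.dll` name. The intent, mirrored portably in Python (paths here are temporary stand-ins, not the real install layout):

```python
import os
import shutil
import tempfile

# Mirrors the revised CMake step: resolve native paths, then copy
# dnnl.dll to the mkldnn.dll name the wheel/capi packaging expects.
install_dir = tempfile.mkdtemp()
src = os.path.join(install_dir, "dnnl.dll")
with open(src, "wb") as f:
    f.write(b"stub")  # stand-in for the real library
dst = os.path.join(install_dir, "mkldnn.dll")
shutil.copyfile(os.path.normpath(src), os.path.normpath(dst))
print(os.path.exists(dst))  # True
```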
11 changes: 5 additions & 6 deletions cmake/external/xpu.cmake
@@ -33,7 +33,10 @@ ELSE ()
SET(XPU_XCCL_DIR_NAME "xccl-bdcentos_x86_64")
ENDIF()

SET(XPU_BASE_URL "https://baidu-kunlun-product.cdn.bcebos.com/KL-SDK/klsdk-dev/20210527")
IF(NOT XPU_BASE_URL)
SET(XPU_BASE_URL "https://baidu-kunlun-product.cdn.bcebos.com/KL-SDK/klsdk-dev/20210527")
ENDIF()

SET(XPU_XRE_URL "${XPU_BASE_URL}/${XPU_XRE_DIR_NAME}.tar.gz" CACHE STRING "" FORCE)
SET(XPU_XDNN_URL "${XPU_BASE_URL}/${XPU_XDNN_DIR_NAME}.tar.gz" CACHE STRING "" FORCE)
SET(XPU_XCCL_URL "${XPU_BASE_URL}/${XPU_XCCL_DIR_NAME}.tar.gz" CACHE STRING "" FORCE)
@@ -93,11 +96,7 @@ ELSE(WITH_XPU_BKCL)
TARGET_LINK_LIBRARIES(xpulib ${XPU_API_LIB} ${XPU_RT_LIB})
ENDIF(WITH_XPU_BKCL)

if(NOT XPU_SDK_ROOT)
ADD_DEPENDENCIES(xpulib ${XPU_PROJECT})
else()
ADD_CUSTOM_TARGET(extern_xpu DEPENDS xpulib)
endif()
ADD_DEPENDENCIES(xpulib ${XPU_PROJECT})

# Ensure that xpu/api.h can be included without dependency errors.
file(GENERATE OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/.xpu_headers_dummy.cc CONTENT "")
9 changes: 8 additions & 1 deletion cmake/inference_lib.cmake
@@ -154,6 +154,13 @@ IF(WITH_GPU)
DSTS ${dst_dir})
ENDIF()

IF(WITH_XPU)
set(dst_dir "${PADDLE_INFERENCE_INSTALL_DIR}/third_party/install/xpu")
copy(inference_lib_dist
SRCS ${XPU_INC_DIR} ${XPU_LIB_DIR}
DSTS ${dst_dir} ${dst_dir})
ENDIF()

# CMakeCache Info
copy(inference_lib_dist
SRCS ${CMAKE_CURRENT_BINARY_DIR}/CMakeCache.txt
@@ -335,7 +342,7 @@ function(version version_file)
file(APPEND ${version_file} "CXX compiler version: ${CMAKE_CXX_COMPILER_VERSION}\n")
if(TENSORRT_FOUND)
file(APPEND ${version_file}
"WITH_TENSORRT: ${TENSORRT_FOUND}\n" "TensorRT version: v${TENSORRT_MAJOR_VERSION}\n")
"WITH_TENSORRT: ${TENSORRT_FOUND}\n" "TensorRT version: v${TENSORRT_MAJOR_VERSION}.${TENSORRT_MINOR_VERSION}.${TENSORRT_PATCH_VERSION}.${TENSORRT_BUILD_VERSION}\n")
endif()
if(WITH_LITE)
file(APPEND ${version_file} "WITH_LITE: ${WITH_LITE}\n" "LITE_GIT_TAG: ${LITE_GIT_TAG}\n")
2 changes: 1 addition & 1 deletion cmake/operators.cmake
@@ -208,7 +208,7 @@ function(op_library TARGET)
endif()

# Define operators that don't need pybind here.
foreach(manual_pybind_op "compare_all_op" "compare_op" "logical_op" "nccl_op"
foreach(manual_pybind_op "compare_all_op" "compare_op" "logical_op" "bitwise_op" "nccl_op"
"tensor_array_read_write_op" "tensorrt_engine_op" "conv_fusion_op"
"fusion_transpose_flatten_concat_op" "fusion_conv_inception_op"
"sync_batch_norm_op" "dgc_op" "fused_fc_elementwise_layernorm_op"
20 changes: 19 additions & 1 deletion cmake/tensorrt.cmake
@@ -47,11 +47,23 @@ if(TENSORRT_FOUND)
file(READ ${TENSORRT_INCLUDE_DIR}/NvInfer.h TENSORRT_VERSION_FILE_CONTENTS)
string(REGEX MATCH "define NV_TENSORRT_MAJOR +([0-9]+)" TENSORRT_MAJOR_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
string(REGEX MATCH "define NV_TENSORRT_MINOR +([0-9]+)" TENSORRT_MINOR_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
string(REGEX MATCH "define NV_TENSORRT_PATCH +([0-9]+)" TENSORRT_PATCH_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
string(REGEX MATCH "define NV_TENSORRT_BUILD +([0-9]+)" TENSORRT_BUILD_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")

if("${TENSORRT_MAJOR_VERSION}" STREQUAL "")
file(READ ${TENSORRT_INCLUDE_DIR}/NvInferVersion.h TENSORRT_VERSION_FILE_CONTENTS)
string(REGEX MATCH "define NV_TENSORRT_MAJOR +([0-9]+)" TENSORRT_MAJOR_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
string(REGEX MATCH "define NV_TENSORRT_MINOR +([0-9]+)" TENSORRT_MINOR_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
string(REGEX MATCH "define NV_TENSORRT_PATCH +([0-9]+)" TENSORRT_PATCH_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
string(REGEX MATCH "define NV_TENSORRT_BUILD +([0-9]+)" TENSORRT_BUILD_VERSION
"${TENSORRT_VERSION_FILE_CONTENTS}")
endif()

if("${TENSORRT_MAJOR_VERSION}" STREQUAL "")
@@ -60,9 +72,15 @@ if(TENSORRT_FOUND)

string(REGEX REPLACE "define NV_TENSORRT_MAJOR +([0-9]+)" "\\1"
TENSORRT_MAJOR_VERSION "${TENSORRT_MAJOR_VERSION}")
string(REGEX REPLACE "define NV_TENSORRT_MINOR +([0-9]+)" "\\1"
TENSORRT_MINOR_VERSION "${TENSORRT_MINOR_VERSION}")
string(REGEX REPLACE "define NV_TENSORRT_PATCH +([0-9]+)" "\\1"
TENSORRT_PATCH_VERSION "${TENSORRT_PATCH_VERSION}")
string(REGEX REPLACE "define NV_TENSORRT_BUILD +([0-9]+)" "\\1"
TENSORRT_BUILD_VERSION "${TENSORRT_BUILD_VERSION}")

message(STATUS "Current TensorRT header is ${TENSORRT_INCLUDE_DIR}/NvInfer.h. "
"Current TensorRT version is v${TENSORRT_MAJOR_VERSION}. ")
"Current TensorRT version is v${TENSORRT_MAJOR_VERSION}.${TENSORRT_MINOR_VERSION}.${TENSORRT_PATCH_VERSION}.${TENSORRT_BUILD_VERSION} ")
include_directories(${TENSORRT_INCLUDE_DIR})
link_directories(${TENSORRT_LIBRARY})
add_definitions(-DPADDLE_WITH_TENSORRT)
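The tensorrt.cmake change extends version detection from the major version alone to all four `NV_TENSORRT_*` components, so the reported string becomes e.g. `v7.1.3.4`. The same extraction logic, sketched in Python with a made-up header snippet (the real values come from `NvInfer.h` or `NvInferVersion.h`):

```python
import re

HEADER = """
#define NV_TENSORRT_MAJOR 7
#define NV_TENSORRT_MINOR 1
#define NV_TENSORRT_PATCH 3
#define NV_TENSORRT_BUILD 4
"""

def trt_version(text):
    # Same idea as the new CMake regexes: read all four NV_TENSORRT_*
    # components instead of only the major version.
    parts = [re.search(r"define NV_TENSORRT_%s +(\d+)" % name, text).group(1)
             for name in ("MAJOR", "MINOR", "PATCH", "BUILD")]
    return "v" + ".".join(parts)

print(trt_version(HEADER))  # v7.1.3.4
```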
56 changes: 0 additions & 56 deletions go/README_cn.md

This file was deleted.

81 changes: 0 additions & 81 deletions go/demo/mobilenet.go

This file was deleted.
