Bug Description
Hi, based on some past posts like this and this and this, I was able to get a Bazel build of the C++ library working on Windows, with both model compilation and inference, by building torch_tensorrt.dll, which I believe includes both the runtime and the compilation plugins.
I am wondering what it would take to get official Windows support. I've attached a full patch; I think the declspec annotations may not be needed.
Once this builds, all one has to do to use torch_tensorrt in a project is link against torch_tensorrt.dll.if.lib and ensure torch_tensorrt.dll is loaded with LoadLibrary, as mentioned in other issue reports:
#include <windows.h>
#include <iostream>
#include <cstdlib>

HMODULE hLib = LoadLibrary(TEXT("torch_tensorrt"));
if (hLib == NULL) {
    std::cerr << "Library torch_tensorrt.dll not found" << std::endl;
    exit(1);
}
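For context, here is a minimal end-to-end sketch of how the pieces fit together, assuming the TorchScript frontend of the C++ API (the "model.ts" path and the 1x3x224x224 input shape are placeholders, not from my actual setup): load the DLL, compile a TorchScript module with torch_tensorrt::ts::compile, then run inference.

#include <windows.h>
#include <iostream>
#include <torch/script.h>
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
    // Make sure torch_tensorrt.dll (and the converter/runtime registrations
    // it carries) is loaded before any torch_tensorrt call.
    HMODULE hLib = LoadLibrary(TEXT("torch_tensorrt"));
    if (hLib == NULL) {
        std::cerr << "Library torch_tensorrt.dll not found" << std::endl;
        return 1;
    }

    // Load a TorchScript module; "model.ts" is a placeholder path.
    auto mod = torch::jit::load("model.ts");
    mod.to(torch::kCUDA);
    mod.eval();

    // Compile the module to TensorRT; the input shape is a placeholder.
    auto spec = torch_tensorrt::ts::CompileSpec(
        {torch_tensorrt::Input(std::vector<int64_t>{1, 3, 224, 224})});
    auto trt_mod = torch_tensorrt::ts::compile(mod, spec);

    // Run inference through the compiled module.
    auto in = torch::randn({1, 3, 224, 224}, torch::kCUDA);
    auto out = trt_mod.forward({in}).toTensor();
    std::cout << "output sizes: " << out.sizes() << std::endl;
    return 0;
}

(Since the import library is linked, the explicit LoadLibrary call may just be a belt-and-braces check that the DLL is found at runtime, per the other reports.)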
More info about the exact versions of the libraries used:
cuda_11.8.0_522.06_windows.exe
TensorRT-8.6.1.6.Windows10.x86_64.cuda-11.8
libtorch-win-shared-with-deps-2.0.1+cu118
cudnn-windows-x86_64-8.8.0.121_cuda11-archive
Build command:
bazel-6.3.2-windows-x86_64.exe build //:libtorchtrt --compilation_mode opt
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version (e.g. 1.0.0): 8.6.1.6
- PyTorch Version (e.g. 1.0): 2.0.1
- CPU Architecture:
- OS (e.g., Linux): Windows
- How you installed PyTorch (conda, pip, libtorch, source): libtorch, source
- Build command you used (if compiling from source): bazel
- Are you using local sources or building from archives: latest GitHub?
- Python version:
- CUDA version: 11.8
- GPU models and configuration: 3080Ti
- Any other relevant information: