
set GPU #9644

Closed
cqray1990 opened this issue Nov 2, 2021 · 7 comments
Labels
ep:CUDA issues related to the CUDA execution provider

Comments

@cqray1990


System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
  • ONNX Runtime installed from (source or binary):
  • ONNX Runtime version:
  • Python version: 3.6
  • Visual Studio version (if applicable):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 10.2/8.0
  • GPU model and memory:

I set the GPU as described at https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html:

import onnxruntime as ort

providers = [
    ('CUDAExecutionProvider', {
        'device_id': 0,
        'arena_extend_strategy': 'kNextPowerOfTwo',
        'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
        'cudnn_conv_algo_search': 'EXHAUSTIVE',
        'do_copy_in_default_stream': True,
    }),
    'CPUExecutionProvider',
]

session = ort.InferenceSession(model_path, providers=providers)

but it raises this error:

Invoked with: <onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession object at 0x7fe93d9badc0>, [('CUDAExecutionProvider', {'device_id': 0, 'arena_extend_strategy': 'kNextPowerOfTwo', 'gpu_mem_limit': 2147483648, 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': True}), 'CPUExecutionProvider'], []

@pranavsharma
Contributor

It's not possible for us to guess the error you've encountered. Can you paste it here?

@pranavsharma pranavsharma added the ep:CUDA issues related to the CUDA execution provider label Nov 2, 2021
@cqray1990
Author

self.providers = [
    ('CUDAExecutionProvider', {
        'device_id': 0,
        'arena_extend_strategy': 'kNextPowerOfTwo',
        'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
        'cudnn_conv_algo_search': 'EXHAUSTIVE',
        'do_copy_in_default_stream': True,
    }),
    'CPUExecutionProvider',
]
self.sess = rt.InferenceSession(MODEL_PATH, providers=self.providers)

and when I run it, here is the error:
File "/home/siit/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 206, in __init__
self._create_inference_session(providers, provider_options)
File "/home/siit/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 231, in _create_inference_session
sess.initialize_session(providers or [], provider_options or [])
TypeError: initialize_session(): incompatible function arguments. The following argument types are supported:
1. (self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: List[Dict[str, str]]) -> None

Invoked with: <onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession object at 0x7fd1b2ae11b8>, ['CUDAExecutionProvider'], [{'device_id': 1, 'gpu_mem_limit': 2147483648}]

I followed the official doc at https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html, and InferenceSession still raises an exception on init.
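The TypeError in the traceback comes from the pybind signature in this ORT build, which takes two flat lists (provider names as `List[str]`, per-provider options as `List[Dict[str, str]]`) rather than the mixed `(name, options)` tuples accepted by newer releases. As a sketch of that translation, here is a hypothetical helper `split_providers` (not part of onnxruntime), assuming all option values can be stringified:

```python
def split_providers(providers):
    """Split a new-style providers list, which may mix bare names with
    (name, options-dict) tuples, into the two flat lists the ORT 1.6.0
    binding expects: names (List[str]) and options (List[Dict[str, str]])."""
    names, options = [], []
    for entry in providers:
        if isinstance(entry, tuple):
            name, opts = entry
            names.append(name)
            # The binding wants string values, so stringify everything.
            options.append({k: str(v) for k, v in opts.items()})
        else:
            names.append(entry)
            options.append({})  # no explicit options for this provider
    return names, options

names, options = split_providers([
    ('CUDAExecutionProvider', {'device_id': 0, 'gpu_mem_limit': 2 * 1024 * 1024 * 1024}),
    'CPUExecutionProvider',
])
# names   -> ['CUDAExecutionProvider', 'CPUExecutionProvider']
# options -> [{'device_id': '0', 'gpu_mem_limit': '2147483648'}, {}]
```

The two flat lists match the `['CUDAExecutionProvider']`, `[{'device_id': 1, ...}]` shape visible in the "Invoked with" line of the traceback.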

@pranavsharma
Contributor

pranavsharma commented Nov 2, 2021

I tried the exact same program and don't see any issue.

(pranav_py8) pranav@Machine:~$ cat issue_9644.py
import onnxruntime as rt
MODEL_PATH="squeezenet.onnx"
providers = [('CUDAExecutionProvider', {
    'device_id': 0,
    'arena_extend_strategy': 'kNextPowerOfTwo',
    'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
    'cudnn_conv_algo_search': 'EXHAUSTIVE',
    'do_copy_in_default_stream': True,
    }),'CPUExecutionProvider' ]
sess = rt.InferenceSession(MODEL_PATH, providers=providers)
for x in sess.get_inputs():
    print(x)
(pranav_py8) pranav@Machine:~$ python issue_9644.py
NodeArg(name='data_0', type='tensor(float)', shape=[1, 3, 224, 224])

@cqray1990
Author

cqray1990 commented Nov 3, 2021

@pranavsharma My onnxruntime version is 1.6.0 (GPU). I really did test my code as you advised.

@pranavsharma
Contributor

Please use the latest version of ORT.

@cqray1990
Author

cqray1990 commented Nov 3, 2021

My CUDA is 10.2, and the official suggestion says only 1.6.0 supports it. I also built the master branch with CUDA 10.2, but got the error described in issue #9631.

@pranavsharma
Contributor

Unfortunately the documentation at this link points to the latest version. The following should work for 1.6.0.

import onnxruntime as rt
MODEL_PATH="squeezenet.onnx"
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
provider_options = [{
    'device_id': 0,
    'arena_extend_strategy': 'kNextPowerOfTwo',
    'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
    'cudnn_conv_algo_search': 'EXHAUSTIVE',
    'do_copy_in_default_stream': True}, {}]
sess = rt.InferenceSession(MODEL_PATH)
sess.set_providers(providers, provider_options)
for x in sess.get_inputs():
    print(x)
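Since the accepted `providers` format differs across releases, a small version gate can pick the right call style. Note the 1.7 cutoff below is inferred from this thread (tuple-style options were rejected by 1.6.0), not taken from official release notes:

```python
def supports_provider_tuples(version):
    """True if this onnxruntime version accepts (name, options) tuples
    directly in InferenceSession(..., providers=...).
    Inferred cutoff: 1.7 and later; 1.6.x needs
    set_providers(names, options) instead."""
    major, minor = (int(part) for part in version.split('.')[:2])
    return (major, minor) >= (1, 7)
```

A caller could branch on `supports_provider_tuples(rt.__version__)` to choose between the tuple-style constructor argument and the `set_providers` workaround above.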
