The following error is raised when onnxruntime-gpu is installed:

```
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
```
Thank you for reporting this issue! The problem occurs when onnxruntime-gpu is installed, since that build has multiple execution providers enabled. Logic has been added to set the providers based on the available providers.

This fix will be available when 4.0 is released. It can also be installed from source right now.
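The fix presumably works along these lines — a minimal sketch, assuming the providers are taken from `onnxruntime.get_available_providers()` and passed through when the session is created; `create_session` is an illustrative name, not the project's actual API:

```python
import onnxruntime

def create_session(path):
    # Hypothetical helper: onnxruntime-gpu builds report several providers
    # (TensorRT, CUDA, CPU) while CPU-only builds report just
    # CPUExecutionProvider. Passing whatever the installed build supports
    # satisfies the ORT >= 1.9 requirement that providers be set explicitly.
    providers = onnxruntime.get_available_providers()
    return onnxruntime.InferenceSession(path, providers=providers)

session = create_session("embeddings.onnx")
```

Listing every available provider keeps GPU acceleration when present while still working on CPU-only builds.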
For reference, the call that originally raised the error:

```python
embeddings = onnx("sentence-transformers/paraphrase-MiniLM-L6-v2", "pooling", "embeddings.onnx", quantize=True)
```