Describe the bug
The bug is that the semantics of running Python scripts prioritize the directory containing the script by prepending it to `sys.path`. To have similar semantics, we should prepend `code_dir` to `sys.path` rather than append here:

sagemaker-pytorch-inference-toolkit/src/sagemaker_pytorch_serving_container/handler_service.py, lines 46 to 49 in 6610a41
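The gist of the change, as a minimal sketch (the function and its argument are illustrative; in the toolkit, the code directory is resolved from the model server context rather than passed in):

```python
import sys

def add_code_dir_to_path(code_dir):
    # Illustrative only: in the real handler, code_dir comes from the
    # serving context, not a function argument.
    # Current behavior -- searched last, after installed packages:
    #     sys.path.append(code_dir)
    # Proposed behavior -- searched first, matching `python script.py`:
    sys.path.insert(0, code_dir)
```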
Here's a quick example showing the prepend semantics when running a script from the command line.
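A minimal sketch of that behavior (the script name is illustrative):

```python
# show_path.py -- run as `python show_path.py` from any directory
import sys

# CPython prepends the directory containing the script to sys.path,
# so modules that live next to the script shadow installed packages.
print(sys.path[0])  # prints the directory containing show_path.py
```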
The current appending behavior would cause an issue if a customer put a filename in `code_dir` that clashed with an installed package. If the customer ran their inference script locally, it would load their file due to prepend semantics, but when deploying to MME with this toolkit's handler, it would prioritize the installed package instead.

The single-model endpoint case is already prepended:
sagemaker-pytorch-inference-toolkit/src/sagemaker_pytorch_serving_container/torchserve.py, lines 112 to 120 in fb65d8a
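To make the clash scenario concrete, here's a self-contained sketch (the module name `json` is just an example of a name that collides with something already importable):

```python
import importlib
import pathlib
import sys
import tempfile

# Hypothetical clash: the customer ships a file named "json.py" in their
# code_dir, shadowing the standard-library module of the same name.
code_dir = tempfile.mkdtemp()
pathlib.Path(code_dir, "json.py").write_text("CUSTOMER = True\n")

# Append semantics (the MME handler today): the installed module wins.
sys.modules.pop("json", None)
sys.path.append(code_dir)
mod = importlib.import_module("json")
print(hasattr(mod, "CUSTOMER"))  # False -- stdlib json is found first

# Prepend semantics (local runs, single-model endpoints): customer file wins.
sys.modules.pop("json", None)
sys.path.remove(code_dir)
sys.path.insert(0, code_dir)
mod = importlib.import_module("json")
print(hasattr(mod, "CUSTOMER"))  # True -- code_dir is searched first
```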
Other SageMaker inference toolkits already prepend as well. See how sagemaker-huggingface-inference-toolkit handles this (https://github.com/aws/sagemaker-huggingface-inference-toolkit/blob/2f1fae5cbb3b68299e73cc591c0a912b7cccee29/src/sagemaker_huggingface_inference_toolkit/handler_service.py#L72-L73), as well as how sagemaker-inference-toolkit and sagemaker-mxnet-inference-toolkit try to handle this (though they have their own bug in this part of the code--see aws/sagemaker-mxnet-inference-toolkit#135).