JVM detects the CPU count as 1 when more CPUs are available to the container. #99
Comments
@zoran-hristov Did you find any resolution to this issue? I am facing the same problem.
Yes, I found a solution. One part is noted in the subsequent Deep Learning Containers release notes (see Known issues), but there is no fix in the images themselves: it is related to the OMP_NUM_THREADS environment variable. I suggest setting it to numberOfCPUs/2 or less. The other part is to enable container support for CPU detection, especially for the JVM. We set this in code, since config.properties is not used in the image; I have no explanation for why its use was abandoned. Here is a way to do it, by overriding things in the Dockerfile:
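The Dockerfile snippet itself did not survive in this copy of the thread. Below is a hedged sketch of what such an override could look like; the `config.properties` path and the `vmargs` line are assumptions about the image layout, while `OMP_NUM_THREADS`, `-XX:+UseContainerSupport`, and `-XX:ActiveProcessorCount` are standard OpenMP/HotSpot settings.

```dockerfile
# Sketch only, not the original snippet from this thread.
# Base image name as given in this issue (registry prefix omitted).
FROM pytorch-inference:1.7-cpu-py3

# Part 1: cap OpenMP threads to roughly half the vCPUs
# (assumption: a 4-vCPU instance such as ml.m4.xlarge).
ENV OMP_NUM_THREADS=2

# Part 2: make the JVM respect the container's CPUs.
# Assumption: the model server reads JVM flags from a vmargs line
# in /home/model-server/config.properties.
RUN echo "vmargs=-XX:+UseContainerSupport -XX:ActiveProcessorCount=4" \
    >> /home/model-server/config.properties
```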
Thanks @zoran-hristov, it helped me resolve the issue.
Describe the bug
This issue is related to the JVM bug reported in sagemaker-inference-toolkit issue 82.
To reproduce
Clone the SageMaker example.
Deploy the model using the same endpoint.
Check the CloudWatch logs: the detected core count will be logged as something like Number of CPUs: 1.
The JVM detects the CPU count as 1 even when more CPUs are available to the container.
Expected behavior
The CPU count reported in CloudWatch should match the CPU count of the instance used, e.g. 4 for ml.m4.xlarge.
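A minimal sketch of how to check what the JVM actually sees: `Runtime.getRuntime().availableProcessors()` is the standard JDK call that the container-support flags influence, and it is the value behind log lines like "Number of CPUs: 1".

```java
public class CpuCount {
    public static void main(String[] args) {
        // With -XX:+UseContainerSupport (the default since JDK 8u191)
        // this reflects the container's CPU limit;
        // -XX:ActiveProcessorCount=N overrides the detected value.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Number of CPUs: " + cpus);
    }
}
```

Running this inside the container and comparing the output against `nproc` on the host shows whether the JVM is mis-detecting the CPU count.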
System information
Container: pytorch-inference:1.7-cpu-py3 and pytorch-inference:1.7-gpu-py3
SageMaker Inference Toolkit v1.1.2
Additional context
This prevents SageMaker Inference from using all the CPUs available on the instance.