Memory leak using python3.8 and boto3 client #3088
This is the stack trace from tracemalloc:
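(The trace itself did not survive the copy here. For context, a hedged sketch of how such a trace is typically gathered, not the reporter's exact code:)

```python
# Illustrative only: a trace like this is usually produced by comparing
# tracemalloc snapshots taken before and after the suspect calls.
import tracemalloc

tracemalloc.start(25)                    # keep up to 25 frames per allocation
baseline = tracemalloc.take_snapshot()

# ... run the boto3 calls under suspicion for a while ...

current = tracemalloc.take_snapshot()
for stat in current.compare_to(baseline, "lineno")[:10]:
    print(stat)
```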
Hi @saturnt, thanks for your post, I'm looking into this. I'm not sure why you would experience this when switching from Python 2 to 3. I couldn't repro with Python 3.8 and your stated versions of boto3 and botocore.
Hi @kdaily, I also updated to the latest version (1.20.20), tried again, and see the same issue. Let me change receive_message to some other SQS call and see if it still happens. Also, BTW, how long did you let it run? You would need to wait at least 30-60 minutes to see the issue.
Thanks for checking! I'll let it run and see. I'm using
OK, so I replaced client.receive_message with client.list_queues() and still see the issue. Below is the script from my test.
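(The test script was also lost in the copy; a minimal reconstruction of what the list_queues variation presumably looked like, where the region and sleep interval are assumptions:)

```python
# Hypothetical reconstruction; not the reporter's exact script.
import time

import boto3

client = boto3.client("sqs", region_name="us-east-1")  # region is an assumption

while True:
    client.list_queues()  # swapped in for receive_message in this variation
    time.sleep(1)         # pacing is an assumption
```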
In my case it seems that the anon memory is increasing; below is the diff.
This happened after changing the client API call as well, so maybe we can't refute what tracemalloc says?
Hi @saturnt, thanks for the updates. Running overnight on an AL2 EC2 instance, I'm static at 256224K of process memory usage. I'll update my test case with your new example. I'm not too familiar with the output you posted. I'm wondering if you could tell me a couple more things:
If possible I'd like to reproduce this myself instead of giving you a barrage of things to try! However, here are the things I would do next (in isolation):
Thanks for working with me on this.
Hi @kdaily, thanks for looking into this. Below is the info I was able to gather from our devops team.
Below are the rest of the packages installed on the system using pip (note we already upgraded boto3 and botocore to the latest):
We are running inside an EC2 instance; it's not a container. In the last few hours I did some extra tests as well: I tried the client.get_queue_url suggestion and the problem still persists. Tejas
Thanks! Knowing that it isn't limited to the SQS client is useful information. Are you using one of the CentOS AMIs listed here? That would help me replicate better. https://wiki.centos.org/Cloud/AWS#Official_and_current_CentOS_Public_Images
Let me ask our devops team and get back to you. I created a new instance with Ubuntu and it does not seem to leak with the same program. I'll try different variations now and report back.
Since we haven't heard back for a few months I'm going to close this. If you're still experiencing the issue please let us know. Thanks!
Describe the bug
We upgraded from Python 2.7 to Python 3.8. We are using boto3-1.18.31 and botocore-1.23.20 (per the installed dist-info directories).
uname -a output is: Linux 3.10.0-1160.45.1.el7.x86_64 #1 SMP Wed Oct 13 17:20:51 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux.
Steps to reproduce
Below is the snippet we are using:
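(The snippet itself was lost in the copy of this issue; based on the rest of the report it was a long-running receive_message polling loop, roughly like the hedged sketch below. The queue URL, message count, and wait time are placeholders, not the reporter's values.)

```python
# Hypothetical reconstruction of the polling loop described in this report.
import boto3

client = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

while True:
    client.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,  # assumed
        WaitTimeSeconds=20,     # long polling; assumed
    )
```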
On examining the process memory usage, it continuously leaks. We are using the following command to check (note that the leak is slow and ultimately the Linux kernel OOM-kills the process):
The current memory usage as shown above is 5.1; it started from 0.1, reached 5.1 over the course of a few hours, and would ultimately go to 100.
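(The exact command used to check did not come through in this copy. As one alternative way to watch the same growth, not the reporter's method, the process can log its own usage with psutil:)

```python
# Alternative illustration: periodically log this process's memory usage.
import time

import psutil

proc = psutil.Process()  # current process
while True:
    rss_mb = proc.memory_info().rss / (1024 * 1024)
    print(f"rss={rss_mb:.1f} MiB mem%={proc.memory_percent():.1f}")
    time.sleep(60)
```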
Expected behavior
Confirmed with Python 2.7 on the same host that the issue does not happen; "receive_message" is the culprit here.
Debug logs
Let me know if more info is needed.