Too many workers #43

Closed
mistalaba opened this issue Aug 5, 2015 · 8 comments

@mistalaba

Hi again (sorry for bothering you!)

I just pushed django-q to my external server, and saw that with the configuration set to 1 worker, django-q launches five processes. How can I reduce this, since it's a shared server with limited memory?
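For context, the worker count comes from the Q_CLUSTER dict in settings.py; the one-worker setup looks roughly like this (a sketch; the cluster name and Redis broker details are assumptions, since the actual config isn't shown):

# settings.py (sketch; name and broker details are assumptions)
Q_CLUSTER = {
    'name': 'mailer',
    'workers': 1,
    'redis': {'host': '127.0.0.1', 'port': 6379, 'db': 0},
}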

Thank you!

@mistalaba
Author

Sorry, I realized I wasn't too clear. So, I'm running python manage.py qcluster. Checking the processes, it has five processes running, each using around 90 MB. Right now I'm only using django-q for sending out mails in the background, and it doesn't matter if it's fast or not; I primarily need to reduce the memory footprint. I hope this makes sense!

Cheers!

@Koed00
Owner

Koed00 commented Aug 5, 2015

What you are seeing is not five workers, but 4 auxiliary processes + 1 worker.
90 MB each sounds a bit high; most of that should be shared memory, so the real total should be more like 100 MB.
Can you run `top -p` followed by the cluster pids separated by commas and paste the output here?

@mistalaba
Author

Sure!

Here's the result:

top - 19:26:04 up 299 days, 23:44, 7 users, load average: 5.47, 4.29, 4.25
Tasks: 5 total, 0 running, 5 sleeping, 0 stopped, 0 zombie
Cpu(s): 54.6%us, 11.3%sy, 0.0%ni, 33.4%id, 0.3%wa, 0.0%hi, 0.4%si, 0.0%st
Mem: 32782372k total, 32319312k used, 463060k free, 2303340k buffers
Swap: 33554416k total, 12926276k used, 20628140k free, 11125700k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
997335 fysiogra 20 0 379m 87m 2488 S 0.3 0.3 0:00.12 python
997286 fysiogra 20 0 377m 95m 10m S 0.0 0.3 0:01.54 python
997336 fysiogra 20 0 379m 86m 1016 S 0.0 0.3 0:00.00 python
997337 fysiogra 20 0 379m 86m 1016 S 0.0 0.3 0:00.00 python
997338 fysiogra 20 0 379m 86m 1220 S 0.0 0.3 0:00.01 python

Edit: These results are from a freshly restarted process

@Koed00
Owner

Koed00 commented Aug 5, 2015

Ok, this is why Linux memory management is so confusing. Even though it reports around 90 MB per process, these are all forked child processes that in fact share the same memory space through copy-on-write. The actual memory consumed will more likely be around 100 MB total. Check how much memory is freed up when you stop the cluster.

@Koed00
Owner

Koed00 commented Aug 5, 2015

Another thing you can do is install smem and run `smem -P qcluster`.
The PSS column shows the Proportional Set Size, which is a more realistic representation of the actual memory used. If you add those numbers up, it should be close to the amount of memory you see freed up when you stop the cluster.
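If smem isn't available on the shared host, roughly the same figure can be obtained by summing the Pss: lines in /proc/<pid>/smaps for each cluster process; a minimal sketch (Linux only, using the pids from the top output above):

# pss_total.py -- rough PSS total for a set of pids, read from /proc/<pid>/smaps
def pss_kb(pid):
    # every "Pss:" line in smaps is one mapping's proportional share, in kB
    total = 0
    with open('/proc/{0}/smaps'.format(pid)) as smaps:
        for line in smaps:
            if line.startswith('Pss:'):
                total += int(line.split()[1])
    return total

pids = [997286, 997335, 997336, 997337, 997338]  # cluster pids from the top output above
print('{0} MB total PSS'.format(sum(pss_kb(p) for p in pids) // 1024))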

@Koed00
Owner

Koed00 commented Aug 5, 2015

Some background info on what to expect:

When the cluster is started, it will use about the same memory as your Django project, since it is just a copy of it and all the processes (still) share the same memory. There is a little overhead from spawning the workers, usually about +5%.
When the workers start doing work, they add the memory they use for the jobs to the total, since that is not shared memory. So the more workers and jobs you have, the faster it goes up. Unfortunately the workers never release that extra memory, so you have to recycle workers after a set number of tasks.

It looks like you have quite a large Django project (around 85 MB). Realistically, with 2-4 workers and quite a lot of emails, your memory footprint should not exceed 150 MB over time, if you set the recycle parameter correctly.
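For instance, building on the sketch above, keeping the worker count low and recycling workers often would look roughly like this (the numbers are illustrative only, not a recommendation from this thread):

# settings.py (sketch; values are illustrative)
Q_CLUSTER = {
    'name': 'mailer',
    'workers': 2,       # keep the worker count low on a shared host
    'recycle': 50,      # restart each worker after 50 tasks to release its extra memory
    'redis': {'host': '127.0.0.1', 'port': 6379, 'db': 0},
}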

The highest memory usage I've seen during testing was about 260 MB for an 8-worker cluster with a recycle of 1000. That was after a 100,000-task complex math load test heating up my CPUs.

@mistalaba
Author

Hi again, sorry for the delay (I'm sitting on a desolate island in Thailand right now, so I'm in completely the wrong timezone :))
First of all, thank you so much for the excellent explanation! It makes so much more sense to me now.
I really want to use django-q, it's the most intuitive task queue I've used so far, but unfortunately Webfaction doesn't use the same technique to measure memory usage (see below), so the only thing I can think of is to try to minimize the memory footprint of the Django project, reduce the number of workers, and see if that gets me under their limit of 512 MB. And you're right, it's quite a big project, but I'm sure I can drop the usage a bit at least. Let's see how it goes!

Thank you very much! I look forward to following this project!

Your total allowed memory is 512MB and your current memory usage is 1405MB.

User - Memory - Elapsed Time - Pid - Command:
--------------------------------------------
user - 1MB - 6 days, 2:28:19 - 140611 - nginx: master process /home/user/nginx/sbin/nginx
user - 1MB - 6 days, 2:28:19 - 140614 - nginx: worker process
user - 1MB - 6 days, 2:28:19 - 140615 - nginx: worker process
user - 1MB - 6 days, 2:20:29 - 153025 - /home/user/bin/redis-server *:26550
user - 7MB - 49 days, 6:39:21 - 294611 - /usr/local/bin/python2.7 /home/user/bin/supervisord -c /home/user/etc/supervisord.conf
user - 1MB - 1:55:38 - 727134 - sshd: user@pts/7
user - 2MB - 1:55:38 - 727146 - -bash
user - 1MB - 0:17:37 - 888105 - sshd: user@pts/14
user - 2MB - 0:17:36 - 888115 - -bash
user - 13MB - 0:12:42 - 896243 - /usr/local/bin/python2.7 /home/user/bin/supervisorctl
user - 13MB - 0:09:37 - 901337 - /home/user/.virtualenvs/user/bin/python2.7 /home/user/.virtualenvs/user/bin/gunicorn wsgi:application -w 3 --max-requests 500 --timeout 600 --bind=127.0.0.1:20318 --pid /home/user/tmp/user.pid --log-level=info --log-file=/home/user/user_logs/gunicorn_user.log
user - 103MB - 0:09:37 - 901350 - /home/user/.virtualenvs/user/bin/python2.7 /home/user/.virtualenvs/user/bin/gunicorn wsgi:application -w 3 --max-requests 500 --timeout 600 --bind=127.0.0.1:20318 --pid /home/user/tmp/user.pid --log-level=info --log-file=/home/user/user_logs/gunicorn_user.log
user - 101MB - 0:09:37 - 901353 - /home/user/.virtualenvs/user/bin/python2.7 /home/user/.virtualenvs/user/bin/gunicorn wsgi:application -w 3 --max-requests 500 --timeout 600 --bind=127.0.0.1:20318 --pid /home/user/tmp/user.pid --log-level=info --log-file=/home/user/user_logs/gunicorn_user.log
user - 101MB - 0:09:37 - 901356 - /home/user/.virtualenvs/user/bin/python2.7 /home/user/.virtualenvs/user/bin/gunicorn wsgi:application -w 3 --max-requests 500 --timeout 600 --bind=127.0.0.1:20318 --pid /home/user/tmp/user.pid --log-level=info --log-file=/home/user/user_logs/gunicorn_user.log
user - 95MB - 0:09:11 - 902097 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 87MB - 0:09:09 - 902172 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 89MB - 0:09:09 - 902173 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 89MB - 0:09:09 - 902174 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 89MB - 0:09:09 - 902175 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 89MB - 0:09:09 - 902177 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 86MB - 0:09:09 - 902178 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 86MB - 0:09:09 - 902179 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 86MB - 0:09:09 - 902180 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 86MB - 0:09:09 - 902181 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 89MB - 0:09:09 - 902183 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster
user - 86MB - 0:09:09 - 902184 - /home/user/.virtualenvs/user/bin/python /home/user/projects/manage.py qcluster

@Koed00
Owner

Koed00 commented Aug 6, 2015

Maybe you should look into opening a support ticket with Webfaction. I host most of my projects on Heroku, which has the same 512 MB limit. My largest project is about 75 MB, and 4 workers don't use more than about 125 MB according to Heroku's own memory reporter. Even my 4-worker Gunicorn setup uses only 150 MB. That said, I do run Django-Q on a separate machine.
Good luck with your project and thanks for the valuable feedback.

msabatier pushed a commit to msabatier/django-q that referenced this issue Jan 7, 2023