multiprocessing module issues #206
It is not easy to give you an answer, as both multiprocessing and uWSGI make heavy use of fork(). I can only suggest adding --lazy-apps to avoid forking too early in uWSGI (which can make a mess if it is not taken into account).
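For reference, a minimal sketch of what that option looks like in an ini config (the module name and worker count are placeholders, not from the thread):

```ini
[uwsgi]
; load the application after fork(), once per worker,
; instead of once in the master before forking
lazy-apps = true
module = myapp:application
processes = 4
```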
That doesn't work for me. I need a thread inside the uWSGI master process that looks for external data updates and updates local variables for each worker. So I need one process and an easy way to share data between this process and the workers (big list objects, about 5 MB each), so lazy mode will not work for this. All I see now is to start, in a postfork hook, a thread in each worker that checks uwsgi.cache from time to time for updates and, if data was updated, updates the local variables inside the worker. I would also need a separate process attached with --attach-daemon that checks for remote updates and writes them to the cache. But maybe there is an easier way to exchange data between threads in the master process and the workers? P.S.: also, does --attach-daemon restart the daemon if it dies?
Ah ok, so you were using the multiprocessing model but with a thread pool. Yes, in such a case postfork is a good approach (remember to add --enable-threads). The easiest and fastest way to share data is the uWSGI caching framework (I know the name is misleading, but it is basically a thread-safe shared dictionary).
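A minimal, runnable sketch of that polling pattern. A lock-protected dict stands in for the real uwsgi.cache API (which is already thread-safe), and the key names, version counter, and poll interval are illustrative, not from the thread:

```python
import threading
import time

class SharedCache:
    """Thread-safe dict, standing in for the uWSGI cache API."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def set(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

cache = SharedCache()
local_copy = {}              # worker-local variables
stop = threading.Event()

def refresher(poll_interval=0.05):
    """Started from a postfork hook in each worker: polls the shared
    cache and refreshes the worker-local copy when the version changes."""
    seen = None
    while not stop.is_set():
        version = cache.get("version")
        if version != seen:
            local_copy["big_list"] = cache.get("big_list")
            seen = version
        time.sleep(poll_interval)

# Simulate the updater daemon writing new data, then a worker picking it up.
t = threading.Thread(target=refresher, daemon=True)
t.start()
cache.set("big_list", list(range(5)))   # write payload first...
cache.set("version", 1)                 # ...then bump the version
time.sleep(0.2)
stop.set()
t.join()
print(local_copy["big_list"])           # → [0, 1, 2, 3, 4]
```

Writing the payload before bumping the version is what keeps the reader from ever seeing a new version with stale data.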
OK, but I need access to uwsgi.cache from some external script. Or maybe I can start a thread in the master process? Is that a good idea?
If those external scripts are in Python you can attach them as mules; that way they can access the uWSGI API (and so the cache). By the way, processes attached with the --attach-daemon-* functions are automatically respawned if they die.
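A sketch of both options in an ini config; the script name is a placeholder, not from the thread:

```ini
[uwsgi]
enable-threads = true
; run the update-checking script as a mule, so it can call
; the uWSGI API (uwsgi.cache_get/cache_set) directly
mule = update_checker.py
; or run it as an external process; attached daemons
; are respawned automatically if they die
attach-daemon = python update_checker.py
```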
Thanks. It seems the cache will not work either; it looks like I need some other way to exchange data. Does uWSGI have something for such big volumes of data, or should I use an external tool?
You can tune it as you want (check the docs), but use the cache from 1.9, which is a lot better (and tunable); from your exception it looks like you are using 1.4.
And "cache from 1.9" means using
thanks |
I have some weird crashes when trying to read/write manager (proxy) objects
between the processes and the parent process (http://docs.python.org/2/library/multiprocessing.html#proxy-objects),
and a lot of such errors coming non-stop.
It happens only if uWSGI was reloaded; if it was restarted, it
works pretty well.
Do you have any idea?