Flag changes are not reflected if ldclient is initialized in uWSGI pre-fork worker #174
Comments
Hi. Besides the underlying issue, it sounds like there may be a documentation gap: the fact that the SDK won't work if you initialize it prior to forking is a long-standing known issue, and I thought we had documented it, so I'll look into that. But back to the main problem: it's a more fundamental problem than you might think, because it's not just about the threads, it's about the basic functionality of how streaming updates work. The SDK opens a long-lived streaming HTTP connection to receive updates, and there is no way for that one connection to serve multiple forked processes, regardless of which thread might be reading it.
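As an aside, here is a minimal sketch of that point (not SDK code, Unix-only, hypothetical names): after fork() the processes share a single descriptor, and each chunk sent on the stream is delivered to exactly one of them, so one pre-fork connection cannot feed every worker.

```python
import os
import socket

# Simulate a "streaming connection" opened before forking two workers.
server_end, worker_end = socket.socketpair()

for _ in range(2):
    if os.fork() == 0:
        # "Worker" process: try to read an update from the inherited connection.
        worker_end.settimeout(2)
        try:
            print(f"worker {os.getpid()} received: {worker_end.recv(64)!r}")
        except socket.timeout:
            print(f"worker {os.getpid()} received nothing")
        os._exit(0)

server_end.sendall(b"one flag update")  # only one of the two workers will see this
for _ in range(2):
    os.wait()
```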
This probably isn't a surprise, but I just tested this on gunicorn with the same result.
Agreed, and the New Relic Python client doesn't handle this either (it has a similar background collector thread setup). New Relic's Python agent does address this in its docs, though:
Maybe the change to the LaunchDarkly client docs should be similar to that section?
I'm surfacing this issue again: do you have any recommendations on how to fix it? Where should I initialize the SDK for flag refreshing to work properly, i.e. when should I make the initialization call? Should I consider disabling streaming and relying on polling instead?
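For what it's worth, one pattern that should avoid the problem is to defer SDK start-up to uWSGI's post-fork hook so each worker creates its own client. This is a sketch, not official guidance; it assumes the app runs under uWSGI (`uwsgidecorators` is only importable there) and uses a placeholder SDK key.

```python
import ldclient
from ldclient.config import Config
from uwsgidecorators import postfork  # only available when running under uWSGI


@postfork
def init_launchdarkly():
    # Runs in every worker process right after uWSGI forks it, so each
    # worker gets its own client, its own threads, and its own streaming
    # connection. "YOUR_SDK_KEY" is a placeholder.
    ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY"))
```

With that in place, `ldclient.get()` inside request handlers uses a client created after the fork. Switching to polling does not by itself help: a polling thread created before the fork is lost in the workers just the same.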
@eli-darkly, I work with @pb-dod at Included Health. One of the things we observed in this process is that the worker process would stop working if we tried to initialize a new client inside it after the fork. My best theory is that any thread pool attached to the inherited client can't be stopped after the fork, so the worker hangs.
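For reference, under gunicorn the equivalent place for per-worker initialization is the `post_fork` server hook. A hypothetical `gunicorn.conf.py` sketch (placeholder SDK key) that sidesteps the problem by never touching `ldclient` in the master at all, rather than trying to re-initialize an inherited client:

```python
# gunicorn.conf.py (sketch): build the client only inside each worker,
# after gunicorn has forked it, so its threads and streaming connection
# belong to that worker.
import ldclient
from ldclient.config import Config


def post_fork(server, worker):
    ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY"))  # placeholder key
```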
@mblayman I can't say off the top of my head whether it is feasible to make that work. In the nearer term, we will focus on making the documentation clearer. It looks like we did some work to address that in Ruby, but not in Python.
We have updated the Python documentation to more clearly detail the issues with pre-forking.
Describe the bug
Flag changes will not be reflected in forked web workers if the LaunchDarkly client is initialized before the process is forked by uWSGI.
I think this happens because the LaunchDarkly client spawns a thread to monitor flag updates, and after the fork that thread no longer exists in the worker processes.
This can happen if client initialization is triggered when a module is imported at app start-up (rather than on first use inside an endpoint).
The workaround is to avoid initializing the LaunchDarkly client before the fork occurs (a sketch follows below), but this can be a frustrating issue to debug if you're not aware of what is happening.
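A hypothetical module sketch of that workaround (placeholder names and SDK key): nothing runs at import time, so no SDK threads exist in the master before uWSGI forks, and the first request served by a worker configures that worker's own client.

```python
# flags.py (hypothetical): lazy, per-process initialization instead of
# configuring the client at import time in the pre-fork master.
import threading

import ldclient
from ldclient.config import Config

_lock = threading.Lock()
_configured = False


def get_ld_client():
    """Configure the shared client on first use, inside the worker process."""
    global _configured
    with _lock:
        if not _configured:
            ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY"))  # placeholder key
            _configured = True
    return ldclient.get()
```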
To reproduce
See this repo with a minimal example and instructions to reproduce the issue: https://github.com/pb-dod/launch_darkly_python_prefork#issue-reproduction-steps
Expected behavior
Maybe it's possible to detect forking and require the flag-monitoring thread to be re-initialized?
Or maybe it should throw an exception if you try to initialize the LaunchDarkly client inside a pre-fork uWSGI worker? For example: https://gist.github.com/pb-dod/231b1b5dac9ea296918346a9288a598a
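A rough sketch of the fork-detection idea (a hypothetical helper, not SDK code or the linked gist): record the pid that configured the client and refuse to hand it out from a different pid.

```python
import os

import ldclient
from ldclient.config import Config

_init_pid = None


def get_ld_client():
    """Raise if a client configured in one process is used from a forked child."""
    global _init_pid
    if _init_pid is None:
        ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY"))  # placeholder key
        _init_pid = os.getpid()
    elif _init_pid != os.getpid():
        raise RuntimeError(
            f"LaunchDarkly client was configured in pid {_init_pid} but used in "
            f"pid {os.getpid()}; initialize it after forking instead."
        )
    return ldclient.get()
```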
A warning might need to be added to the docs too.
Logs
n/a
SDK version
See example
Language version, developer tools
See example
OS/platform
See example
Additional context
n/a