
Thread safety of Makara::Context #83

Closed
sax opened this issue May 15, 2015 · 5 comments

Comments

@sax (Contributor)

sax commented May 15, 2015

Whenever Makara::Context.get_current is used, it sets an instance variable on the Makara::Context class, which is shared between all threads. We think a side effect is that when sticky: true is set, any thread that sticks to master causes all threads to stick to master.

This will cause us problems in Sidekiq.

I was thinking of switching out all the instance variables in that class with thread local variables using Thread.current.thread_variable_set and Thread.current.thread_variable_get. Do you see any issues with this?
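A minimal sketch of that idea, assuming the context class works roughly as described in this issue (the get_current/set_current/generate method names mirror the discussion here, not necessarily Makara's exact internals):

```ruby
require 'securerandom'

module Makara
  class Context
    class << self
      # Read the current context for THIS thread only; lazily generate one
      # if the thread has none yet.
      def get_current
        Thread.current.thread_variable_get(:makara_context_current) ||
          set_current(generate)
      end

      # Store the context in a thread-local variable instead of a
      # class-level instance variable shared across all threads.
      def set_current(context)
        Thread.current.thread_variable_set(:makara_context_current, context)
        context
      end

      def generate
        SecureRandom.hex(16)
      end
    end
  end
end
```

With this, each Sidekiq worker thread would get its own context, so one thread sticking to master would no longer drag the others along with it.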

@sax (Contributor, Author)

sax commented May 15, 2015

/cc @mattcamuto

@mnelson (Contributor)

mnelson commented May 16, 2015

That all sounds good.

Side note: At TaskRabbit we only communicate with master in our worker environment. This is to ensure that replication lag doesn't cause a failed fetch, count, etc. in our workers. We've thrown around the idea of adding the Makara context to the queuing of all our workers, then using it to choose the correct initial connection when a worker is picked up. Have you worked through that situation?

@sax (Contributor, Author)

sax commented May 16, 2015

I think we do too much background work to have all reads hit master. We do have a dedicated read replica for Sidekiq, so that its disk cache is optimized for those queries. Some of the jobs are more critical to have up-to-date data, so we just stick to master at the beginning of those. In our code, the worker declares the need for the master connection.


@mattcamuto

Another option, in addition to forcing master inline in your code, would be to add a sidekiq_options entry such as:

sidekiq_options force_master_db: true

Then you could write a little middleware for Sidekiq that forces master based on this option.

We also have a Sidekiq middleware that always clears the context, so other jobs do not accidentally inherit the forced master connection.
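A rough sketch of what such a middleware could look like. The Makara::Context helpers below (stick_to_master!, stuck_to_master?, clear!) are stand-ins for illustration only, not Makara's real API:

```ruby
# Stand-in context helpers, thread-local so jobs on different threads
# don't affect each other (not Makara's actual API).
module Makara
  module Context
    def self.stick_to_master!
      Thread.current.thread_variable_set(:makara_master, true)
    end

    def self.stuck_to_master?
      !!Thread.current.thread_variable_get(:makara_master)
    end

    def self.clear!
      Thread.current.thread_variable_set(:makara_master, nil)
    end
  end
end

# Sidekiq server middleware interface: call(worker, job_hash, queue).
# Jobs queued with `sidekiq_options force_master_db: true` carry that
# option in their job hash.
class ForceMasterMiddleware
  def call(_worker, job, _queue)
    Makara::Context.stick_to_master! if job['force_master_db']
    yield
  ensure
    # Always clear afterwards so the next job on this thread
    # doesn't inherit the stickiness by accident.
    Makara::Context.clear!
  end
end
```

The ensure block is the important part: clearing unconditionally is what keeps one job's force_master_db from leaking into unrelated jobs that reuse the same worker thread.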

@mattcamuto

@mnelson We have created a pull request for this: #84. We are excited to try this in our Sidekiq workers soon. :)

@mnelson mnelson closed this as completed Jun 3, 2015