FR: DNS cache sync between multiple blocky instances #344
Hey, which cache do you mean: the cache with black/whitelists or the cache with DNS responses (positive/negative)?
Hi,
I think this would be a nice feature, but it should also run without redis. I don't like the current cache implementation (which I forked and patched: https://github.com/0xERR0R/go-cache). Maybe it is possible to run something redis-compatible in memory for a single instance, and optionally with an external redis for multiple instances, so we only have to implement one API.
My idea was to keep the current cache and add the new redis cache as a separate (optional) resolver between cache and parallel_best. An alternative solution may be to let blocky broadcast cache insertions to other instances.
I think at runtime only one cache should be there, either external redis or internal. My idea was to use something like "embedded redis", maybe a kind of in-memory cache which is compatible with the redis API (I'm not sure something like that exists, but I hope so). In this case we could implement caching against the redis API, and the user can either configure an external redis or use the "internal" one.
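To make that concrete, here is a minimal Go sketch of what such an abstraction could look like: one small cache interface with an in-memory implementation and a redis-backed one behind it, chosen at startup. The interface, the type names and the go-redis client are assumptions for illustration, not blocky's actual code.

```go
package cache

import (
	"context"
	"sync"
	"time"

	"github.com/go-redis/redis/v8" // assumed client library, not blocky's actual dependency
)

// ResultCache is a hypothetical abstraction: blocky would program against
// this interface and pick the backend (internal or external) at startup.
type ResultCache interface {
	Put(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Get(ctx context.Context, key string) ([]byte, bool)
}

// memoryCache is the "internal" option: a plain in-process map with TTLs.
type memoryCache struct {
	mu      sync.RWMutex
	entries map[string]memEntry
}

type memEntry struct {
	value   []byte
	expires time.Time
}

func newMemoryCache() *memoryCache {
	return &memoryCache{entries: make(map[string]memEntry)}
}

func (m *memoryCache) Put(_ context.Context, key string, value []byte, ttl time.Duration) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.entries[key] = memEntry{value: value, expires: time.Now().Add(ttl)}
	return nil
}

func (m *memoryCache) Get(_ context.Context, key string) ([]byte, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	e, ok := m.entries[key]
	if !ok || time.Now().After(e.expires) {
		return nil, false
	}
	return e.value, true
}

// redisCache is the "external" option: the same interface backed by redis.
type redisCache struct {
	client *redis.Client
}

func (r *redisCache) Put(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	return r.client.Set(ctx, key, value, ttl).Err()
}

func (r *redisCache) Get(ctx context.Context, key string) ([]byte, bool) {
	val, err := r.client.Get(ctx, key).Bytes()
	if err != nil {
		return nil, false
	}
	return val, true
}
```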
This would have the opposite effect to my proposal. 😅 Speed in my home environment, where DNS resolution is done locally behind blocky:
Replacing the internal blocky cache would slow responses down, as network requests take ~3ms. Currently it takes a few hours to populate the blocky cache enough to get the ~7ms times.
wow, interesting infrastructure! Off-topic, just curious: are the 3 blocky instances running on different pieces of hardware for redundancy/load balancing? So each client has 3 DNS resolvers (blocky instances) configured? And why are you using unbound and not an external upstream resolver in blocky? I think that in your case using redis will not improve your speed. Redis can improve blocky's startup, but the redis cache must be maintained at runtime and this will bring some overhead.
Everything is dockerized in a swarm environment with 3 managers. Every manager runs a blocky container. All three blocky instances are distributed as DNS resolvers for the other hardware in the network. The whole setup should provide high resolution speed and as little downtime as possible. For comparison, Google (8.8.8.8) and Cloudflare (1.1.1.1) resolution speeds tend to be ~45ms.
@0xERR0R Every cache insertion would be broadcast in parallel with the insertion itself. Pro:
Con:
What do you mean by "broadcast channel"? Do you want to "connect" blocky instances to each other?
Currently I'm considering a UDP socket, as it's designed for just that. For example: the instances themselves wouldn't know of each other or how many other instances are listening.
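For illustration, a rough Go sketch of what that propagation could look like. It uses UDP multicast rather than a true broadcast (a plain broadcast send would additionally need the SO_BROADCAST socket option); the group address, port, and message format are made up for the example and are not part of any blocky implementation.

```go
package udpsync

import (
	"encoding/json"
	"net"
)

// cacheInsertion is a hypothetical wire format for a propagated cache entry.
type cacheInsertion struct {
	Key     string `json:"key"`
	Value   []byte `json:"value"`
	TTLSecs int64  `json:"ttl"`
}

// sendInsertion fires one insertion at the multicast group; the sender has no
// idea how many other blocky instances (if any) are listening.
func sendInsertion(group string, ins cacheInsertion) error {
	conn, err := net.Dial("udp", group) // e.g. "239.0.0.53:5354" (made-up group/port)
	if err != nil {
		return err
	}
	defer conn.Close()

	payload, err := json.Marshal(ins)
	if err != nil {
		return err
	}
	_, err = conn.Write(payload)
	return err
}

// receiveInsertions listens on the same group and hands every received
// insertion to a callback that would write it into the local cache.
func receiveInsertions(group string, apply func(cacheInsertion)) error {
	addr, err := net.ResolveUDPAddr("udp", group)
	if err != nil {
		return err
	}
	conn, err := net.ListenMulticastUDP("udp", nil, addr)
	if err != nil {
		return err
	}
	defer conn.Close()

	buf := make([]byte, 64*1024)
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			return err
		}
		var ins cacheInsertion
		if err := json.Unmarshal(buf[:n], &ins); err != nil {
			continue // ignore malformed packets
		}
		apply(ins)
	}
}
```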
ok, understood. Why not redis with Pub/Sub? It could be managed by redis: all subscribed blocky instances will get the cache insertion propagated. If one instance restarts, it can get the cache from redis. Your approach needs its own protocol and it relies on the network infrastructure (all blocky instances being in the same subnet).
The sync could surely be done with redis. I really wouldn't like running another service in the blocky container, or losing the cache inside it. It seems like I'm a little stuck there.
Ok, these are my thoughts, this should be verified (maybe it doesn't work this way):

Blocky 1 -------- redis --------- blocky 2 (or even more)

Blocky 1 inserts a key in the cache and propagates it (async) to redis (publish over channel "cache"). On instance startup, blocky loads the cache from redis. Redis is optional; if not configured, each blocky instance is independent.

We can use redis pub/sub also for other things, for example disabling of blocking: blocky 1 receives the REST request, disables blocking and propagates the change to redis. All other blocky instances disable blocking too.
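Roughly what the publish/subscribe side of this could look like with a Go redis client (go-redis is assumed here; the channel name "cache", the message format and the helper names are illustrative, not the implementation that ended up in blocky):

```go
package redissync

import (
	"context"
	"encoding/json"

	"github.com/go-redis/redis/v8" // assumed client library
)

// cacheMessage is a hypothetical payload for the "cache" channel.
type cacheMessage struct {
	Key    string `json:"key"`
	Value  []byte `json:"value"`
	Sender string `json:"sender"` // instance ID, so a blocky can skip its own messages
}

// publishInsertion would be called asynchronously after a local cache insert.
func publishInsertion(ctx context.Context, rdb *redis.Client, msg cacheMessage) error {
	payload, err := json.Marshal(msg)
	if err != nil {
		return err
	}
	return rdb.Publish(ctx, "cache", payload).Err()
}

// subscribeInsertions applies insertions published by the other instances.
func subscribeInsertions(ctx context.Context, rdb *redis.Client, self string, apply func(cacheMessage)) {
	sub := rdb.Subscribe(ctx, "cache")
	defer sub.Close()

	for m := range sub.Channel() {
		var msg cacheMessage
		if err := json.Unmarshal([]byte(m.Payload), &msg); err != nil {
			continue // ignore malformed messages
		}
		if msg.Sender == self {
			continue // don't re-apply our own insertions
		}
		apply(msg)
	}
}
```

Following the idea above, a restarted instance could additionally read the persisted entries back from redis before subscribing, so it starts with a warm cache instead of an empty one.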
Ah ok, I get it.
Edit 1: I looked further into it and changed my point of view. I will try implementing the pub/sub approach.
Edit 2: Started development in repository 344.
If blocky is deployed on multiple instances for concurrency and/or failover, the caches will most likely differ.
This will cause spikes in response time during instance switches.
I'd like to propose an external second level cache for blocky.
Redis would be a logical solution, as it's already used in similar scenarios (unbound cache db).
If activated, this feature would include: