@apoydence I was thinking of doing something similar to what the loggregator agent does:

1. Accept a DNS name in config.
2. At runtime, query DNS to get the A or AAAA records for that name.
3. Use those IP addresses.
4. Periodically re-query DNS to refresh the list of IP addresses.
This would be compatible with a LOT of service discovery systems, including kube-dns and bosh-dns.
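A minimal sketch of that refresh loop in Go, assuming a resolvable hostname (`log-cache.service.internal` and `refreshAddrs` are illustrative names, not anything log-cache ships today):

```go
package main

import (
	"context"
	"log"
	"net"
	"time"
)

// refreshAddrs resolves host to its A/AAAA records, hands the current
// set of IPs to update, and repeats on every tick until ctx is done.
func refreshAddrs(ctx context.Context, host string, interval time.Duration, update func([]net.IP)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// "ip" asks for both A (IPv4) and AAAA (IPv6) records.
		ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
		if err != nil {
			log.Printf("DNS lookup for %s failed: %s", host, err)
		} else {
			update(ips)
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}

func main() {
	// The hostname would come from config, e.g. a headless kube-dns
	// service or a bosh-dns alias.
	refreshAddrs(context.Background(), "log-cache.service.internal", 30*time.Second,
		func(ips []net.IP) {
			log.Printf("current log-cache nodes: %v", ips)
		})
}
```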
Alternatively, you can query for SRV records, which allow the port, weight, transport, and service name to be discovered. kube-dns supports this, but I am not sure about bosh-dns.
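The SRV variant is also covered by Go's standard library; a quick sketch, with made-up service and domain names:

```go
package main

import (
	"context"
	"log"
	"net"
)

func main() {
	// Under the hood this queries _log-cache._tcp.service.internal.
	cname, srvs, err := net.DefaultResolver.LookupSRV(
		context.Background(), "log-cache", "tcp", "service.internal")
	if err != nil {
		log.Fatalf("SRV lookup failed: %s", err)
	}
	log.Printf("canonical name: %s", cname)
	for _, srv := range srvs {
		// Target/Port identify the node; Priority and Weight could
		// inform how the scheduler spreads load across nodes.
		log.Printf("%s:%d priority=%d weight=%d",
			srv.Target, srv.Port, srv.Priority, srv.Weight)
	}
}
```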
I would want to run more experiments with multiple schedulers after this is completed to see if there is any thrashing. I suspect each scheduler would be looking at a different subset of log-cache nodes and would be instructing them differently.
That is a possibility. We could use an algorithm that is resistant to sudden changes, so nodes that drop out and come right back are not immediately removed. Something like a TTL before a node stops being scheduled to. This would help with thrashing, allow nodes that are truly gone to expire, and allow new nodes to come online.
This would allow log-cache instances to come and go and the cluster to adapt dynamically to scaling events, outages, etc.
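A rough sketch of that TTL idea (all names here are hypothetical): each DNS refresh extends the deadlines of the nodes it saw, and a node only drops out of the scheduled set once its deadline lapses, so a brief flap doesn't reshuffle anything:

```go
package main

import (
	"sync"
	"time"
)

type membership struct {
	mu       sync.Mutex
	ttl      time.Duration
	deadline map[string]time.Time // node address -> time it may be dropped
}

func newMembership(ttl time.Duration) *membership {
	return &membership{ttl: ttl, deadline: make(map[string]time.Time)}
}

// Observe records the nodes returned by the latest DNS refresh, pushing
// their deadlines out by the TTL. Nodes absent from addrs keep whatever
// deadline they already had.
func (m *membership) Observe(addrs []string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	expiry := time.Now().Add(m.ttl)
	for _, a := range addrs {
		m.deadline[a] = expiry
	}
}

// Nodes returns the addresses the scheduler should still assign ranges
// to, dropping only those whose TTL has lapsed.
func (m *membership) Nodes() []string {
	m.mu.Lock()
	defer m.mu.Unlock()
	now := time.Now()
	var nodes []string
	for a, d := range m.deadline {
		if now.After(d) {
			delete(m.deadline, a) // node is truly gone; let it expire
			continue
		}
		nodes = append(nodes, a)
	}
	return nodes
}
```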