Multi Cluster Example / Pattern #2755

I'm starting to write a controller that will need to span clusters, and I'm not sure whether what was documented four years ago is still the way to move forward: https://github.com/kubernetes-sigs/controller-runtime/blob/main/designs/move-cluster-specific-code-out-of-manager.md

I've read through #745.

Is there an example / pattern people should use when writing a controller that spans more than one cluster?

Comments
This one could be interesting for you: #2746 (although not implemented yet)
Yes, this pattern still applies and is pretty simple to use; here is an example for a 2-cluster setup. You will only need to use both clients correctly and be mindful of which one you're working with. However, at some point I've seen problems with integration tests using envtest while deploying 2 clusters in parallel (slack message). That still needs to be tested on later versions; maybe it is no longer an issue.
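For illustration, a minimal sketch of that 2-cluster pattern, assuming controller-runtime v0.17+ and a kubeconfig context named "remote" (both placeholder assumptions):

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// Manager (and therefore cache + client) for the local cluster.
	mgr, err := ctrl.NewManager(config.GetConfigOrDie(), manager.Options{})
	if err != nil {
		panic(err)
	}

	// The second cluster gets its own cache and client.
	remoteCfg, err := config.GetConfigWithContext("remote") // placeholder context name
	if err != nil {
		panic(err)
	}
	remote, err := cluster.New(remoteCfg)
	if err != nil {
		panic(err)
	}
	// Adding the cluster to the manager starts and stops its cache with the manager.
	if err := mgr.Add(remote); err != nil {
		panic(err)
	}

	// mgr.GetClient() talks to the local cluster, remote.GetClient() to the
	// other one; reconcilers have to be explicit about which client they use.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```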
I am trying to implement the following: but this old code doesn't match up anymore... I'm not sure how I'm supposed to get the cache of other clusters so I can mirror CRDs created in other clusters, etc. I imagine something changed that I can't find.
Looks like this behavior changed in d6a053f. I looked at https://github.com/k8ssandra/k8ssandra-operator/blob/main/controllers/control/k8ssandratask_controller.go#L373, but it's using an older version of controller-runtime. This PR is what removed the functionality that k8ssandra is using: https://github.com/kubernetes-sigs/controller-runtime/pull/2120/files#diff-54e8061fb2925948c62f36b19e08785ec1fb90b349cfe48c73239f4cca8c6ef5L71

Reading through it, it's not obvious to me how to do it (possible skill issue :P). I don't see the correct way to configure watches in other clusters; a pointer / example would be much appreciated.

I guess at this point I'll just state the problem I'm trying to solve: I'd like to keep a set of CRDs synced between many clusters. How should one cluster update itself when a CRD is created or updated in another cluster? I'd like to have cluster A watch for updates in B and C, which should hopefully give me fault tolerance when querying the value of the CRD. Does this sound like a sane approach? Any gotchas with doing this in K8s?
After taking another look, it seems you can use `WatchesRawSource` to create a source per remote cluster and register it with the manager. I was able to apply a resource in the other cluster and see the reconcile loop get invoked.
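For reference, a sketch of what that can look like, assuming controller-runtime v0.18-era APIs (the `source.Kind` signature has changed between versions, and older releases pass the handler to `WatchesRawSource` instead). Here `remote` is a `cluster.Cluster` already added to the manager, and ConfigMap stands in for whatever CRD is being mirrored:

```go
import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

func setupController(mgr ctrl.Manager, remote cluster.Cluster, r reconcile.Reconciler) error {
	// Events from the remote cluster's cache enqueue reconcile.Requests on
	// this (local) controller's workqueue.
	remoteSrc := source.Kind(
		remote.GetCache(),
		&corev1.ConfigMap{},
		&handler.TypedEnqueueRequestForObject[*corev1.ConfigMap]{},
	)

	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).    // watch in the local cluster
		WatchesRawSource(remoteSrc). // watch in the remote cluster
		Complete(r)
}
```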
While going through the design, testing, and implementation, I've come across the following way to solve leader election for multi-cluster: if we are running a controller that is going to act upon multiple clusters, we can elect a leader in one of the clusters to take actions. The default settings in the manager package of controller-runtime don't support this, but they do support providing our own logic for it via `LeaderElectionResourceLockInterface resourcelock.Interface`. That interface determines which controller instance will be the leader.
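A sketch of what that wiring might look like: a `LeaseLock` from client-go's resourcelock package, hosted in whichever cluster is designated to run the election, handed to the manager via `LeaderElectionResourceLockInterface`. Names like `electionCfg` and the lock name are placeholders:

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func newManagerWithSharedElection(localCfg, electionCfg *rest.Config, identity string) (manager.Manager, error) {
	// The Lease lives in the election cluster, which may differ from the
	// cluster(s) this manager reconciles against.
	electionClient, err := kubernetes.NewForConfig(electionCfg)
	if err != nil {
		return nil, err
	}
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "my-multicluster-controller", // placeholder lock name
		},
		Client:     electionClient.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
	}
	return ctrl.NewManager(localCfg, manager.Options{
		LeaderElection:                      true,
		LeaderElectionResourceLockInterface: lock,
	})
}
```

With one such lock shared by every replica in every cluster, only a single instance holds leadership at a time, regardless of which cluster it runs in.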
Probably too much for what you need, but in Cluster API we create additional caches/clients per cluster that we want to communicate with. Maybe there's something useful for you in this code: https://github.com/kubernetes-sigs/cluster-api/blob/main/controllers/remote/cluster_cache_tracker.go
@sbueringer Thanks, there are some nice things in there. Do you by chance know if there is a simple way to keep the manager from dying when one of the cluster caches times out? I saw you made an accessor which brings them in and out; is that required, or is there anything in the manager that helps with this? My expectation is that the cluster will come up eventually; removing it would require a config change in my scenario. I'll be experimenting with this more. I was testing this by firewalling off the k8s API, and it still resulted in failure. It looks like when the timeout is set to 0 we default to 2 minutes; I need to investigate.
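If the 2-minute default in question is `CacheSyncTimeout` in `controller.Options` (an assumption, though that field does fall back to 2 minutes when left at zero), it can be raised per controller. A sketch; note this only delays the failure rather than tolerating a permanently unreachable cluster:

```go
import (
	"time"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func setupWithLongerSync(mgr ctrl.Manager, r reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		WithOptions(controller.Options{
			// Defaults to 2 minutes when left at zero.
			CacheSyncTimeout: 10 * time.Minute, // placeholder value
		}).
		Complete(r)
}
```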
No, I don't know. We create a separate cache and client per cluster we communicate with. We don't use the "Cluster" struct.
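For reference, a sketch of that per-cluster cache + client setup, assuming controller-runtime v0.16+ (where `client.Options.Cache` routes reads through a cache); the function and variable names are made up for illustration:

```go
import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func newRemoteAccessors(ctx context.Context, cfg *rest.Config, scheme *runtime.Scheme) (cache.Cache, client.Client, error) {
	// One cache per remote cluster, started and synced independently of the manager.
	remoteCache, err := cache.New(cfg, cache.Options{Scheme: scheme})
	if err != nil {
		return nil, nil, err
	}
	go func() {
		_ = remoteCache.Start(ctx) // Start blocks for the lifetime of ctx
	}()
	if !remoteCache.WaitForCacheSync(ctx) {
		return nil, nil, fmt.Errorf("cache for remote cluster did not sync")
	}

	// Reads come from the cache; writes go straight to the remote API server.
	remoteClient, err := client.New(cfg, client.Options{
		Scheme: scheme,
		Cache:  &client.CacheOptions{Reader: remoteCache},
	})
	if err != nil {
		return nil, nil, err
	}
	return remoteCache, remoteClient, nil
}
```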
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to its staleness rules. Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".