Move the registration work into k8ssandra-client #39
Conversation
Force-pushed from 6076920 to eea7f97
"github.com/k8ssandra/k8ssandra-client/pkg/registration"
configapi "github.com/k8ssandra/k8ssandra-operator/apis/config/v1beta1"
"github.com/k8ssandra/k8ssandra-operator/pkg/result"
issue: Do not include pkg from k8ssandra-operator as we want to avoid circular dependencies. This project is intended to be used by the cass-operator / k8ssandra-operator / other operators / software.
See the comments for the first round of review.
…sure IsNotFound is checked prior to Create.
@burmanm I've implemented most of your requested changes. The main place I'm pushing back is on your desire to remove all panics. I don't think that makes sense for a CLI application, as a panic is the most straightforward way to shut things down when an error is non-recoverable. Most errors in this logic are non-recoverable; we don't really want to handle anything beyond ensuring the user gets good feedback (and a stack trace is ideally part of that). For consistency, I've done as you've asked in the test code and used require.NoError etc., but even there I'd probably rather just panic unless we can get something more out of the test by allowing it to continue. Let me know if I've missed anything else.
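To make the trade-off concrete, here is a minimal sketch of the split described above; the names (RunRegistration, loadSourceKubeconfig) are illustrative assumptions, not the PR's actual API. Fatal errors on the CLI path panic so the user gets a stack trace, while the equivalent failure in a test goes through require.NoError.

// Illustrative sketch only; function names are assumptions, not the PR's API.
package registration

import "fmt"

// loadSourceKubeconfig stands in for whatever loads the source kubeconfig.
func loadSourceKubeconfig(path string) ([]byte, error) {
	return nil, fmt.Errorf("kubeconfig %q not found", path)
}

// RunRegistration shows the "panic on non-recoverable errors" stance: without
// a source kubeconfig there is nothing useful to do, so fail loudly with a
// stack trace instead of threading the error upward.
func RunRegistration(kubeconfigPath string) {
	cfg, err := loadSourceKubeconfig(kubeconfigPath)
	if err != nil {
		panic(fmt.Errorf("loading source kubeconfig: %w", err))
	}
	_ = cfg // continue with registration...
}

In test code the same failure would instead be reported via require.NoError(t, err), which is the convention adopted in the PR's tests.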
Meeting notes:
Issue: it seems like we're losing the context names in the generated kubeconfigs:
  name: cluster
contexts:
- context:
    cluster: cluster
    user: cluster
  name: cluster
Everything gets turned into "cluster", and I think that's going to create conflicts with multiple dataplanes being registered. Internally, our APIs will list the contexts by their names, not by the clientconfig name.
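As a hedged illustration of the fix being asked for (the function below is a sketch, not the PR's actual code), the kubeconfig can be assembled with client-go's clientcmd API so that the cluster, user and context map keys all come from the source context name rather than the static string "cluster":

// Sketch only: buildKubeconfig and its parameters are assumptions made for
// illustration, not the registration command's actual implementation.
package registration

import clientcmdapi "k8s.io/client-go/tools/clientcmd/api"

// buildKubeconfig keys every map entry by srcContextName so that multiple
// registered dataplanes do not all collapse to the name "cluster".
func buildKubeconfig(srcContextName, server string, caData, token []byte) *clientcmdapi.Config {
	return &clientcmdapi.Config{
		Clusters: map[string]*clientcmdapi.Cluster{
			srcContextName: {Server: server, CertificateAuthorityData: caData},
		},
		AuthInfos: map[string]*clientcmdapi.AuthInfo{
			srcContextName: {Token: string(token)},
		},
		Contexts: map[string]*clientcmdapi.Context{
			srcContextName: {Cluster: srcContextName, AuthInfo: srcContextName},
		},
		CurrentContext: srcContextName,
	}
}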
Alex is seeing an error when trying to create a K8ssandraCluster using clientConfigs generated by this process:
I've reproduced it using this registration command. There is a clientConfig as below:
It references a valid secret in the same namespace.
Given the nature of the error, it seems likely that we're hitting this. However, when I set the current-context field it doesn't appear to resolve the issue, and I get the same error. Going back to Alex's idea, he believes this is caused by using the static context name "cluster" for the user, context and context name fields. I don't think this is the problem, since (AFAIK) the server name, user name and context name are simply arbitrary values that exist to bind users to particular servers within a context. Despite that, I've tried to emulate the way the original script does things here, carrying the original context name back into the new kubeconfig. We end up with something like the below:
This has both the default context set and uses the original source context name to refer to the server and user; however, it still generates the same error in k8ssandra-operator. As near as I can tell, this is exactly what the script would produce. For completeness, I then tried setting the secret and clientConfig names equal to the source context name, and modified the MissionControlCluster so that the k8sContext field pointed to the new names. That also produced the same result using this command (after modifying the code so that the resource names default to the sanitized context name):
Finally, I tried creating the clientConfig and secret in the cluster's namespace, instead of the operator's namespace. The operator emitted the same error. So I went back to the original script and ran:
Obtaining:
This looks almost identical to what we had above, and has a name that matches, so it should be picked up by the MissionControlCluster spec's k8sContext field. However, after restarting the k8ssandra-operator pod, we still see the same error:
At least the behaviour is consistent.
OK, it looks like the problem here was that the contextName needs to match both the k8sContext within the K8ssandraCluster and the map keys for the context, server and user. Running this command against an MC cluster appears to result in a successful deployment now:
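As a hedged sketch of that matching requirement (the helper below is made up for illustration and is not part of the PR), the same name has to line up in three places: the K8ssandraCluster's k8sContext, the ClientConfig's contextName, and the key / current-context of the generated kubeconfig.

// Illustrative only; checkNameAlignment is not part of the PR.
package registration

import (
	"fmt"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// checkNameAlignment verifies that the K8ssandraCluster's k8sContext, the
// ClientConfig's contextName, and the generated kubeconfig's map keys and
// current-context all refer to the same name.
func checkNameAlignment(k8sContext, clientConfigContextName string, kubeconfig *clientcmdapi.Config) error {
	if k8sContext != clientConfigContextName {
		return fmt.Errorf("k8sContext %q does not match ClientConfig contextName %q", k8sContext, clientConfigContextName)
	}
	if kubeconfig.CurrentContext != clientConfigContextName {
		return fmt.Errorf("kubeconfig current-context %q does not match %q", kubeconfig.CurrentContext, clientConfigContextName)
	}
	if _, ok := kubeconfig.Contexts[clientConfigContextName]; !ok {
		return fmt.Errorf("kubeconfig has no context keyed %q", clientConfigContextName)
	}
	return nil
}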
I've made additional changes which fix problems relating to "already exists" type errors. We now default to a sanitized source context name for the contextName and meta.name fields in the clientConfig too, propagating the same value into the kubeconfig within the secret. @adejanovski it's probably ready for another test now, I think.
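The exact sanitization rules aren't shown in this thread, so the following is only an assumed sketch of the kind of transformation meant by "sanitized source context name": lower-case the name and collapse characters that aren't valid in a Kubernetes resource name.

// Assumed sketch; the registration command's real sanitization may differ.
package registration

import (
	"regexp"
	"strings"
)

var invalidNameChars = regexp.MustCompile(`[^a-z0-9-]+`)

// sanitizeContextName turns a kubeconfig context name such as
// "gke_my-project_us-east1_cluster-1" into a DNS-1123-friendly string that
// can be used as the clientConfig/secret name and the contextName field.
func sanitizeContextName(name string) string {
	s := strings.ToLower(name)
	s = invalidNameChars.ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}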
Everything is working fine now. Thanks!
Moves the work so far on data plane registration over to k8ssandra-client.
Fixes: #38