Implement automatically saving / loading standalone cluster config #614
Conversation
Removes the standalone delete `-f` flag
this looks great... i'm now wondering, since we have all these config files, if we shouldn't group all the files per cluster, like throwing them into a folder by cluster name, OR maybe in the next iteration, take all of these files and drop them into a secret on the cluster itself for easy access. maybe even do both.
Yeah - that's a great idea. The
yea.... thinking about it some more, the only thing we couldn't put on the cluster is that init object we use to create the cluster.
A negative to storing stuff on the cluster is you need to be able to access/log into the cluster in order to delete it. If the cluster is bonked or its network dies, that might not be a good thing. Deletion would have to occur by hand. Trade-offs.
@jpmcb I think in the first example, you have a typo on the delete. :-D

I would still have the `--file` flag for deletion, because the AWS credentials might change between creation and deletion, and hence there would be no way, unless you know the magic of how delete works, to use a different file; or it could prompt the user if this circumstance happens. Not sure if there are other relevant things in the clusterconfig file that could potentially change and make deletion not work properly.

Things to consider: the clusterconfig file should not be deleted if the deletion process failed for any reason. It should maybe be moved to a `.deleted` folder, or kept as history somewhere (at least the n last clusters). I've had one time where I deleted the clusterconfig of an existing cluster and couldn't easily delete the cluster.

I would also advocate having the init object in the cluster as an alternate method to delete a cluster, because there's the other problem of: what happens if my clusterconfig is local to my disk, I create a standalone cluster in AWS, and then delete my local clusterconfig? Would I be able to delete the cluster? This would expand the experience to:
Some of these could be converted to issues in GH for future reference and discussion :-D I see a world where, for creation and deletion of clusters (management and workload), we can use the kickstart UI, so that all this logic that would otherwise need to happen with config files and parameters could be easily done for mere mortals through a UI. But that's just a dream of the future :-D
Good catch!! Fixed 🙈
Very good point. I suppose if someone rolled their IAM user (and thus their access / secret keys), they wouldn't be able to use a previously saved cluster config file and would need to specify a different one. I can change this to create an optional flag.
Yes, this should be how it currently works!
This should be good to go now - added the optional flag
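The optional-flag behavior discussed above could be sketched roughly as follows. This is an illustrative Go snippet, not the actual TCE source; `resolveDeleteConfig` and its behavior on a missing saved config are assumptions based on this thread: an explicitly passed `--file` wins (e.g. when AWS credentials changed since creation), otherwise the config saved during create is used.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// resolveDeleteConfig picks the cluster config to use for deletion.
// Hypothetical helper: an explicit --file value takes precedence; otherwise
// fall back to the config file saved automatically during create.
func resolveDeleteConfig(clusterName, fileFlag string) (string, error) {
	if fileFlag != "" {
		return fileFlag, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	saved := filepath.Join(home, ".tanzu", "tce", "configs", clusterName+"_ClusterConfig.yaml")
	if _, err := os.Stat(saved); err != nil {
		return "", errors.New("no saved config for " + clusterName + "; pass one with --file")
	}
	return saved, nil
}

func main() {
	// The explicit flag value wins over any saved config.
	path, err := resolveDeleteConfig("my-cluster", "override.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(path)
}
```

This keeps the common case (delete by name, no flag) zero-config while still covering the rolled-credentials scenario raised above.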
Validated all works well with AWS. Trying with CAPD now.
What this PR does / why we need it
During `standalone-cluster create`, the cluster config used will be saved in the user's `$HOME/.tanzu/tce/configs/` directory, named `{cluster-name}_ClusterConfig.yaml`. Then, during `standalone-cluster delete`, the cluster config file is automatically loaded from the same directory (based on the given cluster name). This way, the user does not need to specify the `-f` cluster config file during deletion, so the flag was removed.

FYI - this is related to this comment
Describe testing done for PR
On docker:
On AWS:
Does this PR introduce a user-facing change?