Schema incompatibility on updating from 1.26.0 to 1.28.0 #2
Comments
Update. The error above only occurs when the
If it is set to a fixed list, the error goes away, and the following error is produced, for yet another schema change:
I don't have [...]. Still, I guess this is something to be handled in the provider. There should be a clean upgrade path.
Another update. When
So in the end I saved raw state to a file using
I will also try to create a clean new cluster using 1.28.0. Will post an update on how it went (update: at least
...yet another update :) I did a string replacement:
So at this point I guess it's safe to say that upgrading/migration from eddycharly/kops v1.26.0-alpha.1 to clayrisser/kops v1.28.0 is not viable. Creating new clusters, however, should work (but I have yet to try to apply the plan).
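For anyone wanting to try the same route, the raw-state approach described in the comments above generally looks like the sketch below. These are not the exact commands quoted in this thread; in particular, the sed substitution is only an assumed example of such a rename, not the replacement actually performed.

```sh
# Generic raw-state edit workflow (a sketch, not the exact commands used above).
terraform state pull > state.json        # save the raw state to a file
cp state.json state.json.bak             # keep a backup before touching anything

# Example string replacement; the attribute names here are assumptions based on
# the schema change discussed in this issue, not the substitution actually used.
sed -i 's/"kubernetes_api_access"/"access"/g' state.json

# Push the edited state back. -force bypasses the serial/lineage safety checks,
# so only use it with a backup at hand.
terraform state push -force state.json

terraform plan                           # check whether the schema error is gone
```

As the comment above concludes, a plain string replacement was not enough to make the 1.26 to 1.28 upgrade work, so treat this only as a starting point for experiments.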
The
    config_store {
      base = var.config_store_base
    }
Also, this is not exactly the same as the
That's a different thing.
The [...]. The [...].
...can confirm that creating a clean new cluster from scratch works fine. Kubernetes 1.28.2, all right.
I recommend looking at the following for reference, since I was able to get it to work: https://gitlab.com/bitspur/rock8s/rock8s-cluster/-/blob/main/main/cluster.tf?ref_type=heads#L65 Because I forked this project and am not the original author, I'm not really able to spend a ton of time beyond getting the basics to work. Any support or pull requests from the community are much appreciated.
Yeah, I understand. On my part, I guess I lack the (immediate) knowledge to dig deep into this and understand it. Either way, creating new clusters from scratch works, although with some cosmetic stuff that TF wants to update on the second invocation. After the second invocation it all runs fine. At this point I'm more worried about the future of the TF kops provider(s) in general, since the only(?) existing one seems to be abandoned now, and there doesn't seem to be much enthusiasm in the community to take it over. Probably the proper strategy would be to manage clusters without kops, with pure TF. Will see.
@shapirus this project is core to my work, so I don't mind being the de facto maintainer until someone else takes it over.
I've also been actively working on my fork of the provider. The changes I've made have mostly been around better support for GCP, so they're not likely to be of interest to either of you immediately. I've also contributed some fixes and features to kops itself (for GCP), and my TF provider fork is using my custom branch of kops until those are merged. However, my company deploys kops clusters to both AWS and GCP. Up until this point the AWS deploys have been using the
Has anyone been in contact with eddycharly and heard officially that the project is abandoned?
@sl1pm4t yeah, I would feel much more comfortable doing it that way. I just created the organization and added you as an owner: https://github.com/terraform-kops We can either move, fork, or create it brand new. If you want me to move this project (might be nice because it already has issues on it), then I need a few days because Terraform is using this repo for publishing.
I'll also be adding some of my team members as members of the organization. They will help maintain it. |
I hope @eddycharly doesn't mind if I add him to the organization also. If he happens to come back and contribute, it would be very welcome. |
Great, thanks for doing that @clayrisser.
...this is how history is made :) |
Hey folks 👋 I'm interested in trying out this fork and possibly contributing to the new org. Any chance I can get added? |
That'd be great @mmckeen - are you using kops at Fastly? |
@mmckeen just added you. I should have this project migrated over by the end of this month. |
Indeed we are! |
Hey folks 👋 @clayrisser I see the org but no repo? Did something happen? I just posted this on the kops-dev slack channel https://kubernetes.slack.com/archives/C8MKE2G5P/p1708107317601559
Nothing happened. I haven't moved it yet. I will move it next week. |
Cool, happy to see the project is not completely dead :) |
Definitely not dead. Just got super busy with other things. Thank you for the reminder. |
Hi all - I created a fork in the org - https://github.com/terraform-kops/terraform-provider-kops |
So I tried to change my TF kops provider from eddycharly/kops v1.26.0-alpha1 to clayrisser/kops v1.28.0. After applying the obvious changes reported by TF (such as the root_volume attributes moving from separate attributes to a dedicated `root_volume` block), I am getting the following obscure error on `terraform plan`:
On further research, looking at the output of `terraform providers schema -json`, I found the following:

`resource_schemas/kops_cluster/block`:

`resource_schemas/kops_cluster/block/block_types`:
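To reproduce that comparison locally, a query along these lines pulls the relevant pieces out of the schema dump. The provider source address below is a placeholder and depends on which fork is installed; use whatever `terraform providers` reports for your configuration.

```sh
# Extract the kops_cluster attributes and nested block types from the schema dump.
# "registry.terraform.io/clayrisser/kops" is an assumed address -- adjust it to
# whatever `terraform providers` lists for your setup.
terraform providers schema -json \
  | jq '.provider_schemas["registry.terraform.io/clayrisser/kops"]
          .resource_schemas.kops_cluster.block
        | {attributes: (.attributes | keys), block_types: (.block_types // {} | keys)}'
```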
Apparently both are produced from `api_spec`'s `access` field, but tf plan fails because of the schema change: the `access` attribute moved from its own `kubernetes_api_access` definition to a field in the `api` block.

What is the best way of migrating/upgrading in this case? Can it be handled in the provider?
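For readers running into the same plan failures, the two schema moves described in this issue look roughly like the HCL below. Only `root_volume`, `kubernetes_api_access`, `api`, and `access` come from this thread; every other attribute name and value is an assumption, so verify against the schema of the provider you actually have installed rather than copying this verbatim.

```hcl
# Rough before/after sketch of the two schema changes; names inside the blocks
# are assumptions, not copied from either provider's documentation.

# Instance group root volume: separate attributes (older provider, roughly) ...
resource "kops_instance_group" "nodes_old" {
  # ...other required arguments omitted...
  root_volume_size = 64      # assumed old attribute name
  root_volume_type = "gp3"   # assumed old attribute name
}

# ...versus a dedicated root_volume block (1.28.0 provider, roughly):
resource "kops_instance_group" "nodes_new" {
  # ...other required arguments omitted...
  root_volume {
    size = 64                # assumed field name
    type = "gp3"             # assumed field name
  }
}

# Cluster API access: a standalone kubernetes_api_access list (older provider) ...
resource "kops_cluster" "old" {
  # ...other required arguments omitted...
  kubernetes_api_access = ["0.0.0.0/0"]
}

# ...versus access nested inside the api block (1.28.0 provider):
resource "kops_cluster" "new" {
  # ...other required arguments omitted...
  api {
    access = ["0.0.0.0/0"]
  }
}
```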