cluster-api v1beta1 adoption? #1024
I guess that would be very good. I tried cluster-api for the first time today and failed, because the released version v1.0.0 fails with an error when used with cluster-api-provider-openstack. Just for your reference, this is the corresponding (closed) issue: kubernetes-sigs/cluster-api#5425
The v1beta1 versions of the Cluster API core resources won’t adopt v1alpha4 resources from infrastructure providers. So this needs sorting one way or another, either by this provider moving to v1beta1 or by Cluster API core resources not expecting infrastructure resources to match their version.
The current situation means we can’t upgrade the core Cluster API components to benefit from changes there.
from this kubernetes-sigs/cluster-api#5425 (comment)
I don't fully understand this ... are you suggesting that even after upgrading to v1beta1, CAPO still can't work with cluster-api 1.0.x?
@jichenjc When I tried Cluster API v1.0.0, the v1beta1 Cluster would not adopt a v1alpha4 OpenStackCluster as its infrastructure provider. The logs of the CAPI core provider were complaining about mismatching contracts because the version of the infrastructure cluster was not v1beta1.
So all I’m saying is we now need to release a v1beta1 version to match the core resources. I was kind of expecting the v1alpha4 CAPO resources to still work with the v1beta1 CAPI ones. It seems odd to require the infrastructure providers, which are independent of the core, to evolve at the same rate.
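A minimal, hedged sketch of the failing combination described above (resource names are invented for illustration): a v1beta1 Cluster whose infrastructureRef still points at a v1alpha4 OpenStackCluster.

```yaml
# Illustrative sketch only, not taken from a real deployment: the
# v1beta1 Cluster below references a v1alpha4 OpenStackCluster, the
# apiVersion mismatch the CAPI core controllers reject as a contract
# violation.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example-cluster
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4  # pre-v1beta1 CAPO
    kind: OpenStackCluster
    name: example-cluster
```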
usually a provider version has some kind of corresponding CAPI version
@jichenjc Yes, you don't have to call it v1beta1. It can absolutely be v1alpha5. It's probably helpful to take a look at what other providers did and the doc here: https://cluster-api.sigs.k8s.io/developer/providers/v1alpha4-to-v1beta1.html Usually it's:
Thanks for the clarification - all I observed is that the v1alpha4 resources didn’t work with the v1beta1 Cluster API resources with an error about contracts. If we just need to issue a new version with an updated supported contract value, that sounds sensible. More generally, it would be good to know what would be needed for the OpenStack provider to go to v1beta1 as some people are nervous about using it in production until then.
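To make "updated supported contract value" concrete: per the migration doc linked above, the CAPI core discovers which contract a provider's CRD supports from a label on the CRD itself. A hedged sketch of what a CAPO release declaring v1beta1 support might look like (values here are assumptions for illustration, not what CAPO actually ships):

```yaml
# Sketch of the contract label mechanism described in the CAPI provider
# migration doc. The label key names the core contract version; the
# value names which version(s) of this CRD implement it.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: openstackclusters.infrastructure.cluster.x-k8s.io
  labels:
    cluster.x-k8s.io/v1beta1: v1alpha5  # "v1alpha5 satisfies the v1beta1 contract"
```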
I can't really answer that. I think that's something that the CAPO community has to discuss. In general I think with v1beta1 comes a greater degree of API stability which the community should commit to.
@sbueringer I didn’t expect you personally to answer it! It definitely needs to be a community discussion. What I am saying is that the discussion needs to happen ASAP. I don’t think we want to be lagging too far behind and, in my experience so far, the provider is pretty reliable. If we just need to freeze the API and say this is v1beta1 then we do need to have a discussion about anything else that we want in the API to be able to do that. We already have customers now saying “Cluster API is production ready - can we use it?” and we still have to say “well actually the OpenStack provider is still in alpha”, which they are less happy with. We can’t stay in alpha forever so I think we as a community need to come up with a roadmap to beta ASAP. Ideally with a fairly short timeframe!
IMHO, going to ... As soon as we are on v1beta1 I think we should follow the "deprecate and remove in release + x" pattern. Similarly for "cut a v0.5.0" or "cut a 1.0.0". However, for CAPI compatibility I identified at least the following todos:
From the migration docs:
Sorry, late to join the discussion. The current PRs which would break the API are:
I understand we can release ...
I understand the v1alphaX to v1betaX move is mostly related to API stability (of course code quality/functionality is also important), so it might need some discussion and criteria for such a decision.
I'd be totally fine with going to v1alpha5.
So would I (for now), as long as I can use the v1beta1 core resources. I do think we should have a plan for getting to beta in the medium term though, e.g. O(12 months). We can't stay in alpha forever.
Thanks @hidekazuna for the change summary! Of those I think the only one which strictly needs a version bump is #1028, as all the rest add new fields with backwards-compatible defaults. I would very much like to fully tidy up Networks/Ports before committing to a more stable API. Specifically I think we should aim to remove Networks entirely and keep only Ports. We could also do with reducing the ...
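As a rough sketch of the "Ports only" direction (all field names below are hypothetical assumptions, not the agreed API), each network attachment would be expressed as a port rather than via a separate Networks list:

```yaml
# Hypothetical sketch of a Ports-only machine spec fragment; field
# names are illustrative.
spec:
  ports:
    - network:
        name: private            # attach via a port on this network
      fixedIPs:
        - subnet:
            name: private-subnet # optionally pin the subnet per port
```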
While we're at it, I just started working on RootVolume and I notice that it has a few problems:
This just seems like a collection of foot guns to me. I'd like to reduce RootVolume to:
i.e. remove the fields which the user can only reasonably use to break it[1], and don't require the user to specify the image differently for boot from volume. This would also be a breaking change. [1] That is assuming we don't have a use case for boot from volume snapshot? I can't think of one.
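To make the proposal concrete, a hedged sketch of the reduced RootVolume (field names are illustrative assumptions, not the final API): the image is specified once on the machine spec regardless of boot-from-volume, and rootVolume carries only sizing.

```yaml
# Hypothetical sketch of the reduced RootVolume proposed above. The
# image is given once in the machine spec, and the mere presence of
# rootVolume enables boot from volume.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: example-machine
spec:
  image: ubuntu-20.04   # same field with or without a root volume
  rootVolume:
    diskSize: 50        # GiB
```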
Created "cut a v0.5.0" (#1029) from #1024 (comment). Thanks @chrischdi.
snapshot doesn't seem like a big use case for us? mostly it should be provisioning VMs
Incidentally, a volume snapshot in this case would be used very much the same way as a glance image: it would just be stored in Cinder instead of Glance. However, I'm not aware of anybody using it this way. CoreOS specifically isn't designed to be used this way at all. I'm very happy not to maintain 'accidental' support for it, although I'd be happy to add it deliberately if somebody came to us with a use case. I suspect that the only reason the API looks like this is that the implementation naively followed the Nova block device mapping API. |
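For contrast, a hedged sketch of the 'accidental' boot-from-snapshot shape being discussed, mirroring Nova's block device mapping (field names and values are assumptions for illustration):

```yaml
# Hypothetical sketch: following Nova's block device mapping, the root
# volume source could be a Cinder snapshot rather than a Glance image.
rootVolume:
  sourceType: snapshot                              # instead of an image
  sourceUUID: 11111111-2222-3333-4444-555555555555  # Cinder snapshot ID
  diskSize: 50
```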
I guess so and I agree with your suggestions above, thanks :)
I am going to create the v1alpha5 API with CAPI v1beta1 (#1033). But now I am thinking that, without creating a new API version, just creating v0.5.0 with CAPI v1beta1 is the better option. We are actively changing the API now, and our issue is that we need a release with CAPI v1beta1. If we agree to release v0.5.0 only updating to CAPI v1beta1, we will create the release-0.4 branch and create the v0.5.0 release next week, as early as possible.
So the question here is how to sync the changes from release-0.4 to the 0.5 branch we will create soon
If the API version is the same, there is no reason to stay on release-0.4, right?
After all, I will do the following:
Steps:
I will start at 00:00 UTC on the 17th.
the approach sounds good, thanks :)
@hidekazuna given the great PR you made, I think we are ok to close this?
@jichenjc Yes, let's close this issue.
/kind feature
Describe the solution you'd like
I heard cluster-api v1beta1 is going to be out soon ... should we start to consider the adoption
and a corresponding v1beta1 version of CAPO as well?
update:
https://kubernetes.io/blog/2021/10/08/capi-clusterclass-and-managed-topologies/ might be something
related as well, needs further analysis
Anything else you would like to add: