Questions about developing an operator that responds to as many Kubernetes versions as possible. #5577
Comments
Hi @smartkuk,

The versions of the k8s dependencies used by the CLI, or scaffolded by it, are not what determines the supportability statement of your Operator.

Note that the k8s API has had some API removals, see: https://kubernetes.io/docs/reference/using-api/deprecation-guide/. For example, if your Operator uses APIs that were removed in Kubernetes 1.22, your project will not work on that version or later ones.

Let's use the CRD API as an example: if your project uses the v1 API for its CRDs, it can work on Kubernetes 1.16 and later. However, you will be unable to make a project that is mainly integrated with OLM work both on versions < 1.16 and on versions >= 1.22 (OpenShift 4.9). You might be able to support more versions if you add logic that adapts to the cluster version, or better, to the APIs actually available on the cluster (a minimal sketch of that approach is shown after this comment). See: Why not couple an Operator's logic to a specific Kubernetes platform?

If your project is integrated with OLM (i.e. you are distributing it via OperatorHub.io, OpenShift, etc.): OLM will try to apply the CRDs to initialize the Operator. Note that the v1 API for CRDs was introduced in Kubernetes 1.16, and v1beta1 was removed in 1.22, so technically it is not achievable to build a single bundle that covers both of these version ranges. However, if you publish new versions that create resources at runtime according to the APIs available on the cluster, that can work with OLM.

If you are distributing your solution via OpenShift catalogs such as the Red Hat Community catalog and would still like to ship bundles that use the APIs removed in 1.22/OCP 4.9, see the compatibility guidance in the documentation.

For further information see: https://docs.openshift.com/container-platform/4.8/operators/operator_sdk/osdk-working-bundle-images.html#osdk-control-compat_osdk-working-bundle-images. However, it is important to highlight that all OCP/OLM versions that are still supported work with the newer versions of these APIs. (Note that OCP 4.4 is no longer supported at all.)
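To make the "adapt to the APIs available on the cluster" idea above concrete, here is a minimal sketch of how an operator could detect at startup which CustomResourceDefinition API version the cluster serves, using client-go's discovery client. This is not code from the issue or from operator-sdk itself; the function name crdAPIVersion and the overall structure are illustrative assumptions.

```go
// Minimal sketch (illustrative, not from the issue): detect at runtime
// whether the cluster serves apiextensions.k8s.io/v1 or v1beta1, so the
// operator can adapt instead of assuming a fixed Kubernetes version.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// crdAPIVersion (hypothetical helper) returns "v1" or "v1beta1" depending on
// which apiextensions.k8s.io version the connected cluster serves.
func crdAPIVersion(cfg *rest.Config) (string, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return "", err
	}
	// ServerResourcesForGroupVersion returns an error if the group/version
	// is not served by the cluster, so we try v1 first, then v1beta1.
	if _, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1"); err == nil {
		return "v1", nil
	}
	if _, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1beta1"); err == nil {
		return "v1beta1", nil
	}
	return "", fmt.Errorf("cluster serves neither apiextensions.k8s.io/v1 nor v1beta1")
}

func main() {
	// Assumes the operator runs in-cluster; use clientcmd for out-of-cluster configs.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	ver, err := crdAPIVersion(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster serves CustomResourceDefinition API: apiextensions.k8s.io/%s\n", ver)
}
```

The same discovery-based check can be applied to any other API your operator depends on; branching on what the cluster actually serves is what lets one operator build span more Kubernetes versions than a single hard-coded API version would allow.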
Hi @smartkuk, detailed information has already been provided above, so I am closing this one as sorted out. However, please feel free to re-open it or raise new issues as you see fit.
Type of question
Best practices
General operator-related help
Question
What did you do?
Hello.
I developed an operator based on operator-sdk v0.17.2.
On other Kubernetes versions, the CRD output generated from this project is not correct, so I am considering creating a separate project for each Kubernetes version.
Besides the CRD problems, there have been other issues as well.
So I checked the docs and saw guidance saying that the operator-sdk version and the Kubernetes version should match.
To check which Kubernetes version operator-sdk is fully compatible with, I download the binary linked from the GitHub releases page and run the operator-sdk version command. Can I judge the supported Kubernetes version from its output?
I want to install and test the operator I developed on Kubernetes versions from 1.17 up to the most recent release.
Please note that this inquiry is the result of processing with Google Translate.
What did you expect to see?
What did you see instead? Under which circumstances?
Environment
Operator type:
Kubernetes cluster type:
$ operator-sdk version
$ go version (if language is Go)
$ kubectl version
Additional context