diff --git a/ATTRIBUTION.txt b/ATTRIBUTION.txt index 6221c0167f43..023b9db23604 100644 --- a/ATTRIBUTION.txt +++ b/ATTRIBUTION.txt @@ -131,7 +131,10 @@ https://github.com/modern-go/concurrent ** github.com/modern-go/reflect2; version v1.0.2 -- https://github.com/modern-go/reflect2 -** github.com/nutanix-cloud-native/prism-go-client; version v0.3.0 -- +** github.com/nutanix-cloud-native/cluster-api-provider-nutanix/api/v1beta1; version v1.1.3 -- +https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix + +** github.com/nutanix-cloud-native/prism-go-client; version v0.3.4 -- https://github.com/nutanix-cloud-native/prism-go-client ** github.com/opencontainers/go-digest; version v1.0.0 -- @@ -236,7 +239,7 @@ https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack ** sigs.k8s.io/cluster-api-provider-vsphere/api/v1beta1; version v1.0.1 -- https://github.com/kubernetes-sigs/cluster-api-provider-vsphere -** sigs.k8s.io/cluster-api/test/infrastructure/docker/api/v1beta1; version v1.0.0 -- +** sigs.k8s.io/cluster-api/test/infrastructure/docker/api/v1beta1; version v1.3.3 -- https://github.com/kubernetes-sigs/cluster-api ** sigs.k8s.io/controller-runtime; version v0.13.1 -- @@ -1289,7 +1292,7 @@ https://github.com/ProtonMail/go-crypto ** github.com/vmware/govmomi/vim25/xml; version v0.29.0 -- https://github.com/vmware/govmomi -** golang.org/go; version go1.19.6 -- +** golang.org/go; version go1.19.7 -- https://github.com/golang/go ** golang.org/x/crypto; version v0.5.0 -- @@ -2235,6 +2238,378 @@ Exhibit B - “Incompatible With Secondary Licenses” Notice the Mozilla Public License, v. 2.0. +------ + +** github.com/hashicorp/go-cleanhttp; version v0.5.2 -- +https://github.com/hashicorp/go-cleanhttp + + * Package github.com/hashicorp/go-cleanhttp's source code may be found at: + https://github.com/hashicorp/go-cleanhttp/tree/v0.5.2 + +Mozilla Public License, version 2.0 + +1. Definitions + +1.1. 
"Contributor" + + means each individual or legal entity that creates, contributes to the + creation of, or owns Covered Software. + +1.2. "Contributor Version" + + means the combination of the Contributions of others (if any) used by a + Contributor and that particular Contributor's Contribution. + +1.3. "Contribution" + + means Covered Software of a particular Contributor. + +1.4. "Covered Software" + + means Source Code Form to which the initial Contributor has attached the + notice in Exhibit A, the Executable Form of such Source Code Form, and + Modifications of such Source Code Form, in each case including portions + thereof. + +1.5. "Incompatible With Secondary Licenses" + means + + a. that the initial Contributor has attached the notice described in + Exhibit B to the Covered Software; or + + b. that the Covered Software was made available under the terms of + version 1.1 or earlier of the License, but not also under the terms of + a Secondary License. + +1.6. "Executable Form" + + means any form of the work other than Source Code Form. + +1.7. "Larger Work" + + means a work that combines Covered Software with other material, in a + separate file or files, that is not Covered Software. + +1.8. "License" + + means this document. + +1.9. "Licensable" + + means having the right to grant, to the maximum extent possible, whether + at the time of the initial grant or subsequently, any and all of the + rights conveyed by this License. + +1.10. "Modifications" + + means any of the following: + + a. any file in Source Code Form that results from an addition to, + deletion from, or modification of the contents of Covered Software; or + + b. any new file in Source Code Form that contains any Covered Software. + +1.11. 
"Patent Claims" of a Contributor + + means any patent claim(s), including without limitation, method, + process, and apparatus claims, in any patent Licensable by such + Contributor that would be infringed, but for the grant of the License, + by the making, using, selling, offering for sale, having made, import, + or transfer of either its Contributions or its Contributor Version. + +1.12. "Secondary License" + + means either the GNU General Public License, Version 2.0, the GNU Lesser + General Public License, Version 2.1, the GNU Affero General Public + License, Version 3.0, or any later versions of those licenses. + +1.13. "Source Code Form" + + means the form of the work preferred for making modifications. + +1.14. "You" (or "Your") + + means an individual or a legal entity exercising rights under this + License. For legal entities, "You" includes any entity that controls, is + controlled by, or is under common control with You. For purposes of this + definition, "control" means (a) the power, direct or indirect, to cause + the direction or management of such entity, whether by contract or + otherwise, or (b) ownership of more than fifty percent (50%) of the + outstanding shares or beneficial ownership of such entity. + + +2. License Grants and Conditions + +2.1. Grants + + Each Contributor hereby grants You a world-wide, royalty-free, + non-exclusive license: + + a. under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or + as part of a Larger Work; and + + b. under Patent Claims of such Contributor to make, use, sell, offer for + sale, have made, import, and otherwise transfer either its + Contributions or its Contributor Version. + +2.2. 
Effective Date + + The licenses granted in Section 2.1 with respect to any Contribution + become effective for each Contribution on the date the Contributor first + distributes such Contribution. + +2.3. Limitations on Grant Scope + + The licenses granted in this Section 2 are the only rights granted under + this License. No additional rights or licenses will be implied from the + distribution or licensing of Covered Software under this License. + Notwithstanding Section 2.1(b) above, no patent license is granted by a + Contributor: + + a. for any code that a Contributor has removed from Covered Software; or + + b. for infringements caused by: (i) Your and any other third party's + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + + c. under Patent Claims infringed by Covered Software in the absence of + its Contributions. + + This License does not grant any rights in the trademarks, service marks, + or logos of any Contributor (except as may be necessary to comply with + the notice requirements in Section 3.4). + +2.4. Subsequent Licenses + + No Contributor makes additional grants as a result of Your choice to + distribute the Covered Software under a subsequent version of this + License (see Section 10.2) or under the terms of a Secondary License (if + permitted under the terms of Section 3.3). + +2.5. Representation + + Each Contributor represents that the Contributor believes its + Contributions are its original creation(s) or it has sufficient rights to + grant the rights to its Contributions conveyed by this License. + +2.6. Fair Use + + This License is not intended to limit any rights You have under + applicable copyright doctrines of fair use, fair dealing, or other + equivalents. + +2.7. Conditions + + Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in + Section 2.1. + + +3. Responsibilities + +3.1. 
Distribution of Source Form + + All distribution of Covered Software in Source Code Form, including any + Modifications that You create or to which You contribute, must be under + the terms of this License. You must inform recipients that the Source + Code Form of the Covered Software is governed by the terms of this + License, and how they can obtain a copy of this License. You may not + attempt to alter or restrict the recipients' rights in the Source Code + Form. + +3.2. Distribution of Executable Form + + If You distribute Covered Software in Executable Form then: + + a. such Covered Software must also be made available in Source Code Form, + as described in Section 3.1, and You must inform recipients of the + Executable Form how they can obtain a copy of such Source Code Form by + reasonable means in a timely manner, at a charge no more than the cost + of distribution to the recipient; and + + b. You may distribute such Executable Form under the terms of this + License, or sublicense it under different terms, provided that the + license for the Executable Form does not attempt to limit or alter the + recipients' rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + + You may create and distribute a Larger Work under terms of Your choice, + provided that You also comply with the requirements of this License for + the Covered Software. If the Larger Work is a combination of Covered + Software with a work governed by one or more Secondary Licenses, and the + Covered Software is not Incompatible With Secondary Licenses, this + License permits You to additionally distribute such Covered Software + under the terms of such Secondary License(s), so that the recipient of + the Larger Work may, at their option, further distribute the Covered + Software under the terms of either this License or such Secondary + License(s). + +3.4. 
Notices + + You may not remove or alter the substance of any license notices + (including copyright notices, patent notices, disclaimers of warranty, or + limitations of liability) contained within the Source Code Form of the + Covered Software, except that You may alter any license notices to the + extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + + You may choose to offer, and to charge a fee for, warranty, support, + indemnity or liability obligations to one or more recipients of Covered + Software. However, You may do so only on Your own behalf, and not on + behalf of any Contributor. You must make it absolutely clear that any + such warranty, support, indemnity, or liability obligation is offered by + You alone, and You hereby agree to indemnify every Contributor for any + liability incurred by such Contributor as a result of warranty, support, + indemnity or liability terms You offer. You may include additional + disclaimers of warranty and limitations of liability specific to any + jurisdiction. + +4. Inability to Comply Due to Statute or Regulation + + If it is impossible for You to comply with any of the terms of this License + with respect to some or all of the Covered Software due to statute, + judicial order, or regulation then You must: (a) comply with the terms of + this License to the maximum extent possible; and (b) describe the + limitations and the code they affect. Such description must be placed in a + text file included with all distributions of the Covered Software under + this License. Except to the extent prohibited by statute or regulation, + such description must be sufficiently detailed for a recipient of ordinary + skill to be able to understand it. + +5. Termination + +5.1. The rights granted under this License will terminate automatically if You + fail to comply with any of its terms. 
However, if You become compliant, + then the rights granted under this License from a particular Contributor + are reinstated (a) provisionally, unless and until such Contributor + explicitly and finally terminates Your grants, and (b) on an ongoing + basis, if such Contributor fails to notify You of the non-compliance by + some reasonable means prior to 60 days after You have come back into + compliance. Moreover, Your grants from a particular Contributor are + reinstated on an ongoing basis if such Contributor notifies You of the + non-compliance by some reasonable means, this is the first time You have + received notice of non-compliance with this License from such + Contributor, and You become compliant prior to 30 days after Your receipt + of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent + infringement claim (excluding declaratory judgment actions, + counter-claims, and cross-claims) alleging that a Contributor Version + directly or indirectly infringes any patent, then the rights granted to + You by any and all Contributors for the Covered Software under Section + 2.1 of this License shall terminate. + +5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user + license agreements (excluding distributors and resellers) which have been + validly granted by You or Your distributors under this License prior to + termination shall survive termination. + +6. Disclaimer of Warranty + + Covered Software is provided under this License on an "as is" basis, + without warranty of any kind, either expressed, implied, or statutory, + including, without limitation, warranties that the Covered Software is free + of defects, merchantable, fit for a particular purpose or non-infringing. + The entire risk as to the quality and performance of the Covered Software + is with You. 
Should any Covered Software prove defective in any respect, + You (not any Contributor) assume the cost of any necessary servicing, + repair, or correction. This disclaimer of warranty constitutes an essential + part of this License. No use of any Covered Software is authorized under + this License except under this disclaimer. + +7. Limitation of Liability + + Under no circumstances and under no legal theory, whether tort (including + negligence), contract, or otherwise, shall any Contributor, or anyone who + distributes Covered Software as permitted above, be liable to You for any + direct, indirect, special, incidental, or consequential damages of any + character including, without limitation, damages for lost profits, loss of + goodwill, work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses, even if such party shall have been + informed of the possibility of such damages. This limitation of liability + shall not apply to liability for death or personal injury resulting from + such party's negligence to the extent applicable law prohibits such + limitation. Some jurisdictions do not allow the exclusion or limitation of + incidental or consequential damages, so this exclusion and limitation may + not apply to You. + +8. Litigation + + Any litigation relating to this License may be brought only in the courts + of a jurisdiction where the defendant maintains its principal place of + business and such litigation shall be governed by laws of that + jurisdiction, without reference to its conflict-of-law provisions. Nothing + in this Section shall prevent a party's ability to bring cross-claims or + counter-claims. + +9. Miscellaneous + + This License represents the complete agreement concerning the subject + matter hereof. If any provision of this License is held to be + unenforceable, such provision shall be reformed only to the extent + necessary to make it enforceable. 
Any law or regulation which provides that + the language of a contract shall be construed against the drafter shall not + be used to construe this License against a Contributor. + + +10. Versions of the License + +10.1. New Versions + + Mozilla Foundation is the license steward. Except as provided in Section + 10.3, no one other than the license steward has the right to modify or + publish new versions of this License. Each version will be given a + distinguishing version number. + +10.2. Effect of New Versions + + You may distribute the Covered Software under the terms of the version + of the License under which You originally received the Covered Software, + or under the terms of any subsequent version published by the license + steward. + +10.3. Modified Versions + + If you create software not governed by this License, and you want to + create a new license for such software, you may create and use a + modified version of this License if you rename the license and remove + any references to the name of the license steward (except to note that + such modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary + Licenses If You choose to distribute Source Code Form that is + Incompatible With Secondary Licenses under the terms of this version of + the License, the notice described in Exhibit B of this License must be + attached. + +Exhibit A - Source Code Form License Notice + + This Source Code Form is subject to the + terms of the Mozilla Public License, v. + 2.0. If a copy of the MPL was not + distributed with this file, You can + obtain one at + http://mozilla.org/MPL/2.0/. + +If it is not possible or desirable to put the notice in a particular file, +then You may include the notice in a location (such as a LICENSE file in a +relevant directory) where a recipient would be likely to look for such a +notice. + +You may add additional accurate notices of copyright ownership. 
+ +Exhibit B - "Incompatible With Secondary Licenses" Notice + + This Source Code Form is "Incompatible + With Secondary Licenses", as defined by + the Mozilla Public License, v. 2.0. + + ------ ** github.com/hashicorp/go-multierror; version v1.1.1 -- diff --git a/Makefile b/Makefile index fc229d37c5c3..223df65faaa6 100644 --- a/Makefile +++ b/Makefile @@ -295,6 +295,10 @@ build-cross-platform: eks-a-cross-platform eks-a-tool: ## Build eks-a-tool $(GO) build -o bin/eks-a-tool github.com/aws/eks-anywhere/cmd/eks-a-tool +.PHONY: docgen +docgen: eks-a-tool ## generate eksctl anywhere commands doc from code + bin/eks-a-tool docgen + .PHONY: eks-a-cluster-controller eks-a-cluster-controller: ## Build eks-a-cluster-controller $(GO) build -ldflags "-s -w -buildid='' -extldflags -static" -o bin/manager ./manager @@ -501,7 +505,7 @@ capd-test-%: build-all-test-binaries ## Run CAPD tests ./bin/e2e.test -test.v -test.run TestDockerKubernetes$*SimpleFlow -PACKAGES_E2E_TESTS ?= TestDockerKubernetes121CuratedPackagesSimpleFlow +PACKAGES_E2E_TESTS ?= TestDockerKubernetes125CuratedPackagesSimpleFlow ifeq ($(PACKAGES_E2E_TESTS),all) PACKAGES_E2E_TESTS='Test.*CuratedPackages' endif @@ -646,7 +650,7 @@ build-integration-test-binary: .PHONY: conformance conformance: $(MAKE) e2e-tests-binary E2E_TAGS=conformance_e2e - ./bin/e2e.test -test.v -test.run 'TestVSphereKubernetes121ThreeWorkersConformanc.*' + ./bin/e2e.test -test.v -test.run 'TestVSphereKubernetes.*ThreeWorkersConformanceFlow' .PHONY: conformance-tests conformance-tests: build-eks-a-for-e2e build-integration-test-binary ## Build e2e conformance tests diff --git a/cmd/eks-a-tool/cmd/docgen.go b/cmd/eks-a-tool/cmd/docgen.go new file mode 100644 index 000000000000..ea5f78470edc --- /dev/null +++ b/cmd/eks-a-tool/cmd/docgen.go @@ -0,0 +1,58 @@ +package cmd + +import ( + "fmt" + "path" + "path/filepath" + "strings" + + "github.com/spf13/cobra" + "github.com/spf13/cobra/doc" + + anywhere 
"github.com/aws/eks-anywhere/cmd/eksctl-anywhere/cmd" +) + +const fmTemplate = `--- +title: "%s" +linkTitle: "%s" +--- + +` + +var cmdDocPath string + +var docgenCmd = &cobra.Command{ + Use: "docgen", + Short: "Generate the documentation for the CLI commands", + Long: "Use eks-a-tool docgen to auto generate CLI commands documentation", + Hidden: true, + RunE: docgenCmdRun, +} + +func init() { + docgenCmd.Flags().StringVar(&cmdDocPath, "path", "./docs/content/en/docs/reference/eksctl", "Path to write the generated documentation to") + rootCmd.AddCommand(docgenCmd) +} + +func docgenCmdRun(_ *cobra.Command, _ []string) error { + anywhereRootCmd := anywhere.RootCmd() + anywhereRootCmd.DisableAutoGenTag = true + if err := doc.GenMarkdownTreeCustom(anywhereRootCmd, cmdDocPath, filePrepender, linkHandler); err != nil { + return fmt.Errorf("error generating markdown doc from eksctl-anywhere root cmd: %v", err) + } + return nil +} + +func filePrepender(filename string) string { + name := filepath.Base(filename) + base := strings.TrimSuffix(name, path.Ext(name)) + title := strings.Replace(base, "_", " ", -1) + return fmt.Sprintf(fmTemplate, title, title) +} + +func linkHandler(name string) string { + base := strings.TrimSuffix(name, path.Ext(name)) + base = strings.Replace(base, "(", "", -1) + base = strings.Replace(base, ")", "", -1) + return "../" + strings.ToLower(base) + "/" +} diff --git a/cmd/eksctl-anywhere/cmd/root.go b/cmd/eksctl-anywhere/cmd/root.go index c72ec56d3aa9..abb7822a3f9e 100644 --- a/cmd/eksctl-anywhere/cmd/root.go +++ b/cmd/eksctl-anywhere/cmd/root.go @@ -64,3 +64,8 @@ func initLogger() error { func Execute() error { return rootCmd.ExecuteContext(context.Background()) } + +// RootCmd returns the eksctl-anywhere root cmd. 
+func RootCmd() *cobra.Command { + return rootCmd +} diff --git a/config/crd/bases/anywhere.eks.amazonaws.com_clusters.yaml b/config/crd/bases/anywhere.eks.amazonaws.com_clusters.yaml index 7d3ff1b89331..018a1d526c51 100644 --- a/config/crd/bases/anywhere.eks.amazonaws.com_clusters.yaml +++ b/config/crd/bases/anywhere.eks.amazonaws.com_clusters.yaml @@ -71,6 +71,12 @@ spec: allowed between pods. Accepted values are default, always, never. type: string + skipUpgrade: + default: false + description: SkipUpgrade indicates that Cilium maintenance + should be skipped during upgrades. This can be used + when operators wish to self-manage the Cilium installation. + type: boolean type: object kindnetd: type: object @@ -340,8 +346,7 @@ spec: insecureSkipVerify: description: InsecureSkipVerify skips the registry certificate verification. Only use this solution for isolated testing or - in a tightly controlled, air-gapped environment. Currently only - supported for snow provider + in a tightly controlled, air-gapped environment. type: boolean ociNamespaces: description: OCINamespaces defines the mapping from an upstream diff --git a/config/crd/bases/anywhere.eks.amazonaws.com_snowmachineconfigs.yaml b/config/crd/bases/anywhere.eks.amazonaws.com_snowmachineconfigs.yaml index 909fce0db3a9..dcf047dedae0 100644 --- a/config/crd/bases/anywhere.eks.amazonaws.com_snowmachineconfigs.yaml +++ b/config/crd/bases/anywhere.eks.amazonaws.com_snowmachineconfigs.yaml @@ -69,6 +69,61 @@ spec: items: type: string type: array + hostOSConfiguration: + description: HostOSConfiguration provides OS-specific configurations + for the machine. + properties: + bottlerocketConfiguration: + description: BottlerocketConfiguration defines the Bottlerocket + configuration on the host OS. These settings only take effect + when the `osFamily` is bottlerocket. + properties: + kernel: + description: Kernel defines the kernel settings for bottlerocket. 
+ properties: + sysctlSettings: + additionalProperties: + type: string + description: SysctlSettings defines the kernel sysctl + settings to set for bottlerocket nodes. + type: object + type: object + kubernetes: + description: Kubernetes defines the Kubernetes settings on + the host OS. + properties: + allowedUnsafeSysctls: + description: AllowedUnsafeSysctls defines the list of + unsafe sysctls that can be set on a node. + items: + type: string + type: array + clusterDNSIPs: + description: ClusterDNSIPs defines IP addresses of the + DNS servers. + items: + type: string + type: array + maxPods: + description: MaxPods defines the maximum number of pods + that can run on a node. + type: integer + type: object + type: object + ntpConfiguration: + description: NTPConfiguration defines the NTP configuration on + the host OS. + properties: + servers: + description: Servers defines a list of NTP servers to be configured + on the host OS. + items: + type: string + type: array + required: + - servers + type: object + type: object instanceType: description: InstanceType is the type of instance to create. type: string diff --git a/config/crd/bases/anywhere.eks.amazonaws.com_tinkerbellmachineconfigs.yaml b/config/crd/bases/anywhere.eks.amazonaws.com_tinkerbellmachineconfigs.yaml index 1b2667ce488f..e486d55cc270 100644 --- a/config/crd/bases/anywhere.eks.amazonaws.com_tinkerbellmachineconfigs.yaml +++ b/config/crd/bases/anywhere.eks.amazonaws.com_tinkerbellmachineconfigs.yaml @@ -53,6 +53,16 @@ spec: configuration on the host OS. These settings only take effect when the `osFamily` is bottlerocket. properties: + kernel: + description: Kernel defines the kernel settings for bottlerocket. + properties: + sysctlSettings: + additionalProperties: + type: string + description: SysctlSettings defines the kernel sysctl + settings to set for bottlerocket nodes. + type: object + type: object kubernetes: description: Kubernetes defines the Kubernetes settings on the host OS. 
diff --git a/config/crd/bases/anywhere.eks.amazonaws.com_vspheremachineconfigs.yaml b/config/crd/bases/anywhere.eks.amazonaws.com_vspheremachineconfigs.yaml index e6aca8e157e7..ce2a56da887a 100644 --- a/config/crd/bases/anywhere.eks.amazonaws.com_vspheremachineconfigs.yaml +++ b/config/crd/bases/anywhere.eks.amazonaws.com_vspheremachineconfigs.yaml @@ -59,6 +59,16 @@ spec: configuration on the host OS. These settings only take effect when the `osFamily` is bottlerocket. properties: + kernel: + description: Kernel defines the kernel settings for bottlerocket. + properties: + sysctlSettings: + additionalProperties: + type: string + description: SysctlSettings defines the kernel sysctl + settings to set for bottlerocket nodes. + type: object + type: object kubernetes: description: Kubernetes defines the Kubernetes settings on the host OS. diff --git a/config/manifest/eksa-components.yaml b/config/manifest/eksa-components.yaml index 2e2adb3b27eb..64e0833f49f4 100644 --- a/config/manifest/eksa-components.yaml +++ b/config/manifest/eksa-components.yaml @@ -3642,6 +3642,12 @@ spec: allowed between pods. Accepted values are default, always, never. type: string + skipUpgrade: + default: false + description: SkipUpgrade indicates that Cilium maintenance + should be skipped during upgrades. This can be used + when operators wish to self-manage the Cilium installation. + type: boolean type: object kindnetd: type: object @@ -3911,8 +3917,7 @@ spec: insecureSkipVerify: description: InsecureSkipVerify skips the registry certificate verification. Only use this solution for isolated testing or - in a tightly controlled, air-gapped environment. Currently only - supported for snow provider + in a tightly controlled, air-gapped environment. 
type: boolean ociNamespaces: description: OCINamespaces defines the mapping from an upstream @@ -5047,6 +5052,61 @@ spec: items: type: string type: array + hostOSConfiguration: + description: HostOSConfiguration provides OS specific configurations + for the machine + properties: + bottlerocketConfiguration: + description: BottlerocketConfiguration defines the Bottlerocket + configuration on the host OS. These settings only take effect + when the `osFamily` is bottlerocket. + properties: + kernel: + description: Kernel defines the kernel settings for bottlerocket. + properties: + sysctlSettings: + additionalProperties: + type: string + description: SysctlSettings defines the kernel sysctl + settings to set for bottlerocket nodes. + type: object + type: object + kubernetes: + description: Kubernetes defines the Kubernetes settings on + the host OS. + properties: + allowedUnsafeSysctls: + description: AllowedUnsafeSysctls defines the list of + unsafe sysctls that can be set on a node. + items: + type: string + type: array + clusterDNSIPs: + description: ClusterDNSIPs defines IP addresses of the + DNS servers. + items: + type: string + type: array + maxPods: + description: MaxPods defines the maximum number of pods + that can run on a node. + type: integer + type: object + type: object + ntpConfiguration: + description: NTPConfiguration defines the NTP configuration on + the host OS. + properties: + servers: + description: Servers defines a list of NTP servers to be configured + on the host OS. + items: + type: string + type: array + required: + - servers + type: object + type: object instanceType: description: InstanceType is the type of instance to create. type: string @@ -5295,6 +5355,16 @@ spec: configuration on the host OS. These settings only take effect when the `osFamily` is bottlerocket. properties: + kernel: + description: Kernel defines the kernel settings for bottlerocket. 
+ properties: + sysctlSettings: + additionalProperties: + type: string + description: SysctlSettings defines the kernel sysctl + settings to set for bottlerocket nodes. + type: object + type: object kubernetes: description: Kubernetes defines the Kubernetes settings on the host OS. @@ -5661,6 +5731,16 @@ spec: configuration on the host OS. These settings only take effect when the `osFamily` is bottlerocket. properties: + kernel: + description: Kernel defines the kernel settings for bottlerocket. + properties: + sysctlSettings: + additionalProperties: + type: string + description: SysctlSettings defines the kernel sysctl + settings to set for bottlerocket nodes. + type: object + type: object kubernetes: description: Kubernetes defines the Kubernetes settings on the host OS. @@ -6024,7 +6104,7 @@ rules: - apiGroups: - packages.eks.amazonaws.com resources: - - packagebundlecontrollers + - packages verbs: - create - delete diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml index 841ba435400b..a82612d77aa1 100644 --- a/config/rbac/role.yaml +++ b/config/rbac/role.yaml @@ -213,7 +213,7 @@ rules: - apiGroups: - packages.eks.amazonaws.com resources: - - packagebundlecontrollers + - packages verbs: - create - delete diff --git a/controllers/cluster_controller_legacy.go b/controllers/cluster_controller_legacy.go index 82737e0b742b..aa16dc26ac3a 100644 --- a/controllers/cluster_controller_legacy.go +++ b/controllers/cluster_controller_legacy.go @@ -62,11 +62,10 @@ func NewClusterReconcilerLegacy(client client.Client, log logr.Logger, scheme *r // +kubebuilder:rbac:groups=bmc.tinkerbell.org,resources=machines;machines/status,verbs=get;list;watch // // For the full cluster lifecycle to support Curated Packages, the controller -// must be able to create, delete, update, and patch package bundle controller -// resources, which will trigger the curated packages controller to do the -// rest. 
+// must be able to create, delete, update, and patch package resources, which +// will trigger the curated packages controller to do the rest. // -// +kubebuilder:rbac:groups=packages.eks.amazonaws.com,resources=packagebundlecontrollers,verbs=create;delete;get;list;patch;update;watch; +// +kubebuilder:rbac:groups=packages.eks.amazonaws.com,resources=packages,verbs=create;delete;get;list;patch;update;watch; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. diff --git a/docs/content/en/docs/concepts/cluster-topologies.md b/docs/content/en/docs/concepts/cluster-topologies.md index 9ee6aa1ca171..8c42d6c90ea1 100644 --- a/docs/content/en/docs/concepts/cluster-topologies.md +++ b/docs/content/en/docs/concepts/cluster-topologies.md @@ -33,6 +33,15 @@ This shows examples of a management cluster that deploys and manages multiple wo ![Management clusters can create and manage multiple workload clusters](/images/eks-a_cluster_management.png) +With the management cluster in place, you have a choice of tools for creating, upgrading, and deleting workload clusters. +Check each provider to see which tools it currently supports. +Supported workload cluster creation, upgrade and deletion tools include: + +* `eksctl` CLI +* Terraform +* GitOps +* `kubectl` CLI to communicate with the Kubernetes API + ## What’s the difference between a management cluster and a bootstrap cluster for EKS Anywhere? A management cluster is a long-lived entity you have to actively operate. 
diff --git a/docs/content/en/docs/concepts/clusterworkflow.md b/docs/content/en/docs/concepts/clusterworkflow.md index cd30005d459c..164b73398919 100644 --- a/docs/content/en/docs/concepts/clusterworkflow.md +++ b/docs/content/en/docs/concepts/clusterworkflow.md @@ -8,8 +8,9 @@ description: > --- Each EKS Anywhere cluster is built from a cluster specification file, with the structure of the configuration file based on the target provider for the cluster. -Currently, Bare Metal, CloudStack, and VMware vSphere are the recommended providers for supported EKS Anywhere clusters. -We step through the cluster creation workflow for those providers here. +Currently, Bare Metal, CloudStack, Nutanix, Snow, and VMware vSphere are the recommended providers for supported EKS Anywhere clusters. +Docker is available as an unsupported provider. +We step through the cluster creation workflow for Bare Metal and vSphere providers here. ## Management and workload clusters diff --git a/docs/content/en/docs/getting-started/install/_index.md b/docs/content/en/docs/getting-started/install/_index.md index 8d3b7acf9e2b..da241aaf600e 100644 --- a/docs/content/en/docs/getting-started/install/_index.md +++ b/docs/content/en/docs/getting-started/install/_index.md @@ -65,8 +65,9 @@ sudo mv /tmp/eksctl /usr/local/bin/ Install the `eksctl-anywhere` plugin. 
```bash -export EKSA_RELEASE="0.14.3" OS="$(uname -s | tr A-Z a-z)" RELEASE_NUMBER=30 -curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \ +RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion") +EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri") +curl $EKS_ANYWHERE_TARBALL_URL \ --silent --location \ | tar xz ./eksctl-anywhere sudo mv ./eksctl-anywhere /usr/local/bin/ @@ -90,7 +91,7 @@ If you installed `eksctl-anywhere` via homebrew you can upgrade the binary with ```bash brew update -brew upgrade eks-anywhere +brew upgrade aws/tap/eks-anywhere ``` If you installed `eksctl-anywhere` manually you should follow the installation steps to download the latest release. diff --git a/docs/content/en/docs/getting-started/local-environment/_index.md b/docs/content/en/docs/getting-started/local-environment/_index.md index c8f1f2a57468..c713971831a6 100644 --- a/docs/content/en/docs/getting-started/local-environment/_index.md +++ b/docs/content/en/docs/getting-started/local-environment/_index.md @@ -323,6 +323,11 @@ Follow these steps to have your management cluster create and manage separate wo * **GitOps**: See [Manage separate workload clusters with GitOps]({{< relref "../../tasks/cluster/cluster-flux.md#manage-separate-workload-clusters-using-gitops" >}}) * **Terraform**: See [Manage separate workload clusters with Terraform]({{< relref "../../tasks/cluster/cluster-terraform.md#manage-separate-workload-clusters-using-terraform" >}}) + + * **Kubernetes CLI**: The cluster lifecycle feature lets you use `kubectl` to manage a workload cluster. 
For example: + ```bash + kubectl apply -f eksa-w01-cluster.yaml + ``` * **eksctl CLI**: Useful for temporary cluster configurations. To create a workload cluster with `eksctl`, do one of the following. For a regular cluster create (with internet access), type the following: diff --git a/docs/content/en/docs/getting-started/production-environment/snow-getstarted.md b/docs/content/en/docs/getting-started/production-environment/snow-getstarted.md index e1784323fc7c..b7aa6ebae079 100644 --- a/docs/content/en/docs/getting-started/production-environment/snow-getstarted.md +++ b/docs/content/en/docs/getting-started/production-environment/snow-getstarted.md @@ -204,6 +204,10 @@ Follow these steps if you want to use your initial cluster to create and manage ``` As noted earlier, adding the `--kubeconfig` option tells `eksctl` to use the management cluster identified by that kubeconfig file to create a different workload cluster. + * **kubectl CLI**: The cluster lifecycle feature lets you use `kubectl`, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use `kubectl`, run: + ```bash + kubectl apply -f eksa-w01-cluster.yaml + ``` 1. Check the workload cluster: You can now use the workload cluster as you would any Kubernetes cluster. diff --git a/docs/content/en/docs/getting-started/production-environment/vsphere-getstarted.md b/docs/content/en/docs/getting-started/production-environment/vsphere-getstarted.md index bf93dbd83360..a452e14467b8 100644 --- a/docs/content/en/docs/getting-started/production-environment/vsphere-getstarted.md +++ b/docs/content/en/docs/getting-started/production-environment/vsphere-getstarted.md @@ -211,6 +211,11 @@ Follow these steps if you want to use your initial cluster to create and manage ``` As noted earlier, adding the `--kubeconfig` option tells `eksctl` to use the management cluster identified by that kubeconfig file to create a different workload cluster.
+ * **kubectl CLI**: The cluster lifecycle feature lets you use `kubectl`, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use `kubectl`, run: + ```bash + kubectl apply -f eksa-w01-cluster.yaml + ``` + 1. To check the workload cluster, get the workload cluster credentials and run a [test workload:]({{< relref "../../tasks/workload/test-app" >}}) * If your workload cluster was created with `eksctl`, diff --git a/docs/content/en/docs/reference/artifacts.md b/docs/content/en/docs/reference/artifacts.md index b2cc0a3a8e51..cf8c64d49ac2 100644 --- a/docs/content/en/docs/reference/artifacts.md +++ b/docs/content/en/docs/reference/artifacts.md @@ -30,19 +30,25 @@ However, see [Building node images]({{< relref "#building-node-images">}}) for i Bottlerocket vends its Baremetal variant Images using a secure distribution tool called `tuftool`. Please refer to [Download Bottlerocket node images]({{< relref "#download-bottlerocket-node-images">}}) to download Bottlerocket image.
You can also get the download URIs for Bottlerocket Baremetal images from the bundle release by running the following command: ```bash -curl -s https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/manifest.yaml | yq ".spec.versionsBundles[].eksD.raw.bottlerocket.uri" +LATEST_EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") +BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") +curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[].eksD.raw.bottlerocket.uri" ``` ### HookOS (kernel and initial ramdisk) for Bare Metal kernel: ```bash -https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/hook/6d43b8b331c7a389f3ffeaa388fa9aa98248d7a2/vmlinuz-x86_64 +LATEST_EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") +BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") +curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].tinkerbell.tinkerbellStack.hook.vmlinuz.amd.uri" ``` initial ramdisk: ```bash -https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/hook/6d43b8b331c7a389f3ffeaa388fa9aa98248d7a2/initramfs-x86_64 +LATEST_EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") +BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") +curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].tinkerbell.tinkerbellStack.hook.initramfs.amd.uri" ``` ## vSphere artifacts 
@@ -51,7 +57,9 @@ https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/hook/6d4 Bottlerocket vends its VMware variant OVAs using a secure distribution tool called `tuftool`. Please refer [Download Bottlerocket node images]({{< relref "#download-bottlerocket-node-images">}}) to download Bottlerocket OVA. You can also get the download URIs for Bottlerocket OVAs from the bundle release by running the following command: ```bash -curl -s https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/manifest.yaml | yq ".spec.versionsBundles[].eksD.ova.bottlerocket.uri" +LATEST_EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") +BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") +curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[].eksD.ova.bottlerocket.uri" ``` #### Bottlerocket Template Tags @@ -124,73 +132,95 @@ export KUBEVERSION="1.25" ## Building node images The `image-builder` CLI lets you build your own Ubuntu-based vSphere OVAs, Nutanix qcow2 images, RHEL-based qcow2 images, or Bare Metal gzip images to use in EKS Anywhere clusters. -When you run `image-builder` it will pull in all components needed to create images to use for nodes in an EKS Anywhere cluster, including the lastest operating system, Kubernetes, and EKS Distro security updates, bug fixes, and patches. -With this tool, when you build an image you get to choose: +When you run `image-builder`, it will pull in all components needed to build images to be used as Kubernetes nodes in an EKS Anywhere cluster, including the latest operating system, Kubernetes control plane components, and EKS Distro security updates, bug fixes, and patches. 
+When building an image using this tool, you get to choose: -* Operating system type (for example, ubuntu) -* Provider (vsphere, cloudstack, baremetal, ami, nutanix, snow) +* Operating system type (for example, ubuntu, redhat) +* Provider (vsphere, cloudstack, baremetal, ami, nutanix) * Release channel for EKS Distro (generally aligning with Kubernetes releases) -* vSphere only: configuration file providing information needed to access your vSphere setup -* CloudStack only: configuration file providing information needed to access your Cloudstack setup -* AMI only: configuration file providing information needed to customize your AMI build parameters -* Nutanix only: configuration file providing information needed to access Prism Central +* **vSphere only:** configuration file providing information needed to access your vSphere setup +* **CloudStack only:** configuration file providing information needed to access your CloudStack setup +* **Snow AMI only:** configuration file providing information needed to customize your Snow AMI build parameters +* **Nutanix only:** configuration file providing information needed to access Nutanix Prism Central Because `image-builder` creates images in the same way that the EKS Anywhere project does for their own testing, images built with that tool are supported. -The following procedure describes how to use `image-builder` to build images for EKS Anywhere on a vSphere, Bare Metal, Nutanix, or Snow provider. + +The table below shows the support matrix for the hypervisor and OS combinations that `image-builder` supports. 
+ +| | vSphere | Baremetal | CloudStack | Nutanix | Snow | +|:----------:|:-------:|:---------:|:----------:|:-------:|:----:| +| **Ubuntu** | ✓ | ✓ | | ✓ | ✓ | +| **RHEL** | ✓ | ✓ | ✓ | | | + ### Prerequisites -To use `image-builder` you must meet the following prerequisites: - -* Run on Ubuntu 22.04 or later (for Ubuntu images) or RHEL 8.4 or later (for RHEL images) -* Machine requirements: - * AMD 64-bit architecture - * 50 GB disk space - * 2 vCPUs - * 8 GB RAM - * Bare Metal only: Run on a bare metal machine with virtualization enabled -* Network access to: - * vCenter endpoint (vSphere only) - * CloudStack endpoint (CloudStack only) - * Prism Central endpoint (Nutanix only) - * public.ecr.aws (to download container images from EKS Anywhere) - * anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs) - * distro.eks.amazonaws.com (to download EKS Distro binaries and manifests) - * d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images) - * api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region) - * d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container) -* vSphere only: - * Required vSphere user permissions: - * Inventory: - * Create new - * Configuration: - * Change configuration - * Add new disk - * Add or remove device - * Change memory - * Change settings - * Set annotation - * Interaction: - * Power on - * Power off - * Console interaction - * Configure CD media - * Device connection - * Snapshot management: - * Create snapshot - * Provisioning - * Mark as template - * Resource Pool - * Assign vm to resource pool - * Datastore - * Allocate space - * Browse data - * Low level file operations - * Network - * Assign network to vm -* CloudStack only: See [CloudStack Permissions for CAPC](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/blob/main/docs/book/src/topics/cloudstack-permissions.md) for required CloudStack user 
permissions. -* AMI only: Packer will require prior authentication with your AWS account to launch EC2 instances for the AMI build. See [Authentication guide for Amazon EBS Packer builder](https://developer.hashicorp.com/packer/plugins/builders/amazon#authentication) for possible modes of authentication. We recommend that you run `image-builder` on a pre-existing Ubuntu EC2 instance and use an [IAM instance role with the required permissions](https://developer.hashicorp.com/packer/plugins/builders/amazon#iam-task-or-instance-role). -* Nutanix only: Prism Admin permissions +To use `image-builder`, you must meet the following prerequisites: + +#### System requirements + +`image-builder` has been tested on Ubuntu, RHEL and Amazon Linux 2 machines. The following system requirements should be met for the machine on which `image-builder` is run: +* AMD 64-bit architecture +* 50 GB disk space +* 2 vCPUs +* 8 GB RAM +* **Baremetal only:** Run on a bare metal machine with virtualization enabled + +#### Network connectivity requirements +* public.ecr.aws (to download container images from EKS Anywhere) +* anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere artifacts such as binaries, manifests and OS images) +* distro.eks.amazonaws.com (to download EKS Distro binaries and manifests) +* d2glxqk2uabbnd.cloudfront.net (to pull the EKS Anywhere and EKS Distro ECR container images) +* api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region) +* d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container) +* github.com (to download binaries and tools required for image builds from GitHub releases) +* releases.hashicorp.com (to download Packer binary for image builds) +* galaxy.ansible.com (to download Ansible packages from Ansible Galaxy) +* **vSphere only:** VMware vCenter endpoint +* **CloudStack only:** Apache CloudStack endpoint +* **Nutanix only:** Nutanix Prism Central endpoint +* **Red Hat only:** 
dl.fedoraproject.org (to download RPMs and GPG keys for RHEL image builds) +* **Ubuntu only:** cdimage.ubuntu.com (to download Ubuntu server ISOs for Ubuntu image builds) + +#### vSphere requirements +The following vSphere permissions are required to build OVA images using `image-builder`: +* Inventory: + * Create new +* Configuration: + * Change configuration + * Add new disk + * Add or remove device + * Change memory + * Change settings + * Set annotation +* Interaction: + * Power on + * Power off + * Console interaction + * Configure CD media + * Device connection +* Snapshot management: + * Create snapshot +* Provisioning + * Mark as template +* Resource Pool + * Assign VM to resource pool +* Datastore + * Allocate space + * Browse data + * Low level file operations +* Network + * Assign network to VM + +#### CloudStack requirements +Refer to the [CloudStack Permissions for CAPC](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/blob/main/docs/book/src/topics/cloudstack-permissions.md) doc for required CloudStack user permissions. + +#### Snow AMI requirements +Packer will require prior authentication with your AWS account to launch EC2 instances for the Snow AMI build. Refer to the [Authentication guide for Amazon EBS Packer builder](https://developer.hashicorp.com/packer/plugins/builders/amazon#authentication) for possible modes of authentication. We recommend that you run `image-builder` on a pre-existing Ubuntu EC2 instance and use an [IAM instance role with the required permissions](https://developer.hashicorp.com/packer/plugins/builders/amazon#iam-task-or-instance-role). + +#### Nutanix permissions + +Prism Central Administrator permissions are required to build a Nutanix image using `image-builder`. ### Optional Proxy configuration You can use a proxy server to route outbound requests to the internet. 
To configure `image-builder` tool to use a proxy server, export these proxy environment variables: @@ -202,7 +232,7 @@ You can use a proxy server to route outbound requests to the internet. To config ### Build vSphere OVA node images -These steps use `image-builder` to create an Ubuntu-based or RHEL-based image for vSphere. +These steps use `image-builder` to create an Ubuntu-based or RHEL-based image for vSphere. Before proceeding, ensure that the above system-level, network-level and vSphere-specific [prerequisites]({{< relref "#prerequisites">}}) have been met. 1. Create a linux user for running image-builder. ```bash @@ -227,9 +257,11 @@ These steps use `image-builder` to create an Ubuntu-based or RHEL-based image fo 1. Get `image-builder`: ```bash cd /tmp - wget https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/image-builder/0.1.2/image-builder-linux-amd64.tar.gz - tar xvf image-builder*.tar.gz - sudo cp image-builder /usr/local/bin + LATEST_EKSA_RELEASE_VERSION=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") + BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") + IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri") + curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder + sudo cp ./image-builder /usr/local/bin cd - ``` 1. 
Get the latest version of `govc`: @@ -248,12 +280,12 @@ These steps use `image-builder` to create an Ubuntu-based or RHEL-based image fo "create_snapshot": "", "datacenter": "", "datastore": "", - "folder": "", + "folder": "", "insecure_connection": "true", "linked_clone": "false", "network": "", "password": "", - "resource_pool": "", + "resource_pool": "", "username": "", "vcenter_server": "", "vsphere_library_name": "" @@ -293,7 +325,7 @@ These steps use `image-builder` to create an Ubuntu-based or RHEL-based image fo image-builder build --os redhat --hypervisor vsphere --release-channel 1-25 --vsphere-config vsphere-connection.json ``` ### Build Bare Metal node images -These steps use `image-builder` to create an Ubuntu-based or RHEL-based image for Bare Metal. +These steps use `image-builder` to create an Ubuntu-based or RHEL-based image for Bare Metal. Before proceeding, ensure that the above system-level, network-level and baremetal-specific [prerequisites]({{< relref "#prerequisites">}}) have been met. 1. Create a linux user for running image-builder. ```bash @@ -319,12 +351,15 @@ These steps use `image-builder` to create an Ubuntu-based or RHEL-based image fo echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config ``` 1. 
Get `image-builder`: - ```bash - cd /tmp - curl -#o- https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/image-builder/0.1.2/image-builder-linux-amd64.tar.gz | \ - sudo tar -xzC /usr/local/bin ./image-builder - cd - - ``` + ```bash + cd /tmp + LATEST_EKSA_RELEASE_VERSION=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") + BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") + IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri") + curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder + sudo cp ./image-builder /usr/local/bin + cd - + ``` 1. Create an Ubuntu or Red Hat image. @@ -381,7 +416,7 @@ These steps use `image-builder` to create an Ubuntu-based or RHEL-based image fo ### Build CloudStack node images -These steps use `image-builder` to create a RHEL-based image for CloudStack. +These steps use `image-builder` to create a RHEL-based image for CloudStack. Before proceeding, ensure that the above system-level, network-level and CloudStack-specific [prerequisites]({{< relref "#prerequisites">}}) have been met. 1. Create a linux user for running image-builder. ```bash @@ -407,13 +442,15 @@ These steps use `image-builder` to create a RHEL-based image for CloudStack. echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config ``` 1. 
Get `image-builder`: - ```bash - cd /tmp - wget https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/image-builder/0.1.2/image-builder-linux-amd64.tar.gz - tar xvf image-builder*.tar.gz - sudo cp image-builder /usr/local/bin - cd - - ``` + ```bash + cd /tmp + LATEST_EKSA_RELEASE_VERSION=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") + BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") + IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri") + curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder + sudo cp ./image-builder /usr/local/bin + cd - + ``` 1. Create a CloudStack configuration file (for example, `cloudstack.json`) to identify the location of a Red Hat Enterprise Linux 8 ISO image and related checksum and Red Hat subscription information: ```json { @@ -437,11 +474,11 @@ These steps use `image-builder` to create a RHEL-based image for CloudStack. image-builder build --os redhat --hypervisor cloudstack --release-channel 1-25 --cloudstack-config cloudstack.json ``` -1. To consume the resulting RHEL-based image, add it as a template to your CloudStack setup as described in [Preparing Cloudstack]({{< relref "./cloudstack/cloudstack-preparation.md" >}}). +1. To consume the resulting RHEL-based image, add it as a template to your CloudStack setup as described in [Preparing CloudStack]({{< relref "./cloudstack/cloudstack-preparation.md" >}}). ### Build Snow node images -These steps use `image-builder` to create an Ubuntu-based Amazon Machine Image (AMI) that is backed by EBS volumes for Snow. +These steps use `image-builder` to create an Ubuntu-based Amazon Machine Image (AMI) that is backed by EBS volumes for Snow. 
Before proceeding, ensure that the above system-level, network-level and AMI-specific [prerequisites]({{< relref "#prerequisites">}}) have been met. 1. Create a linux user for running image-builder. ```bash @@ -465,12 +502,14 @@ These steps use `image-builder` to create an Ubuntu-based Amazon Machine Image ( ``` 1. Get `image-builder`: ```bash - cd /tmp - wget https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/image-builder/0.1.2/image-builder-linux-amd64.tar.gz - tar xvf image-builder*.tar.gz - sudo cp image-builder /usr/local/bin - cd - - ``` + cd /tmp + LATEST_EKSA_RELEASE_VERSION=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") + BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") + IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri") + curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder + sudo cp ./image-builder /usr/local/bin + cd - + ``` 1. Create an AMI configuration file (for example, `ami.json`) that contains various AMI parameters. ```json { @@ -494,11 +533,11 @@ These steps use `image-builder` to create an Ubuntu-based Amazon Machine Image ( * `--os`: `ubuntu` * `--hypervisor`: For AMI, use `ami` - * `--release-channel`: Supported EKS Distro releases include 1-21, 1-22, 1-23, 1-24 and 1-25. + * `--release-channel`: Supported EKS Distro releases include 1-21, 1-22, 1-23 and 1-24. * `--ami-config`: AMI configuration file (`ami.json` in this example) ```bash - image-builder build --os ubuntu --hypervisor ami --release-channel 1-25 --ami-config ami.json + image-builder build --os ubuntu --hypervisor ami --release-channel 1-24 --ami-config ami.json ``` 1.
After the build, the Ubuntu AMI will be available in your AWS account in the AWS region specified in your AMI configuration file. If you wish to export it as a Raw image, you can achieve this using the AWS CLI. ``` @@ -514,7 +553,7 @@ These steps use `image-builder` to create an Ubuntu-based Amazon Machine Image ( ### Build Nutanix node images -These steps use `image-builder` to create a Ubuntu-based image for Nutanix AHV and import it into the AOS Image Service. +These steps use `image-builder` to create an Ubuntu-based image for Nutanix AHV and import it into the AOS Image Service. Before proceeding, ensure that the above system-level, network-level and Nutanix-specific [prerequisites]({{< relref "#prerequisites">}}) have been met. 1. Download an [Ubuntu cloud image](https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img) for the build and upload it to the AOS Image Service using Prism. You will need to specify this image name as the `source_image_name` in the `nutanix-connection.json` config file specified below. @@ -539,13 +578,15 @@ These steps use `image-builder` to create a Ubuntu-based image for Nutanix AHV a echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config ``` 1.
Get `image-builder`: - ```bash - cd /tmp - wget https://anywhere-assets.eks.amazonaws.com/releases/bundles/29/artifacts/image-builder/0.1.2/image-builder-linux-amd64.tar.gz - tar xvf image-builder*.tar.gz - sudo cp image-builder /usr/local/bin - cd - - ``` + ```bash + cd /tmp + LATEST_EKSA_RELEASE_VERSION=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion") + BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$LATEST_EKSA_RELEASE_VERSION\").bundleManifestUrl") + IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri") + curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder + sudo cp ./image-builder /usr/local/bin + cd - + ``` 1. Create a `nutanix-connection.json` config file. More details on values can be found in the [image-builder documentation](https://image-builder.sigs.k8s.io/capi/providers/nutanix.html). 
See example below: ```json { diff --git a/docs/content/en/docs/reference/changelog.md b/docs/content/en/docs/reference/changelog.md index fe1f29fe8bb2..8a4de39eeddf 100644 --- a/docs/content/en/docs/reference/changelog.md +++ b/docs/content/en/docs/reference/changelog.md @@ -6,6 +6,13 @@ weight: 35 ## Unreleased +## [v0.14.5](https://github.com/aws/eks-anywhere/releases/tag/v0.14.5) + +### Fixed + +- Fix kubectl get call to point to full API name ([#5342](https://github.com/aws/eks-anywhere/pull/5342)) +- Expand all kubectl calls to fully qualified names ([#5347](https://github.com/aws/eks-anywhere/pull/5347)) + ## [v0.14.4](https://github.com/aws/eks-anywhere/releases/tag/v0.14.4) ### Added @@ -13,7 +20,7 @@ weight: 35 - `--no-timeouts` flag in create and upgrade commands to disable timeout for all wait operations - Management resources backup procedure with clusterctl -## [v1.14.3](https://github.com/aws/eks-anywhere/releases/tag/v0.14.3) +## [v0.14.3](https://github.com/aws/eks-anywhere/releases/tag/v0.14.3) ### Added diff --git a/docs/content/en/docs/reference/clusterspec/nutanix.md b/docs/content/en/docs/reference/clusterspec/nutanix.md index 4951e8332a1f..e303bf56ce5f 100644 --- a/docs/content/en/docs/reference/clusterspec/nutanix.md +++ b/docs/content/en/docs/reference/clusterspec/nutanix.md @@ -75,6 +75,9 @@ spec: name: vm-network type: name systemDiskSize: 40Gi + project: + type: name + name: my-project users: - name: eksa sshAuthorizedKeys: @@ -100,6 +103,9 @@ spec: name: vm-network type: name systemDiskSize: 40Gi + project: + type: name + name: my-project users: - name: eksa sshAuthorizedKeys: @@ -185,6 +191,17 @@ The PEM encoded CA trust bundle. The `additionalTrustBundle` needs to be populated with the PEM-encoded x509 certificate of the Root CA that issued the certificate for Prism Central. Suggestions on how to obtain this certificate are [here]({{< relref "../nutanix/nutanix-prereq/#prepare-a-nutanix-environment" >}}). +__Example__:
+```yaml + additionalTrustBundle: | + -----BEGIN CERTIFICATE----- + + -----END CERTIFICATE----- + -----BEGIN CERTIFICATE----- + + -----END CERTIFICATE----- +``` + ## NutanixMachineConfig Fields ### cluster @@ -238,6 +255,19 @@ Amount of vCPU sockets. (Default: `2`) ### vcpusPerSocket Amount of vCPUs per socket. (Default: `1`) +### project (optional) +Reference to an existing project used for the virtual machines. + +### project.type +Type to identify the project. (Permitted values: `name` or `uuid`) + +### project.name (`name` or `uuid` required) +Name of the project. + +### project.uuid (`name` or `uuid` required) +UUID of the project. + + ### users (optional) The users you want to configure to access your virtual machines. Only one is permitted at this time. diff --git a/docs/content/en/docs/reference/clusterspec/optional/registrymirror.md b/docs/content/en/docs/reference/clusterspec/optional/registrymirror.md index bfd102b73652..883cff6acf8d 100644 --- a/docs/content/en/docs/reference/clusterspec/optional/registrymirror.md +++ b/docs/content/en/docs/reference/clusterspec/optional/registrymirror.md @@ -36,13 +36,15 @@ spec: * __Description__: IP address or hostname of the private registry for pulling images * __Type__: string * __Example__: ```endpoint: 192.168.0.1``` + ### __port__ (optional) -* __Description__: Port for the private registry. This is an optional field. If a port +* __Description__: port for the private registry. This is an optional field. If a port is not specified, the default HTTPS port `443` is used * __Type__: string * __Example__: ```port: 443``` + ### __caCertContent__ (optional) -* __Description__: Certificate Authority (CA) Certificate for the private registry . When using +* __Description__: certificate Authority (CA) Certificate for the private registry. When using self-signed certificates it is necessary to pass this parameter in the cluster spec. This __must__ be the individual public CA cert used to sign the registry certificate.
This will be added to the cluster nodes so that they are able to pull images from the private registry. It is also possible to configure CACertContent by exporting an environment variable:
@@ -57,9 +59,10 @@ spec: es6RXmsCj... -----END CERTIFICATE----- ``` + ### __authenticate__ (optional) -* __Description__: Optional field to authenticate with a private registry. When using private registries that +* __Description__: optional field to authenticate with a private registry. When using private registries that require authentication, it is necessary to set this parameter to ```true``` in the cluster spec. * __Type__: boolean * __Example__: ```authenticate: true``` @@ -70,6 +73,10 @@ export REGISTRY_USERNAME= export REGISTRY_PASSWORD= ``` +### __insecureSkipVerify__ (optional) +* __Description__: optional field to skip the registry certificate verification. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. Currently only supported for Ubuntu OS. +* __Type__: boolean + ## Import images into a private registry You can use the `download images` and `import images` commands to pull images from `public.ecr.aws` and push them to your private registry.
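The `download images` and `import images` flow can be sketched end to end. The `-o`/`-i` flag names and the credential values below are assumptions, not taken from this PR; verify them with `eksctl anywhere download images --help` and `eksctl anywhere import images --help` on your release. The `REGISTRY_USERNAME`/`REGISTRY_PASSWORD` variables are the ones documented above.

```bash
# Download all container images and artifacts for the current EKS Anywhere
# release into a local tarball (flag name assumed; check --help).
eksctl anywhere download images -o images.tar.gz

# Authenticate against the private registry named in the cluster spec's
# registryMirrorConfiguration, then push the images into it.
export REGISTRY_USERNAME='<username>'   # placeholder credentials
export REGISTRY_PASSWORD='<password>'
eksctl anywhere import images -i images.tar.gz
```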
diff --git a/docs/content/en/docs/reference/eksctl/_index.md b/docs/content/en/docs/reference/eksctl/_index.md index 574040db9a94..676ffcfc3bb8 100644 --- a/docs/content/en/docs/reference/eksctl/_index.md +++ b/docs/content/en/docs/reference/eksctl/_index.md @@ -13,10 +13,10 @@ Use this page as a reference to useful `eksctl anywhere` command examples for wo Available `eksctl anywhere` commands include: * `create cluster` To create an EKS Anywhere cluster +* `upgrade` To upgrade a workload cluster * `delete cluster` To delete an EKS Anywhere cluster -* `generate` [`clusterconfig` | `support-bundle` | `support-bundle-config`] To generate cluster and support configs +* `generate` [`clusterconfig` | `support-bundle` | `support-bundle-config` | `packages` | `hardware`] To generate cluster, support configs, package configs, and tinkerbell hardware files * `help` To get help information -* `upgrade` To upgrade a workload cluster * `version` To get the EKS Anywhere version Options used with multiple commands include: diff --git a/docs/content/en/docs/reference/eksctl/anywhere.md b/docs/content/en/docs/reference/eksctl/anywhere.md new file mode 100644 index 000000000000..8b5ac8ea27d1 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere.md @@ -0,0 +1,38 @@ +--- +title: "anywhere" +linkTitle: "anywhere" +--- + +## anywhere + +Amazon EKS Anywhere + +### Synopsis + +Use eksctl anywhere to build your own self-managing cluster on your hardware with the best of Amazon EKS + +### Options + +``` + -h, --help help for anywhere + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere apply](../anywhere_apply/) - Apply resources +* [anywhere check-images](../anywhere_check-images/) - Check images used by EKS Anywhere do exist in the target registry +* [anywhere copy](../anywhere_copy/) - Copy resources +* [anywhere create](../anywhere_create/) - Create resources +* [anywhere delete](../anywhere_delete/) - Delete resources +* [anywhere 
describe](../anywhere_describe/) - Describe resources +* [anywhere download](../anywhere_download/) - Download resources +* [anywhere exp](../anywhere_exp/) - experimental commands +* [anywhere generate](../anywhere_generate/) - Generate resources +* [anywhere get](../anywhere_get/) - Get resources +* [anywhere import](../anywhere_import/) - Import resources +* [anywhere install](../anywhere_install/) - Install resources to the cluster +* [anywhere list](../anywhere_list/) - List resources +* [anywhere upgrade](../anywhere_upgrade/) - Upgrade resources +* [anywhere version](../anywhere_version/) - Get the eksctl anywhere version + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_apply.md b/docs/content/en/docs/reference/eksctl/anywhere_apply.md new file mode 100644 index 000000000000..91f5aed74387 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_apply.md @@ -0,0 +1,30 @@ +--- +title: "anywhere apply" +linkTitle: "anywhere apply" +--- + +## anywhere apply + +Apply resources + +### Synopsis + +Use eksctl anywhere apply to apply resources + +### Options + +``` + -h, --help help for apply +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere apply package(s)](../anywhere_apply_packages/) - Apply curated packages + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_apply_package(s).md b/docs/content/en/docs/reference/eksctl/anywhere_apply_package(s).md new file mode 100644 index 000000000000..e8b1796db3f4 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_apply_package(s).md @@ -0,0 +1,35 @@ +--- +title: "anywhere apply package(s)" +linkTitle: "anywhere apply package(s)" +--- + +## anywhere apply package(s) + +Apply curated packages + +### Synopsis + +Apply Curated Packages Custom Resources to the cluster + +``` +anywhere apply package(s) [flags] +``` + +### Options + +``` + -f, 
--filename string Filename that contains curated packages custom resources to apply + -h, --help help for package(s) + --kubeconfig string Path to an optional kubeconfig file to use. +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere apply](../anywhere_apply/) - Apply resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_check-images.md b/docs/content/en/docs/reference/eksctl/anywhere_check-images.md new file mode 100644 index 000000000000..2aa9561a075e --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_check-images.md @@ -0,0 +1,34 @@ +--- +title: "anywhere check-images" +linkTitle: "anywhere check-images" +--- + +## anywhere check-images + +Check images used by EKS Anywhere do exist in the target registry + +### Synopsis + +This command is used to check images used by EKS-Anywhere for cluster provisioning do exist in the target registry + +``` +anywhere check-images [flags] +``` + +### Options + +``` + -f, --filename string Filename that contains EKS-A cluster configuration + -h, --help help for check-images +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_copy.md b/docs/content/en/docs/reference/eksctl/anywhere_copy.md new file mode 100644 index 000000000000..4c52c3d5a36b --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_copy.md @@ -0,0 +1,30 @@ +--- +title: "anywhere copy" +linkTitle: "anywhere copy" +--- + +## anywhere copy + +Copy resources + +### Synopsis + +Copy EKS Anywhere resources and artifacts + +### Options + +``` + -h, --help help for copy +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere 
+* [anywhere copy packages](../anywhere_copy_packages/) - Copy curated package images and charts from a source to a destination + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_copy_packages.md b/docs/content/en/docs/reference/eksctl/anywhere_copy_packages.md new file mode 100644 index 000000000000..a84a8278cecf --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_copy_packages.md @@ -0,0 +1,39 @@ +--- +title: "anywhere copy packages" +linkTitle: "anywhere copy packages" +--- + +## anywhere copy packages + +Copy curated package images and charts from a source to a destination + +### Synopsis + +Copy all the EKS Anywhere curated package images and helm charts from a source to a destination. + +``` +anywhere copy packages [flags] +``` + +### Options + +``` + --aws-region string Region to copy images from + -b, --bundle string EKS-A bundle file to read artifact dependencies from + --dry-run Dry run copy to print images that would be copied + --dst-cert string TLS certificate for destination registry + -h, --help help for packages + --insecure Skip TLS verification while copying images and charts + --src-cert string TLS certificate for source registry +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere copy](../anywhere_copy/) - Copy resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_create.md b/docs/content/en/docs/reference/eksctl/anywhere_create.md new file mode 100644 index 000000000000..44a4be57d352 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_create.md @@ -0,0 +1,31 @@ +--- +title: "anywhere create" +linkTitle: "anywhere create" +--- + +## anywhere create + +Create resources + +### Synopsis + +Use eksctl anywhere create to create resources, such as clusters + +### Options + +``` + -h, --help help for create +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level 
verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere create cluster](../anywhere_create_cluster/) - Create workload cluster +* [anywhere create package(s)](../anywhere_create_packages/) - Create curated packages + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_create_cluster.md b/docs/content/en/docs/reference/eksctl/anywhere_create_cluster.md new file mode 100644 index 000000000000..ce0c47680602 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_create_cluster.md @@ -0,0 +1,42 @@ +--- +title: "anywhere create cluster" +linkTitle: "anywhere create cluster" +--- + +## anywhere create cluster + +Create workload cluster + +### Synopsis + +This command is used to create workload clusters + +``` +anywhere create cluster -f [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -f, --filename string Filename that contains EKS-A cluster configuration + --force-cleanup Force deletion of previously created bootstrap cluster + -z, --hardware-csv string Path to a CSV file containing hardware data. 
+ -h, --help help for cluster + --install-packages string Location of curated packages configuration files to install to the cluster + --kubeconfig string Management cluster kubeconfig file + --no-timeouts Disable timeout for all wait operations + --skip-ip-check Skip check for whether cluster control plane ip is in use + --tinkerbell-bootstrap-ip string Override the local tinkerbell IP in the bootstrap cluster +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere create](../anywhere_create/) - Create resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_create_package(s).md b/docs/content/en/docs/reference/eksctl/anywhere_create_package(s).md new file mode 100644 index 000000000000..2a2da8c35443 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_create_package(s).md @@ -0,0 +1,35 @@ +--- +title: "anywhere create package(s)" +linkTitle: "anywhere create package(s)" +--- + +## anywhere create package(s) + +Create curated packages + +### Synopsis + +Create Curated Packages Custom Resources to the cluster + +``` +anywhere create package(s) [flags] +``` + +### Options + +``` + -f, --filename string Filename that contains curated packages custom resources to create + -h, --help help for package(s) + --kubeconfig string Path to an optional kubeconfig file to use. 
+``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere create](../anywhere_create/) - Create resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_delete.md b/docs/content/en/docs/reference/eksctl/anywhere_delete.md new file mode 100644 index 000000000000..915dc97eceab --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_delete.md @@ -0,0 +1,31 @@ +--- +title: "anywhere delete" +linkTitle: "anywhere delete" +--- + +## anywhere delete + +Delete resources + +### Synopsis + +Use eksctl anywhere delete to delete clusters + +### Options + +``` + -h, --help help for delete +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere delete cluster](../anywhere_delete_cluster/) - Workload cluster +* [anywhere delete package(s)](../anywhere_delete_packages/) - Delete package(s) + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_delete_cluster.md b/docs/content/en/docs/reference/eksctl/anywhere_delete_cluster.md new file mode 100644 index 000000000000..543639c54d66 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_delete_cluster.md @@ -0,0 +1,38 @@ +--- +title: "anywhere delete cluster" +linkTitle: "anywhere delete cluster" +--- + +## anywhere delete cluster + +Workload cluster + +### Synopsis + +This command is used to delete workload clusters created by eksctl anywhere + +``` +anywhere delete cluster (<cluster-name>|-f <config-file>) [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -f, --filename string Filename that contains EKS-A cluster configuration, required if <cluster-name> is not provided + --force-cleanup Force deletion of previously created bootstrap cluster + -h, --help help for cluster + --kubeconfig string kubeconfig file pointing to a management cluster + 
-w, --w-config string Kubeconfig file to use when deleting a workload cluster +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere delete](../anywhere_delete/) - Delete resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_delete_package(s).md b/docs/content/en/docs/reference/eksctl/anywhere_delete_package(s).md new file mode 100644 index 000000000000..1e110765f4c7 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_delete_package(s).md @@ -0,0 +1,35 @@ +--- +title: "anywhere delete package(s)" +linkTitle: "anywhere delete package(s)" +--- + +## anywhere delete package(s) + +Delete package(s) + +### Synopsis + +This command is used to delete the curated packages custom resources installed in the cluster + +``` +anywhere delete package(s) [flags] +``` + +### Options + +``` + --cluster string Cluster for package deletion. + -h, --help help for package(s) + --kubeconfig string Path to an optional kubeconfig file to use. 
+``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere delete](../anywhere_delete/) - Delete resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_describe.md b/docs/content/en/docs/reference/eksctl/anywhere_describe.md new file mode 100644 index 000000000000..dc83ddb0293d --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_describe.md @@ -0,0 +1,30 @@ +--- +title: "anywhere describe" +linkTitle: "anywhere describe" +--- + +## anywhere describe + +Describe resources + +### Synopsis + +Use eksctl anywhere describe to show details of a specific resource or group of resources + +### Options + +``` + -h, --help help for describe +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere describe package(s)](../anywhere_describe_packages/) - Describe curated packages in the cluster + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_describe_package(s).md b/docs/content/en/docs/reference/eksctl/anywhere_describe_package(s).md new file mode 100644 index 000000000000..bf34850f62ee --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_describe_package(s).md @@ -0,0 +1,31 @@ +--- +title: "anywhere describe package(s)" +linkTitle: "anywhere describe package(s)" +--- + +## anywhere describe package(s) + +Describe curated packages in the cluster + +``` +anywhere describe package(s) [flags] +``` + +### Options + +``` + --cluster string Cluster to describe packages. + -h, --help help for package(s) + --kubeconfig string Path to an optional kubeconfig file to use. 
+``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere describe](../anywhere_describe/) - Describe resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_download.md b/docs/content/en/docs/reference/eksctl/anywhere_download.md new file mode 100644 index 000000000000..5de1102652a3 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_download.md @@ -0,0 +1,31 @@ +--- +title: "anywhere download" +linkTitle: "anywhere download" +--- + +## anywhere download + +Download resources + +### Synopsis + +Use eksctl anywhere download to download artifacts (manifests, bundles) used by EKS Anywhere + +### Options + +``` + -h, --help help for download +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere download artifacts](../anywhere_download_artifacts/) - Download EKS Anywhere artifacts/manifests to a tarball on disk +* [anywhere download images](../anywhere_download_images/) - Download all eks-a images to disk + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_download_artifacts.md b/docs/content/en/docs/reference/eksctl/anywhere_download_artifacts.md new file mode 100644 index 000000000000..fda1bb2dc396 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_download_artifacts.md @@ -0,0 +1,38 @@ +--- +title: "anywhere download artifacts" +linkTitle: "anywhere download artifacts" +--- + +## anywhere download artifacts + +Download EKS Anywhere artifacts/manifests to a tarball on disk + +### Synopsis + +This command is used to download the S3 artifacts from an EKS Anywhere bundle manifest and package them into a tarball + +``` +anywhere download artifacts [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -d, --download-dir string 
Directory to download the artifacts to (default "eks-anywhere-downloads") + --dry-run Print the manifest URIs without downloading them + -f, --filename string [Deprecated] Filename that contains EKS-A cluster configuration + -h, --help help for artifacts + -r, --retain-dir Do not delete the download folder after creating a tarball +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere download](../anywhere_download/) - Download resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_download_images.md b/docs/content/en/docs/reference/eksctl/anywhere_download_images.md new file mode 100644 index 000000000000..5cdd8408fb79 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_download_images.md @@ -0,0 +1,39 @@ +--- +title: "anywhere download images" +linkTitle: "anywhere download images" +--- + +## anywhere download images + +Download all eks-a images to disk + +### Synopsis + +Creates a tarball containing all necessary images +to create an eks-a cluster for any of the supported +Kubernetes versions. 
+ +``` +anywhere download images [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -h, --help help for images + --include-packages this flag no longer works, use copy packages instead (DEPRECATED: use copy packages command) + --insecure Flag to indicate skipping TLS verification while downloading helm charts + -o, --output string Output tarball containing all downloaded images +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere download](../anywhere_download/) - Download resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp.md b/docs/content/en/docs/reference/eksctl/anywhere_exp.md new file mode 100644 index 000000000000..cf8273b77531 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp.md @@ -0,0 +1,31 @@ +--- +title: "anywhere exp" +linkTitle: "anywhere exp" +--- + +## anywhere exp + +experimental commands + +### Synopsis + +Use eksctl anywhere experimental commands + +### Options + +``` + -h, --help help for exp +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere exp validate](../anywhere_exp_validate/) - Validate resource or action +* [anywhere exp vsphere](../anywhere_exp_vsphere/) - Utility vsphere operations + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp_validate.md b/docs/content/en/docs/reference/eksctl/anywhere_exp_validate.md new file mode 100644 index 000000000000..48fda5a661d9 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp_validate.md @@ -0,0 +1,30 @@ +--- +title: "anywhere exp validate" +linkTitle: "anywhere exp validate" +--- + +## anywhere exp validate + +Validate resource or action + +### Synopsis + +Use eksctl anywhere validate to validate a resource or action + +### 
Options + +``` + -h, --help help for validate +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere exp](../anywhere_exp/) - experimental commands +* [anywhere exp validate create](../anywhere_exp_validate_create/) - Validate create resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp_validate_create.md b/docs/content/en/docs/reference/eksctl/anywhere_exp_validate_create.md new file mode 100644 index 000000000000..460fc9d957ce --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp_validate_create.md @@ -0,0 +1,30 @@ +--- +title: "anywhere exp validate create" +linkTitle: "anywhere exp validate create" +--- + +## anywhere exp validate create + +Validate create resources + +### Synopsis + +Use eksctl anywhere validate create to validate the create action on resources, such as cluster + +### Options + +``` + -h, --help help for create +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere exp validate](../anywhere_exp_validate/) - Validate resource or action +* [anywhere exp validate create cluster](../anywhere_exp_validate_create_cluster/) - Validate create cluster + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp_validate_create_cluster.md b/docs/content/en/docs/reference/eksctl/anywhere_exp_validate_create_cluster.md new file mode 100644 index 000000000000..58619758b537 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp_validate_create_cluster.md @@ -0,0 +1,36 @@ +--- +title: "anywhere exp validate create cluster" +linkTitle: "anywhere exp validate create cluster" +--- + +## anywhere exp validate create cluster + +Validate create cluster + +### Synopsis + +Use eksctl anywhere validate create cluster to validate the create cluster action + +``` +anywhere exp validate create cluster -f [flags] +``` + +### Options + 
+``` + -f, --filename string Filename that contains EKS-A cluster configuration + -z, --hardware-csv string Path to a CSV file containing hardware data. + -h, --help help for cluster + --tinkerbell-bootstrap-ip string Override the local tinkerbell IP in the bootstrap cluster +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere exp validate create](../anywhere_exp_validate_create/) - Validate create resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere.md b/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere.md new file mode 100644 index 000000000000..cc75328ed122 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere.md @@ -0,0 +1,30 @@ +--- +title: "anywhere exp vsphere" +linkTitle: "anywhere exp vsphere" +--- + +## anywhere exp vsphere + +Utility vsphere operations + +### Synopsis + +Use eksctl anywhere vsphere to perform utility operations on vsphere + +### Options + +``` + -h, --help help for vsphere +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere exp](../anywhere_exp/) - experimental commands +* [anywhere exp vsphere setup](../anywhere_exp_vsphere_setup/) - Setup vSphere objects + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere_setup.md b/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere_setup.md new file mode 100644 index 000000000000..06c43e94d7ed --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere_setup.md @@ -0,0 +1,30 @@ +--- +title: "anywhere exp vsphere setup" +linkTitle: "anywhere exp vsphere setup" +--- + +## anywhere exp vsphere setup + +Setup vSphere objects + +### Synopsis + +Use eksctl anywhere vsphere setup to configure vSphere objects + +### Options + +``` + -h, --help help for setup +``` + +### Options inherited from parent commands + +``` + 
-v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere exp vsphere](../anywhere_exp_vsphere/) - Utility vsphere operations +* [anywhere exp vsphere setup user](../anywhere_exp_vsphere_setup_user/) - Setup vSphere user + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere_setup_user.md b/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere_setup_user.md new file mode 100644 index 000000000000..a271c79cdd8f --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_exp_vsphere_setup_user.md @@ -0,0 +1,36 @@ +--- +title: "anywhere exp vsphere setup user" +linkTitle: "anywhere exp vsphere setup user" +--- + +## anywhere exp vsphere setup user + +Setup vSphere user + +### Synopsis + +Use eksctl anywhere vsphere setup user to configure EKS Anywhere vSphere user + +``` +anywhere exp vsphere setup user -f [flags] +``` + +### Options + +``` + -f, --filename string Filename containing vsphere setup configuration + --force Force flag. When set, setup user will proceed even if the group and role objects already exist. Mutually exclusive with --password flag, as it expects the user to already exist. 
default: false + -h, --help help for user + -p, --password string Password for creating new user +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere exp vsphere setup](../anywhere_exp_vsphere_setup/) - Setup vSphere objects + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_generate.md b/docs/content/en/docs/reference/eksctl/anywhere_generate.md new file mode 100644 index 000000000000..07a3bb53d188 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_generate.md @@ -0,0 +1,34 @@ +--- +title: "anywhere generate" +linkTitle: "anywhere generate" +--- + +## anywhere generate + +Generate resources + +### Synopsis + +Use eksctl anywhere generate to generate resources, such as clusterconfig yaml + +### Options + +``` + -h, --help help for generate +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere generate clusterconfig](../anywhere_generate_clusterconfig/) - Generate cluster config +* [anywhere generate hardware](../anywhere_generate_hardware/) - Generate hardware files +* [anywhere generate packages](../anywhere_generate_packages/) - Generate package(s) configuration +* [anywhere generate support-bundle](../anywhere_generate_support-bundle/) - Generate a support bundle +* [anywhere generate support-bundle-config](../anywhere_generate_support-bundle-config/) - Generate support bundle config + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_generate_clusterconfig.md b/docs/content/en/docs/reference/eksctl/anywhere_generate_clusterconfig.md new file mode 100644 index 000000000000..1b5b7215696c --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_generate_clusterconfig.md @@ -0,0 +1,34 @@ +--- +title: "anywhere generate clusterconfig" +linkTitle: "anywhere generate clusterconfig" +--- + +## 
anywhere generate clusterconfig + +Generate cluster config + +### Synopsis + +This command is used to generate a cluster config yaml for the create cluster command + +``` +anywhere generate clusterconfig <cluster-name> (max 80 chars) [flags] +``` + +### Options + +``` + -h, --help help for clusterconfig + -p, --provider string Provider to use (vsphere or tinkerbell or docker) +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere generate](../anywhere_generate/) - Generate resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_generate_hardware.md b/docs/content/en/docs/reference/eksctl/anywhere_generate_hardware.md new file mode 100644 index 000000000000..f92fb49d2a81 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_generate_hardware.md @@ -0,0 +1,37 @@ +--- +title: "anywhere generate hardware" +linkTitle: "anywhere generate hardware" +--- + +## anywhere generate hardware + +Generate hardware files + +### Synopsis + + +Generate Kubernetes hardware YAML manifests for each Hardware entry in the source. + + +``` +anywhere generate hardware [flags] +``` + +### Options + +``` + -z, --hardware-csv string Path to a CSV file containing hardware data. + -h, --help help for hardware + -o, --output string Path to output hardware YAML. 
+``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere generate](../anywhere_generate/) - Generate resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_generate_packages.md b/docs/content/en/docs/reference/eksctl/anywhere_generate_packages.md new file mode 100644 index 000000000000..84aa8df08c18 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_generate_packages.md @@ -0,0 +1,37 @@ +--- +title: "anywhere generate packages" +linkTitle: "anywhere generate packages" +--- + +## anywhere generate packages + +Generate package(s) configuration + +### Synopsis + +Generates Kubernetes configuration files for curated packages + +``` +anywhere generate packages [flags] package +``` + +### Options + +``` + --cluster string Name of cluster for package generation + -h, --help help for packages + --kube-version string Kubernetes Version of the cluster to be used. Format <major>.<minor> + --kubeconfig string Path to an optional kubeconfig file to use. 
+ --registry string Used to specify an alternative registry for package generation +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere generate](../anywhere_generate/) - Generate resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_generate_support-bundle-config.md b/docs/content/en/docs/reference/eksctl/anywhere_generate_support-bundle-config.md new file mode 100644 index 000000000000..6905aa3ecf16 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_generate_support-bundle-config.md @@ -0,0 +1,34 @@ +--- +title: "anywhere generate support-bundle-config" +linkTitle: "anywhere generate support-bundle-config" +--- + +## anywhere generate support-bundle-config + +Generate support bundle config + +### Synopsis + +This command is used to generate a default support bundle config yaml + +``` +anywhere generate support-bundle-config [flags] +``` + +### Options + +``` + -f, --filename string Filename that contains EKS-A cluster configuration + -h, --help help for support-bundle-config +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere generate](../anywhere_generate/) - Generate resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_generate_support-bundle.md b/docs/content/en/docs/reference/eksctl/anywhere_generate_support-bundle.md new file mode 100644 index 000000000000..114154bc55d9 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_generate_support-bundle.md @@ -0,0 +1,38 @@ +--- +title: "anywhere generate support-bundle" +linkTitle: "anywhere generate support-bundle" +--- + +## anywhere generate support-bundle + +Generate a support bundle + +### Synopsis + +This command is used to create a support bundle to troubleshoot a cluster + +``` +anywhere generate support-bundle -f my-cluster.yaml [flags] +``` + +### Options + 
+``` + --bundle-config string Bundle Config file to use when generating support bundle + -f, --filename string Filename that contains EKS-A cluster configuration + -h, --help help for support-bundle + --since string Collect pod logs in the latest duration like 5s, 2m, or 3h. + --since-time string Collect pod logs after a specific datetime(RFC3339) like 2021-06-28T15:04:05Z + -w, --w-config string Kubeconfig file to use when creating support bundle for a workload cluster +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere generate](../anywhere_generate/) - Generate resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_get.md b/docs/content/en/docs/reference/eksctl/anywhere_get.md new file mode 100644 index 000000000000..7d90294f0a30 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_get.md @@ -0,0 +1,32 @@ +--- +title: "anywhere get" +linkTitle: "anywhere get" +--- + +## anywhere get + +Get resources + +### Synopsis + +Use eksctl anywhere get to display one or many resources + +### Options + +``` + -h, --help help for get +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere get package(s)](../anywhere_get_packages/) - Get package(s) +* [anywhere get packagebundle(s)](../anywhere_get_packagebundles/) - Get packagebundle(s) +* [anywhere get packagebundlecontroller(s)](../anywhere_get_packagebundlecontrollers/) - Get packagebundlecontroller(s) + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_get_package(s).md b/docs/content/en/docs/reference/eksctl/anywhere_get_package(s).md new file mode 100644 index 000000000000..543beebc5599 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_get_package(s).md @@ -0,0 +1,36 @@ +--- +title: "anywhere get package(s)" +linkTitle: "anywhere get 
package(s)" +--- + +## anywhere get package(s) + +Get package(s) + +### Synopsis + +This command is used to display the curated packages installed in the cluster + +``` +anywhere get package(s) [flags] +``` + +### Options + +``` + --cluster string Cluster to get list of packages. + -h, --help help for package(s) + --kubeconfig string Path to an optional kubeconfig file. + -o, --output string Specifies the output format (valid option: json, yaml) +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere get](../anywhere_get/) - Get resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_get_packagebundle(s).md b/docs/content/en/docs/reference/eksctl/anywhere_get_packagebundle(s).md new file mode 100644 index 000000000000..e202ab692b9c --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_get_packagebundle(s).md @@ -0,0 +1,35 @@ +--- +title: "anywhere get packagebundle(s)" +linkTitle: "anywhere get packagebundle(s)" +--- + +## anywhere get packagebundle(s) + +Get packagebundle(s) + +### Synopsis + +This command is used to display the currently supported packagebundles + +``` +anywhere get packagebundle(s) [flags] +``` + +### Options + +``` + -h, --help help for packagebundle(s) + --kubeconfig string Path to an optional kubeconfig file. 
+ -o, --output string Specifies the output format (valid option: json, yaml) +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere get](../anywhere_get/) - Get resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_get_packagebundlecontroller(s).md b/docs/content/en/docs/reference/eksctl/anywhere_get_packagebundlecontroller(s).md new file mode 100644 index 000000000000..9e9f4ffac6ea --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_get_packagebundlecontroller(s).md @@ -0,0 +1,35 @@ +--- +title: "anywhere get packagebundlecontroller(s)" +linkTitle: "anywhere get packagebundlecontroller(s)" +--- + +## anywhere get packagebundlecontroller(s) + +Get packagebundlecontroller(s) + +### Synopsis + +This command is used to display the current packagebundlecontrollers + +``` +anywhere get packagebundlecontroller(s) [flags] +``` + +### Options + +``` + -h, --help help for packagebundlecontroller(s) + --kubeconfig string Path to an optional kubeconfig file. 
+  -o, --output string       Specifies the output format (valid option: json, yaml)
+```
+
+### Options inherited from parent commands
+
+```
+  -v, --verbosity int   Set the log level verbosity
+```
+
+### SEE ALSO
+
+* [anywhere get](../anywhere_get/) - Get resources
+
diff --git a/docs/content/en/docs/reference/eksctl/anywhere_import.md b/docs/content/en/docs/reference/eksctl/anywhere_import.md
new file mode 100644
index 000000000000..64e165a838a5
--- /dev/null
+++ b/docs/content/en/docs/reference/eksctl/anywhere_import.md
@@ -0,0 +1,30 @@
+---
+title: "anywhere import"
+linkTitle: "anywhere import"
+---
+
+## anywhere import
+
+Import resources
+
+### Synopsis
+
+Use eksctl anywhere import to import resources, such as images and helm charts
+
+### Options
+
+```
+  -h, --help   help for import
+```
+
+### Options inherited from parent commands
+
+```
+  -v, --verbosity int   Set the log level verbosity
+```
+
+### SEE ALSO
+
+* [anywhere](../anywhere/) - Amazon EKS Anywhere
+* [anywhere import images](../anywhere_import_images/) - Import images and charts to a registry from a tarball
+
diff --git a/docs/content/en/docs/reference/eksctl/anywhere_import_images.md b/docs/content/en/docs/reference/eksctl/anywhere_import_images.md
new file mode 100644
index 000000000000..2dbde4547bc2
--- /dev/null
+++ b/docs/content/en/docs/reference/eksctl/anywhere_import_images.md
@@ -0,0 +1,39 @@
+---
+title: "anywhere import images"
+linkTitle: "anywhere import images"
+---
+
+## anywhere import images
+
+Import images and charts to a registry from a tarball
+
+### Synopsis
+
+Import all the images and helm charts necessary for EKS Anywhere clusters into a registry.
+Use this command in conjunction with download images, passing its output tarball as input to this command.
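The two-step flow described above can be sketched end to end. The registry address and file names below are illustrative, and only flags documented on this page are used; see the `anywhere download images` reference for the flags of the first step:

```
# Step 1 (connected machine): produce the tarball with `anywhere download images`.

# Step 2 (disconnected environment): push the tarball's images and helm charts
# to a private registry.
anywhere import images \
  --input images.tar.gz \
  --registry registry.example.com:443 \
  --bundles bundles.yaml
```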
+ +``` +anywhere import images [flags] +``` + +### Options + +``` + -b, --bundles string Bundles file to read artifact dependencies from + -h, --help help for images + --include-packages Flag to indicate inclusion of curated packages in imported images (DEPRECATED: use copy packages command) + -i, --input string Input tarball containing all images and charts to import + --insecure Flag to indicate skipping TLS verification while pushing helm charts + -r, --registry string Registry where to import images and charts +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere import](../anywhere_import/) - Import resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_install.md b/docs/content/en/docs/reference/eksctl/anywhere_install.md new file mode 100644 index 000000000000..b0d599f7386b --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_install.md @@ -0,0 +1,31 @@ +--- +title: "anywhere install" +linkTitle: "anywhere install" +--- + +## anywhere install + +Install resources to the cluster + +### Synopsis + +Use eksctl anywhere install to install resources into a cluster + +### Options + +``` + -h, --help help for install +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere install package](../anywhere_install_package/) - Install package +* [anywhere install packagecontroller](../anywhere_install_packagecontroller/) - Install packagecontroller on the cluster + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_install_package.md b/docs/content/en/docs/reference/eksctl/anywhere_install_package.md new file mode 100644 index 000000000000..279d98917fec --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_install_package.md @@ -0,0 +1,39 @@ +--- +title: "anywhere install package" +linkTitle: 
"anywhere install package"
+---
+
+## anywhere install package
+
+Install package
+
+### Synopsis
+
+This command is used to install a curated package. Use list to discover curated packages
+
+```
+anywhere install package [flags] package
+```
+
+### Options
+
+```
+      --cluster string        Target cluster for installation.
+  -h, --help                  help for package
+      --kube-version string   Kubernetes Version of the cluster to be used. Format <major>.<minor>
+      --kubeconfig string     Path to an optional kubeconfig file to use.
+  -n, --package-name string   Custom name of the curated package to install
+      --registry string       Used to specify an alternative registry for discovery
+      --set stringArray       Provide custom configurations for curated packages. Format key:value
+```
+
+### Options inherited from parent commands
+
+```
+  -v, --verbosity int   Set the log level verbosity
+```
+
+### SEE ALSO
+
+* [anywhere install](../anywhere_install/) - Install resources to the cluster
+
diff --git a/docs/content/en/docs/reference/eksctl/anywhere_install_packagecontroller.md b/docs/content/en/docs/reference/eksctl/anywhere_install_packagecontroller.md
new file mode 100644
index 000000000000..99386a54116a
--- /dev/null
+++ b/docs/content/en/docs/reference/eksctl/anywhere_install_packagecontroller.md
@@ -0,0 +1,35 @@
+---
+title: "anywhere install packagecontroller"
+linkTitle: "anywhere install packagecontroller"
+---
+
+## anywhere install packagecontroller
+
+Install packagecontroller on the cluster
+
+### Synopsis
+
+This command is used to install the packagecontroller onto an existing cluster
+
+```
+anywhere install packagecontroller [flags]
+```
+
+### Options
+
+```
+  -f, --filename string     Filename that contains EKS-A cluster configuration
+  -h, --help                help for packagecontroller
+      --kubeConfig string   Management cluster kubeconfig file
+```
+
+### Options inherited from parent commands
+
+```
+  -v, --verbosity int   Set the log level verbosity
+```
+
+### SEE ALSO
+
+* [anywhere install](../anywhere_install/) - Install
resources to the cluster + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_list.md b/docs/content/en/docs/reference/eksctl/anywhere_list.md new file mode 100644 index 000000000000..2a715bc5d402 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_list.md @@ -0,0 +1,32 @@ +--- +title: "anywhere list" +linkTitle: "anywhere list" +--- + +## anywhere list + +List resources + +### Synopsis + +Use eksctl anywhere list to list images and artifacts used by EKS Anywhere + +### Options + +``` + -h, --help help for list +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere list images](../anywhere_list_images/) - Generate a list of images used by EKS Anywhere +* [anywhere list ovas](../anywhere_list_ovas/) - List the OVAs that are supported by current version of EKS Anywhere +* [anywhere list packages](../anywhere_list_packages/) - Lists curated packages available to install + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_list_images.md b/docs/content/en/docs/reference/eksctl/anywhere_list_images.md new file mode 100644 index 000000000000..75501a810a7c --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_list_images.md @@ -0,0 +1,35 @@ +--- +title: "anywhere list images" +linkTitle: "anywhere list images" +--- + +## anywhere list images + +Generate a list of images used by EKS Anywhere + +### Synopsis + +This command is used to generate a list of images used by EKS-Anywhere for cluster provisioning + +``` +anywhere list images [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -f, --filename string Filename that contains EKS-A cluster configuration + -h, --help help for images +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere 
list](../anywhere_list/) - List resources
+
diff --git a/docs/content/en/docs/reference/eksctl/anywhere_list_ovas.md b/docs/content/en/docs/reference/eksctl/anywhere_list_ovas.md
new file mode 100644
index 000000000000..750dd947f3bb
--- /dev/null
+++ b/docs/content/en/docs/reference/eksctl/anywhere_list_ovas.md
@@ -0,0 +1,35 @@
+---
+title: "anywhere list ovas"
+linkTitle: "anywhere list ovas"
+---
+
+## anywhere list ovas
+
+List the OVAs that are supported by current version of EKS Anywhere
+
+### Synopsis
+
+This command is used to list the vSphere OVAs from the EKS Anywhere bundle manifest for the current version of the EKS Anywhere CLI
+
+```
+anywhere list ovas [flags]
+```
+
+### Options
+
+```
+      --bundles-override string   Override default Bundles manifest (not recommended)
+  -f, --filename string           Filename that contains EKS-A cluster configuration
+  -h, --help                      help for ovas
+```
+
+### Options inherited from parent commands
+
+```
+  -v, --verbosity int   Set the log level verbosity
+```
+
+### SEE ALSO
+
+* [anywhere list](../anywhere_list/) - List resources
+
diff --git a/docs/content/en/docs/reference/eksctl/anywhere_list_packages.md b/docs/content/en/docs/reference/eksctl/anywhere_list_packages.md
new file mode 100644
index 000000000000..419f0addf5e7
--- /dev/null
+++ b/docs/content/en/docs/reference/eksctl/anywhere_list_packages.md
@@ -0,0 +1,33 @@
+---
+title: "anywhere list packages"
+linkTitle: "anywhere list packages"
+---
+
+## anywhere list packages
+
+Lists curated packages available to install
+
+```
+anywhere list packages [flags]
+```
+
+### Options
+
+```
+      --cluster string        Name of cluster for package list.
+  -h, --help                  help for packages
+      --kube-version string   Kubernetes version <major>.<minor> of the packages to list, for example: "1.23".
+      --kubeconfig string     Path to a kubeconfig file to use when source is a cluster.
+      --registry string       Specifies an alternative registry for packages discovery.
+``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere list](../anywhere_list/) - List resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_upgrade.md b/docs/content/en/docs/reference/eksctl/anywhere_upgrade.md new file mode 100644 index 000000000000..2cd55c3bc464 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_upgrade.md @@ -0,0 +1,32 @@ +--- +title: "anywhere upgrade" +linkTitle: "anywhere upgrade" +--- + +## anywhere upgrade + +Upgrade resources + +### Synopsis + +Use eksctl anywhere upgrade to upgrade resources, such as clusters + +### Options + +``` + -h, --help help for upgrade +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere +* [anywhere upgrade cluster](../anywhere_upgrade_cluster/) - Upgrade workload cluster +* [anywhere upgrade packages](../anywhere_upgrade_packages/) - Upgrade all curated packages to the latest version +* [anywhere upgrade plan](../anywhere_upgrade_plan/) - Provides information for a resource upgrade + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_upgrade_cluster.md b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_cluster.md new file mode 100644 index 000000000000..7eac21ffed1c --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_cluster.md @@ -0,0 +1,40 @@ +--- +title: "anywhere upgrade cluster" +linkTitle: "anywhere upgrade cluster" +--- + +## anywhere upgrade cluster + +Upgrade workload cluster + +### Synopsis + +This command is used to upgrade workload clusters + +``` +anywhere upgrade cluster [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -f, --filename string Filename that contains EKS-A cluster configuration + --force-cleanup Force deletion of previously created 
bootstrap cluster + -z, --hardware-csv string Path to a CSV file containing hardware data. + -h, --help help for cluster + --kubeconfig string Management cluster kubeconfig file + --no-timeouts Disable timeout for all wait operations + -w, --w-config string Kubeconfig file to use when upgrading a workload cluster +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere upgrade](../anywhere_upgrade/) - Upgrade resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_upgrade_packages.md b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_packages.md new file mode 100644 index 000000000000..2eb441aceccc --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_packages.md @@ -0,0 +1,32 @@ +--- +title: "anywhere upgrade packages" +linkTitle: "anywhere upgrade packages" +--- + +## anywhere upgrade packages + +Upgrade all curated packages to the latest version + +``` +anywhere upgrade packages [flags] +``` + +### Options + +``` + --bundle-version string Bundle version to use + --cluster string Cluster to upgrade. + -h, --help help for packages + --kubeconfig string Path to an optional kubeconfig file to use. 
+``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere upgrade](../anywhere_upgrade/) - Upgrade resources + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_upgrade_plan.md b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_plan.md new file mode 100644 index 000000000000..520aa4ad2e6d --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_plan.md @@ -0,0 +1,30 @@ +--- +title: "anywhere upgrade plan" +linkTitle: "anywhere upgrade plan" +--- + +## anywhere upgrade plan + +Provides information for a resource upgrade + +### Synopsis + +Use eksctl anywhere upgrade plan to get information for a resource upgrade + +### Options + +``` + -h, --help help for plan +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere upgrade](../anywhere_upgrade/) - Upgrade resources +* [anywhere upgrade plan cluster](../anywhere_upgrade_plan_cluster/) - Provides new release versions for the next cluster upgrade + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_upgrade_plan_cluster.md b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_plan_cluster.md new file mode 100644 index 000000000000..21abcf23e591 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_upgrade_plan_cluster.md @@ -0,0 +1,37 @@ +--- +title: "anywhere upgrade plan cluster" +linkTitle: "anywhere upgrade plan cluster" +--- + +## anywhere upgrade plan cluster + +Provides new release versions for the next cluster upgrade + +### Synopsis + +Provides a list of target versions for upgrading the core components in the workload cluster + +``` +anywhere upgrade plan cluster [flags] +``` + +### Options + +``` + --bundles-override string Override default Bundles manifest (not recommended) + -f, --filename string Filename that contains EKS-A cluster configuration + -h, --help help for cluster + 
--kubeconfig string Management cluster kubeconfig file + -o, --output string Output format: text|json (default "text") +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere upgrade plan](../anywhere_upgrade_plan/) - Provides information for a resource upgrade + diff --git a/docs/content/en/docs/reference/eksctl/anywhere_version.md b/docs/content/en/docs/reference/eksctl/anywhere_version.md new file mode 100644 index 000000000000..3735d0169aa3 --- /dev/null +++ b/docs/content/en/docs/reference/eksctl/anywhere_version.md @@ -0,0 +1,33 @@ +--- +title: "anywhere version" +linkTitle: "anywhere version" +--- + +## anywhere version + +Get the eksctl anywhere version + +### Synopsis + +This command prints the version of eksctl anywhere + +``` +anywhere version [flags] +``` + +### Options + +``` + -h, --help help for version +``` + +### Options inherited from parent commands + +``` + -v, --verbosity int Set the log level verbosity +``` + +### SEE ALSO + +* [anywhere](../anywhere/) - Amazon EKS Anywhere + diff --git a/docs/content/en/docs/reference/nutanix/domains.md b/docs/content/en/docs/reference/nutanix/domains.md index ed287a41356d..ba6b95fb1a50 100644 --- a/docs/content/en/docs/reference/nutanix/domains.md +++ b/docs/content/en/docs/reference/nutanix/domains.md @@ -8,4 +8,3 @@ * api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region) * d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images) * api.github.com (only if GitOps is enabled) - diff --git a/docs/content/en/docs/tasks/cluster/cluster-terraform.md b/docs/content/en/docs/tasks/cluster/cluster-terraform.md index 8b8d690eed34..f6024fdb50e3 100644 --- a/docs/content/en/docs/tasks/cluster/cluster-terraform.md +++ b/docs/content/en/docs/tasks/cluster/cluster-terraform.md @@ -7,7 +7,7 @@ description: > Use Terraform to manage EKS Anywhere Clusters --- ->**_NOTE_**: 
Support for using Terraform to manage and modify an EKS Anywhere cluster is available for vSphere and Snow clusters, but not yet for Bare Metal, CloudStack, or Nutanix clusters. +>**_NOTE_**: Support for using Terraform to manage and modify an EKS Anywhere cluster is available for vSphere, Snow and Nutanix clusters, but not yet for Bare Metal or CloudStack clusters. > ## Using Terraform to manage an EKS Anywhere Cluster (Optional) diff --git a/go.mod b/go.mod index aa950afb715e..e999210899ac 100644 --- a/go.mod +++ b/go.mod @@ -14,7 +14,7 @@ require ( github.com/aws/eks-anywhere/internal/aws-sdk-go-v2/service/snowballdevice v0.0.0-00010101000000-000000000000 github.com/aws/eks-anywhere/release v0.0.0-00010101000000-000000000000 github.com/aws/eks-distro-build-tooling/release v0.0.0-20211103003257-a7e2379eae5e - github.com/aws/etcdadm-bootstrap-provider v1.0.6 + github.com/aws/etcdadm-bootstrap-provider v1.0.7-rc1 github.com/aws/etcdadm-controller v1.0.5 github.com/aws/smithy-go v1.13.2 github.com/docker/cli v20.10.21+incompatible @@ -103,6 +103,7 @@ require ( github.com/containerd/containerd v1.6.19 // indirect github.com/coredns/caddy v1.1.0 // indirect github.com/coredns/corefile-migration v1.0.20 // indirect + github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect github.com/davecgh/go-spew v1.1.1 // indirect github.com/docker/distribution v2.8.1+incompatible // indirect github.com/docker/docker v20.10.21+incompatible // indirect @@ -170,6 +171,7 @@ require ( github.com/prometheus/common v0.37.0 // indirect github.com/prometheus/procfs v0.8.0 // indirect github.com/rogpeppe/go-internal v1.9.0 // indirect + github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/satori/go.uuid v1.2.0 // indirect github.com/sergi/go-diff v1.2.0 // indirect github.com/sirupsen/logrus v1.9.0 // indirect @@ -222,5 +224,5 @@ replace ( k8s.io/component-base => k8s.io/component-base v0.25.6 // need the modifications eksa made to the capi api structs - sigs.k8s.io/cluster-api => 
github.com/abhay-krishna/cluster-api v1.3.5-eksa.1 + sigs.k8s.io/cluster-api => github.com/abhay-krishna/cluster-api v1.3.5-eksa.2 ) diff --git a/go.sum b/go.sum index 8612b0a225ce..d89759ed9cf2 100644 --- a/go.sum +++ b/go.sum @@ -141,8 +141,8 @@ github.com/VictorLowther/simplexml v0.0.0-20180716164440-0bff93621230/go.mod h1: github.com/VictorLowther/soap v0.0.0-20150314151524-8e36fca84b22 h1:a0MBqYm44o0NcthLKCljZHe1mxlN6oahCQHHThnSwB4= github.com/VictorLowther/soap v0.0.0-20150314151524-8e36fca84b22/go.mod h1:/B7V22rcz4860iDqstGvia/2+IYWXf3/JdQCVd/1D2A= github.com/a8m/tree v0.0.0-20210115125333-10a5fd5b637d/go.mod h1:FSdwKX97koS5efgm8WevNf7XS3PqtyFkKDDXrz778cg= -github.com/abhay-krishna/cluster-api v1.3.5-eksa.1 h1:qLIJrqs33fsyJMDuYuQFwbqRIMX+S4jwSuV4cFbnGJQ= -github.com/abhay-krishna/cluster-api v1.3.5-eksa.1/go.mod h1:hX5YK+6WVfT6MqhPKoG2y+qd98I4415oPfNCsdv9dwo= +github.com/abhay-krishna/cluster-api v1.3.5-eksa.2 h1:rpwKehV1TSRB+ab1s3jkfIoHs5inxNGZaEX7pGm33xc= +github.com/abhay-krishna/cluster-api v1.3.5-eksa.2/go.mod h1:hX5YK+6WVfT6MqhPKoG2y+qd98I4415oPfNCsdv9dwo= github.com/acomagu/bufpipe v1.0.3 h1:fxAGrHZTgQ9w5QqVItgzwj235/uYZYgbXitB+dLupOk= github.com/acomagu/bufpipe v1.0.3/go.mod h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4= github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM= @@ -211,8 +211,8 @@ github.com/aws/eks-anywhere-packages v0.2.8 h1:v8RbLypdpVPyQ0Cd7e5ln4Wlhd6STv3F5 github.com/aws/eks-anywhere-packages v0.2.8/go.mod h1:qNCSi6k9tb/9ztf2x98B32EDeIr/xwHemB7YNxtxd3U= github.com/aws/eks-distro-build-tooling/release v0.0.0-20211103003257-a7e2379eae5e h1:GB6Cn9yKEt31mDF7RrVWyM9WoppNkGYth8zBPIJGJ+w= github.com/aws/eks-distro-build-tooling/release v0.0.0-20211103003257-a7e2379eae5e/go.mod h1:p/KHVJAMv3kofnUnShkZ6pUnZYzm+LK2G7bIi8nnTKA= -github.com/aws/etcdadm-bootstrap-provider v1.0.6 h1:3x9aFKcCTITKcGKCaxdFmJ9Te7/NyHjc2IuC/74+6Ag= -github.com/aws/etcdadm-bootstrap-provider v1.0.6/go.mod 
h1:x4wfd8mty4fB0sQ9u2SaCEr9i7ohKoRaXHXd+hyurPU= +github.com/aws/etcdadm-bootstrap-provider v1.0.7-rc1 h1:SkY/TuCqte8Q0q+Q5CyEOFMsG04yqP6dbs4LeHqcpcY= +github.com/aws/etcdadm-bootstrap-provider v1.0.7-rc1/go.mod h1:ByyZiwC4FN1MowHA9dXU+2+kxAohH5M6v9n+Wm1EvD4= github.com/aws/etcdadm-controller v1.0.5 h1:HQiVs464UFxnYdg7bL6b4ErgeFxyoDr5iZH0DLgSNYQ= github.com/aws/etcdadm-controller v1.0.5/go.mod h1:YmQm1ngTGoEKP8dutSUiTwPW+mpr0pwr4bPt2Td0Mgo= github.com/aws/smithy-go v1.11.2/go.mod h1:3xHYmszWVx2c0kIwQeEVf9uSm4fYZt67FBJnwub1bgM= @@ -393,6 +393,7 @@ github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwc github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= +github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w= github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= @@ -1172,6 +1173,7 @@ github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZV github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd/go.mod h1:hPqNNc0+uJM6H+SuU8sEs5K5IQeKccPqeSjfgcKGgPk= 
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= diff --git a/pkg/api/v1alpha1/cluster.go b/pkg/api/v1alpha1/cluster.go index 0533e997299f..063aa55690b4 100644 --- a/pkg/api/v1alpha1/cluster.go +++ b/pkg/api/v1alpha1/cluster.go @@ -658,12 +658,24 @@ func validateCNIConfig(cniConfig *CNIConfig) error { } func validateCiliumConfig(cilium *CiliumConfig) error { + if cilium == nil { + return nil + } + + if !cilium.IsManaged() { + if cilium.PolicyEnforcementMode != "" { + return errors.New("when using skipUpgrades for cilium all other fields must be empty") + } + } + if cilium.PolicyEnforcementMode == "" { return nil } + if !validCiliumPolicyEnforcementModes[cilium.PolicyEnforcementMode] { return fmt.Errorf("cilium policyEnforcementMode \"%s\" not supported", cilium.PolicyEnforcementMode) } + return nil } @@ -723,15 +735,6 @@ func validateMirrorConfig(clusterConfig *Cluster) error { return fmt.Errorf("registry mirror port %s is invalid, please provide a valid port", clusterConfig.Spec.RegistryMirrorConfiguration.Port) } - if clusterConfig.Spec.RegistryMirrorConfiguration.InsecureSkipVerify { - switch clusterConfig.Spec.DatacenterRef.Kind { - case DockerDatacenterKind, NutanixDatacenterKind, VSphereDatacenterKind, TinkerbellDatacenterKind, CloudStackDatacenterKind, SnowDatacenterKind: - break - default: - return fmt.Errorf("insecureSkipVerify is only supported for docker, nutanix, snow, tinkerbell, cloudstack and vsphere providers") - } - } - mirrorCount := 0 ociNamespaces := clusterConfig.Spec.RegistryMirrorConfiguration.OCINamespaces for _, ociNamespace := range ociNamespaces { diff --git a/pkg/api/v1alpha1/cluster_test.go b/pkg/api/v1alpha1/cluster_test.go index 511e8753b02b..9abbee3dfae2 100644 --- a/pkg/api/v1alpha1/cluster_test.go +++ b/pkg/api/v1alpha1/cluster_test.go @@ -2464,6 +2464,41 @@ func TestValidateCNIConfig(t *testing.T) { }, }, }, + { + name: "CiliumSkipUpgradeWithoutOtherFields", + 
wantErr: nil, + clusterNetwork: &ClusterNetwork{ + CNIConfig: &CNIConfig{ + Cilium: &CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + }, + }, + }, + { + name: "CiliumSkipUpgradeWithOtherFields", + wantErr: fmt.Errorf("validating cniConfig: when using skipUpgrades for cilium all " + + "other fields must be empty"), + clusterNetwork: &ClusterNetwork{ + CNIConfig: &CNIConfig{ + Cilium: &CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + PolicyEnforcementMode: "never", + }, + }, + }, + }, + { + name: "CiliumSkipUpgradeExplicitFalseWithOtherFields", + clusterNetwork: &ClusterNetwork{ + CNIConfig: &CNIConfig{ + Cilium: &CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + PolicyEnforcementMode: "never", + }, + }, + }, + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { @@ -2571,22 +2606,6 @@ func TestValidateMirrorConfig(t *testing.T) { }, }, }, - { - name: "insecureSkipVerify on an unsupported provider", - wantErr: "insecureSkipVerify is only supported for docker, nutanix, snow, tinkerbell, cloudstack and vsphere providers", - cluster: &Cluster{ - Spec: ClusterSpec{ - RegistryMirrorConfiguration: &RegistryMirrorConfiguration{ - Endpoint: "1.2.3.4", - Port: "443", - InsecureSkipVerify: true, - }, - DatacenterRef: Ref{ - Kind: "nonsnow", - }, - }, - }, - }, { name: "insecureSkipVerify on snow provider", wantErr: "", diff --git a/pkg/api/v1alpha1/cluster_types.go b/pkg/api/v1alpha1/cluster_types.go index b71300b93c70..4169a6ec326e 100644 --- a/pkg/api/v1alpha1/cluster_types.go +++ b/pkg/api/v1alpha1/cluster_types.go @@ -174,7 +174,6 @@ type RegistryMirrorConfiguration struct { // InsecureSkipVerify skips the registry certificate verification. // Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. 
-	// Currently only supported for snow provider
 	InsecureSkipVerify bool `json:"insecureSkipVerify,omitempty"`
 }
 
@@ -489,7 +488,22 @@ func (n *CiliumConfig) Equal(o *CiliumConfig) bool {
 	if n == nil || o == nil {
 		return false
 	}
-	return n.PolicyEnforcementMode == o.PolicyEnforcementMode
+
+	if n.PolicyEnforcementMode != o.PolicyEnforcementMode {
+		return false
+	}
+
+	oSkipUpgradeIsFalse := o.SkipUpgrade == nil || !*o.SkipUpgrade
+	nSkipUpgradeIsFalse := n.SkipUpgrade == nil || !*n.SkipUpgrade
+
+	// We consider nil to be false in equality checks. Here we're checking if o is false then
+	// n must be false and vice-versa. If neither of these are true, then both o and n must be
+	// true so we don't need an explicit check.
+	if oSkipUpgradeIsFalse && !nSkipUpgradeIsFalse || !oSkipUpgradeIsFalse && nSkipUpgradeIsFalse {
+		return false
+	}
+
+	return true
 }
 
 func (n *KindnetdConfig) Equal(o *KindnetdConfig) bool {
@@ -678,6 +692,18 @@ type CNIConfig struct {
 type CiliumConfig struct {
 	// PolicyEnforcementMode determines communication allowed between pods. Accepted values are default, always, never.
 	PolicyEnforcementMode CiliumPolicyEnforcementMode `json:"policyEnforcementMode,omitempty"`
+
+	// SkipUpgrade indicates that Cilium maintenance should be skipped during upgrades. This can
+	// be used when operators wish to self-manage the Cilium installation.
+	// +kubebuilder:default=false
+	// +optional
+	SkipUpgrade *bool `json:"skipUpgrade,omitempty"`
+}
+
+// IsManaged returns true if SkipUpgrade is nil or false, indicating EKS-A is responsible for
+// the Cilium installation.
+func (n *CiliumConfig) IsManaged() bool { + return n.SkipUpgrade == nil || !*n.SkipUpgrade } type KindnetdConfig struct{} diff --git a/pkg/api/v1alpha1/cluster_types_test.go b/pkg/api/v1alpha1/cluster_types_test.go index 15a89dca4473..b42010b594e6 100644 --- a/pkg/api/v1alpha1/cluster_types_test.go +++ b/pkg/api/v1alpha1/cluster_types_test.go @@ -2471,3 +2471,136 @@ func TestPackageControllerCronJob_Equal(t *testing.T) { }) } } + +func TestCiliumConfigEquality(t *testing.T) { + tests := []struct { + Name string + A *v1alpha1.CiliumConfig + B *v1alpha1.CiliumConfig + Equal bool + }{ + { + Name: "Nils", + A: nil, + B: nil, + Equal: true, + }, + { + Name: "NilA", + A: nil, + B: &v1alpha1.CiliumConfig{}, + Equal: false, + }, + { + Name: "NilB", + A: &v1alpha1.CiliumConfig{}, + B: nil, + Equal: false, + }, + { + Name: "ZeroValues", + A: &v1alpha1.CiliumConfig{}, + B: &v1alpha1.CiliumConfig{}, + Equal: true, + }, + { + Name: "EqualPolicyEnforcement", + A: &v1alpha1.CiliumConfig{ + PolicyEnforcementMode: "always", + }, + B: &v1alpha1.CiliumConfig{ + PolicyEnforcementMode: "always", + }, + Equal: true, + }, + { + Name: "DiffPolicyEnforcement", + A: &v1alpha1.CiliumConfig{ + PolicyEnforcementMode: "always", + }, + B: &v1alpha1.CiliumConfig{ + PolicyEnforcementMode: "default", + }, + Equal: false, + }, + { + Name: "NilSkipUpgradeAFalse", + A: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + B: &v1alpha1.CiliumConfig{}, + Equal: true, + }, + { + Name: "NilSkipUpgradeBFalse", + A: &v1alpha1.CiliumConfig{}, + B: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + Equal: true, + }, + { + Name: "SkipUpgradeBothFalse", + A: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + B: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + Equal: true, + }, + { + Name: "NilSkipUpgradeATrue", + A: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + B: &v1alpha1.CiliumConfig{}, + Equal: false, + }, + { + Name: "NilSkipUpgradeBTrue", + 
A: &v1alpha1.CiliumConfig{}, + B: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + Equal: false, + }, + { + Name: "SkipUpgradeBothTrue", + A: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + B: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + Equal: true, + }, + { + Name: "SkipUpgradeAFalseBTrue", + A: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + B: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + Equal: false, + }, + { + Name: "SkipUpgradeATrueBFalse", + A: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + B: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + Equal: false, + }, + } + + for _, tc := range tests { + t.Run(tc.Name, func(t *testing.T) { + g := NewWithT(t) + g.Expect(tc.A.Equal(tc.B)).To(Equal(tc.Equal)) + }) + } +} diff --git a/pkg/api/v1alpha1/hostosconfig.go b/pkg/api/v1alpha1/hostosconfig.go index ed253a3d09a4..29a8ddde0a7d 100644 --- a/pkg/api/v1alpha1/hostosconfig.go +++ b/pkg/api/v1alpha1/hostosconfig.go @@ -61,7 +61,11 @@ func validateBotterocketConfig(config *BottlerocketConfiguration, osFamily OSFam return fmt.Errorf("BottlerocketConfiguration can only be used with osFamily: \"%s\"", Bottlerocket) } - return validateBottlerocketKubernetesConfig(config.Kubernetes) + if err := validateBottlerocketKubernetesConfig(config.Kubernetes); err != nil { + return err + } + + return validateBottlerocketKernelConfiguration(config.Kernel) } func validateBottlerocketKubernetesConfig(config *v1beta1.BottlerocketKubernetesSettings) error { @@ -87,3 +91,16 @@ func validateBottlerocketKubernetesConfig(config *v1beta1.BottlerocketKubernetes return nil } + +func validateBottlerocketKernelConfiguration(config *v1beta1.BottlerocketKernelSettings) error { + if config == nil { + return nil + } + + for key := range config.SysctlSettings { + if key == "" { + return errors.New("sysctlSettings key cannot be empty") + } + } + return nil +} diff --git 
a/pkg/api/v1alpha1/hostosconfig_test.go b/pkg/api/v1alpha1/hostosconfig_test.go index 18a47e382a7a..46ebc533d078 100644 --- a/pkg/api/v1alpha1/hostosconfig_test.go +++ b/pkg/api/v1alpha1/hostosconfig_test.go @@ -154,6 +154,36 @@ func TestValidateHostOSConfig(t *testing.T) { osFamily: Ubuntu, wantErr: "BottlerocketConfiguration can only be used with osFamily: \"bottlerocket\"", }, + { + name: "valid kernel config", + hostOSConfig: &HostOSConfiguration{ + BottlerocketConfiguration: &BottlerocketConfiguration{ + Kernel: &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "vm.max_map_count": "262144", + "fs.file-max": "65535", + "net.ipv4.tcp_mtu_probing": "1", + }, + }, + }, + }, + osFamily: Bottlerocket, + wantErr: "", + }, + { + name: "invalid kernel key value", + hostOSConfig: &HostOSConfiguration{ + BottlerocketConfiguration: &BottlerocketConfiguration{ + Kernel: &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "": "262144", + }, + }, + }, + }, + osFamily: Bottlerocket, + wantErr: "sysctlSettings key cannot be empty", + }, } for _, tt := range tests { diff --git a/pkg/api/v1alpha1/hostosconfig_types.go b/pkg/api/v1alpha1/hostosconfig_types.go index c8e702803617..e0d6c782e05a 100644 --- a/pkg/api/v1alpha1/hostosconfig_types.go +++ b/pkg/api/v1alpha1/hostosconfig_types.go @@ -23,4 +23,7 @@ type BottlerocketConfiguration struct { // Kubernetes defines the Kubernetes settings on the host OS. // +optional Kubernetes *v1beta1.BottlerocketKubernetesSettings `json:"kubernetes,omitempty"` + + // Kernel defines the kernel settings for bottlerocket. 
+ Kernel *v1beta1.BottlerocketKernelSettings `json:"kernel,omitempty"` } diff --git a/pkg/api/v1alpha1/snowmachineconfig.go b/pkg/api/v1alpha1/snowmachineconfig.go index 6694611b3c7b..c3c15f94fdc7 100644 --- a/pkg/api/v1alpha1/snowmachineconfig.go +++ b/pkg/api/v1alpha1/snowmachineconfig.go @@ -96,6 +96,10 @@ func validateSnowMachineConfig(config *SnowMachineConfig) error { return err } + if err := validateHostOSConfig(config.Spec.HostOSConfiguration, config.Spec.OSFamily); err != nil { + return fmt.Errorf("SnowMachineConfig HostOSConfiguration is invalid: %v", err) + } + return validateSnowMachineConfigNonRootVolumes(config.Spec.NonRootVolumes) } diff --git a/pkg/api/v1alpha1/snowmachineconfig_test.go b/pkg/api/v1alpha1/snowmachineconfig_test.go index cd6591d79d3e..98f8436e9af1 100644 --- a/pkg/api/v1alpha1/snowmachineconfig_test.go +++ b/pkg/api/v1alpha1/snowmachineconfig_test.go @@ -5,6 +5,7 @@ import ( . "github.com/onsi/gomega" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1" snowv1 "github.com/aws/eks-anywhere/pkg/providers/snow/api/v1beta1" ) @@ -85,6 +86,37 @@ func TestSnowMachineConfigSetDefaults(t *testing.T) { }, }, }, + { + name: "HostOSConfiguration exists", + before: &SnowMachineConfig{ + Spec: SnowMachineConfigSpec{ + OSFamily: Bottlerocket, + HostOSConfiguration: &HostOSConfiguration{ + BottlerocketConfiguration: &BottlerocketConfiguration{ + Kernel: &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + }, + }, + }, + }, + }, + }, + after: &SnowMachineConfig{ + Spec: SnowMachineConfigSpec{ + InstanceType: DefaultSnowInstanceType, + PhysicalNetworkConnector: DefaultSnowPhysicalNetworkConnectorType, + OSFamily: Bottlerocket, + HostOSConfiguration: &HostOSConfiguration{ + BottlerocketConfiguration: &BottlerocketConfiguration{ + Kernel: &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{"foo": "bar"}, + }, + }, + }, + }, + }, + }, } for _, 
tt := range tests { t.Run(tt.name, func(t *testing.T) { @@ -530,6 +562,38 @@ func TestSnowMachineConfigValidate(t *testing.T) { }, wantErr: "SnowMachineConfig NonRootVolumes[0].Size must be no smaller than 8 Gi", }, + { + name: "invalid HostOSConfiguration", + obj: &SnowMachineConfig{ + Spec: SnowMachineConfigSpec{ + AMIID: "ami-1", + InstanceType: DefaultSnowInstanceType, + PhysicalNetworkConnector: DefaultSnowPhysicalNetworkConnectorType, + Devices: []string{"1.2.3.4"}, + OSFamily: Ubuntu, + HostOSConfiguration: &HostOSConfiguration{ + BottlerocketConfiguration: &BottlerocketConfiguration{ + Kernel: &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{"foo": "bar"}, + }, + }, + }, + ContainersVolume: &snowv1.Volume{ + Size: 25, + }, + Network: SnowNetwork{ + DirectNetworkInterfaces: []SnowDirectNetworkInterface{ + { + Index: 1, + DHCP: true, + Primary: true, + }, + }, + }, + }, + }, + wantErr: "SnowMachineConfig HostOSConfiguration is invalid: BottlerocketConfiguration can only be used with osFamily: \"bottlerocket\"", + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { diff --git a/pkg/api/v1alpha1/snowmachineconfig_types.go b/pkg/api/v1alpha1/snowmachineconfig_types.go index 3cbb8a0cca82..b3e60f3fb784 100644 --- a/pkg/api/v1alpha1/snowmachineconfig_types.go +++ b/pkg/api/v1alpha1/snowmachineconfig_types.go @@ -50,6 +50,9 @@ type SnowMachineConfigSpec struct { // Network provides the custom network setting for the machine. Network SnowNetwork `json:"network"` + + // HostOSConfiguration provides OS specific configurations for the machine + HostOSConfiguration *HostOSConfiguration `json:"hostOSConfiguration,omitempty"` } // SnowNetwork specifies the network configurations for snow. 
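The `validateBottlerocketKernelConfiguration` helper introduced above treats a nil kernel config as valid and only rejects empty sysctl keys. A minimal standalone sketch of that validation pattern (the struct here is a simplified stand-in for `v1beta1.BottlerocketKernelSettings`, not the real API type):

```go
package main

import (
	"errors"
	"fmt"
)

// BottlerocketKernelSettings is a simplified stand-in for the bootstrap API
// type: only the sysctl map is modeled.
type BottlerocketKernelSettings struct {
	SysctlSettings map[string]string
}

// validateKernel mirrors the logic of validateBottlerocketKernelConfiguration:
// nil configs are valid (kernel settings are optional), and every sysctl key
// must be non-empty.
func validateKernel(config *BottlerocketKernelSettings) error {
	if config == nil {
		return nil
	}
	for key := range config.SysctlSettings {
		if key == "" {
			return errors.New("sysctlSettings key cannot be empty")
		}
	}
	return nil
}

func main() {
	ok := &BottlerocketKernelSettings{SysctlSettings: map[string]string{"vm.max_map_count": "262144"}}
	bad := &BottlerocketKernelSettings{SysctlSettings: map[string]string{"": "1"}}
	fmt.Println(validateKernel(ok))  // <nil>
	fmt.Println(validateKernel(bad)) // sysctlSettings key cannot be empty
}
```

Validating keys but not values matches the tests above: `""` as a value ("262144" under an empty key) fails, while arbitrary non-empty keys like `"foo": "bar"` pass.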
diff --git a/pkg/api/v1alpha1/zz_generated.deepcopy.go b/pkg/api/v1alpha1/zz_generated.deepcopy.go index 95f8ef91599a..8fc73982c5d7 100644 --- a/pkg/api/v1alpha1/zz_generated.deepcopy.go +++ b/pkg/api/v1alpha1/zz_generated.deepcopy.go @@ -247,6 +247,11 @@ func (in *BottlerocketConfiguration) DeepCopyInto(out *BottlerocketConfiguration *out = new(apiv1beta1.BottlerocketKubernetesSettings) (*in).DeepCopyInto(*out) } + if in.Kernel != nil { + in, out := &in.Kernel, &out.Kernel + *out = new(apiv1beta1.BottlerocketKernelSettings) + (*in).DeepCopyInto(*out) + } } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BottlerocketConfiguration. @@ -280,7 +285,7 @@ func (in *CNIConfig) DeepCopyInto(out *CNIConfig) { if in.Cilium != nil { in, out := &in.Cilium, &out.Cilium *out = new(CiliumConfig) - **out = **in + (*in).DeepCopyInto(*out) } if in.Kindnetd != nil { in, out := &in.Kindnetd, &out.Kindnetd @@ -302,6 +307,11 @@ func (in *CNIConfig) DeepCopy() *CNIConfig { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *CiliumConfig) DeepCopyInto(out *CiliumConfig) { *out = *in + if in.SkipUpgrade != nil { + in, out := &in.SkipUpgrade, &out.SkipUpgrade + *out = new(bool) + **out = **in + } } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CiliumConfig. 
@@ -2340,7 +2350,23 @@ func (in *SnowMachineConfigSpec) DeepCopyInto(out *SnowMachineConfigSpec) { *out = new(snowapiv1beta1.Volume) **out = **in } + if in.NonRootVolumes != nil { + in, out := &in.NonRootVolumes, &out.NonRootVolumes + *out = make([]*snowapiv1beta1.Volume, len(*in)) + for i := range *in { + if (*in)[i] != nil { + in, out := &(*in)[i], &(*out)[i] + *out = new(snowapiv1beta1.Volume) + **out = **in + } + } + } in.Network.DeepCopyInto(&out.Network) + if in.HostOSConfiguration != nil { + in, out := &in.HostOSConfiguration, &out.HostOSConfiguration + *out = new(HostOSConfiguration) + (*in).DeepCopyInto(*out) + } } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SnowMachineConfigSpec. diff --git a/pkg/clusterapi/bottlerocket.go b/pkg/clusterapi/bottlerocket.go index 2abcb71dd263..8c65cc9b8730 100644 --- a/pkg/clusterapi/bottlerocket.go +++ b/pkg/clusterapi/bottlerocket.go @@ -47,6 +47,14 @@ func pause(image v1alpha1.Image) bootstrapv1.Pause { } } +func hostConfig(config *anywherev1.HostOSConfiguration) *bootstrapv1.BottlerocketSettings { + return &bootstrapv1.BottlerocketSettings{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: config.BottlerocketConfiguration.Kernel.SysctlSettings, + }, + } +} + // SetBottlerocketInKubeadmControlPlane adds bottlerocket bootstrap image metadata in kubeadmControlPlane. func SetBottlerocketInKubeadmControlPlane(kcp *controlplanev1.KubeadmControlPlane, versionsBundle *cluster.VersionsBundle) { b := bottlerocketBootstrap(versionsBundle.BottleRocketHostContainers.KubeadmBootstrap) @@ -110,6 +118,27 @@ func SetBottlerocketControlContainerImageInKubeadmConfigTemplate(kct *bootstrapv kct.Spec.Template.Spec.JoinConfiguration.BottlerocketControl = bottlerocketControl(versionsBundle.BottleRocketHostContainers.Control) } +// SetBottlerocketHostConfigInKubeadmControlPlane sets bottlerocket specific kernel settings in kubeadmControlPlane. 
+func SetBottlerocketHostConfigInKubeadmControlPlane(kcp *controlplanev1.KubeadmControlPlane, hostOSConfig *anywherev1.HostOSConfiguration) { + if hostOSConfig == nil || hostOSConfig.BottlerocketConfiguration == nil || hostOSConfig.BottlerocketConfiguration.Kernel == nil || + hostOSConfig.BottlerocketConfiguration.Kernel.SysctlSettings == nil { + return + } + + kcp.Spec.KubeadmConfigSpec.ClusterConfiguration.Bottlerocket = hostConfig(hostOSConfig) + kcp.Spec.KubeadmConfigSpec.JoinConfiguration.Bottlerocket = hostConfig(hostOSConfig) +} + +// SetBottlerocketHostConfigInKubeadmConfigTemplate sets bottlerocket specific kernel settings in kubeadmConfigTemplate. +func SetBottlerocketHostConfigInKubeadmConfigTemplate(kct *bootstrapv1.KubeadmConfigTemplate, hostOSConfig *anywherev1.HostOSConfiguration) { + if hostOSConfig == nil || hostOSConfig.BottlerocketConfiguration == nil || hostOSConfig.BottlerocketConfiguration.Kernel == nil || + hostOSConfig.BottlerocketConfiguration.Kernel.SysctlSettings == nil { + return + } + + kct.Spec.Template.Spec.JoinConfiguration.Bottlerocket = hostConfig(hostOSConfig) +} + // SetBottlerocketInEtcdCluster adds bottlerocket config in etcdadmCluster. func SetBottlerocketInEtcdCluster(etcd *etcdv1.EtcdadmCluster, versionsBundle *cluster.VersionsBundle) { etcd.Spec.EtcdadmConfigSpec.Format = etcdbootstrapv1.Format(anywherev1.Bottlerocket) @@ -129,3 +158,15 @@ func SetBottlerocketAdminContainerImageInEtcdCluster(etcd *etcdv1.EtcdadmCluster func SetBottlerocketControlContainerImageInEtcdCluster(etcd *etcdv1.EtcdadmCluster, controlImage v1alpha1.Image) { etcd.Spec.EtcdadmConfigSpec.BottlerocketConfig.ControlImage = controlImage.VersionedImage() } + +// SetBottlerocketHostConfigInEtcdCluster sets bottlerocket specific kernel settings in etcdadmCluster. 
+func SetBottlerocketHostConfigInEtcdCluster(etcd *etcdv1.EtcdadmCluster, hostOSConfig *anywherev1.HostOSConfiguration) { + if hostOSConfig == nil || hostOSConfig.BottlerocketConfiguration == nil || hostOSConfig.BottlerocketConfiguration.Kernel == nil || + hostOSConfig.BottlerocketConfiguration.Kernel.SysctlSettings == nil { + return + } + + etcd.Spec.EtcdadmConfigSpec.BottlerocketConfig.Kernel = &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: hostOSConfig.BottlerocketConfiguration.Kernel.SysctlSettings, + } +} diff --git a/pkg/clusterapi/bottlerocket_test.go b/pkg/clusterapi/bottlerocket_test.go index 898f39744032..be3683c9a4f4 100644 --- a/pkg/clusterapi/bottlerocket_test.go +++ b/pkg/clusterapi/bottlerocket_test.go @@ -7,6 +7,7 @@ import ( . "github.com/onsi/gomega" bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1" + anywherev1 "github.com/aws/eks-anywhere/pkg/api/v1alpha1" "github.com/aws/eks-anywhere/pkg/clusterapi" ) @@ -38,6 +39,15 @@ var controlContainer = bootstrapv1.BottlerocketControl{ }, } +var kernel = &bootstrapv1.BottlerocketSettings{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + "abc": "def", + }, + }, +} + func TestSetBottlerocketInKubeadmControlPlane(t *testing.T) { g := newApiBuilerTest(t) got := wantKubeadmControlPlane() @@ -166,3 +176,71 @@ func TestSetBottlerocketControlContainerImageInEtcdCluster(t *testing.T) { clusterapi.SetBottlerocketControlContainerImageInEtcdCluster(got, g.clusterSpec.VersionsBundle.BottleRocketHostContainers.Control) g.Expect(got).To(Equal(want)) } + +func TestSetBottlerocketHostConfigInKubeadmControlPlane(t *testing.T) { + g := newApiBuilerTest(t) + got := wantKubeadmControlPlane() + want := got.DeepCopy() + want.Spec.KubeadmConfigSpec.ClusterConfiguration.Bottlerocket = kernel + want.Spec.KubeadmConfigSpec.JoinConfiguration.Bottlerocket = kernel + + clusterapi.SetBottlerocketHostConfigInKubeadmControlPlane(got, 
&anywherev1.HostOSConfiguration{ + BottlerocketConfiguration: &anywherev1.BottlerocketConfiguration{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + "abc": "def", + }, + }, + }, + }) + g.Expect(got).To(Equal(want)) +} + +func TestSetBottlerocketHostConfigInKubeadmConfigTemplate(t *testing.T) { + g := newApiBuilerTest(t) + got := wantKubeadmConfigTemplate() + want := got.DeepCopy() + want.Spec.Template.Spec.JoinConfiguration.Bottlerocket = kernel + + clusterapi.SetBottlerocketHostConfigInKubeadmConfigTemplate(got, &anywherev1.HostOSConfiguration{ + BottlerocketConfiguration: &anywherev1.BottlerocketConfiguration{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + "abc": "def", + }, + }, + }, + }) + g.Expect(got).To(Equal(want)) +} + +func TestSetBottlerocketKernelSettingsInEtcdCluster(t *testing.T) { + g := newApiBuilerTest(t) + got := wantEtcdCluster() + got.Spec.EtcdadmConfigSpec.BottlerocketConfig = &etcdbootstrapv1.BottlerocketConfig{ + EtcdImage: "public.ecr.aws/eks-distro/etcd-io/etcd:0.0.1", + BootstrapImage: "public.ecr.aws/eks-anywhere/bottlerocket-bootstrap:0.0.1", + PauseImage: "public.ecr.aws/eks-distro/kubernetes/pause:0.0.1", + } + want := got.DeepCopy() + want.Spec.EtcdadmConfigSpec.BottlerocketConfig.Kernel = &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + "abc": "def", + }, + } + + clusterapi.SetBottlerocketHostConfigInEtcdCluster(got, &anywherev1.HostOSConfiguration{ + BottlerocketConfiguration: &anywherev1.BottlerocketConfiguration{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + "abc": "def", + }, + }, + }, + }) + g.Expect(got).To(Equal(want)) +} diff --git a/pkg/clustermanager/eksa_installer.go b/pkg/clustermanager/eksa_installer.go index 9e5c2cf15325..a1c71f0a9ae8 100644 --- a/pkg/clustermanager/eksa_installer.go +++ 
b/pkg/clustermanager/eksa_installer.go @@ -177,7 +177,8 @@ func fullLifeCycleControllerForProvider(cluster *anywherev1.Cluster) bool { return cluster.Spec.DatacenterRef.Kind == anywherev1.VSphereDatacenterKind || cluster.Spec.DatacenterRef.Kind == anywherev1.DockerDatacenterKind || cluster.Spec.DatacenterRef.Kind == anywherev1.SnowDatacenterKind || - cluster.Spec.DatacenterRef.Kind == anywherev1.NutanixDatacenterKind + cluster.Spec.DatacenterRef.Kind == anywherev1.NutanixDatacenterKind || + cluster.Spec.DatacenterRef.Kind == anywherev1.TinkerbellDatacenterKind } func (g *EKSAComponentGenerator) parseEKSAComponentsSpec(spec *cluster.Spec) (*eksaComponents, error) { diff --git a/pkg/clustermanager/eksa_installer_test.go b/pkg/clustermanager/eksa_installer_test.go index ad12de86a741..db60d36e0540 100644 --- a/pkg/clustermanager/eksa_installer_test.go +++ b/pkg/clustermanager/eksa_installer_test.go @@ -262,6 +262,18 @@ func TestSetManagerFlags(t *testing.T) { } }), }, + { + name: "full lifecycle, tinkerbell", + deployment: deployment(), + spec: test.NewClusterSpec(func(s *cluster.Spec) { + s.Cluster.Spec.DatacenterRef.Kind = anywherev1.TinkerbellDatacenterKind + }), + want: deployment(func(d *appsv1.Deployment) { + d.Spec.Template.Spec.Containers[0].Args = []string{ + "--feature-gates=FullLifecycleAPI=true", + } + }), + }, { name: "full lifecycle, feature flag enabled", deployment: deployment(), diff --git a/pkg/controller/clusterapi.go b/pkg/controller/clusterapi.go index b477616d3a1c..7500db45805c 100644 --- a/pkg/controller/clusterapi.go +++ b/pkg/controller/clusterapi.go @@ -61,3 +61,20 @@ func GetKubeadmControlPlane(ctx context.Context, client client.Client, cluster * } return kubeadmControlPlane, nil } + +// GetMachineDeployment reads a cluster-api MachineDeployment for an eks-a cluster using a kube client. +// If the MachineDeployment is not found, the method returns (nil, nil). 
+func GetMachineDeployment(ctx context.Context, client client.Client, machineDeploymentName string) (*clusterv1.MachineDeployment, error) { + machineDeployment := &clusterv1.MachineDeployment{} + key := types.NamespacedName{Namespace: constants.EksaSystemNamespace, Name: machineDeploymentName} + + err := client.Get(ctx, key, machineDeployment) + if apierrors.IsNotFound(err) { + return nil, nil + } + + if err != nil { + return nil, err + } + return machineDeployment, nil +} diff --git a/pkg/controller/clusterapi_test.go b/pkg/controller/clusterapi_test.go index e40c6327a8d3..165546a5bdf2 100644 --- a/pkg/controller/clusterapi_test.go +++ b/pkg/controller/clusterapi_test.go @@ -77,6 +77,35 @@ func TestGetKubeadmControlPlaneError(t *testing.T) { g.Expect(err).To(MatchError(ContainSubstring("no kind is registered for the type"))) } +func TestGetMachineDeploymentSuccess(t *testing.T) { + g := NewWithT(t) + ctx := context.Background() + eksaCluster := eksaCluster() + machineDeployment := machineDeployment() + client := fake.NewClientBuilder().WithObjects(eksaCluster, machineDeployment).Build() + + g.Expect(controller.GetMachineDeployment(ctx, client, "my-cluster")).To(Equal(machineDeployment)) +} + +func TestGetMachineDeploymentMissingMD(t *testing.T) { + g := NewWithT(t) + ctx := context.Background() + eksaCluster := eksaCluster() + client := fake.NewClientBuilder().WithObjects(eksaCluster).Build() + + g.Expect(controller.GetMachineDeployment(ctx, client, "test")).To(BeNil()) +} + +func TestGetMachineDeploymentError(t *testing.T) { + g := NewWithT(t) + ctx := context.Background() + // This should make the client fail because CRDs are not registered + client := fake.NewClientBuilder().WithScheme(runtime.NewScheme()).Build() + + _, err := controller.GetMachineDeployment(ctx, client, "test") + g.Expect(err).To(MatchError(ContainSubstring("no kind is registered for the type"))) +} + func TestGetCapiClusterObjectKey(t *testing.T) { g := NewWithT(t) eksaCluster := 
eksaCluster() @@ -127,3 +156,16 @@ func kubeadmControlPlane() *controlplanev1.KubeadmControlPlane { }, } } + +func machineDeployment() *clusterv1.MachineDeployment { + return &clusterv1.MachineDeployment{ + TypeMeta: metav1.TypeMeta{ + Kind: "MachineDeployment", + APIVersion: clusterv1.GroupVersion.String(), + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "my-cluster", + Namespace: "eksa-system", + }, + } +} diff --git a/pkg/controller/clusters/workers.go b/pkg/controller/clusters/workers.go index b7ff4d2cf2bb..79d68c13ae00 100644 --- a/pkg/controller/clusters/workers.go +++ b/pkg/controller/clusters/workers.go @@ -2,6 +2,7 @@ package clusters import ( "context" + "reflect" "time" "github.com/go-logr/logr" @@ -46,11 +47,13 @@ type WorkerGroup struct { } func (g *WorkerGroup) objects() []client.Object { - return []client.Object{ - g.KubeadmConfigTemplate, - g.MachineDeployment, - g.ProviderMachineTemplate, + objs := []client.Object{g.KubeadmConfigTemplate, g.MachineDeployment} + + if !reflect.ValueOf(g.ProviderMachineTemplate).IsNil() { + objs = append(objs, g.ProviderMachineTemplate) } + + return objs } // ToWorkers converts the generic clusterapi Workers definition to the concrete one defined diff --git a/pkg/curatedpackages/packagecontrollerclient.go b/pkg/curatedpackages/packagecontrollerclient.go index 616abd980472..c59022b1c701 100644 --- a/pkg/curatedpackages/packagecontrollerclient.go +++ b/pkg/curatedpackages/packagecontrollerclient.go @@ -213,7 +213,8 @@ func (pc *PackageControllerClient) Enable(ctx context.Context) error { skipCRDs := false chartName := pc.chart.Name if pc.managementClusterName != pc.clusterName { - values = append(values, "workloadOnly=true") + values = append(values, "workloadPackageOnly=true") + values = append(values, "managementClusterName="+pc.managementClusterName) chartName = chartName + "-" + pc.clusterName skipCRDs = true } @@ -469,7 +470,8 @@ func (pc *PackageControllerClient) Reconcile(ctx context.Context, logger logr.Lo // No 
Kubeconfig is passed. This is intentional. The helm executable will // get that configuration from its environment. - if err := pc.EnableFullLifecycle(ctx, logger, cluster.Name, "", image, registry); err != nil { + if err := pc.EnableFullLifecycle(ctx, logger, cluster.Name, "", image, registry, + WithManagementClusterName(cluster.ManagedBy())); err != nil { return fmt.Errorf("packages client error: %w", err) } diff --git a/pkg/curatedpackages/packagecontrollerclient_test.go b/pkg/curatedpackages/packagecontrollerclient_test.go index 7a5ecfb2624b..7a9421e3f8ef 100644 --- a/pkg/curatedpackages/packagecontrollerclient_test.go +++ b/pkg/curatedpackages/packagecontrollerclient_test.go @@ -327,8 +327,9 @@ func TestEnableSucceedInWorkloadCluster(t *testing.T) { if (tt.eksaAccessID == "" || tt.eksaAccessKey == "") && tt.registryMirror == nil { values = append(values, "cronjob.suspend=true") } - values = append(values, "workloadOnly=true") - tt.chartManager.EXPECT().InstallChart(tt.ctx, tt.chart.Name+"-billy", ociURI, tt.chart.Tag(), tt.kubeConfig, constants.EksaPackagesName, valueFilePath, true, values).Return(nil) + values = append(values, "managementClusterName=mgmt") + values = append(values, "workloadPackageOnly=true") + tt.chartManager.EXPECT().InstallChart(tt.ctx, tt.chart.Name+"-billy", ociURI, tt.chart.Tag(), tt.kubeConfig, constants.EksaPackagesName, valueFilePath, true, gomock.InAnyOrder(values)).Return(nil) tt.kubectl.EXPECT(). GetObject(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()). DoAndReturn(getPBCSuccess(t)). @@ -1090,7 +1091,8 @@ func TestEnableFullLifecyclePath(t *testing.T) { // determined its chart. 
values := []string{ "clusterName=" + clusterName, - "workloadOnly=true", + "managementClusterName=mgmt", + "workloadPackageOnly=true", "sourceRegistry=public.ecr.aws/eks-anywhere", "defaultRegistry=public.ecr.aws/eks-anywhere", "defaultImageRegistry=783794618700.dkr.ecr.us-west-2.amazonaws.com", @@ -1111,7 +1113,9 @@ func TestEnableFullLifecyclePath(t *testing.T) { URI: "test_registry/eks-anywhere/eks-anywhere-packages:v1", } - err := tt.command.EnableFullLifecycle(tt.ctx, log, clusterName, kubeConfig, chartImage, tt.registryMirror, curatedpackages.WithEksaRegion("us-west-2")) + err := tt.command.EnableFullLifecycle(tt.ctx, log, clusterName, kubeConfig, chartImage, tt.registryMirror, + curatedpackages.WithEksaRegion("us-west-2"), + curatedpackages.WithManagementClusterName("mgmt")) if err != nil { t.Errorf("Install Controller Should succeed when installation passes") } diff --git a/pkg/dependencies/factory.go b/pkg/dependencies/factory.go index 4d7cbbc7c866..310771d3b8a4 100644 --- a/pkg/dependencies/factory.go +++ b/pkg/dependencies/factory.go @@ -707,10 +707,12 @@ func (f *Factory) WithNetworking(clusterConfig *v1alpha1.Cluster) *Factory { } else { f.WithKubectl().WithCiliumTemplater() networkingBuilder = func() clustermanager.Networking { - return cilium.NewCilium( + c := cilium.NewCilium( cilium.NewRetrier(f.dependencies.Kubectl), f.dependencies.CiliumTemplater, ) + c.SetSkipUpgrade(!clusterConfig.Spec.ClusterNetwork.CNIConfig.Cilium.IsManaged()) + return c } } diff --git a/pkg/executables/kubectl.go b/pkg/executables/kubectl.go index 663592554a21..46f8707712ea 100644 --- a/pkg/executables/kubectl.go +++ b/pkg/executables/kubectl.go @@ -54,15 +54,21 @@ const ( var ( capiClustersResourceType = fmt.Sprintf("clusters.%s", clusterv1.GroupVersion.Group) + capiProvidersResourceType = fmt.Sprintf("providers.clusterctl.%s", clusterv1.GroupVersion.Group) + capiMachinesType = fmt.Sprintf("machines.%s", clusterv1.GroupVersion.Group) + capiMachineDeploymentsType = 
fmt.Sprintf("machinedeployments.%s", clusterv1.GroupVersion.Group) + capiMachineSetsType = fmt.Sprintf("machinesets.%s", clusterv1.GroupVersion.Group) eksaClusterResourceType = fmt.Sprintf("clusters.%s", v1alpha1.GroupVersion.Group) eksaVSphereDatacenterResourceType = fmt.Sprintf("vspheredatacenterconfigs.%s", v1alpha1.GroupVersion.Group) eksaVSphereMachineResourceType = fmt.Sprintf("vspheremachineconfigs.%s", v1alpha1.GroupVersion.Group) + vsphereMachineTemplatesType = fmt.Sprintf("vspheremachinetemplates.infrastructure.%s", clusterv1.GroupVersion.Group) eksaTinkerbellDatacenterResourceType = fmt.Sprintf("tinkerbelldatacenterconfigs.%s", v1alpha1.GroupVersion.Group) eksaTinkerbellMachineResourceType = fmt.Sprintf("tinkerbellmachineconfigs.%s", v1alpha1.GroupVersion.Group) TinkerbellHardwareResourceType = fmt.Sprintf("hardware.%s", tinkv1alpha1.GroupVersion.Group) rufioMachineResourceType = fmt.Sprintf("machines.%s", rufiov1alpha1.GroupVersion.Group) eksaCloudStackDatacenterResourceType = fmt.Sprintf("cloudstackdatacenterconfigs.%s", v1alpha1.GroupVersion.Group) eksaCloudStackMachineResourceType = fmt.Sprintf("cloudstackmachineconfigs.%s", v1alpha1.GroupVersion.Group) + cloudstackMachineTemplatesType = fmt.Sprintf("cloudstackmachinetemplates.infrastructure.%s", clusterv1.GroupVersion.Group) eksaNutanixDatacenterResourceType = fmt.Sprintf("nutanixdatacenterconfigs.%s", v1alpha1.GroupVersion.Group) eksaNutanixMachineResourceType = fmt.Sprintf("nutanixmachineconfigs.%s", v1alpha1.GroupVersion.Group) eksaAwsResourceType = fmt.Sprintf("awsdatacenterconfigs.%s", v1alpha1.GroupVersion.Group) @@ -76,6 +82,8 @@ var ( kubeadmControlPlaneResourceType = fmt.Sprintf("kubeadmcontrolplanes.controlplane.%s", clusterv1.GroupVersion.Group) eksdReleaseType = fmt.Sprintf("releases.%s", eksdv1alpha1.GroupVersion.Group) eksaPackagesType = fmt.Sprintf("packages.%s", packagesv1.GroupVersion.Group) + eksaPackagesBundleControllerType = fmt.Sprintf("packagebundlecontroller.%s", 
packagesv1.GroupVersion.Group) + eksaPackageBundlesType = fmt.Sprintf("packagebundles.%s", packagesv1.GroupVersion.Group) kubectlConnectionRefusedRegex = regexp.MustCompile("The connection to the server .* was refused") kubectlIoTimeoutRegex = regexp.MustCompile("Unable to connect to the server.*i/o timeout.*") ) @@ -91,7 +99,7 @@ type capiMachinesResponse struct { // GetCAPIMachines returns all the CAPI machines for the provided clusterName. func (k *Kubectl) GetCAPIMachines(ctx context.Context, cluster *types.Cluster, clusterName string) ([]clusterv1.Machine, error) { params := []string{ - "get", "machines.cluster.x-k8s.io", "-o", "json", "--kubeconfig", cluster.KubeconfigFile, + "get", capiMachinesType, "-o", "json", "--kubeconfig", cluster.KubeconfigFile, "--selector=cluster.x-k8s.io/cluster-name=" + clusterName, "--namespace", constants.EksaSystemNamespace, } @@ -343,7 +351,7 @@ func (k *Kubectl) WaitForManagedExternalEtcdNotReady(ctx context.Context, cluste } func (k *Kubectl) WaitForMachineDeploymentReady(ctx context.Context, cluster *types.Cluster, timeout string, machineDeploymentName string) error { - return k.Wait(ctx, cluster.KubeconfigFile, timeout, "Ready=true", fmt.Sprintf("machinedeployments.%s/%s", clusterv1.GroupVersion.Group, machineDeploymentName), constants.EksaSystemNamespace) + return k.Wait(ctx, cluster.KubeconfigFile, timeout, "Ready=true", fmt.Sprintf("%s/%s", capiMachineDeploymentsType, machineDeploymentName), constants.EksaSystemNamespace) } // WaitForService blocks until an IP address is assigned. @@ -642,7 +650,7 @@ func (k *Kubectl) DeleteFluxConfig(ctx context.Context, managementCluster *types // GetPackageBundleController will retrieve the packagebundlecontroller from eksa-packages namespace and return the object. 
func (k *Kubectl) GetPackageBundleController(ctx context.Context, kubeconfigFile, clusterName string) (packagesv1.PackageBundleController, error) { - params := []string{"get", "pbc", clusterName, "-o", "json", "--kubeconfig", kubeconfigFile, "--namespace", "eksa-packages", "--ignore-not-found=true"} + params := []string{"get", eksaPackagesBundleControllerType, clusterName, "-o", "json", "--kubeconfig", kubeconfigFile, "--namespace", constants.EksaPackagesName, "--ignore-not-found=true"} stdOut, _ := k.Execute(ctx, params...) response := &packagesv1.PackageBundleController{} err := json.Unmarshal(stdOut.Bytes(), response) @@ -654,11 +662,11 @@ func (k *Kubectl) GetPackageBundleController(ctx context.Context, kubeconfigFile // GetPackageBundleList will retrieve the packagebundle list from eksa-packages namespace and return the list. func (k *Kubectl) GetPackageBundleList(ctx context.Context, kubeconfigFile string) ([]packagesv1.PackageBundle, error) { - err := k.WaitJSONPathLoop(ctx, kubeconfigFile, "5m", "items", "PackageBundle", "packagebundles", "eksa-packages") + err := k.WaitJSONPathLoop(ctx, kubeconfigFile, "5m", "items", "PackageBundle", eksaPackageBundlesType, constants.EksaPackagesName) if err != nil { return nil, fmt.Errorf("waiting on package bundle resource to exist %v", err) } - params := []string{"get", "packagebundle", "-o", "json", "--kubeconfig", kubeconfigFile, "--namespace", "eksa-packages", "--ignore-not-found=true"} + params := []string{"get", eksaPackageBundlesType, "-o", "json", "--kubeconfig", kubeconfigFile, "--namespace", constants.EksaPackagesName, "--ignore-not-found=true"} stdOut, err := k.Execute(ctx, params...) 
if err != nil { return nil, fmt.Errorf("getting package bundle resource %v", err) @@ -672,7 +680,7 @@ func (k *Kubectl) GetPackageBundleList(ctx context.Context, kubeconfigFile strin } func (k *Kubectl) DeletePackageResources(ctx context.Context, managementCluster *types.Cluster, clusterName string) error { - params := []string{"delete", "pbc", clusterName, "--kubeconfig", managementCluster.KubeconfigFile, "--namespace", "eksa-packages", "--ignore-not-found=true"} + params := []string{"delete", eksaPackagesBundleControllerType, clusterName, "--kubeconfig", managementCluster.KubeconfigFile, "--namespace", constants.EksaPackagesName, "--ignore-not-found=true"} _, err := k.Execute(ctx, params...) if err != nil { return fmt.Errorf("deleting package resources for %s: %v", clusterName, err) @@ -876,7 +884,7 @@ func (k *Kubectl) VsphereWorkerNodesMachineTemplate(ctx context.Context, cluster return nil, err } - params := []string{"get", "vspheremachinetemplates", machineTemplateName, "-o", "go-template", "--template", "{{.spec.template.spec}}", "-o", "yaml", "--kubeconfig", kubeconfig, "--namespace", namespace} + params := []string{"get", vsphereMachineTemplatesType, machineTemplateName, "-o", "go-template", "--template", "{{.spec.template.spec}}", "-o", "yaml", "--kubeconfig", kubeconfig, "--namespace", namespace} buffer, err := k.Execute(ctx, params...) if err != nil { return nil, err @@ -894,7 +902,7 @@ func (k *Kubectl) CloudstackWorkerNodesMachineTemplate(ctx context.Context, clus return nil, err } - params := []string{"get", "cloudstackmachinetemplates", machineTemplateName, "-o", "go-template", "--template", "{{.spec.template.spec}}", "-o", "yaml", "--kubeconfig", kubeconfig, "--namespace", namespace} + params := []string{"get", cloudstackMachineTemplatesType, machineTemplateName, "-o", "go-template", "--template", "{{.spec.template.spec}}", "-o", "yaml", "--kubeconfig", kubeconfig, "--namespace", namespace} buffer, err := k.Execute(ctx, params...) 
if err != nil { return nil, err @@ -908,7 +916,7 @@ func (k *Kubectl) CloudstackWorkerNodesMachineTemplate(ctx context.Context, clus func (k *Kubectl) MachineTemplateName(ctx context.Context, clusterName string, kubeconfig string, opts ...KubectlOpt) (string, error) { template := "{{.spec.template.spec.infrastructureRef.name}}" - params := []string{"get", "MachineDeployment", fmt.Sprintf("%s-md-0", clusterName), "-o", "go-template", "--template", template, "--kubeconfig", kubeconfig} + params := []string{"get", capiMachineDeploymentsType, fmt.Sprintf("%s-md-0", clusterName), "-o", "go-template", "--template", template, "--kubeconfig", kubeconfig} applyOpts(&params, opts...) buffer, err := k.Execute(ctx, params...) if err != nil { @@ -1030,7 +1038,7 @@ type machinesResponse struct { func (k *Kubectl) GetMachines(ctx context.Context, cluster *types.Cluster, clusterName string) ([]types.Machine, error) { params := []string{ - "get", "machines.cluster.x-k8s.io", "-o", "json", "--kubeconfig", cluster.KubeconfigFile, + "get", capiMachinesType, "-o", "json", "--kubeconfig", cluster.KubeconfigFile, "--selector=cluster.x-k8s.io/cluster-name=" + clusterName, "--namespace", constants.EksaSystemNamespace, } @@ -1054,7 +1062,7 @@ type machineSetResponse struct { func (k *Kubectl) GetMachineSets(ctx context.Context, machineDeploymentName string, cluster *types.Cluster) ([]clusterv1.MachineSet, error) { params := []string{ - "get", "machinesets", "-o", "json", "--kubeconfig", cluster.KubeconfigFile, + "get", capiMachineSetsType, "-o", "json", "--kubeconfig", cluster.KubeconfigFile, "--selector=cluster.x-k8s.io/deployment-name=" + machineDeploymentName, "--namespace", constants.EksaSystemNamespace, } @@ -1120,7 +1128,7 @@ type NutanixMachineConfigResponse struct { } func (k *Kubectl) ValidateClustersCRD(ctx context.Context, cluster *types.Cluster) error { - params := []string{"get", "crd", capiClustersResourceType, "--kubeconfig", cluster.KubeconfigFile} + params := []string{"get", 
"customresourcedefinition", capiClustersResourceType, "--kubeconfig", cluster.KubeconfigFile} _, err := k.Execute(ctx, params...) if err != nil { return fmt.Errorf("getting clusters crd: %v", err) @@ -1129,7 +1137,7 @@ func (k *Kubectl) ValidateClustersCRD(ctx context.Context, cluster *types.Cluste } func (k *Kubectl) ValidateEKSAClustersCRD(ctx context.Context, cluster *types.Cluster) error { - params := []string{"get", "crd", eksaClusterResourceType, "--kubeconfig", cluster.KubeconfigFile} + params := []string{"get", "customresourcedefinition", eksaClusterResourceType, "--kubeconfig", cluster.KubeconfigFile} _, err := k.Execute(ctx, params...) if err != nil { return fmt.Errorf("getting eksa clusters crd: %v", err) @@ -1139,7 +1147,7 @@ func (k *Kubectl) ValidateEKSAClustersCRD(ctx context.Context, cluster *types.Cl func (k *Kubectl) RolloutRestartDaemonSet(ctx context.Context, dsName, dsNamespace, kubeconfig string) error { params := []string{ - "rollout", "restart", "ds", dsName, + "rollout", "restart", "daemonset", dsName, "--kubeconfig", kubeconfig, "--namespace", dsNamespace, } _, err := k.Execute(ctx, params...) @@ -1368,7 +1376,7 @@ func (k *Kubectl) GetKubeadmControlPlane(ctx context.Context, cluster *types.Clu } func (k *Kubectl) GetMachineDeployment(ctx context.Context, workerNodeGroupName string, opts ...KubectlOpt) (*clusterv1.MachineDeployment, error) { - params := []string{"get", fmt.Sprintf("machinedeployments.%s", clusterv1.GroupVersion.Group), workerNodeGroupName, "-o", "json"} + params := []string{"get", capiMachineDeploymentsType, workerNodeGroupName, "-o", "json"} applyOpts(¶ms, opts...) stdOut, err := k.Execute(ctx, params...) if err != nil { @@ -1386,7 +1394,7 @@ func (k *Kubectl) GetMachineDeployment(ctx context.Context, workerNodeGroupName // GetMachineDeployments retrieves all Machine Deployments. 
func (k *Kubectl) GetMachineDeployments(ctx context.Context, opts ...KubectlOpt) ([]clusterv1.MachineDeployment, error) { - params := []string{"get", fmt.Sprintf("machinedeployments.%s", clusterv1.GroupVersion.Group), "-o", "json"} + params := []string{"get", capiMachineDeploymentsType, "-o", "json"} applyOpts(&params, opts...) stdOut, err := k.Execute(ctx, params...) if err != nil { @@ -1893,7 +1901,7 @@ func (k *Kubectl) CheckProviderExists(ctx context.Context, kubeconfigFile, name, return false, nil } - params = []string{"get", "provider", "--namespace", namespace, fmt.Sprintf("--field-selector=metadata.name=%s", name), "--kubeconfig", kubeconfigFile} + params = []string{"get", capiProvidersResourceType, "--namespace", namespace, fmt.Sprintf("--field-selector=metadata.name=%s", name), "--kubeconfig", kubeconfigFile} stdOut, err = k.Execute(ctx, params...) if err != nil { return false, fmt.Errorf("checking whether provider exists: %v", err) @@ -2240,7 +2248,7 @@ func (k *Kubectl) AllTinkerbellHardware(ctx context.Context, kubeconfig string) // HasCRD checks if the given CRD exists in the cluster specified by kubeconfig. func (k *Kubectl) HasCRD(ctx context.Context, crd, kubeconfig string) (bool, error) { - _, err := k.Execute(ctx, "get", "crd", crd, "--kubeconfig", kubeconfig) + _, err := k.Execute(ctx, "get", "customresourcedefinition", crd, "--kubeconfig", kubeconfig) if err == nil { return true, nil @@ -2255,7 +2263,7 @@ func (k *Kubectl) HasCRD(ctx context.Context, crd, kubeconfig string) (bool, err // DeleteCRD removes the given CRD from the cluster specified in kubeconfig. 
func (k *Kubectl) DeleteCRD(ctx context.Context, crd, kubeconfig string) error { - _, err := k.Execute(ctx, "delete", "crd", crd, "--kubeconfig", kubeconfig) + _, err := k.Execute(ctx, "delete", "customresourcedefinition", crd, "--kubeconfig", kubeconfig) if err != nil && !strings.Contains(err.Error(), "NotFound") { return err } diff --git a/pkg/executables/kubectl_test.go b/pkg/executables/kubectl_test.go index 5e9c29998b64..da1a00d25e60 100644 --- a/pkg/executables/kubectl_test.go +++ b/pkg/executables/kubectl_test.go @@ -617,11 +617,11 @@ func TestCloudStackWorkerNodesMachineTemplate(t *testing.T) { machineTemplatesBuffer := bytes.NewBufferString(test.ReadFile(t, "testdata/kubectl_no_cs_machineconfigs.json")) k, ctx, _, e := newKubectl(t) expectedParam1 := []string{ - "get", "MachineDeployment", fmt.Sprintf("%s-md-0", clusterName), "-o", "go-template", + "get", "machinedeployments.cluster.x-k8s.io", fmt.Sprintf("%s-md-0", clusterName), "-o", "go-template", "--template", "{{.spec.template.spec.infrastructureRef.name}}", "--kubeconfig", kubeconfig, "--namespace", namespace, } expectedParam2 := []string{ - "get", "cloudstackmachinetemplates", machineTemplateName, "-o", "go-template", "--template", + "get", "cloudstackmachinetemplates.infrastructure.cluster.x-k8s.io", machineTemplateName, "-o", "go-template", "--template", "{{.spec.template.spec}}", "-o", "yaml", "--kubeconfig", kubeconfig, "--namespace", namespace, } e.EXPECT().Execute(ctx, gomock.Eq(expectedParam1)).Return(*machineTemplateNameBuffer, nil) @@ -632,6 +632,27 @@ func TestCloudStackWorkerNodesMachineTemplate(t *testing.T) { } } +func TestVsphereWorkerNodesMachineTemplate(t *testing.T) { + var kubeconfig, namespace, clusterName, machineTemplateName string + machineTemplateNameBuffer := bytes.NewBufferString(machineTemplateName) + machineTemplatesBuffer := bytes.NewBufferString(test.ReadFile(t, "testdata/kubectl_no_cs_machineconfigs.json")) + k, ctx, _, e := newKubectl(t) + expectedParam1 := []string{ + 
"get", "machinedeployments.cluster.x-k8s.io", fmt.Sprintf("%s-md-0", clusterName), "-o", "go-template", + "--template", "{{.spec.template.spec.infrastructureRef.name}}", "--kubeconfig", kubeconfig, "--namespace", namespace, + } + expectedParam2 := []string{ + "get", "vspheremachinetemplates.infrastructure.cluster.x-k8s.io", machineTemplateName, "-o", "go-template", "--template", + "{{.spec.template.spec}}", "-o", "yaml", "--kubeconfig", kubeconfig, "--namespace", namespace, + } + e.EXPECT().Execute(ctx, gomock.Eq(expectedParam1)).Return(*machineTemplateNameBuffer, nil) + e.EXPECT().Execute(ctx, gomock.Eq(expectedParam2)).Return(*machineTemplatesBuffer, nil) + _, err := k.VsphereWorkerNodesMachineTemplate(ctx, clusterName, kubeconfig, namespace) + if err != nil { + t.Errorf("Kubectl.GetNamespace() error = %v, want nil", err) + } +} + func TestKubectlSaveLogSuccess(t *testing.T) { filename := "testfile" _, writer := test.NewWriter(t) @@ -1425,7 +1446,7 @@ func TestKubectlRolloutRestartDaemonSetSuccess(t *testing.T) { e.EXPECT().Execute( ctx, []string{ - "rollout", "restart", "ds", "cilium", + "rollout", "restart", "daemonset", "cilium", "--kubeconfig", cluster.KubeconfigFile, "--namespace", constants.KubeSystemNamespace, }, ).Return(bytes.Buffer{}, nil) @@ -1441,7 +1462,7 @@ func TestKubectlRolloutRestartDaemonSetError(t *testing.T) { e.EXPECT().Execute( ctx, []string{ - "rollout", "restart", "ds", "cilium", + "rollout", "restart", "daemonset", "cilium", "--kubeconfig", cluster.KubeconfigFile, "--namespace", constants.KubeSystemNamespace, }, ).Return(bytes.Buffer{}, fmt.Errorf("error")) @@ -1981,7 +2002,7 @@ func TestKubectlVersion(t *testing.T) { func TestKubectlValidateClustersCRDSuccess(t *testing.T) { k, ctx, cluster, e := newKubectl(t) - e.EXPECT().Execute(ctx, []string{"get", "crd", "clusters.cluster.x-k8s.io", "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, nil) + e.EXPECT().Execute(ctx, []string{"get", "customresourcedefinition", 
"clusters.cluster.x-k8s.io", "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, nil) err := k.ValidateClustersCRD(ctx, cluster) if err != nil { t.Fatalf("Kubectl.ValidateClustersCRD() error = %v, want nil", err) @@ -1990,13 +2011,31 @@ func TestKubectlValidateClustersCRDSuccess(t *testing.T) { func TestKubectlValidateClustersCRDNotFound(t *testing.T) { k, ctx, cluster, e := newKubectl(t) - e.EXPECT().Execute(ctx, []string{"get", "crd", "clusters.cluster.x-k8s.io", "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, errors.New("CRD not found")) + e.EXPECT().Execute(ctx, []string{"get", "customresourcedefinition", "clusters.cluster.x-k8s.io", "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, errors.New("CRD not found")) err := k.ValidateClustersCRD(ctx, cluster) if err == nil { t.Fatalf("Kubectl.ValidateClustersCRD() error == nil, want CRD not found") } } +func TestKubectlValidateEKSAClustersCRDSuccess(t *testing.T) { + k, ctx, cluster, e := newKubectl(t) + e.EXPECT().Execute(ctx, []string{"get", "customresourcedefinition", "clusters.anywhere.eks.amazonaws.com", "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, nil) + err := k.ValidateEKSAClustersCRD(ctx, cluster) + if err != nil { + t.Fatalf("Kubectl.ValidateEKSAClustersCRD() error = %v, want nil", err) + } +} + +func TestKubectlValidateEKSAClustersCRDNotFound(t *testing.T) { + k, ctx, cluster, e := newKubectl(t) + e.EXPECT().Execute(ctx, []string{"get", "customresourcedefinition", "clusters.anywhere.eks.amazonaws.com", "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, errors.New("CRD not found")) + err := k.ValidateEKSAClustersCRD(ctx, cluster) + if err == nil { + t.Fatalf("Kubectl.ValidateEKSAClustersCRD() error == nil, want CRD not found") + } +} + func TestKubectlUpdateEnvironmentVariablesInNamespace(t *testing.T) { k, ctx, cluster, e := newKubectl(t) envMap := map[string]string{ @@ -2182,7 +2221,7 @@ func 
TestKubectlCheckCAPIProviderExistsInstalled(t *testing.T) { []string{"get", "namespace", fmt.Sprintf("--field-selector=metadata.name=%s", providerNs), "--kubeconfig", tt.cluster.KubeconfigFile}).Return(*bytes.NewBufferString("namespace"), nil) tt.e.EXPECT().Execute(tt.ctx, - []string{"get", "provider", "--namespace", providerNs, fmt.Sprintf("--field-selector=metadata.name=%s", providerName), "--kubeconfig", tt.cluster.KubeconfigFile}) + []string{"get", "providers.clusterctl.cluster.x-k8s.io", "--namespace", providerNs, fmt.Sprintf("--field-selector=metadata.name=%s", providerName), "--kubeconfig", tt.cluster.KubeconfigFile}) tt.Expect(tt.k.CheckProviderExists(tt.ctx, tt.cluster.KubeconfigFile, providerName, providerNs)) } @@ -2774,7 +2813,7 @@ func TestKubectlDeletePackageResources(t *testing.T) { tt := newKubectlTest(t) tt.e.EXPECT().Execute( tt.ctx, - "delete", "pbc", "clusterName", "--kubeconfig", tt.kubeconfig, "--namespace", "eksa-packages", "--ignore-not-found=true", + "delete", "packagebundlecontroller.packages.eks.amazonaws.com", "clusterName", "--kubeconfig", tt.kubeconfig, "--namespace", "eksa-packages", "--ignore-not-found=true", ).Return(*bytes.NewBufferString("//"), nil) tt.e.EXPECT().Execute( tt.ctx, @@ -2788,7 +2827,7 @@ func TestKubectlDeletePackageResources(t *testing.T) { tt := newKubectlTest(t) tt.e.EXPECT().Execute( tt.ctx, - "delete", "pbc", "clusterName", "--kubeconfig", tt.kubeconfig, "--namespace", "eksa-packages", "--ignore-not-found=true", + "delete", "packagebundlecontroller.packages.eks.amazonaws.com", "clusterName", "--kubeconfig", tt.kubeconfig, "--namespace", "eksa-packages", "--ignore-not-found=true", ).Return(*bytes.NewBufferString("//"), fmt.Errorf("bam")) tt.Expect(tt.k.DeletePackageResources(tt.ctx, tt.cluster, "clusterName")).To(MatchError(ContainSubstring("bam"))) @@ -2798,7 +2837,7 @@ func TestKubectlDeletePackageResources(t *testing.T) { tt := newKubectlTest(t) tt.e.EXPECT().Execute( tt.ctx, - "delete", "pbc", "clusterName", 
"--kubeconfig", tt.kubeconfig, "--namespace", "eksa-packages", "--ignore-not-found=true", + "delete", "packagebundlecontroller.packages.eks.amazonaws.com", "clusterName", "--kubeconfig", tt.kubeconfig, "--namespace", "eksa-packages", "--ignore-not-found=true", ).Return(*bytes.NewBufferString("//"), nil) tt.e.EXPECT().Execute( tt.ctx, @@ -3093,7 +3132,7 @@ func TestGetPackageBundleController(t *testing.T) { t.Errorf("marshaling test service: %s", err) } ret := bytes.NewBuffer(respJSON) - expectedParam := []string{"get", "pbc", "testcluster", "-o", "json", "--kubeconfig", "c.kubeconfig", "--namespace", "eksa-packages", "--ignore-not-found=true"} + expectedParam := []string{"get", "packagebundlecontroller.packages.eks.amazonaws.com", "testcluster", "-o", "json", "--kubeconfig", "c.kubeconfig", "--namespace", "eksa-packages", "--ignore-not-found=true"} tt.e.EXPECT().Execute(gomock.Any(), gomock.Eq(expectedParam)).Return(*ret, nil).AnyTimes() if _, err := tt.k.GetPackageBundleController(tt.ctx, tt.cluster.KubeconfigFile, "testcluster"); err != nil { t.Errorf("Kubectl.GetPackageBundleController() error = %v, want nil", err) @@ -3112,9 +3151,9 @@ func TestGetPackageBundleList(t *testing.T) { t.Errorf("marshaling test service: %s", err) } ret := bytes.NewBuffer(respJSON) - expectedParam := []string{"get", "packagebundles", "-o", "jsonpath='{.items}'", "--kubeconfig", "c.kubeconfig", "-n", "eksa-packages"} + expectedParam := []string{"get", "packagebundles.packages.eks.amazonaws.com", "-o", "jsonpath='{.items}'", "--kubeconfig", "c.kubeconfig", "-n", "eksa-packages"} tt.e.EXPECT().Execute(gomock.Any(), gomock.Eq(expectedParam)).Return(*ret, nil).AnyTimes() - expectedParam = []string{"get", "packagebundle", "-o", "json", "--kubeconfig", "c.kubeconfig", "--namespace", "eksa-packages", "--ignore-not-found=true"} + expectedParam = []string{"get", "packagebundles.packages.eks.amazonaws.com", "-o", "json", "--kubeconfig", "c.kubeconfig", "--namespace", "eksa-packages", 
"--ignore-not-found=true"} tt.e.EXPECT().Execute(gomock.Any(), gomock.Eq(expectedParam)).Return(*ret, nil).AnyTimes() if _, err := tt.k.GetPackageBundleList(tt.ctx, tt.cluster.KubeconfigFile); err != nil { t.Errorf("Kubectl.GetPackageBundleList() error = %v, want nil", err) @@ -3319,7 +3358,7 @@ func TestKubectlHasCRD(t *testing.T) { const kubeconfig = "kubeconfig" var b bytes.Buffer - params := []string{"get", "crd", crd, "--kubeconfig", kubeconfig} + params := []string{"get", "customresourcedefinition", crd, "--kubeconfig", kubeconfig} e.EXPECT().Execute(ctx, gomock.Eq(params)).Return(b, tt.Error) r, err := k.HasCRD(context.Background(), crd, kubeconfig) @@ -3362,7 +3401,7 @@ func TestKubectlDeleteCRD(t *testing.T) { const kubeconfig = "kubeconfig" var b bytes.Buffer - params := []string{"delete", "crd", crd, "--kubeconfig", kubeconfig} + params := []string{"delete", "customresourcedefinition", crd, "--kubeconfig", kubeconfig} e.EXPECT().Execute(ctx, gomock.Eq(params)).Return(b, tt.Error) err := k.DeleteCRD(context.Background(), crd, kubeconfig) diff --git a/pkg/networking/cilium/upgrader.go b/pkg/networking/cilium/upgrader.go index 710309ff4ecb..f8d04c94732d 100644 --- a/pkg/networking/cilium/upgrader.go +++ b/pkg/networking/cilium/upgrader.go @@ -32,6 +32,9 @@ type UpgradeTemplater interface { type Upgrader struct { templater UpgradeTemplater client KubernetesClient + + // skipUpgrade indicates Cilium upgrades should be skipped. + skipUpgrade bool } // NewUpgrader constructs a new Upgrader. @@ -44,6 +47,11 @@ func NewUpgrader(client KubernetesClient, templater UpgradeTemplater) *Upgrader // Upgrade configures a Cilium installation to match the desired state in the cluster Spec. 
func (u *Upgrader) Upgrade(ctx context.Context, cluster *types.Cluster, currentSpec, newSpec *cluster.Spec, namespaces []string) (*types.ChangeDiff, error) { + if u.skipUpgrade { + logger.V(1).Info("Cilium upgrade skipped") + return nil, nil + } + diff := ciliumChangeDiff(currentSpec, newSpec) chartValuesChanged := ciliumHelmChartValuesChanged(currentSpec, newSpec) if diff == nil && !chartValuesChanged { @@ -175,3 +183,8 @@ func (u *Upgrader) RunPostControlPlaneUpgradeSetup(ctx context.Context, cluster } return nil } + +// SetSkipUpgrade configures u to skip the upgrade process. +func (u *Upgrader) SetSkipUpgrade(v bool) { + u.skipUpgrade = v +} diff --git a/pkg/providers/common/common.go b/pkg/providers/common/common.go index 3a689ad4ffad..bbf638e29bf1 100644 --- a/pkg/providers/common/common.go +++ b/pkg/providers/common/common.go @@ -121,6 +121,13 @@ func GetCAPIBottlerocketSettingsConfig(config *v1alpha1.BottlerocketConfiguratio b.Kubernetes.ClusterDNSIPs = config.Kubernetes.ClusterDNSIPs } } + if config.Kernel != nil { + if config.Kernel.SysctlSettings != nil { + b.Kernel = &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: config.Kernel.SysctlSettings, + } + } + } brMap := map[string]*v1beta1.BottlerocketSettings{ "bottlerocket": b, diff --git a/pkg/providers/common/common_test.go b/pkg/providers/common/common_test.go index 7706682d7837..88ec7b096f10 100644 --- a/pkg/providers/common/common_test.go +++ b/pkg/providers/common/common_test.go @@ -32,6 +32,11 @@ const ( clusterDNSIPs: - 1.2.3.4 - 5.6.7.8` + + kernelSysctlConfig = `bottlerocket: + kernel: + sysctlSettings: + foo: bar` ) func TestGetCAPIBottlerocketSettingsConfig(t *testing.T) { @@ -85,6 +90,17 @@ func TestGetCAPIBottlerocketSettingsConfig(t *testing.T) { }, expected: maxPodsConfig, }, + { + name: "with kernel sysctl config", + config: &v1alpha1.BottlerocketConfiguration{ + Kernel: &v1beta1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + }, + }, + }, + 
expected: kernelSysctlConfig, + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { diff --git a/pkg/providers/nutanix/template.go b/pkg/providers/nutanix/template.go index cd934576d4ce..48b53cde67ae 100644 --- a/pkg/providers/nutanix/template.go +++ b/pkg/providers/nutanix/template.go @@ -116,7 +116,10 @@ func (ntb *TemplateBuilder) GenerateEKSASpecSecret(clusterSpec *cluster.Spec, bu // EKSASecretName returns the name of the secret containing the credentials for the nutanix prism central and is used by the // EKS-Anywhere controller. func EKSASecretName(spec *cluster.Spec) string { - return spec.NutanixDatacenter.Spec.CredentialRef.Name + if spec.NutanixDatacenter.Spec.CredentialRef != nil { + return spec.NutanixDatacenter.Spec.CredentialRef.Name + } + return constants.NutanixCredentialsName } func (ntb *TemplateBuilder) generateSpecSecret(secretName string, creds credentials.BasicAuthCredential, buildOptions ...providers.BuildMapOption) ([]byte, error) { diff --git a/pkg/providers/nutanix/template_test.go b/pkg/providers/nutanix/template_test.go index 528226008ea0..5786b113969b 100644 --- a/pkg/providers/nutanix/template_test.go +++ b/pkg/providers/nutanix/template_test.go @@ -98,6 +98,25 @@ func TestNewNutanixTemplateBuilderGenerateSpecSecretFailure(t *testing.T) { assert.Error(t, err) } +func TestNewNutanixTemplateBuilderGenerateSpecSecretDefaultCreds(t *testing.T) { + dcConf, machineConf, workerConfs := minimalNutanixConfigSpec(t) + t.Setenv(constants.NutanixUsernameKey, "admin") + t.Setenv(constants.NutanixPasswordKey, "password") + creds := GetCredsFromEnv() + builder := NewNutanixTemplateBuilder(&dcConf.Spec, &machineConf.Spec, &machineConf.Spec, workerConfs, creds, time.Now) + assert.NotNil(t, builder) + + buildSpec := test.NewFullClusterSpec(t, "testdata/eksa-cluster-no-credentialref.yaml") + + secretSpec, err := builder.GenerateCAPISpecSecret(buildSpec) + assert.NoError(t, err) + assert.NotNil(t, secretSpec) + + secretSpec, err = 
builder.GenerateEKSASpecSecret(buildSpec) + assert.NoError(t, err) + assert.NotNil(t, secretSpec) +} + func TestNutanixTemplateBuilderGenerateCAPISpecForCreateWithAutoscalingConfiguration(t *testing.T) { dcConf, machineConf, workerConfs := minimalNutanixConfigSpec(t) diff --git a/pkg/providers/nutanix/testdata/eksa-cluster-no-credentialref.yaml b/pkg/providers/nutanix/testdata/eksa-cluster-no-credentialref.yaml new file mode 100644 index 000000000000..964237eab26a --- /dev/null +++ b/pkg/providers/nutanix/testdata/eksa-cluster-no-credentialref.yaml @@ -0,0 +1,72 @@ +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: Cluster +metadata: + name: eksa-unit-test + namespace: default +spec: + kubernetesVersion: "1.19" + controlPlaneConfiguration: + name: eksa-unit-test + count: 3 + endpoint: + host: test-ip + machineGroupRef: + name: eksa-unit-test + kind: NutanixMachineConfig + workerNodeGroupConfigurations: + - count: 4 + name: eksa-unit-test + machineGroupRef: + name: eksa-unit-test + kind: NutanixMachineConfig + externalEtcdConfiguration: + name: eksa-unit-test + count: 3 + machineGroupRef: + name: eksa-unit-test + kind: NutanixMachineConfig + datacenterRef: + kind: NutanixDatacenterConfig + name: eksa-unit-test + clusterNetwork: + cni: "cilium" + pods: + cidrBlocks: + - 192.168.0.0/16 + services: + cidrBlocks: + - 10.96.0.0/12 +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: NutanixDatacenterConfig +metadata: + name: eksa-unit-test + namespace: default +spec: + endpoint: "prism.nutanix.com" + port: 9440 +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: NutanixMachineConfig +metadata: + name: eksa-unit-test + namespace: default +spec: + vcpusPerSocket: 1 + vcpuSockets: 4 + memorySize: 8Gi + image: + type: "name" + name: "prism-image" + cluster: + type: "name" + name: "prism-cluster" + subnet: + type: "name" + name: "prism-subnet" + systemDiskSize: 40Gi + osFamily: "ubuntu" + users: + - name: "mySshUsername" + sshAuthorizedKeys: + - 
"mySshAuthorizedKey" diff --git a/pkg/providers/snow/apibuilder.go b/pkg/providers/snow/apibuilder.go index 24edb557d619..1c987ff94698 100644 --- a/pkg/providers/snow/apibuilder.go +++ b/pkg/providers/snow/apibuilder.go @@ -53,7 +53,9 @@ func KubeadmControlPlane(log logr.Logger, clusterSpec *cluster.Spec, snowMachine addStackedEtcdExtraArgsInKubeadmControlPlane(kcp, clusterSpec.Cluster.Spec.ExternalEtcdConfiguration) - osFamily := clusterSpec.SnowMachineConfig(clusterSpec.Cluster.Spec.ControlPlaneConfiguration.MachineGroupRef.Name).OSFamily() + machineConfig := clusterSpec.SnowMachineConfig(clusterSpec.Cluster.Spec.ControlPlaneConfiguration.MachineGroupRef.Name) + + osFamily := machineConfig.OSFamily() switch osFamily { case v1alpha1.Bottlerocket: clusterapi.SetProxyConfigInKubeadmControlPlaneForBottlerocket(kcp, clusterSpec.Cluster) @@ -63,6 +65,7 @@ func KubeadmControlPlane(log logr.Logger, clusterSpec *cluster.Spec, snowMachine clusterapi.SetBottlerocketControlContainerImageInKubeadmControlPlane(kcp, clusterSpec.VersionsBundle) clusterapi.SetUnstackedEtcdConfigInKubeadmControlPlaneForBottlerocket(kcp, clusterSpec.Cluster.Spec.ExternalEtcdConfiguration) addBottlerocketBootstrapSnowInKubeadmControlPlane(kcp, clusterSpec.VersionsBundle.Snow.BottlerocketBootstrapSnow) + clusterapi.SetBottlerocketHostConfigInKubeadmControlPlane(kcp, machineConfig.Spec.HostOSConfiguration) case v1alpha1.Ubuntu: kcp.Spec.KubeadmConfigSpec.PreKubeadmCommands = append(kcp.Spec.KubeadmConfigSpec.PreKubeadmCommands, @@ -100,7 +103,8 @@ func KubeadmConfigTemplate(log logr.Logger, clusterSpec *cluster.Spec, workerNod joinConfigKubeletExtraArg := kct.Spec.Template.Spec.JoinConfiguration.NodeRegistration.KubeletExtraArgs joinConfigKubeletExtraArg["provider-id"] = "aws-snow:////'{{ ds.meta_data.instance_id }}'" - osFamily := clusterSpec.SnowMachineConfig(workerNodeGroupConfig.MachineGroupRef.Name).OSFamily() + machineConfig := 
clusterSpec.SnowMachineConfig(workerNodeGroupConfig.MachineGroupRef.Name) + osFamily := machineConfig.OSFamily() switch osFamily { case v1alpha1.Bottlerocket: clusterapi.SetProxyConfigInKubeadmConfigTemplateForBottlerocket(kct, clusterSpec.Cluster) @@ -109,6 +113,7 @@ func KubeadmConfigTemplate(log logr.Logger, clusterSpec *cluster.Spec, workerNod clusterapi.SetBottlerocketAdminContainerImageInKubeadmConfigTemplate(kct, clusterSpec.VersionsBundle) clusterapi.SetBottlerocketControlContainerImageInKubeadmConfigTemplate(kct, clusterSpec.VersionsBundle) addBottlerocketBootstrapSnowInKubeadmConfigTemplate(kct, clusterSpec.VersionsBundle.Snow.BottlerocketBootstrapSnow) + clusterapi.SetBottlerocketHostConfigInKubeadmConfigTemplate(kct, machineConfig.Spec.HostOSConfiguration) case v1alpha1.Ubuntu: kct.Spec.Template.Spec.PreKubeadmCommands = append(kct.Spec.Template.Spec.PreKubeadmCommands, @@ -139,13 +144,15 @@ func machineDeployment(clusterSpec *cluster.Spec, workerNodeGroupConfig v1alpha1 func EtcdadmCluster(log logr.Logger, clusterSpec *cluster.Spec, snowMachineTemplate *snowv1.AWSSnowMachineTemplate) *etcdv1.EtcdadmCluster { etcd := clusterapi.EtcdadmCluster(clusterSpec, snowMachineTemplate) - osFamily := clusterSpec.SnowMachineConfig(clusterSpec.Cluster.Spec.ExternalEtcdConfiguration.MachineGroupRef.Name).OSFamily() + machineConfig := clusterSpec.SnowMachineConfig(clusterSpec.Cluster.Spec.ExternalEtcdConfiguration.MachineGroupRef.Name) + osFamily := machineConfig.OSFamily() switch osFamily { case v1alpha1.Bottlerocket: clusterapi.SetBottlerocketInEtcdCluster(etcd, clusterSpec.VersionsBundle) clusterapi.SetBottlerocketAdminContainerImageInEtcdCluster(etcd, clusterSpec.VersionsBundle.BottleRocketHostContainers.Admin) clusterapi.SetBottlerocketControlContainerImageInEtcdCluster(etcd, clusterSpec.VersionsBundle.BottleRocketHostContainers.Control) addBottlerocketBootstrapSnowInEtcdCluster(etcd, clusterSpec.VersionsBundle.Snow.BottlerocketBootstrapSnow) + 
clusterapi.SetBottlerocketHostConfigInEtcdCluster(etcd, machineConfig.Spec.HostOSConfiguration) case v1alpha1.Ubuntu: clusterapi.SetUbuntuConfigInEtcdCluster(etcd, clusterSpec.VersionsBundle.KubeDistro.EtcdVersion) diff --git a/pkg/providers/snow/apibuilder_test.go b/pkg/providers/snow/apibuilder_test.go index b78e2cc4ecf5..437b87f12094 100644 --- a/pkg/providers/snow/apibuilder_test.go +++ b/pkg/providers/snow/apibuilder_test.go @@ -598,6 +598,81 @@ func TestKubeadmControlPlaneWithProxyConfigBottlerocket(t *testing.T) { } } +var bottlerocketAdditionalSettingsTests = []struct { + name string + settings *v1alpha1.HostOSConfiguration + wantConfig *bootstrapv1.BottlerocketSettings +}{ + { + name: "with kernel sysctl settings", + settings: &v1alpha1.HostOSConfiguration{ + BottlerocketConfiguration: &v1alpha1.BottlerocketConfiguration{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + }, + }, + }, + }, + wantConfig: &bootstrapv1.BottlerocketSettings{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + }, + }, + }, + }, +} + +func TestKubeadmControlPlaneWithBottlerocketAdditionalSettings(t *testing.T) { + for _, tt := range bottlerocketAdditionalSettingsTests { + t.Run(tt.name, func(t *testing.T) { + g := newApiBuilerTest(t) + g.clusterSpec.SnowMachineConfig("test-cp").Spec.HostOSConfiguration = tt.settings + g.clusterSpec.SnowMachineConfig("test-cp").Spec.OSFamily = v1alpha1.Bottlerocket + controlPlaneMachineTemplate := snow.MachineTemplate("snow-test-control-plane-1", g.machineConfigs["test-cp"], nil) + got, err := snow.KubeadmControlPlane(g.logger, g.clusterSpec, controlPlaneMachineTemplate) + g.Expect(err).To(Succeed()) + want := wantKubeadmControlPlane() + want.Spec.KubeadmConfigSpec.Format = "bottlerocket" + want.Spec.KubeadmConfigSpec.PreKubeadmCommands = []string{} + want.Spec.KubeadmConfigSpec.ClusterConfiguration.BottlerocketBootstrap = bootstrap + 
want.Spec.KubeadmConfigSpec.ClusterConfiguration.BottlerocketAdmin = admin + want.Spec.KubeadmConfigSpec.ClusterConfiguration.BottlerocketControl = control + want.Spec.KubeadmConfigSpec.ClusterConfiguration.BottlerocketCustomBootstrapContainers = bootstrapCustom + want.Spec.KubeadmConfigSpec.ClusterConfiguration.Pause = pause + want.Spec.KubeadmConfigSpec.JoinConfiguration.BottlerocketBootstrap = bootstrap + want.Spec.KubeadmConfigSpec.JoinConfiguration.BottlerocketAdmin = admin + want.Spec.KubeadmConfigSpec.JoinConfiguration.BottlerocketControl = control + want.Spec.KubeadmConfigSpec.JoinConfiguration.BottlerocketCustomBootstrapContainers = bootstrapCustom + want.Spec.KubeadmConfigSpec.JoinConfiguration.Pause = pause + want.Spec.KubeadmConfigSpec.ClusterConfiguration.Bottlerocket = tt.wantConfig + want.Spec.KubeadmConfigSpec.JoinConfiguration.Bottlerocket = tt.wantConfig + want.Spec.KubeadmConfigSpec.ClusterConfiguration.ControllerManager.ExtraVolumes = append(want.Spec.KubeadmConfigSpec.ClusterConfiguration.ControllerManager.ExtraVolumes, + bootstrapv1.HostPathMount{ + HostPath: "/var/lib/kubeadm/controller-manager.conf", + MountPath: "/etc/kubernetes/controller-manager.conf", + Name: "kubeconfig", + PathType: "File", + ReadOnly: true, + }, + ) + want.Spec.KubeadmConfigSpec.ClusterConfiguration.Scheduler.ExtraVolumes = append(want.Spec.KubeadmConfigSpec.ClusterConfiguration.Scheduler.ExtraVolumes, + bootstrapv1.HostPathMount{ + HostPath: "/var/lib/kubeadm/scheduler.conf", + MountPath: "/etc/kubernetes/scheduler.conf", + Name: "kubeconfig", + PathType: "File", + ReadOnly: true, + }, + ) + want.Spec.KubeadmConfigSpec.ClusterConfiguration.CertificatesDir = "/var/lib/kubeadm/pki" + + g.Expect(got).To(Equal(want)) + }) + } +} + func wantKubeadmConfigTemplate() *bootstrapv1.KubeadmConfigTemplate { return &bootstrapv1.KubeadmConfigTemplate{ TypeMeta: metav1.TypeMeta{ @@ -970,6 +1045,11 @@ func wantEtcdClusterBottlerocket() *etcdv1.EtcdadmCluster { Mode: "always", }, }, 
+ Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + }, + }, } return etcd } @@ -1015,6 +1095,15 @@ func TestEtcdadmClusterBottlerocket(t *testing.T) { }, Spec: v1alpha1.SnowMachineConfigSpec{ OSFamily: "bottlerocket", + HostOSConfiguration: &v1alpha1.HostOSConfiguration{ + BottlerocketConfiguration: &v1alpha1.BottlerocketConfiguration{ + Kernel: &bootstrapv1.BottlerocketKernelSettings{ + SysctlSettings: map[string]string{ + "foo": "bar", + }, + }, + }, + }, }, } tt.machineConfigs["test-etcd"] = tt.clusterSpec.SnowMachineConfigs["test-etcd"] diff --git a/pkg/providers/tinkerbell/hardware/catalogue_hardware.go b/pkg/providers/tinkerbell/hardware/catalogue_hardware.go index f9ae68250162..dce8da48d910 100644 --- a/pkg/providers/tinkerbell/hardware/catalogue_hardware.go +++ b/pkg/providers/tinkerbell/hardware/catalogue_hardware.go @@ -47,41 +47,34 @@ func (c *Catalogue) InsertHardware(hardware *tinkv1alpha1.Hardware) error { return nil } -// RemoveHardware removes a specific hardware at a given index from the catalogue. -func (c *Catalogue) RemoveHardware(hardware *tinkv1alpha1.Hardware, index int) error { - if err := c.hardwareIndex.Remove(hardware); err != nil { - return err - } - - if index >= len(c.hardware) { - return fmt.Errorf("index out of range: %d", index) - } - c.hardware[index] = c.hardware[len(c.hardware)-1] - c.hardware[len(c.hardware)-1] = nil - c.hardware = c.hardware[:len(c.hardware)-1] - - return nil -} - // RemoveHardwares removes a slice of hardwares from the catalogue. 
 func (c *Catalogue) RemoveHardwares(hardware []tinkv1alpha1.Hardware) error {
-	m := make(map[string]int, len(c.hardware))
-	for i, hw := range c.hardware {
-		m[hw.Name+":"+hw.Namespace] = i
+	m := make(map[string]bool, len(hardware))
+	for _, hw := range hardware {
+		m[getRemoveKey(hw)] = true
 	}
-	for _, hw := range hardware {
-		key := hw.Name + ":" + hw.Namespace
-		if _, ok := m[key]; ok {
-			if err := c.RemoveHardware(c.hardware[m[key]], m[key]); err != nil {
+	diff := []*tinkv1alpha1.Hardware{}
+	for i, hw := range c.hardware {
+		key := getRemoveKey(*hw)
+		if _, ok := m[key]; !ok {
+			diff = append(diff, c.hardware[i])
+		} else {
+			if err := c.hardwareIndex.Remove(c.hardware[i]); err != nil {
 				return err
 			}
-			delete(m, key)
 		}
 	}
+
+	c.hardware = diff
 	return nil
 }
 
+// getRemoveKey returns the key used to search for and remove hardware.
+func getRemoveKey(hardware tinkv1alpha1.Hardware) string {
+	return hardware.Name + ":" + hardware.Namespace
+}
+
 // AllHardware retrieves a copy of the catalogued Hardware instances.
func (c *Catalogue) AllHardware() []*tinkv1alpha1.Hardware { hardware := make([]*tinkv1alpha1.Hardware, len(c.hardware)) diff --git a/pkg/providers/tinkerbell/hardware/catalogue_hardware_test.go b/pkg/providers/tinkerbell/hardware/catalogue_hardware_test.go index 656291a2380a..f41bbf9f6cd2 100644 --- a/pkg/providers/tinkerbell/hardware/catalogue_hardware_test.go +++ b/pkg/providers/tinkerbell/hardware/catalogue_hardware_test.go @@ -51,7 +51,7 @@ func TestCatalogue_Hardwares_Remove(t *testing.T) { g.Expect(catalogue.TotalHardware()).To(gomega.Equal(1)) } -func TestCatalogue_Hardware_RemoveFail(t *testing.T) { +func TestCatalogue_Hardwares_RemoveDuplicates(t *testing.T) { g := gomega.NewWithT(t) catalogue := hardware.NewCatalogue() @@ -63,19 +63,118 @@ func TestCatalogue_Hardware_RemoveFail(t *testing.T) { }, }) g.Expect(err).ToNot(gomega.HaveOccurred()) - g.Expect(catalogue.RemoveHardware(&v1alpha1.Hardware{ + err = catalogue.InsertHardware(&v1alpha1.Hardware{ + ObjectMeta: v1.ObjectMeta{ + Name: "hw2", + Namespace: "namespace", + }, + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + err = catalogue.InsertHardware(&v1alpha1.Hardware{ ObjectMeta: v1.ObjectMeta{ Name: "hw2", Namespace: "namespace", }, - }, 1)).To(gomega.HaveOccurred()) + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.RemoveHardwares([]v1alpha1.Hardware{ + { + ObjectMeta: v1.ObjectMeta{ + Name: "hw2", + Namespace: "namespace", + }, + }, + })).ToNot(gomega.HaveOccurred()) g.Expect(catalogue.TotalHardware()).To(gomega.Equal(1)) - g.Expect(catalogue.RemoveHardware(&v1alpha1.Hardware{ +} + +func TestCatalogue_Hardwares_RemoveExtraHw(t *testing.T) { + g := gomega.NewWithT(t) + + catalogue := hardware.NewCatalogue() + + err := catalogue.InsertHardware(&v1alpha1.Hardware{ + ObjectMeta: v1.ObjectMeta{ + Name: "hw1", + Namespace: "namespace", + }, + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + err = catalogue.InsertHardware(&v1alpha1.Hardware{ ObjectMeta: v1.ObjectMeta{ Name: "hw2", 
Namespace: "namespace", }, - }, 2)).To(gomega.HaveOccurred()) + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + err = catalogue.InsertHardware(&v1alpha1.Hardware{ + ObjectMeta: v1.ObjectMeta{ + Name: "hw3", + Namespace: "namespace", + }, + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.RemoveHardwares([]v1alpha1.Hardware{ + { + ObjectMeta: v1.ObjectMeta{ + Name: "hw2", + Namespace: "namespace", + }, + }, + { + ObjectMeta: v1.ObjectMeta{ + Name: "hw3", + Namespace: "namespace", + }, + }, + })).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.TotalHardware()).To(gomega.Equal(1)) +} + +func TestCatalogue_Hardwares_RemoveNothing(t *testing.T) { + g := gomega.NewWithT(t) + + catalogue := hardware.NewCatalogue() + + err := catalogue.InsertHardware(&v1alpha1.Hardware{ + ObjectMeta: v1.ObjectMeta{ + Name: "hw1", + Namespace: "namespace", + }, + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.RemoveHardwares([]v1alpha1.Hardware{ + { + ObjectMeta: v1.ObjectMeta{ + Name: "hw2", + Namespace: "namespace", + }, + }, + })).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.TotalHardware()).To(gomega.Equal(1)) +} + +func TestCatalogue_Hardwares_RemoveEverything(t *testing.T) { + g := gomega.NewWithT(t) + + catalogue := hardware.NewCatalogue() + + err := catalogue.InsertHardware(&v1alpha1.Hardware{ + ObjectMeta: v1.ObjectMeta{ + Name: "hw1", + Namespace: "namespace", + }, + }) + g.Expect(err).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.RemoveHardwares([]v1alpha1.Hardware{ + { + ObjectMeta: v1.ObjectMeta{ + Name: "hw1", + Namespace: "namespace", + }, + }, + })).ToNot(gomega.HaveOccurred()) + g.Expect(catalogue.TotalHardware()).To(gomega.Equal(0)) } func TestCatalogue_Hardware_UnknownIndexErrors(t *testing.T) { diff --git a/pkg/providers/tinkerbell/reconciler/reconciler.go b/pkg/providers/tinkerbell/reconciler/reconciler.go index f00ff7fa1eae..18724f0463a2 100644 --- a/pkg/providers/tinkerbell/reconciler/reconciler.go +++ 
b/pkg/providers/tinkerbell/reconciler/reconciler.go
@@ -3,6 +3,7 @@ package reconciler
 import (
 	"context"
 	"fmt"
+	"reflect"
 
 	"github.com/go-logr/logr"
 	"github.com/pkg/errors"
@@ -29,8 +30,6 @@ const (
 	NewClusterOperation Operation = "NewCluster"
 	// K8sVersionUpgradeOperation indicates to upgrade all nodes to a new Kubernetes version.
 	K8sVersionUpgradeOperation Operation = "K8sVersionUpgrade"
-	// ScaleOperation indicates to change the node size of the cluster.
-	ScaleOperation Operation = "Scale"
 	// NoChange indicates no change made to cluster during periodical sync.
 	NoChange Operation = "NoChange"
 )
@@ -101,7 +100,6 @@ func (r *Reconciler) Reconcile(ctx context.Context, log logr.Logger, cluster *an
 		r.ValidateClusterSpec,
 		r.GenerateSpec,
 		r.ValidateHardware,
-		r.OmitMachineTemplate,
 		r.ValidateDatacenterConfig,
 		r.ValidateRufioMachines,
 		r.CleanupStatusAfterValidate,
@@ -157,6 +155,11 @@ func (r *Reconciler) GenerateSpec(ctx context.Context, log logr.Logger, tinkerbe
 	}
 	tinkerbellScope.Workers = w
+	err = r.omitTinkerbellMachineTemplates(ctx, tinkerbellScope)
+	if err != nil {
+		return controller.Result{}, err
+	}
+
 	return controller.Result{}, nil
 }
 
@@ -169,66 +172,62 @@ func (r *Reconciler) DetectOperation(ctx context.Context, log logr.Logger, tinke
 		return "", err
 	}
 	if currentKCP == nil {
+		log.Info("Operation detected", "operation", NewClusterOperation)
 		return NewClusterOperation, nil
 	}
-	wantVersionChange := currentKCP.Spec.Version != tinkerbellScope.ControlPlane.KubeadmControlPlane.Spec.Version
-	cpWantScaleChange := *currentKCP.Spec.Replicas != *tinkerbellScope.ControlPlane.KubeadmControlPlane.Spec.Replicas
-	workerWantScaleChange, err := r.WorkerReplicasDiff(ctx, tinkerbellScope)
+	// The restriction disallowing simultaneous scaling and rolling upgrades is enforced by the webhook.
+	if currentKCP.Spec.Version != tinkerbellScope.ControlPlane.KubeadmControlPlane.Spec.Version {
+		log.Info("Operation detected", "operation", K8sVersionUpgradeOperation)
+		return K8sVersionUpgradeOperation, nil
+	}
+
+	log.Info("Operation detected", "operation", NoChange)
+	return NoChange, nil
+}
+
+func (r *Reconciler) omitTinkerbellMachineTemplates(ctx context.Context, tinkerbellScope *Scope) error { //nolint:gocyclo
+	currentKCP, err := controller.GetKubeadmControlPlane(ctx, r.client, tinkerbellScope.ClusterSpec.Cluster)
 	if err != nil {
-		return "", err
+		return errors.Wrap(err, "failed to get kubeadmcontrolplane")
 	}
-	wantScaleChange := workerWantScaleChange || cpWantScaleChange
-	// The restriction that not allowing scaling and rolling is covered in webhook.
-	op := NoChange
-	switch {
-	case wantScaleChange:
-		op = ScaleOperation
-	case wantVersionChange:
-		op = K8sVersionUpgradeOperation
+	if currentKCP == nil || currentKCP.Spec.Version != tinkerbellScope.ControlPlane.KubeadmControlPlane.Spec.Version {
+		return nil
+	}
+
+	cpMachineTemplate, err := tinkerbell.GetMachineTemplate(ctx, clientutil.NewKubeClient(r.client), currentKCP.Spec.MachineTemplate.InfrastructureRef.Name, currentKCP.GetNamespace())
+	if err != nil && !apierrors.IsNotFound(err) {
+		return errors.Wrap(err, "failed to get controlplane machinetemplate")
 	}
-	log.Info("Operation detected", "operation", op)
-	return op, nil
-}
 
-// WorkerReplicasDiff indicates if there's difference between current and desired worker node groups.
-func (r *Reconciler) WorkerReplicasDiff(ctx context.Context, tinkerbellScope *Scope) (bool, error) {
-	workerWantScaleChange := false
-	for _, wnc := range tinkerbellScope.ClusterSpec.Cluster.Spec.WorkerNodeGroupConfigurations {
-		md := &clusterv1.MachineDeployment{}
-		mdName := clusterapi.MachineDeploymentName(tinkerbellScope.ClusterSpec.Cluster, wnc)
-		key := types.NamespacedName{Namespace: constants.EksaSystemNamespace, Name: mdName}
-		err := r.client.Get(ctx, key, md)
+	if cpMachineTemplate != nil {
+		tinkerbellScope.ControlPlane.ControlPlaneMachineTemplate = nil
+		tinkerbellScope.ControlPlane.KubeadmControlPlane.Spec.MachineTemplate.InfrastructureRef.Name = cpMachineTemplate.GetName()
+	}
+
+	for i, wg := range tinkerbellScope.Workers.Groups {
+		machineDeployment, err := controller.GetMachineDeployment(ctx, r.client, wg.MachineDeployment.GetName())
 		if err != nil {
-			if apierrors.IsNotFound(err) {
-				workerWantScaleChange = true
-				break
-			} else {
-				return workerWantScaleChange, errors.Wrap(err, "comparing worker replicas diff")
-			}
+			return errors.Wrap(err, "failed to get workernode group machinedeployment")
 		}
-		if int(*md.Spec.Replicas) != *wnc.Count {
-			workerWantScaleChange = true
-			break
+		if machineDeployment == nil ||
+			!reflect.DeepEqual(machineDeployment.Spec.Template.Spec.Version, tinkerbellScope.Workers.Groups[i].MachineDeployment.Spec.Template.Spec.Version) {
+			continue
 		}
-	}
-	return workerWantScaleChange, nil
-}
 
-// OmitMachineTemplate omits control plane and worker machine template on scaling update.
-func (r *Reconciler) OmitMachineTemplate(ctx context.Context, log logr.Logger, tinkerbellScope *Scope) (controller.Result, error) {
-	log = log.WithValues("phase", "OmitMachineTemplate")
-	o, err := r.DetectOperation(ctx, log, tinkerbellScope)
-	if err != nil {
-		return controller.Result{}, err
-	}
-	if o == ScaleOperation || o == NoChange {
-		tinkerbell.OmitTinkerbellCPMachineTemplate(tinkerbellScope.ControlPlane)
-		tinkerbell.OmitTinkerbellWorkersMachineTemplate(tinkerbellScope.Workers)
-		log.Info("Machine Template omitted")
+		workerMachineTemplate, err := tinkerbell.GetMachineTemplate(ctx, clientutil.NewKubeClient(r.client), machineDeployment.Spec.Template.Spec.InfrastructureRef.Name, machineDeployment.GetNamespace())
+		if err != nil && !apierrors.IsNotFound(err) {
+			return errors.Wrap(err, "failed to get workernode group machinetemplate")
+		}
+
+		if workerMachineTemplate != nil {
+			tinkerbellScope.Workers.Groups[i].ProviderMachineTemplate = nil
+			tinkerbellScope.Workers.Groups[i].MachineDeployment.Spec.Template.Spec.InfrastructureRef.Name = workerMachineTemplate.GetName()
+		}
 	}
-	return controller.Result{}, nil
+
+	return nil
 }
 
 // ReconcileControlPlane applies the control plane CAPI objects to the cluster.
@@ -260,7 +259,6 @@ func (r *Reconciler) ReconcileWorkerNodes(ctx context.Context, log logr.Logger, r.GenerateSpec, r.ValidateHardware, r.ValidateRufioMachines, - r.OmitMachineTemplate, r.ReconcileWorkers, ).Run(ctx, log, NewScope(clusterSpec)) } @@ -376,7 +374,7 @@ func (r *Reconciler) ValidateHardware(ctx context.Context, log logr.Logger, tink v.Register(tinkerbell.ExtraHardwareAvailableAssertionForRollingUpgrade(kubeReader.GetCatalogue())) case NewClusterOperation: v.Register(tinkerbell.MinimumHardwareAvailableAssertionForCreate(kubeReader.GetCatalogue())) - case ScaleOperation: + case NoChange: currentKCP, err := controller.GetKubeadmControlPlane(ctx, r.client, tinkerbellScope.ClusterSpec.Cluster) if err != nil { return controller.Result{}, err diff --git a/pkg/providers/tinkerbell/reconciler/reconciler_test.go b/pkg/providers/tinkerbell/reconciler/reconciler_test.go index 1f042d142ef4..06f6616ea649 100644 --- a/pkg/providers/tinkerbell/reconciler/reconciler_test.go +++ b/pkg/providers/tinkerbell/reconciler/reconciler_test.go @@ -39,6 +39,20 @@ const ( clusterNamespace = "test-namespace" ) +func TestReconcilerGenerateSpec(t *testing.T) { + tt := newReconcilerTest(t) + tt.createAllObjs() + logger := test.NewNullLogger() + scope := tt.buildScope() + result, err := tt.reconciler().GenerateSpec(tt.ctx, logger, scope) + tt.Expect(err).NotTo(HaveOccurred()) + tt.Expect(result).To(Equal(controller.Result{})) + + tt.Expect(scope.ControlPlane).To(Equal(tinkerbellCP(workloadClusterName))) + tt.Expect(scope.Workers).To(Equal(tinkWorker(workloadClusterName))) + tt.cleanup() +} + func TestReconcilerReconcileSuccess(t *testing.T) { tt := newReconcilerTest(t) @@ -173,7 +187,6 @@ func TestReconcilerReconcileControlPlaneScaleSuccess(t *testing.T) { tt.Expect(err).NotTo(HaveOccurred()) _, err = tt.reconciler().DetectOperation(tt.ctx, logger, scope) tt.Expect(err).NotTo(HaveOccurred()) - _, _ = tt.reconciler().OmitMachineTemplate(tt.ctx, logger, scope) result, err := 
tt.reconciler().ReconcileControlPlane(tt.ctx, logger, scope) tt.Expect(err).NotTo(HaveOccurred()) @@ -397,8 +410,6 @@ func TestReconcilerReconcileWorkersScaleSuccess(t *testing.T) { scope.Workers = tinkWorker(tt.cluster.Name, func(w *tinkerbell.Workers) { w.Groups[0].MachineDeployment.Spec.Replicas = ptr.Int32(2) }) - _, err := tt.reconciler().OmitMachineTemplate(tt.ctx, logger, scope) - tt.Expect(err).NotTo(HaveOccurred()) result, err := tt.reconciler().ReconcileWorkers(tt.ctx, logger, scope) tt.Expect(err).NotTo(HaveOccurred()) @@ -539,7 +550,7 @@ func TestReconcilerValidateHardwareScalingUpdateFail(t *testing.T) { tt.Expect(err).NotTo(HaveOccurred()) op, err := tt.reconciler().DetectOperation(tt.ctx, logger, scope) tt.Expect(err).NotTo(HaveOccurred()) - tt.Expect(op).To(Equal(reconciler.ScaleOperation)) + tt.Expect(op).To(Equal(reconciler.NoChange)) result, err := tt.reconciler().ValidateHardware(tt.ctx, logger, scope) tt.Expect(err).To(BeNil(), "error should be nil to prevent requeue") @@ -624,20 +635,7 @@ func TestReconcilerValidateRufioMachinesFail(t *testing.T) { tt.cleanup() } -func TestReconcilerGenerateSpec(t *testing.T) { - tt := newReconcilerTest(t) - tt.createAllObjs() - logger := test.NewNullLogger() - scope := tt.buildScope() - result, err := tt.reconciler().GenerateSpec(tt.ctx, logger, scope) - tt.Expect(err).NotTo(HaveOccurred()) - tt.Expect(result).To(Equal(controller.Result{})) - tt.Expect(scope.ControlPlane).To(Equal(tinkerbellCP(workloadClusterName))) - tt.Expect(scope.Workers).To(Equal(tinkWorker(workloadClusterName))) - tt.cleanup() -} - -func TestReconciler_DetectOperationK8sVersionUpgrade(t *testing.T) { +func TestReconcilerDetectOperationK8sVersionUpgrade(t *testing.T) { tt := newReconcilerTest(t) tt.createAllObjs() @@ -652,7 +650,7 @@ func TestReconciler_DetectOperationK8sVersionUpgrade(t *testing.T) { tt.cleanup() } -func TestReconciler_DetectOperationExistingWorkerNodeGroupScaleUpdate(t *testing.T) { +func 
TestReconcilerDetectOperationExistingWorkerNodeGroupScaleUpdate(t *testing.T) { tt := newReconcilerTest(t) tt.createAllObjs() @@ -663,11 +661,11 @@ func TestReconciler_DetectOperationExistingWorkerNodeGroupScaleUpdate(t *testing tt.Expect(err).NotTo(HaveOccurred()) op, err := tt.reconciler().DetectOperation(tt.ctx, logger, scope) tt.Expect(err).NotTo(HaveOccurred()) - tt.Expect(op).To(Equal(reconciler.ScaleOperation)) + tt.Expect(op).To(Equal(reconciler.NoChange)) tt.cleanup() } -func TestReconciler_DetectOperationNewWorkerNodeGroupScaleUpdate(t *testing.T) { +func TestReconcilerDetectOperationNewWorkerNodeGroupScaleUpdate(t *testing.T) { tt := newReconcilerTest(t) tt.createAllObjs() @@ -688,11 +686,11 @@ func TestReconciler_DetectOperationNewWorkerNodeGroupScaleUpdate(t *testing.T) { tt.Expect(err).NotTo(HaveOccurred()) op, err := tt.reconciler().DetectOperation(tt.ctx, logger, scope) tt.Expect(err).NotTo(HaveOccurred()) - tt.Expect(op).To(Equal(reconciler.ScaleOperation)) + tt.Expect(op).To(Equal(reconciler.NoChange)) tt.cleanup() } -func TestReconciler_DetectOperationNoChanges(t *testing.T) { +func TestReconcilerDetectOperationNoChanges(t *testing.T) { tt := newReconcilerTest(t) tt.createAllObjs() @@ -706,7 +704,7 @@ func TestReconciler_DetectOperationNoChanges(t *testing.T) { tt.cleanup() } -func TestReconciler_DetectOperationNewCluster(t *testing.T) { +func TestReconcilerDetectOperationNewCluster(t *testing.T) { tt := newReconcilerTest(t) tt.createAllObjs() logger := test.NewNullLogger() @@ -720,36 +718,13 @@ func TestReconciler_DetectOperationNewCluster(t *testing.T) { tt.cleanup() } -func TestReconciler_DetectOperationFail(t *testing.T) { +func TestReconcilerDetectOperationFail(t *testing.T) { tt := newReconcilerTest(t) tt.client = fake.NewClientBuilder().WithScheme(runtime.NewScheme()).Build() _, err := tt.reconciler().DetectOperation(tt.ctx, test.NewNullLogger(), &reconciler.Scope{ClusterSpec: &clusterspec.Spec{Config: &clusterspec.Config{Cluster: 
&anywherev1.Cluster{}}}}) tt.Expect(err).To(MatchError(ContainSubstring("no kind is registered for the type"))) } -func TestWorkerReplicasDiffFail(t *testing.T) { - tt := newReconcilerTest(t) - tt.client = fake.NewClientBuilder().WithScheme(runtime.NewScheme()).Build() - scope := &reconciler.Scope{ - ClusterSpec: &clusterspec.Spec{ - Config: &clusterspec.Config{ - Cluster: &anywherev1.Cluster{ - Spec: anywherev1.ClusterSpec{ - WorkerNodeGroupConfigurations: []anywherev1.WorkerNodeGroupConfiguration{ - { - Name: "md-0", - }, - }, - }, - }, - }, - }, - } - - _, err := tt.reconciler().WorkerReplicasDiff(tt.ctx, scope) - tt.Expect(err).To(MatchError(ContainSubstring("no kind is registered for the type"))) -} - func (tt *reconcilerTest) withFakeClient() { tt.client = fake.NewClientBuilder().WithObjects(clientutil.ObjectsToClientObjects(tt.allObjs())...).Build() } diff --git a/pkg/providers/tinkerbell/template.go b/pkg/providers/tinkerbell/template.go index 941f55ffeeb5..507ae667764a 100644 --- a/pkg/providers/tinkerbell/template.go +++ b/pkg/providers/tinkerbell/template.go @@ -525,19 +525,6 @@ func buildTemplateMapMD( return values, nil } -// OmitTinkerbellCPMachineTemplate omits control plane machine template on scale update. -func OmitTinkerbellCPMachineTemplate(cp *ControlPlane) { - cp.ControlPlaneMachineTemplate = nil -} - -// OmitTinkerbellWorkersMachineTemplate omits workers machine template on scale update. 
-func OmitTinkerbellWorkersMachineTemplate(w *Workers) { - wg := w.Groups - for _, w := range wg { - w.ProviderMachineTemplate = nil - } -} - func omitTinkerbellMachineTemplate(inputSpec []byte) ([]byte, error) { var outSpec []unstructured.Unstructured resources := strings.Split(string(inputSpec), "---") diff --git a/pkg/providers/tinkerbell/testdata/cluster_bottlerocket_kernel_settings_config.yaml b/pkg/providers/tinkerbell/testdata/cluster_bottlerocket_kernel_settings_config.yaml new file mode 100644 index 000000000000..ba52cc3ad080 --- /dev/null +++ b/pkg/providers/tinkerbell/testdata/cluster_bottlerocket_kernel_settings_config.yaml @@ -0,0 +1,110 @@ +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: Cluster +metadata: + name: test + namespace: test-namespace +spec: + clusterNetwork: + cni: cilium + pods: + cidrBlocks: + - 192.168.0.0/16 + services: + cidrBlocks: + - 10.96.0.0/12 + controlPlaneConfiguration: + count: 1 + upgradeRolloutStrategy: + type: "RollingUpdate" + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + endpoint: + host: 1.2.3.4 + machineGroupRef: + name: test-cp + kind: TinkerbellMachineConfig + datacenterRef: + kind: TinkerbellDatacenterConfig + name: test + kubernetesVersion: "1.21" + managementCluster: + name: test + workerNodeGroupConfigurations: + - count: 1 + machineGroupRef: + name: test-md + kind: TinkerbellMachineConfig + upgradeRolloutStrategy: + type: "RollingUpdate" + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: TinkerbellDatacenterConfig +metadata: + name: test + namespace: test-namespace +spec: + tinkerbellIP: "5.6.7.8" + osImageURL: "https://bottlerocket.gz" + +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: TinkerbellMachineConfig +metadata: + name: test-cp + namespace: test-namespace +spec: + hardwareSelector: + type: "cp" + osFamily: bottlerocket + hostOSConfiguration: + bottlerocketConfiguration: + kubernetes: + maxPods: 50 + clusterDNSIPs: + 
- 1.2.3.4 + - 4.3.2.1 + allowedUnsafeSysctls: + - "net.core.somaxconn" + - "net.ipv4.ip_local_port_range" + kernel: + sysctlSettings: + "foo": "bar" + "abc": "def" + users: + - name: ec2-user + sshAuthorizedKeys: + - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ== testemail@test.com" +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: TinkerbellMachineConfig +metadata: + name: test-md + namespace: test-namespace +spec: + hardwareSelector: + type: "worker" + osFamily: bottlerocket + hostOSConfiguration: + bottlerocketConfiguration: + kernel: + sysctlSettings: + "foo": "bar" + "abc": "def" + kubernetes: + maxPods: 50 + clusterDNSIPs: + - 1.2.3.4 + - 4.3.2.1 + allowedUnsafeSysctls: + - "net.core.somaxconn" + - "net.ipv4.ip_local_port_range" + users: + - name: ec2-user + sshAuthorizedKeys: + - "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ== testemail@test.com" +--- diff --git a/pkg/providers/tinkerbell/testdata/expected_results_bottlerocket_kernel_config_cp.yaml b/pkg/providers/tinkerbell/testdata/expected_results_bottlerocket_kernel_config_cp.yaml new file mode 100644 index 000000000000..b9b0c4e1e681 --- /dev/null +++ b/pkg/providers/tinkerbell/testdata/expected_results_bottlerocket_kernel_config_cp.yaml @@ -0,0 +1,274 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + labels: + cluster.x-k8s.io/cluster-name: test + name: test + namespace: eksa-system +spec: + clusterNetwork: + pods: + cidrBlocks: [192.168.0.0/16] + services: + cidrBlocks: [10.96.0.0/12] + controlPlaneEndpoint: + host: 1.2.3.4 + port: 6443 + controlPlaneRef: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlane + name: test + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: TinkerbellCluster + name: test +--- +apiVersion: controlplane.cluster.x-k8s.io/v1beta1 +kind: KubeadmControlPlane +metadata: + name: test + namespace: eksa-system +spec: + kubeadmConfigSpec: + clusterConfiguration: + imageRepository: public.ecr.aws/eks-distro/kubernetes + etcd: + local: + imageRepository: public.ecr.aws/eks-distro/etcd-io + imageTag: v3.4.16-eks-1-21-4 + dns: + 
imageRepository: public.ecr.aws/eks-distro/coredns + imageTag: v1.8.3-eks-1-21-4 + pause: + imageRepository: public.ecr.aws/eks-distro/kubernetes/pause + imageTag: v1.21.2-eks-1-21-4 + bottlerocketBootstrap: + imageRepository: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap + imageTag: v1-21-4-eks-a-v0.0.0-dev-build.158 + bottlerocket: + kernel: + sysctlSettings: + abc: def + foo: bar + kubernetes: + allowedUnsafeSysctls: + - net.core.somaxconn + - net.ipv4.ip_local_port_range + clusterDNSIPs: + - 1.2.3.4 + - 4.3.2.1 + maxPods: 50 + apiServer: + extraArgs: + feature-gates: ServiceLoadBalancerClass=true + controllerManager: + extraVolumes: + - hostPath: /var/lib/kubeadm/controller-manager.conf + mountPath: /etc/kubernetes/controller-manager.conf + name: kubeconfig + pathType: File + readOnly: true + scheduler: + extraVolumes: + - hostPath: /var/lib/kubeadm/scheduler.conf + mountPath: /etc/kubernetes/scheduler.conf + name: kubeconfig + pathType: File + readOnly: true + certificatesDir: /var/lib/kubeadm/pki + initConfiguration: + nodeRegistration: + kubeletExtraArgs: + provider-id: PROVIDER_ID + read-only-port: "0" + anonymous-auth: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + joinConfiguration: + pause: + imageRepository: public.ecr.aws/eks-distro/kubernetes/pause + imageTag: v1.21.2-eks-1-21-4 + bottlerocketBootstrap: + imageRepository: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap + imageTag: v1-21-4-eks-a-v0.0.0-dev-build.158 + bottlerocket: + kernel: + sysctlSettings: + abc: def + foo: bar + kubernetes: + allowedUnsafeSysctls: + - net.core.somaxconn + - net.ipv4.ip_local_port_range + clusterDNSIPs: + - 1.2.3.4 + - 4.3.2.1 + maxPods: 50 + nodeRegistration: + ignorePreflightErrors: + - DirAvailable--etc-kubernetes-manifests + kubeletExtraArgs: + provider-id: PROVIDER_ID + read-only-port: "0" + anonymous-auth: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + files: + - content: | + apiVersion: v1 + kind: Pod + metadata: + 
creationTimestamp: null + name: kube-vip + namespace: kube-system + spec: + containers: + - args: + - manager + env: + - name: vip_arp + value: "true" + - name: port + value: "6443" + - name: vip_cidr + value: "32" + - name: cp_enable + value: "true" + - name: cp_namespace + value: kube-system + - name: vip_ddns + value: "false" + - name: vip_leaderelection + value: "true" + - name: vip_leaseduration + value: "15" + - name: vip_renewdeadline + value: "10" + - name: vip_retryperiod + value: "2" + - name: address + value: 1.2.3.4 + image: public.ecr.aws/l0g8r8j6/kube-vip/kube-vip:v0.3.7-eks-a-v0.0.0-dev-build.581 + imagePullPolicy: IfNotPresent + name: kube-vip + resources: {} + securityContext: + capabilities: + add: + - NET_ADMIN + - NET_RAW + volumeMounts: + - mountPath: /etc/kubernetes/admin.conf + name: kubeconfig + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes/admin.conf + name: kubeconfig + status: {} + owner: root:root + path: /etc/kubernetes/manifests/kube-vip.yaml + users: + - name: ec2-user + sshAuthorizedKeys: + - 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ==' + sudo: ALL=(ALL) NOPASSWD:ALL + format: bottlerocket + machineTemplate: + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: TinkerbellMachineTemplate + name: 
test-control-plane-template-1234567890000 + replicas: 1 + rolloutStrategy: + rollingUpdate: + maxSurge: 1 + version: v1.21.2-eks-1-21-4 +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: TinkerbellMachineTemplate +metadata: + name: test-control-plane-template-1234567890000 + namespace: eksa-system +spec: + template: + spec: + hardwareAffinity: + required: + - labelSelector: + matchLabels: + type: cp + templateOverride: | + global_timeout: 6000 + id: "" + name: test + tasks: + - actions: + - environment: + COMPRESSED: "true" + DEST_DISK: '{{ index .Hardware.Disks 0 }}' + IMG_URL: https://bottlerocket.gz + image: "" + name: stream-image + timeout: 600 + - environment: + BOOTCONFIG_CONTENTS: kernel {} + DEST_DISK: '{{ formatPartition ( index .Hardware.Disks 0 ) 12 }}' + DEST_PATH: /bootconfig.data + DIRMODE: "0700" + FS_TYPE: ext4 + GID: "0" + MODE: "0644" + UID: "0" + image: "" + name: write-bootconfig + pid: host + timeout: 90 + - environment: + DEST_DISK: '{{ formatPartition ( index .Hardware.Disks 0 ) 12 }}' + DEST_PATH: /user-data.toml + DIRMODE: "0700" + FS_TYPE: ext4 + GID: "0" + HEGEL_URLS: http://5.6.7.8:50061,http://5.6.7.8:50061 + MODE: "0644" + UID: "0" + image: "" + name: write-user-data + pid: host + timeout: 90 + - environment: + DEST_DISK: '{{ formatPartition ( index .Hardware.Disks 0 ) 12 }}' + DEST_PATH: /net.toml + DIRMODE: "0755" + FS_TYPE: ext4 + GID: "0" + IFNAME: eno1 + MODE: "0644" + STATIC_BOTTLEROCKET: "true" + UID: "0" + image: "" + name: write-netplan + pid: host + timeout: 90 + - image: "" + name: reboot-image + pid: host + timeout: 90 + volumes: + - /worker:/worker + name: test + volumes: + - /dev:/dev + - /dev/console:/dev/console + - /lib/firmware:/lib/firmware:ro + worker: '{{.device_1}}' + version: "0.1" + +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: TinkerbellCluster +metadata: + name: test + namespace: eksa-system +spec: + imageLookupFormat: --kube-v1.21.2-eks-1-21-4.raw.gz + 
imageLookupBaseRegistry: / \ No newline at end of file diff --git a/pkg/providers/tinkerbell/testdata/expected_results_bottlerocket_kernel_config_md.yaml b/pkg/providers/tinkerbell/testdata/expected_results_bottlerocket_kernel_config_md.yaml new file mode 100644 index 000000000000..9478639df483 --- /dev/null +++ b/pkg/providers/tinkerbell/testdata/expected_results_bottlerocket_kernel_config_md.yaml @@ -0,0 +1,158 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: MachineDeployment +metadata: + labels: + cluster.x-k8s.io/cluster-name: test + pool: md-0 + name: test-md-0 + namespace: eksa-system +spec: + clusterName: test + replicas: 1 + selector: + matchLabels: {} + template: + metadata: + labels: + cluster.x-k8s.io/cluster-name: test + pool: md-0 + spec: + bootstrap: + configRef: + apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 + kind: KubeadmConfigTemplate + name: test-md-0-template-1234567890000 + clusterName: test + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: TinkerbellMachineTemplate + name: test-md-0-1234567890000 + version: v1.21.2-eks-1-21-4 + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: TinkerbellMachineTemplate +metadata: + name: test-md-0-1234567890000 + namespace: eksa-system +spec: + template: + spec: + hardwareAffinity: + required: + - labelSelector: + matchLabels: + type: worker + templateOverride: | + global_timeout: 6000 + id: "" + name: test + tasks: + - actions: + - environment: + COMPRESSED: "true" + DEST_DISK: '{{ index .Hardware.Disks 0 }}' + IMG_URL: https://bottlerocket.gz + image: "" + name: stream-image + timeout: 600 + - environment: + BOOTCONFIG_CONTENTS: kernel {} + DEST_DISK: '{{ formatPartition ( index .Hardware.Disks 0 ) 12 }}' + DEST_PATH: /bootconfig.data + DIRMODE: "0700" + FS_TYPE: ext4 + GID: "0" + MODE: "0644" + UID: "0" + image: "" + name: write-bootconfig + pid: host + timeout: 90 + - environment: + DEST_DISK: '{{ 
formatPartition ( index .Hardware.Disks 0 ) 12 }}' + DEST_PATH: /user-data.toml + DIRMODE: "0700" + FS_TYPE: ext4 + GID: "0" + HEGEL_URLS: http://5.6.7.8:50061,http://5.6.7.8:50061 + MODE: "0644" + UID: "0" + image: "" + name: write-user-data + pid: host + timeout: 90 + - environment: + DEST_DISK: '{{ formatPartition ( index .Hardware.Disks 0 ) 12 }}' + DEST_PATH: /net.toml + DIRMODE: "0755" + FS_TYPE: ext4 + GID: "0" + IFNAME: eno1 + MODE: "0644" + STATIC_BOTTLEROCKET: "true" + UID: "0" + image: "" + name: write-netplan + pid: host + timeout: 90 + - image: "" + name: reboot-image + pid: host + timeout: 90 + volumes: + - /worker:/worker + name: test + volumes: + - /dev:/dev + - /dev/console:/dev/console + - /lib/firmware:/lib/firmware:ro + worker: '{{.device_1}}' + version: "0.1" + +--- +apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 +kind: KubeadmConfigTemplate +metadata: + name: test-md-0-template-1234567890000 + namespace: eksa-system +spec: + template: + spec: + joinConfiguration: + pause: + imageRepository: public.ecr.aws/eks-distro/kubernetes/pause + imageTag: v1.21.2-eks-1-21-4 + bottlerocketBootstrap: + imageRepository: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap + imageTag: v1-21-4-eks-a-v0.0.0-dev-build.158 + bottlerocket: + kernel: + sysctlSettings: + abc: def + foo: bar + kubernetes: + allowedUnsafeSysctls: + - net.core.somaxconn + - net.ipv4.ip_local_port_range + clusterDNSIPs: + - 1.2.3.4 + - 4.3.2.1 + maxPods: 50 + nodeRegistration: + kubeletExtraArgs: + provider-id: PROVIDER_ID + read-only-port: "0" + anonymous-auth: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + users: + - name: ec2-user + sshAuthorizedKeys: + - 'ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ==' + sudo: ALL=(ALL) NOPASSWD:ALL + format: bottlerocket + +--- diff --git a/pkg/providers/tinkerbell/tinkerbell_test.go b/pkg/providers/tinkerbell/tinkerbell_test.go index edab91662197..9df91aeaeff0 100644 --- a/pkg/providers/tinkerbell/tinkerbell_test.go +++ b/pkg/providers/tinkerbell/tinkerbell_test.go @@ -1588,8 +1588,6 @@ func TestProviderGenerateDeploymentFileForBottlerocketWithBottlerocketSettingsCo t.Fatalf("failed to generate cluster api spec contents: %v", err) } - fmt.Println(string(md)) - test.AssertContentToFile(t, string(cp), "testdata/expected_results_bottlerocket_settings_config_cp.yaml") test.AssertContentToFile(t, string(md), "testdata/expected_results_bottlerocket_settings_config_md.yaml") } @@ -1627,3 +1625,37 @@ func TestTinkerbellProviderGenerateCAPISpecForCreateWithPodIAMConfig(t *testing. 
test.AssertContentToFile(t, string(cp), "testdata/expected_results_tinkerbell_pod_iam_config.yaml") } + +func TestProviderGenerateDeploymentFileForBottlerocketWithKernelSettingsConfig(t *testing.T) { + clusterSpecManifest := "cluster_bottlerocket_kernel_settings_config.yaml" + mockCtrl := gomock.NewController(t) + docker := stackmocks.NewMockDocker(mockCtrl) + helm := stackmocks.NewMockHelm(mockCtrl) + kubectl := mocks.NewMockProviderKubectlClient(mockCtrl) + stackInstaller := stackmocks.NewMockStackInstaller(mockCtrl) + writer := filewritermocks.NewMockFileWriter(mockCtrl) + cluster := &types.Cluster{Name: "test"} + forceCleanup := false + + clusterSpec := givenClusterSpec(t, clusterSpecManifest) + datacenterConfig := givenDatacenterConfig(t, clusterSpecManifest) + machineConfigs := givenMachineConfigs(t, clusterSpecManifest) + ctx := context.Background() + + provider := newProvider(datacenterConfig, machineConfigs, clusterSpec.Cluster, writer, docker, helm, kubectl, forceCleanup) + provider.stackInstaller = stackInstaller + + stackInstaller.EXPECT().CleanupLocalBoots(ctx, forceCleanup) + + if err := provider.SetupAndValidateCreateCluster(ctx, clusterSpec); err != nil { + t.Fatalf("failed to setup and validate: %v", err) + } + + cp, md, err := provider.GenerateCAPISpecForCreate(context.Background(), cluster, clusterSpec) + if err != nil { + t.Fatalf("failed to generate cluster api spec contents: %v", err) + } + + test.AssertContentToFile(t, string(cp), "testdata/expected_results_bottlerocket_kernel_config_cp.yaml") + test.AssertContentToFile(t, string(md), "testdata/expected_results_bottlerocket_kernel_config_md.yaml") +} diff --git a/pkg/providers/vsphere/config/template-cp.yaml b/pkg/providers/vsphere/config/template-cp.yaml index b82859d3595c..e1f9a38a8911 100644 --- a/pkg/providers/vsphere/config/template-cp.yaml +++ b/pkg/providers/vsphere/config/template-cp.yaml @@ -514,6 +514,13 @@ spec: etcdImage: {{.etcdImage}} bootstrapImage: 
{{.bottlerocketBootstrapRepository}}:{{.bottlerocketBootstrapVersion}} pauseImage: {{.pauseRepository}}:{{.pauseVersion}} +{{- if .etcdKernelSettings }} + kernel: + sysctlSettings: +{{- range $key, $value := .etcdKernelSettings }} + {{ $key }}: {{ $value }} +{{- end }} +{{- end }} {{- else}} cloudInitConfig: version: {{.externalEtcdVersion}} diff --git a/pkg/providers/vsphere/template.go b/pkg/providers/vsphere/template.go index d2c9a891cab4..d32640b368f3 100644 --- a/pkg/providers/vsphere/template.go +++ b/pkg/providers/vsphere/template.go @@ -275,8 +275,17 @@ func buildTemplateMapCP( values["etcdSshUsername"] = firstEtcdMachinesUser.Name values["vsphereEtcdSshAuthorizedKey"] = etcdSSHKey - if etcdMachineSpec.HostOSConfiguration != nil && etcdMachineSpec.HostOSConfiguration.NTPConfiguration != nil { - values["etcdNtpServers"] = etcdMachineSpec.HostOSConfiguration.NTPConfiguration.Servers + if etcdMachineSpec.HostOSConfiguration != nil { + if etcdMachineSpec.HostOSConfiguration.NTPConfiguration != nil { + values["etcdNtpServers"] = etcdMachineSpec.HostOSConfiguration.NTPConfiguration.Servers + } + + if etcdMachineSpec.HostOSConfiguration.BottlerocketConfiguration != nil { + if etcdMachineSpec.HostOSConfiguration.BottlerocketConfiguration.Kernel != nil && + etcdMachineSpec.HostOSConfiguration.BottlerocketConfiguration.Kernel.SysctlSettings != nil { + values["etcdKernelSettings"] = etcdMachineSpec.HostOSConfiguration.BottlerocketConfiguration.Kernel.SysctlSettings + } + } } } diff --git a/pkg/providers/vsphere/testdata/cluster_bottlerocket_kernel_config.yaml b/pkg/providers/vsphere/testdata/cluster_bottlerocket_kernel_config.yaml new file mode 100644 index 000000000000..fb2451a02d92 --- /dev/null +++ b/pkg/providers/vsphere/testdata/cluster_bottlerocket_kernel_config.yaml @@ -0,0 +1,125 @@ +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: Cluster +metadata: + name: test +spec: + controlPlaneConfiguration: + count: 3 + endpoint: + host: 1.2.3.4 + 
machineGroupRef: + name: test-cp + kind: VSphereMachineConfig + kubernetesVersion: "1.21" + workerNodeGroupConfigurations: + - count: 3 + machineGroupRef: + name: test-wn + kind: VSphereMachineConfig + name: md-0 + externalEtcdConfiguration: + count: 3 + machineGroupRef: + name: test-etcd + kind: VSphereMachineConfig + datacenterRef: + kind: VSphereDatacenterConfig + name: test + clusterNetwork: + cni: "cilium" + pods: + cidrBlocks: + - 192.168.0.0/16 + services: + cidrBlocks: + - 10.96.0.0/12 +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: VSphereMachineConfig +metadata: + name: test-cp +spec: + diskGiB: 25 + cloneMode: linkedClone + datastore: "/SDDC-Datacenter/datastore/WorkloadDatastore" + folder: "/SDDC-Datacenter/vm" + memoryMiB: 8192 + numCPUs: 2 + osFamily: bottlerocket + resourcePool: "*/Resources" + storagePolicyName: "vSAN Default Storage Policy" + template: "/SDDC-Datacenter/vm/Templates/bottlerocket-1804-kube-v1.19.6" + hostOSConfiguration: + bottlerocketConfiguration: + kernel: + sysctlSettings: + setting1: foo + setting2: bar + users: + - name: ec2-user + sshAuthorizedKeys: + - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ== testemail@test.com" +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: VSphereMachineConfig +metadata: + name: test-wn +spec: + diskGiB: 25 
+ cloneMode: linkedClone + datastore: "/SDDC-Datacenter/datastore/WorkloadDatastore" + folder: "/SDDC-Datacenter/vm" + memoryMiB: 4096 + numCPUs: 3 + osFamily: bottlerocket + resourcePool: "*/Resources" + storagePolicyName: "vSAN Default Storage Policy" + template: "/SDDC-Datacenter/vm/Templates/bottlerocket-1804-kube-v1.19.6" + hostOSConfiguration: + bottlerocketConfiguration: + kernel: + sysctlSettings: + setting1: foo + setting2: bar + users: + - name: ec2-user + sshAuthorizedKeys: + - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ== testemail@test.com" +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: VSphereMachineConfig +metadata: + name: test-etcd +spec: + diskGiB: 25 + cloneMode: linkedClone + datastore: "/SDDC-Datacenter/datastore/WorkloadDatastore" + folder: "/SDDC-Datacenter/vm" + memoryMiB: 4096 + numCPUs: 3 + osFamily: bottlerocket + resourcePool: "*/Resources" + storagePolicyName: "vSAN Default Storage Policy" + template: "/SDDC-Datacenter/vm/Templates/bottlerocket-1804-kube-v1.19.6" + hostOSConfiguration: + bottlerocketConfiguration: + kernel: + sysctlSettings: + setting1: foo + setting2: bar + users: + - name: ec2-user + sshAuthorizedKeys: + - "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ== testemail@test.com" +--- +apiVersion: anywhere.eks.amazonaws.com/v1alpha1 +kind: VSphereDatacenterConfig +metadata: + name: test +spec: + datacenter: "SDDC-Datacenter" + network: "/SDDC-Datacenter/network/sddc-cgw-network-1" + server: "vsphere_server" + thumbprint: "ABCDEFG" + insecure: false + \ No newline at end of file diff --git a/pkg/providers/vsphere/testdata/expected_results_bottlerocket_kernel_config_cp.yaml b/pkg/providers/vsphere/testdata/expected_results_bottlerocket_kernel_config_cp.yaml new file mode 100644 index 000000000000..c2b42a819219 --- /dev/null +++ b/pkg/providers/vsphere/testdata/expected_results_bottlerocket_kernel_config_cp.yaml @@ -0,0 +1,1219 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + labels: + cluster.x-k8s.io/cluster-name: test + name: test + namespace: eksa-system +spec: + clusterNetwork: + pods: + cidrBlocks: [192.168.0.0/16] + services: + cidrBlocks: [10.96.0.0/12] + controlPlaneRef: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlane + name: test + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: VSphereCluster + name: test + managedExternalEtcdRef: + apiVersion: etcdcluster.cluster.x-k8s.io/v1beta1 + kind: EtcdadmCluster + name: test-etcd + 
namespace: eksa-system +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: VSphereCluster +metadata: + name: test + namespace: eksa-system +spec: + controlPlaneEndpoint: + host: 1.2.3.4 + port: 6443 + identityRef: + kind: Secret + name: test-vsphere-credentials + server: vsphere_server + thumbprint: 'ABCDEFG' +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: VSphereMachineTemplate +metadata: + name: test-control-plane-template-1234567890000 + namespace: eksa-system +spec: + template: + spec: + cloneMode: linkedClone + datacenter: 'SDDC-Datacenter' + datastore: /SDDC-Datacenter/datastore/WorkloadDatastore + diskGiB: 25 + folder: '/SDDC-Datacenter/vm' + memoryMiB: 8192 + network: + devices: + - dhcp4: true + networkName: /SDDC-Datacenter/network/sddc-cgw-network-1 + numCPUs: 2 + resourcePool: '*/Resources' + server: vsphere_server + storagePolicyName: "vSAN Default Storage Policy" + template: /SDDC-Datacenter/vm/Templates/bottlerocket-1804-kube-v1.19.6 + thumbprint: 'ABCDEFG' +--- +apiVersion: controlplane.cluster.x-k8s.io/v1beta1 +kind: KubeadmControlPlane +metadata: + name: test + namespace: eksa-system +spec: + machineTemplate: + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: VSphereMachineTemplate + name: test-control-plane-template-1234567890000 + kubeadmConfigSpec: + clusterConfiguration: + imageRepository: public.ecr.aws/eks-distro/kubernetes + etcd: + external: + endpoints: [] + caFile: "/var/lib/kubeadm/pki/etcd/ca.crt" + certFile: "/var/lib/kubeadm/pki/server-etcd-client.crt" + keyFile: "/var/lib/kubeadm/pki/apiserver-etcd-client.key" + dns: + imageRepository: public.ecr.aws/eks-distro/coredns + imageTag: v1.8.3-eks-1-21-4 + pause: + imageRepository: public.ecr.aws/eks-distro/kubernetes/pause + imageTag: v1.21.2-eks-1-21-4 + bottlerocketBootstrap: + imageRepository: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap + imageTag: v1-21-4-eks-a-v0.0.0-dev-build.158 + bottlerocket: + kernel: + 
sysctlSettings: + setting1: foo + setting2: bar + apiServer: + extraArgs: + cloud-provider: external + audit-policy-file: /etc/kubernetes/audit-policy.yaml + audit-log-path: /var/log/kubernetes/api-audit.log + audit-log-maxage: "30" + audit-log-maxbackup: "10" + audit-log-maxsize: "512" + profiling: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + extraVolumes: + - hostPath: /var/lib/kubeadm/audit-policy.yaml + mountPath: /etc/kubernetes/audit-policy.yaml + name: audit-policy + pathType: File + readOnly: true + - hostPath: /var/log/kubernetes + mountPath: /var/log/kubernetes + name: audit-log-dir + pathType: DirectoryOrCreate + readOnly: false + controllerManager: + extraArgs: + cloud-provider: external + profiling: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + extraVolumes: + - hostPath: /var/lib/kubeadm/controller-manager.conf + mountPath: /etc/kubernetes/controller-manager.conf + name: kubeconfig + pathType: File + readOnly: true + scheduler: + extraArgs: + profiling: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + extraVolumes: + - hostPath: /var/lib/kubeadm/scheduler.conf + mountPath: /etc/kubernetes/scheduler.conf + name: kubeconfig + pathType: File + readOnly: true + certificatesDir: /var/lib/kubeadm/pki + files: + - content: | + apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + name: kube-vip + namespace: kube-system + spec: + containers: + - args: + - manager + env: + - name: vip_arp + value: "true" + - name: port + value: "6443" + - name: vip_cidr + value: "32" + - name: cp_enable + value: "true" + - name: cp_namespace + value: kube-system + - name: vip_ddns + value: "false" + - name: vip_leaderelection + value: "true" + - name: vip_leaseduration + value: "15" + - name: vip_renewdeadline + value: "10" + - name: vip_retryperiod + value: "2" + - name: address + value: 1.2.3.4 + image: public.ecr.aws/l0g8r8j6/kube-vip/kube-vip:v0.3.7-eks-a-v0.0.0-dev-build.158 + imagePullPolicy: 
IfNotPresent + name: kube-vip + resources: {} + securityContext: + capabilities: + add: + - NET_ADMIN + - NET_RAW + volumeMounts: + - mountPath: /etc/kubernetes/admin.conf + name: kubeconfig + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes/admin.conf + name: kubeconfig + status: {} + owner: root:root + path: /etc/kubernetes/manifests/kube-vip.yaml + - content: | + apiVersion: audit.k8s.io/v1beta1 + kind: Policy + rules: + # Log aws-auth configmap changes + - level: RequestResponse + namespaces: ["kube-system"] + verbs: ["update", "patch", "delete"] + resources: + - group: "" # core + resources: ["configmaps"] + resourceNames: ["aws-auth"] + omitStages: + - "RequestReceived" + # The following requests were manually identified as high-volume and low-risk, + # so drop them. + - level: None + users: ["system:kube-proxy"] + verbs: ["watch"] + resources: + - group: "" # core + resources: ["endpoints", "services", "services/status"] + - level: None + users: ["kubelet"] # legacy kubelet identity + verbs: ["get"] + resources: + - group: "" # core + resources: ["nodes", "nodes/status"] + - level: None + userGroups: ["system:nodes"] + verbs: ["get"] + resources: + - group: "" # core + resources: ["nodes", "nodes/status"] + - level: None + users: + - system:kube-controller-manager + - system:kube-scheduler + - system:serviceaccount:kube-system:endpoint-controller + verbs: ["get", "update"] + namespaces: ["kube-system"] + resources: + - group: "" # core + resources: ["endpoints"] + - level: None + users: ["system:apiserver"] + verbs: ["get"] + resources: + - group: "" # core + resources: ["namespaces", "namespaces/status", "namespaces/finalize"] + # Don't log HPA fetching metrics. + - level: None + users: + - system:kube-controller-manager + verbs: ["get", "list"] + resources: + - group: "metrics.k8s.io" + # Don't log these read-only URLs. + - level: None + nonResourceURLs: + - /healthz* + - /version + - /swagger* + # Don't log events requests. 
+ - level: None + resources: + - group: "" # core + resources: ["events"] + # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes + - level: Request + users: ["kubelet", "system:node-problem-detector", "system:serviceaccount:kube-system:node-problem-detector"] + verbs: ["update","patch"] + resources: + - group: "" # core + resources: ["nodes/status", "pods/status"] + omitStages: + - "RequestReceived" + - level: Request + userGroups: ["system:nodes"] + verbs: ["update","patch"] + resources: + - group: "" # core + resources: ["nodes/status", "pods/status"] + omitStages: + - "RequestReceived" + # deletecollection calls can be large, don't log responses for expected namespace deletions + - level: Request + users: ["system:serviceaccount:kube-system:namespace-controller"] + verbs: ["deletecollection"] + omitStages: + - "RequestReceived" + # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data, + # so only log at the Metadata level. + - level: Metadata + resources: + - group: "" # core + resources: ["secrets", "configmaps"] + - group: authentication.k8s.io + resources: ["tokenreviews"] + omitStages: + - "RequestReceived" + - level: Request + resources: + - group: "" + resources: ["serviceaccounts/token"] + # Get responses can be large; skip them. 
+ - level: Request + verbs: ["get", "list", "watch"] + resources: + - group: "" # core + - group: "admissionregistration.k8s.io" + - group: "apiextensions.k8s.io" + - group: "apiregistration.k8s.io" + - group: "apps" + - group: "authentication.k8s.io" + - group: "authorization.k8s.io" + - group: "autoscaling" + - group: "batch" + - group: "certificates.k8s.io" + - group: "extensions" + - group: "metrics.k8s.io" + - group: "networking.k8s.io" + - group: "policy" + - group: "rbac.authorization.k8s.io" + - group: "scheduling.k8s.io" + - group: "settings.k8s.io" + - group: "storage.k8s.io" + omitStages: + - "RequestReceived" + # Default level for known APIs + - level: RequestResponse + resources: + - group: "" # core + - group: "admissionregistration.k8s.io" + - group: "apiextensions.k8s.io" + - group: "apiregistration.k8s.io" + - group: "apps" + - group: "authentication.k8s.io" + - group: "authorization.k8s.io" + - group: "autoscaling" + - group: "batch" + - group: "certificates.k8s.io" + - group: "extensions" + - group: "metrics.k8s.io" + - group: "networking.k8s.io" + - group: "policy" + - group: "rbac.authorization.k8s.io" + - group: "scheduling.k8s.io" + - group: "settings.k8s.io" + - group: "storage.k8s.io" + omitStages: + - "RequestReceived" + # Default level for all other requests. 
+ - level: Metadata + omitStages: + - "RequestReceived" + owner: root:root + path: /etc/kubernetes/audit-policy.yaml + initConfiguration: + nodeRegistration: + criSocket: /var/run/containerd/containerd.sock + kubeletExtraArgs: + cloud-provider: external + read-only-port: "0" + anonymous-auth: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + name: '{{ ds.meta_data.hostname }}' + joinConfiguration: + pause: + imageRepository: public.ecr.aws/eks-distro/kubernetes/pause + imageTag: v1.21.2-eks-1-21-4 + bottlerocketBootstrap: + imageRepository: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap + imageTag: v1-21-4-eks-a-v0.0.0-dev-build.158 + bottlerocket: + kernel: + sysctlSettings: + setting1: foo + setting2: bar + nodeRegistration: + criSocket: /var/run/containerd/containerd.sock + kubeletExtraArgs: + cloud-provider: external + read-only-port: "0" + anonymous-auth: "false" + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + name: '{{ ds.meta_data.hostname }}' + preKubeadmCommands: + - hostname "{{ ds.meta_data.hostname }}" + - echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts + - echo "127.0.0.1 localhost" >>/etc/hosts + - echo "127.0.0.1 {{ ds.meta_data.hostname }}" >>/etc/hosts + - echo "{{ ds.meta_data.hostname }}" >/etc/hostname + useExperimentalRetryJoin: true + users: + - name: ec2-user + sshAuthorizedKeys: + - 'ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ==' + sudo: ALL=(ALL) NOPASSWD:ALL + format: bottlerocket + replicas: 3 + version: v1.21.2-eks-1-21-4 +--- +apiVersion: addons.cluster.x-k8s.io/v1beta1 +kind: ClusterResourceSet +metadata: + labels: + cluster.x-k8s.io/cluster-name: test + name: test-csi + namespace: eksa-system +spec: + strategy: Reconcile + clusterSelector: + matchLabels: + cluster.x-k8s.io/cluster-name: test + resources: + - kind: Secret + name: test-vsphere-csi-controller + - kind: ConfigMap + name: test-vsphere-csi-controller-role + - kind: ConfigMap + name: test-vsphere-csi-controller-binding + - kind: Secret + name: test-csi-vsphere-config + - kind: ConfigMap + name: test-csi.vsphere.vmware.com + - kind: ConfigMap + name: test-vsphere-csi-node + - kind: ConfigMap + name: test-vsphere-csi-controller +--- +apiVersion: addons.cluster.x-k8s.io/v1beta1 +kind: ClusterResourceSet +metadata: + labels: + cluster.x-k8s.io/cluster-name: test + name: test-cpi + namespace: eksa-system +spec: + strategy: Reconcile + clusterSelector: + matchLabels: + cluster.x-k8s.io/cluster-name: test + resources: + - kind: Secret + name: test-cloud-controller-manager + - kind: Secret + name: test-cloud-provider-vsphere-credentials + - kind: ConfigMap + name: test-cpi-manifests +--- +kind: EtcdadmCluster +apiVersion: 
etcdcluster.cluster.x-k8s.io/v1beta1 +metadata: + name: test-etcd + namespace: eksa-system +spec: + replicas: 3 + etcdadmConfigSpec: + etcdadmBuiltin: true + format: bottlerocket + bottlerocketConfig: + etcdImage: public.ecr.aws/eks-distro/etcd-io/etcd:v3.4.16-eks-1-21-4 + bootstrapImage: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap:v1-21-4-eks-a-v0.0.0-dev-build.158 + pauseImage: public.ecr.aws/eks-distro/kubernetes/pause:v1.21.2-eks-1-21-4 + kernel: + sysctlSettings: + setting1: foo + setting2: bar + cipherSuites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + users: + - name: ec2-user + sshAuthorizedKeys: + - 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ==' + sudo: ALL=(ALL) NOPASSWD:ALL + infrastructureTemplate: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: VSphereMachineTemplate + name: test-etcd-template-1234567890000 +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: VSphereMachineTemplate +metadata: + name: test-etcd-template-1234567890000 + namespace: 'eksa-system' +spec: + template: + spec: + cloneMode: linkedClone + datacenter: 'SDDC-Datacenter' + datastore: /SDDC-Datacenter/datastore/WorkloadDatastore + diskGiB: 25 + folder: '/SDDC-Datacenter/vm' + memoryMiB: 8192 + network: + devices: + - dhcp4: true + networkName: /SDDC-Datacenter/network/sddc-cgw-network-1 + 
numCPUs: 3 + resourcePool: '*/Resources' + server: vsphere_server + storagePolicyName: "vSAN Default Storage Policy" + template: /SDDC-Datacenter/vm/Templates/bottlerocket-1804-kube-v1.19.6 + thumbprint: 'ABCDEFG' +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-vsphere-credentials + namespace: eksa-system + labels: + clusterctl.cluster.x-k8s.io/move: "true" +stringData: + username: "vsphere_username" + password: "vsphere_password" +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-vsphere-csi-controller + namespace: eksa-system +stringData: + data: | + apiVersion: v1 + kind: ServiceAccount + metadata: + name: vsphere-csi-controller + namespace: kube-system +type: addons.cluster.x-k8s.io/resource-set +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-csi-vsphere-config + namespace: eksa-system +stringData: + data: | + apiVersion: v1 + kind: Secret + metadata: + name: csi-vsphere-config + namespace: kube-system + stringData: + csi-vsphere.conf: |+ + [Global] + cluster-id = "default/test" + thumbprint = "ABCDEFG" + + [VirtualCenter "vsphere_server"] + user = "vsphere_username" + password = "vsphere_password" + datacenters = "SDDC-Datacenter" + insecure-flag = "false" + + [Network] + public-network = "/SDDC-Datacenter/network/sddc-cgw-network-1" + type: Opaque +type: addons.cluster.x-k8s.io/resource-set +--- +apiVersion: v1 +data: + data: | + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: vsphere-csi-controller-role + rules: + - apiGroups: + - storage.k8s.io + resources: + - csidrivers + verbs: + - create + - delete + - apiGroups: + - "" + resources: + - nodes + - pods + - secrets + - configmaps + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - watch + - update + - create + - delete + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments + verbs: + - get + - list + - watch + - update + - patch + - apiGroups: + - 
storage.k8s.io + resources: + - volumeattachments/status + verbs: + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get + - list + - watch + - update + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + - csinodes + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - list + - watch + - create + - update + - patch + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshots + verbs: + - get + - list + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents + verbs: + - get + - list +kind: ConfigMap +metadata: + name: test-vsphere-csi-controller-role + namespace: eksa-system +--- +apiVersion: v1 +data: + data: | + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: vsphere-csi-controller-binding + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: vsphere-csi-controller-role + subjects: + - kind: ServiceAccount + name: vsphere-csi-controller + namespace: kube-system +kind: ConfigMap +metadata: + name: test-vsphere-csi-controller-binding + namespace: eksa-system +--- +apiVersion: v1 +data: + data: | + apiVersion: storage.k8s.io/v1 + kind: CSIDriver + metadata: + name: csi.vsphere.vmware.com + spec: + attachRequired: true +kind: ConfigMap +metadata: + name: test-csi.vsphere.vmware.com + namespace: eksa-system +--- +apiVersion: v1 +data: + data: | + apiVersion: apps/v1 + kind: DaemonSet + metadata: + name: vsphere-csi-node + namespace: kube-system + spec: + selector: + matchLabels: + app: vsphere-csi-node + template: + metadata: + labels: + app: vsphere-csi-node + role: vsphere-csi + spec: + containers: + - args: + - --v=5 + - --csi-address=$(ADDRESS) + - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) + env: + - name: ADDRESS + value: /csi/csi.sock 
+ - name: DRIVER_REG_SOCK_PATH + value: /var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock + image: public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar:v2.1.0-eks-1-21-4 + lifecycle: + preStop: + exec: + command: + - /bin/sh + - -c + - rm -rf /registration/csi.vsphere.vmware.com-reg.sock /csi/csi.sock + name: node-driver-registrar + resources: {} + securityContext: + privileged: true + volumeMounts: + - mountPath: /csi + name: plugin-dir + - mountPath: /registration + name: registration-dir + - env: + - name: CSI_ENDPOINT + value: unix:///csi/csi.sock + - name: X_CSI_MODE + value: node + - name: X_CSI_SPEC_REQ_VALIDATION + value: "false" + - name: VSPHERE_CSI_CONFIG + value: /etc/cloud/csi-vsphere.conf + - name: LOGGER_LEVEL + value: PRODUCTION + - name: X_CSI_LOG_LEVEL + value: INFO + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + image: public.ecr.aws/l0g8r8j6/kubernetes-sigs/vsphere-csi-driver/csi/driver:v2.2.0-eks-a-v0.0.0-dev-build.158 + livenessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: healthz + initialDelaySeconds: 10 + periodSeconds: 5 + timeoutSeconds: 3 + name: vsphere-csi-node + ports: + - containerPort: 9808 + name: healthz + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: true + capabilities: + add: + - SYS_ADMIN + privileged: true + volumeMounts: + - mountPath: /etc/cloud + name: vsphere-config-volume + - mountPath: /csi + name: plugin-dir + - mountPath: /var/lib/kubelet + mountPropagation: Bidirectional + name: pods-mount-dir + - mountPath: /dev + name: device-dir + - args: + - --csi-address=/csi/csi.sock + image: public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.2.0-eks-1-21-4 + name: liveness-probe + resources: {} + volumeMounts: + - mountPath: /csi + name: plugin-dir + dnsPolicy: Default + tolerations: + - effect: NoSchedule + operator: Exists + - effect: NoExecute + operator: Exists + volumes: + - name: vsphere-config-volume + secret: + 
secretName: csi-vsphere-config + - hostPath: + path: /var/lib/kubelet/plugins_registry + type: Directory + name: registration-dir + - hostPath: + path: /var/lib/kubelet/plugins/csi.vsphere.vmware.com/ + type: DirectoryOrCreate + name: plugin-dir + - hostPath: + path: /var/lib/kubelet + type: Directory + name: pods-mount-dir + - hostPath: + path: /dev + name: device-dir + updateStrategy: + type: RollingUpdate +kind: ConfigMap +metadata: + name: test-vsphere-csi-node + namespace: eksa-system +--- +apiVersion: v1 +data: + data: | + apiVersion: apps/v1 + kind: Deployment + metadata: + name: vsphere-csi-controller + namespace: kube-system + spec: + replicas: 1 + selector: + matchLabels: + app: vsphere-csi-controller + template: + metadata: + labels: + app: vsphere-csi-controller + role: vsphere-csi + spec: + containers: + - args: + - --v=4 + - --timeout=300s + - --csi-address=$(ADDRESS) + - --leader-election + env: + - name: ADDRESS + value: /csi/csi.sock + image: public.ecr.aws/eks-distro/kubernetes-csi/external-attacher:v3.1.0-eks-1-21-4 + name: csi-attacher + resources: {} + volumeMounts: + - mountPath: /csi + name: socket-dir + - env: + - name: CSI_ENDPOINT + value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock + - name: X_CSI_MODE + value: controller + - name: VSPHERE_CSI_CONFIG + value: /etc/cloud/csi-vsphere.conf + - name: LOGGER_LEVEL + value: PRODUCTION + - name: X_CSI_LOG_LEVEL + value: INFO + image: public.ecr.aws/l0g8r8j6/kubernetes-sigs/vsphere-csi-driver/csi/driver:v2.2.0-eks-a-v0.0.0-dev-build.158 + livenessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: healthz + initialDelaySeconds: 10 + periodSeconds: 5 + timeoutSeconds: 3 + name: vsphere-csi-controller + ports: + - containerPort: 9808 + name: healthz + protocol: TCP + resources: {} + volumeMounts: + - mountPath: /etc/cloud + name: vsphere-config-volume + readOnly: true + - mountPath: /var/lib/csi/sockets/pluginproxy/ + name: socket-dir + - args: + - --csi-address=$(ADDRESS) + env: 
+ - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + image: public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.2.0-eks-1-21-4 + name: liveness-probe + resources: {} + volumeMounts: + - mountPath: /var/lib/csi/sockets/pluginproxy/ + name: socket-dir + - args: + - --leader-election + env: + - name: X_CSI_FULL_SYNC_INTERVAL_MINUTES + value: "30" + - name: LOGGER_LEVEL + value: PRODUCTION + - name: VSPHERE_CSI_CONFIG + value: /etc/cloud/csi-vsphere.conf + image: public.ecr.aws/l0g8r8j6/kubernetes-sigs/vsphere-csi-driver/csi/syncer:v2.2.0-eks-a-v0.0.0-dev-build.158 + name: vsphere-syncer + resources: {} + volumeMounts: + - mountPath: /etc/cloud + name: vsphere-config-volume + readOnly: true + - args: + - --v=4 + - --timeout=300s + - --csi-address=$(ADDRESS) + - --leader-election + - --default-fstype=ext4 + env: + - name: ADDRESS + value: /csi/csi.sock + image: public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner:v2.1.1-eks-1-21-4 + name: csi-provisioner + resources: {} + volumeMounts: + - mountPath: /csi + name: socket-dir + dnsPolicy: Default + serviceAccountName: vsphere-csi-controller + tolerations: + - effect: NoSchedule + key: node-role.kubernetes.io/master + operator: Exists + - effect: NoSchedule + key: node-role.kubernetes.io/control-plane + operator: Exists + volumes: + - name: vsphere-config-volume + secret: + secretName: csi-vsphere-config + - emptyDir: {} + name: socket-dir +kind: ConfigMap +metadata: + name: test-vsphere-csi-controller + namespace: eksa-system +--- +apiVersion: v1 +data: + data: | + apiVersion: v1 + data: + csi-migration: "false" + kind: ConfigMap + metadata: + name: internal-feature-states.csi.vsphere.vmware.com + namespace: kube-system +kind: ConfigMap +metadata: + name: test-internal-feature-states.csi.vsphere.vmware.com + namespace: eksa-system +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-cloud-controller-manager + namespace: eksa-system +stringData: + data: | + apiVersion: v1 + kind: 
ServiceAccount + metadata: + name: cloud-controller-manager + namespace: kube-system +type: addons.cluster.x-k8s.io/resource-set +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-cloud-provider-vsphere-credentials + namespace: eksa-system +stringData: + data: | + apiVersion: v1 + kind: Secret + metadata: + name: cloud-provider-vsphere-credentials + namespace: kube-system + stringData: + vsphere_server.password: "vsphere_password" + vsphere_server.username: "vsphere_username" + type: Opaque +type: addons.cluster.x-k8s.io/resource-set +--- +apiVersion: v1 +data: + data: | + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: system:cloud-controller-manager + rules: + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - update + - apiGroups: + - "" + resources: + - nodes + verbs: + - '*' + - apiGroups: + - "" + resources: + - nodes/status + verbs: + - patch + - apiGroups: + - "" + resources: + - services + verbs: + - list + - patch + - update + - watch + - apiGroups: + - "" + resources: + - serviceaccounts + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - list + - watch + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: system:cloud-controller-manager + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:cloud-controller-manager + subjects: + - kind: ServiceAccount + name: cloud-controller-manager + namespace: kube-system + - kind: User + name: cloud-controller-manager + --- + apiVersion: v1 + data: + vsphere.conf: 
| + global: + secretName: cloud-provider-vsphere-credentials + secretNamespace: kube-system + thumbprint: "ABCDEFG" + insecureFlag: false + vcenter: + vsphere_server: + datacenters: + - 'SDDC-Datacenter' + secretName: cloud-provider-vsphere-credentials + secretNamespace: kube-system + server: 'vsphere_server' + thumbprint: 'ABCDEFG' + kind: ConfigMap + metadata: + name: vsphere-cloud-config + namespace: kube-system + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: servicecatalog.k8s.io:apiserver-authentication-reader + namespace: kube-system + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: extension-apiserver-authentication-reader + subjects: + - kind: ServiceAccount + name: cloud-controller-manager + namespace: kube-system + - kind: User + name: cloud-controller-manager + --- + apiVersion: v1 + kind: Service + metadata: + labels: + component: cloud-controller-manager + name: cloud-controller-manager + namespace: kube-system + spec: + ports: + - port: 443 + protocol: TCP + targetPort: 43001 + selector: + component: cloud-controller-manager + type: NodePort + --- + apiVersion: apps/v1 + kind: DaemonSet + metadata: + labels: + k8s-app: vsphere-cloud-controller-manager + name: vsphere-cloud-controller-manager + namespace: kube-system + spec: + selector: + matchLabels: + k8s-app: vsphere-cloud-controller-manager + template: + metadata: + labels: + k8s-app: vsphere-cloud-controller-manager + spec: + containers: + - args: + - --v=2 + - --cloud-provider=vsphere + - --cloud-config=/etc/cloud/vsphere.conf + image: public.ecr.aws/l0g8r8j6/kubernetes/cloud-provider-vsphere/cpi/manager:v1.21.0-eks-d-1-21-eks-a-v0.0.0-dev-build.158 + name: vsphere-cloud-controller-manager + resources: + requests: + cpu: 200m + volumeMounts: + - mountPath: /etc/cloud + name: vsphere-config-volume + readOnly: true + hostNetwork: true + serviceAccountName: cloud-controller-manager + tolerations: + - effect: NoSchedule + key: 
node.cloudprovider.kubernetes.io/uninitialized + value: "true" + - effect: NoSchedule + key: node-role.kubernetes.io/master + - effect: NoSchedule + key: node-role.kubernetes.io/control-plane + - effect: NoSchedule + key: node.kubernetes.io/not-ready + volumes: + - configMap: + name: vsphere-cloud-config + name: vsphere-config-volume + updateStrategy: + type: RollingUpdate +kind: ConfigMap +metadata: + name: test-cpi-manifests + namespace: eksa-system diff --git a/pkg/providers/vsphere/testdata/expected_results_bottlerocket_kernel_config_md.yaml b/pkg/providers/vsphere/testdata/expected_results_bottlerocket_kernel_config_md.yaml new file mode 100644 index 000000000000..76b9a4b9b9e9 --- /dev/null +++ b/pkg/providers/vsphere/testdata/expected_results_bottlerocket_kernel_config_md.yaml @@ -0,0 +1,98 @@ +apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 +kind: KubeadmConfigTemplate +metadata: + name: test-md-0-template-1234567890000 + namespace: eksa-system +spec: + template: + spec: + joinConfiguration: + pause: + imageRepository: public.ecr.aws/eks-distro/kubernetes/pause + imageTag: v1.21.2-eks-1-21-4 + bottlerocketBootstrap: + imageRepository: public.ecr.aws/l0g8r8j6/bottlerocket-bootstrap + imageTag: v1-21-4-eks-a-v0.0.0-dev-build.158 + bottlerocket: + kernel: + sysctlSettings: + setting1: foo + setting2: bar + nodeRegistration: + criSocket: /var/run/containerd/containerd.sock + taints: [] + kubeletExtraArgs: + cloud-provider: external + read-only-port: "0" + anonymous-auth: "false" + cgroup-driver: systemd + tls-cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 + name: '{{ ds.meta_data.hostname }}' + preKubeadmCommands: + - hostname "{{ ds.meta_data.hostname }}" + - echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts + - echo "127.0.0.1 localhost" >>/etc/hosts + - echo "127.0.0.1 {{ ds.meta_data.hostname }}" >>/etc/hosts + - echo "{{ ds.meta_data.hostname }}" >/etc/hostname + users: + - name: ec2-user + sshAuthorizedKeys: + - 'ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQC1BK73XhIzjX+meUr7pIYh6RHbvI3tmHeQIXY5lv7aztN1UoX+bhPo3dwo2sfSQn5kuxgQdnxIZ/CTzy0p0GkEYVv3gwspCeurjmu0XmrdmaSGcGxCEWT/65NtvYrQtUE5ELxJ+N/aeZNlK2B7IWANnw/82913asXH4VksV1NYNduP0o1/G4XcwLLSyVFB078q/oEnmvdNIoS61j4/o36HVtENJgYr0idcBvwJdvcGxGnPaqOhx477t+kfJAa5n5dSA5wilIaoXH5i1Tf/HsTCM52L+iNCARvQzJYZhzbWI1MDQwzILtIBEQCJsl2XSqIupleY8CxqQ6jCXt2mhae+wPc3YmbO5rFvr2/EvC57kh3yDs1Nsuj8KOvD78KeeujbR8n8pScm3WDp62HFQ8lEKNdeRNj6kB8WnuaJvPnyZfvzOhwG65/9w13IBl7B1sWxbFnq2rMpm5uHVK7mAmjL0Tt8zoDhcE1YJEnp9xte3/pvmKPkST5Q/9ZtR9P5sI+02jY0fvPkPyC03j2gsPixG7rpOCwpOdbny4dcj0TDeeXJX8er+oVfJuLYz0pNWJcT2raDdFfcqvYA0B0IyNYlj5nWX4RuEcyT3qocLReWPnZojetvAG/H8XwOh7fEVGqHAKOVSnPXCSQJPl6s0H12jPJBDJMTydtYPEszl4/CeQ==' + sudo: ALL=(ALL) NOPASSWD:ALL + format: bottlerocket +--- +apiVersion: cluster.x-k8s.io/v1beta1 +kind: MachineDeployment +metadata: + labels: + cluster.x-k8s.io/cluster-name: test + name: test-md-0 + namespace: eksa-system +spec: + clusterName: test + replicas: 3 + selector: + matchLabels: {} + template: + metadata: + labels: + cluster.x-k8s.io/cluster-name: test + spec: + bootstrap: + configRef: + apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 + kind: KubeadmConfigTemplate + name: test-md-0-template-1234567890000 + clusterName: test + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: VSphereMachineTemplate + name: test-md-0-1234567890000 + version: v1.21.2-eks-1-21-4 +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: VSphereMachineTemplate +metadata: + name: test-md-0-1234567890000 + namespace: eksa-system +spec: + template: + spec: + cloneMode: linkedClone + datacenter: 'SDDC-Datacenter' + datastore: /SDDC-Datacenter/datastore/WorkloadDatastore + diskGiB: 25 + folder: '/SDDC-Datacenter/vm' + memoryMiB: 4096 + network: + devices: + - dhcp4: true + networkName: /SDDC-Datacenter/network/sddc-cgw-network-1 + numCPUs: 3 + resourcePool: '*/Resources' + server: vsphere_server + storagePolicyName: "vSAN Default Storage 
Policy" + template: /SDDC-Datacenter/vm/Templates/bottlerocket-1804-kube-v1.19.6 + thumbprint: 'ABCDEFG' + +--- diff --git a/pkg/providers/vsphere/vsphere_test.go b/pkg/providers/vsphere/vsphere_test.go index b234e249ccf7..8a9040056f86 100644 --- a/pkg/providers/vsphere/vsphere_test.go +++ b/pkg/providers/vsphere/vsphere_test.go @@ -3467,3 +3467,40 @@ func TestProviderGenerateDeploymentFileForBottlerocketWithBottlerocketSettingsCo test.AssertContentToFile(t, string(cp), "testdata/expected_results_bottlerocket_settings_config_cp.yaml") test.AssertContentToFile(t, string(md), "testdata/expected_results_bottlerocket_settings_config_md.yaml") } + +func TestProviderGenerateDeploymentFileForBottlerocketWithKernelConfig(t *testing.T) { + clusterSpecManifest := "cluster_bottlerocket_kernel_config.yaml" + mockCtrl := gomock.NewController(t) + setupContext(t) + kubectl := mocks.NewMockProviderKubectlClient(mockCtrl) + cluster := &types.Cluster{Name: "test"} + clusterSpec := givenClusterSpec(t, clusterSpecManifest) + datacenterConfig := givenDatacenterConfig(t, clusterSpecManifest) + ctx := context.Background() + govc := NewDummyProviderGovcClient() + vscb, _ := newMockVSphereClientBuilder(mockCtrl) + ipValidator := mocks.NewMockIPValidator(mockCtrl) + ipValidator.EXPECT().ValidateControlPlaneIPUniqueness(clusterSpec.Cluster).Return(nil) + v := NewValidator(govc, vscb) + govc.osTag = bottlerocketOSTag + provider := newProvider( + t, + datacenterConfig, + clusterSpec.Cluster, + govc, + kubectl, + v, + ipValidator, + ) + if err := provider.SetupAndValidateCreateCluster(ctx, clusterSpec); err != nil { + t.Fatalf("failed to setup and validate: %v", err) + } + + cp, md, err := provider.GenerateCAPISpecForCreate(context.Background(), cluster, clusterSpec) + if err != nil { + t.Fatalf("failed to generate cluster api spec contents: %v", err) + } + + test.AssertContentToFile(t, string(cp), "testdata/expected_results_bottlerocket_kernel_config_cp.yaml") + test.AssertContentToFile(t, 
string(md), "testdata/expected_results_bottlerocket_kernel_config_md.yaml") +} diff --git a/pkg/registrymirror/registrymirror.go b/pkg/registrymirror/registrymirror.go index 1726de7b971b..3fc7bb9e8aad 100644 --- a/pkg/registrymirror/registrymirror.go +++ b/pkg/registrymirror/registrymirror.go @@ -23,7 +23,6 @@ type RegistryMirror struct { CACertContent string // InsecureSkipVerify skips the registry certificate verification. // Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. - // Currently only supported for snow provider InsecureSkipVerify bool } diff --git a/pkg/validations/cluster.go b/pkg/validations/cluster.go index 303ec0365513..2e7b83716726 100644 --- a/pkg/validations/cluster.go +++ b/pkg/validations/cluster.go @@ -2,6 +2,7 @@ package validations import ( "context" + "errors" "fmt" "github.com/aws/eks-anywhere/pkg/api/v1alpha1" @@ -9,9 +10,30 @@ import ( "github.com/aws/eks-anywhere/pkg/config" "github.com/aws/eks-anywhere/pkg/features" "github.com/aws/eks-anywhere/pkg/logger" + "github.com/aws/eks-anywhere/pkg/providers" "github.com/aws/eks-anywhere/pkg/types" ) +// ValidateOSForRegistryMirror checks if the OS is valid for the provided registry mirror configuration. 
+func ValidateOSForRegistryMirror(clusterSpec *cluster.Spec, provider providers.Provider) error { + cluster := clusterSpec.Cluster + if cluster.Spec.RegistryMirrorConfiguration == nil { + return nil + } + + machineConfigs := provider.MachineConfigs(clusterSpec) + if !cluster.Spec.RegistryMirrorConfiguration.InsecureSkipVerify || machineConfigs == nil { + return nil + } + + for _, mc := range machineConfigs { + if mc.OSFamily() == v1alpha1.Bottlerocket { + return errors.New("InsecureSkipVerify is not supported for bottlerocket") + } + } + return nil +} + func ValidateCertForRegistryMirror(clusterSpec *cluster.Spec, tlsValidator TlsValidator) error { cluster := clusterSpec.Cluster if cluster.Spec.RegistryMirrorConfiguration == nil { @@ -79,3 +101,27 @@ func ValidateK8s126Support(clusterSpec *cluster.Spec) error { } return nil } + +// ValidateManagementClusterBundlesVersion checks if management cluster's bundle version +// is greater than or equal to the bundle version used to upgrade a workload cluster. 
+func ValidateManagementClusterBundlesVersion(ctx context.Context, k KubectlClient, mgmtCluster *types.Cluster, workload *cluster.Spec) error { + cluster, err := k.GetEksaCluster(ctx, mgmtCluster, mgmtCluster.Name) + if err != nil { + return err + } + + if cluster.Spec.BundlesRef == nil { + return fmt.Errorf("management cluster bundlesRef cannot be nil") + } + + mgmtBundles, err := k.GetBundles(ctx, mgmtCluster.KubeconfigFile, cluster.Spec.BundlesRef.Name, cluster.Spec.BundlesRef.Namespace) + if err != nil { + return err + } + + if mgmtBundles.Spec.Number < workload.Bundles.Spec.Number { + return fmt.Errorf("cannot upgrade workload cluster with bundle spec.number %d while management cluster %s is on older bundle spec.number %d", workload.Bundles.Spec.Number, mgmtCluster.Name, mgmtBundles.Spec.Number) + } + + return nil +} diff --git a/pkg/validations/cluster_test.go b/pkg/validations/cluster_test.go index cfc385383cc7..a10c9f61789b 100644 --- a/pkg/validations/cluster_test.go +++ b/pkg/validations/cluster_test.go @@ -13,16 +13,21 @@ import ( "github.com/aws/eks-anywhere/internal/test" anywherev1 "github.com/aws/eks-anywhere/pkg/api/v1alpha1" "github.com/aws/eks-anywhere/pkg/cluster" + "github.com/aws/eks-anywhere/pkg/constants" "github.com/aws/eks-anywhere/pkg/features" + "github.com/aws/eks-anywhere/pkg/providers" + providermocks "github.com/aws/eks-anywhere/pkg/providers/mocks" "github.com/aws/eks-anywhere/pkg/types" "github.com/aws/eks-anywhere/pkg/validations" "github.com/aws/eks-anywhere/pkg/validations/mocks" + releasev1alpha1 "github.com/aws/eks-anywhere/release/api/v1alpha1" ) type clusterTest struct { *WithT tlsValidator *mocks.MockTlsValidator kubectl *mocks.MockKubectlClient + provider *providermocks.MockProvider clusterSpec *cluster.Spec certContent string host, port string @@ -31,9 +36,11 @@ type clusterTest struct { type clusterTestOpt func(t *testing.T, ct *clusterTest) func newTest(t *testing.T, opts ...clusterTestOpt) *clusterTest { + ctrl := 
gomock.NewController(t) cTest := &clusterTest{ WithT: NewWithT(t), clusterSpec: test.NewClusterSpec(), + provider: providermocks.NewMockProvider(ctrl), } for _, opt := range opts { opt(t, cTest) @@ -151,6 +158,94 @@ func TestValidateAuthenticationForRegistryMirrorAuthValid(t *testing.T) { tt.Expect(validations.ValidateAuthenticationForRegistryMirror(tt.clusterSpec)).To(Succeed()) } +func TestValidateOSForRegistryMirrorNoRegistryMirror(t *testing.T) { + tt := newTest(t, withTLS()) + tt.clusterSpec.Cluster.Spec.RegistryMirrorConfiguration = nil + tt.Expect(validations.ValidateOSForRegistryMirror(tt.clusterSpec, tt.provider)).To(Succeed()) +} + +func TestValidateOSForRegistryMirrorInsecureSkipVerifyDisabled(t *testing.T) { + tt := newTest(t, withTLS()) + tt.clusterSpec.Cluster.Spec.RegistryMirrorConfiguration.InsecureSkipVerify = false + tt.provider.EXPECT().MachineConfigs(tt.clusterSpec).Return([]providers.MachineConfig{}) + tt.Expect(validations.ValidateOSForRegistryMirror(tt.clusterSpec, tt.provider)).To(Succeed()) +} + +func TestValidateOSForRegistryMirrorInsecureSkipVerifyEnabled(t *testing.T) { + tests := []struct { + name string + mirrorConfig *anywherev1.RegistryMirrorConfiguration + machineConfigs func() []providers.MachineConfig + wantErr string + }{ + { + name: "insecureSkipVerify no machine configs", + machineConfigs: func() []providers.MachineConfig { + return nil + }, + wantErr: "", + }, + { + name: "insecureSkipVerify on provider with ubuntu", + machineConfigs: func() []providers.MachineConfig { + configs := make([]providers.MachineConfig, 0, 1) + configs = append(configs, &anywherev1.VSphereMachineConfig{ + Spec: anywherev1.VSphereMachineConfigSpec{ + OSFamily: anywherev1.Ubuntu, + }, + }) + return configs + }, + wantErr: "", + }, + { + name: "insecureSkipVerify on provider with bottlerocket", + machineConfigs: func() []providers.MachineConfig { + configs := make([]providers.MachineConfig, 0, 1) + configs = append(configs, 
&anywherev1.SnowMachineConfig{ + Spec: anywherev1.SnowMachineConfigSpec{ + OSFamily: anywherev1.Bottlerocket, + }, + }) + return configs + }, + wantErr: "InsecureSkipVerify is not supported for bottlerocket", + }, + { + name: "insecureSkipVerify on provider with redhat", + machineConfigs: func() []providers.MachineConfig { + configs := make([]providers.MachineConfig, 0, 1) + configs = append(configs, &anywherev1.VSphereMachineConfig{ + Spec: anywherev1.VSphereMachineConfigSpec{ + OSFamily: anywherev1.RedHat, + }, + }) + return configs + }, + wantErr: "", + }, + } + + validationTest := newTest(t, func(t *testing.T, ct *clusterTest) { + ct.clusterSpec = test.NewClusterSpec(func(s *cluster.Spec) { + s.Cluster.Spec.RegistryMirrorConfiguration = &anywherev1.RegistryMirrorConfiguration{ + InsecureSkipVerify: true, + } + }) + }) + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + validationTest.provider.EXPECT().MachineConfigs(validationTest.clusterSpec).Return(test.machineConfigs()) + err := validations.ValidateOSForRegistryMirror(validationTest.clusterSpec, validationTest.provider) + if test.wantErr != "" { + validationTest.Expect(err).To(MatchError(test.wantErr)) + } else { + validationTest.Expect(err).To(BeNil()) + } + }) + } +} + func TestValidateManagementClusterNameValid(t *testing.T) { mgmtName := "test" tt := newTest(t, withKubectl()) @@ -232,3 +327,106 @@ func TestValidateK8s126SupportActive(t *testing.T) { os.Setenv(features.K8s126SupportEnvVar, "true") tt.Expect(validations.ValidateK8s126Support(tt.clusterSpec)).To(Succeed()) } + +func TestValidateManagementClusterBundlesVersion(t *testing.T) { + type testParam struct { + mgmtBundlesName string + mgmtBundlesNumber int + wkBundlesName string + wkBundlesNumber int + wantErr string + errGetEksaCluster error + errGetBundles error + } + + testParams := []testParam{ + { + mgmtBundlesName: "bundles-28", + mgmtBundlesNumber: 28, + wkBundlesName: "bundles-27", + wkBundlesNumber: 27, + wantErr: "", 
+ }, + { + mgmtBundlesName: "bundles-28", + mgmtBundlesNumber: 28, + wkBundlesName: "bundles-29", + wkBundlesNumber: 29, + wantErr: "cannot upgrade workload cluster with bundle spec.number 29 while management cluster management-cluster is on older bundle spec.number 28", + }, + { + mgmtBundlesName: "bundles-28", + mgmtBundlesNumber: 28, + wkBundlesName: "bundles-27", + wkBundlesNumber: 27, + wantErr: "failed to reach cluster", + errGetEksaCluster: errors.New("failed to reach cluster"), + }, + { + mgmtBundlesName: "bundles-28", + mgmtBundlesNumber: 28, + wkBundlesName: "bundles-27", + wkBundlesNumber: 27, + wantErr: "failed to reach cluster", + errGetBundles: errors.New("failed to reach cluster"), + }, + } + + for _, p := range testParams { + tt := newTest(t, withKubectl()) + mgmtName := "management-cluster" + mgmtCluster := managementCluster(mgmtName) + mgmtClusterObject := anywhereCluster(mgmtName) + + mgmtClusterObject.Spec.BundlesRef = &anywherev1.BundlesRef{ + Name: p.mgmtBundlesName, + Namespace: constants.EksaSystemNamespace, + } + + tt.clusterSpec.Config.Cluster.Spec.BundlesRef = &anywherev1.BundlesRef{ + Name: p.wkBundlesName, + Namespace: constants.EksaSystemNamespace, + } + wkBundle := &releasev1alpha1.Bundles{ + Spec: releasev1alpha1.BundlesSpec{ + Number: p.wkBundlesNumber, + }, + } + tt.clusterSpec.Bundles = wkBundle + + mgmtBundle := &releasev1alpha1.Bundles{ + Spec: releasev1alpha1.BundlesSpec{ + Number: p.mgmtBundlesNumber, + }, + } + + ctx := context.Background() + tt.kubectl.EXPECT().GetEksaCluster(ctx, mgmtCluster, mgmtCluster.Name).Return(mgmtClusterObject, p.errGetEksaCluster) + if p.errGetEksaCluster == nil { + tt.kubectl.EXPECT().GetBundles(ctx, mgmtCluster.KubeconfigFile, mgmtClusterObject.Spec.BundlesRef.Name, mgmtClusterObject.Spec.BundlesRef.Namespace).Return(mgmtBundle, p.errGetBundles) + } + + if p.wantErr == "" { + err := validations.ValidateManagementClusterBundlesVersion(ctx, tt.kubectl, mgmtCluster, tt.clusterSpec) + 
tt.Expect(err).To(BeNil()) + } else { + err := validations.ValidateManagementClusterBundlesVersion(ctx, tt.kubectl, mgmtCluster, tt.clusterSpec) + tt.Expect(err.Error()).To(Equal(p.wantErr)) + } + } +} + +func TestValidateManagementClusterBundlesVersionMissingBundlesRef(t *testing.T) { + tt := newTest(t, withKubectl()) + wantErr := "management cluster bundlesRef cannot be nil" + mgmtName := "management-cluster" + mgmtCluster := managementCluster(mgmtName) + mgmtClusterObject := anywhereCluster(mgmtName) + + mgmtClusterObject.Spec.BundlesRef = nil + ctx := context.Background() + tt.kubectl.EXPECT().GetEksaCluster(ctx, mgmtCluster, mgmtCluster.Name).Return(mgmtClusterObject, nil) + + err := validations.ValidateManagementClusterBundlesVersion(ctx, tt.kubectl, mgmtCluster, tt.clusterSpec) + tt.Expect(err.Error()).To(Equal(wantErr)) +} diff --git a/pkg/validations/createvalidations/cluster_test.go b/pkg/validations/createvalidations/cluster_test.go index a2b1ebf07ba8..d59a237b0dfe 100644 --- a/pkg/validations/createvalidations/cluster_test.go +++ b/pkg/validations/createvalidations/cluster_test.go @@ -96,8 +96,8 @@ func TestValidateManagementClusterCRDs(t *testing.T) { cluster.Name = testclustername for _, tc := range tests { t.Run(tc.name, func(tt *testing.T) { - e.EXPECT().Execute(ctx, []string{"get", "crd", capiClustersResourceType, "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, tc.errGetClusterCRD).Times(tc.errGetClusterCRDCount) - e.EXPECT().Execute(ctx, []string{"get", "crd", eksaClusterResourceType, "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, tc.errGetEKSAClusterCRD).Times(tc.errGetEKSAClusterCRDCount) + e.EXPECT().Execute(ctx, []string{"get", "customresourcedefinition", capiClustersResourceType, "--kubeconfig", cluster.KubeconfigFile}).Return(bytes.Buffer{}, tc.errGetClusterCRD).Times(tc.errGetClusterCRDCount) + e.EXPECT().Execute(ctx, []string{"get", "customresourcedefinition", eksaClusterResourceType, "--kubeconfig", 
cluster.KubeconfigFile}).Return(bytes.Buffer{}, tc.errGetEKSAClusterCRD).Times(tc.errGetEKSAClusterCRDCount) err := createvalidations.ValidateManagementCluster(ctx, k, cluster) if tc.wantErr { assert.Error(tt, err, "expected ValidateManagementCluster to return an error", "test", tc.name) diff --git a/pkg/validations/createvalidations/preflightvalidations.go b/pkg/validations/createvalidations/preflightvalidations.go index b54c35eb16c9..a81dc17581dc 100644 --- a/pkg/validations/createvalidations/preflightvalidations.go +++ b/pkg/validations/createvalidations/preflightvalidations.go @@ -21,6 +21,13 @@ func (v *CreateValidations) PreflightValidations(ctx context.Context) []validati } createValidations := []validations.Validation{ + func() *validations.ValidationResult { + return &validations.ValidationResult{ + Name: "validate OS is compatible with registry mirror configuration", + Remediation: "please use a valid OS for your registry mirror configuration", + Err: validations.ValidateOSForRegistryMirror(v.Opts.Spec, v.Opts.Provider), + } + }, func() *validations.ValidationResult { return &validations.ValidationResult{ Name: "validate certificate for registry mirror", @@ -84,6 +91,13 @@ func (v *CreateValidations) PreflightValidations(ctx context.Context) []validati Err: validations.ValidateManagementClusterName(ctx, k, v.Opts.ManagementCluster, v.Opts.Spec.Cluster.Spec.ManagementCluster.Name), } }, + func() *validations.ValidationResult { + return &validations.ValidationResult{ + Name: "validate management cluster bundle version compatibility", + Remediation: fmt.Sprintf("upgrade management cluster %s before creating workload cluster %s", v.Opts.Spec.Cluster.ManagedBy(), v.Opts.WorkloadCluster.Name), + Err: validations.ValidateManagementClusterBundlesVersion(ctx, k, v.Opts.ManagementCluster, v.Opts.Spec), + } + }, ) } diff --git a/pkg/validations/createvalidations/preflightvalidations_test.go b/pkg/validations/createvalidations/preflightvalidations_test.go index 
acab76bf9982..76b024cfc81e 100644 --- a/pkg/validations/createvalidations/preflightvalidations_test.go +++ b/pkg/validations/createvalidations/preflightvalidations_test.go @@ -10,11 +10,14 @@ import ( "github.com/aws/eks-anywhere/internal/test" "github.com/aws/eks-anywhere/pkg/api/v1alpha1" + anywherev1 "github.com/aws/eks-anywhere/pkg/api/v1alpha1" "github.com/aws/eks-anywhere/pkg/cluster" + "github.com/aws/eks-anywhere/pkg/constants" "github.com/aws/eks-anywhere/pkg/types" "github.com/aws/eks-anywhere/pkg/validations" "github.com/aws/eks-anywhere/pkg/validations/createvalidations" "github.com/aws/eks-anywhere/pkg/validations/mocks" + releasev1alpha1 "github.com/aws/eks-anywhere/release/api/v1alpha1" ) type preflightValidationsTest struct { @@ -56,13 +59,12 @@ func TestPreFlightValidationsGitProvider(t *testing.T) { func TestPreFlightValidationsWorkloadCluster(t *testing.T) { tt := newPreflightValidationsTest(t) - tt.c.Opts.Spec.Cluster.SetManagedBy("mgmt-cluster") - tt.c.Opts.Spec.Cluster.Spec.ManagementCluster.Name = "mgmt-cluster" + mgmtClusterName := "mgmt-cluster" + tt.c.Opts.Spec.Cluster.SetManagedBy(mgmtClusterName) + tt.c.Opts.Spec.Cluster.Spec.ManagementCluster.Name = mgmtClusterName + tt.c.Opts.ManagementCluster.Name = mgmtClusterName - tt.k.EXPECT().GetClusters(tt.ctx, tt.c.Opts.WorkloadCluster).Return(nil, nil) - tt.k.EXPECT().ValidateClustersCRD(tt.ctx, tt.c.Opts.WorkloadCluster).Return(nil) - tt.k.EXPECT().ValidateEKSAClustersCRD(tt.ctx, tt.c.Opts.WorkloadCluster).Return(nil) - tt.k.EXPECT().GetEksaCluster(tt.ctx, tt.c.Opts.ManagementCluster, "mgmt-cluster").Return(&v1alpha1.Cluster{ + mgmt := &v1alpha1.Cluster{ ObjectMeta: v1.ObjectMeta{ Name: "mgmt-cluster", }, @@ -70,8 +72,25 @@ func TestPreFlightValidationsWorkloadCluster(t *testing.T) { ManagementCluster: v1alpha1.ManagementCluster{ Name: "mgmt-cluster", }, + BundlesRef: &anywherev1.BundlesRef{ + Name: "bundles-29", + Namespace: constants.EksaSystemNamespace, + }, + }, + } + + mgmtBundle := 
&releasev1alpha1.Bundles{ + Spec: releasev1alpha1.BundlesSpec{ + Number: tt.c.Opts.Spec.Bundles.Spec.Number + 1, }, - }, nil) + } + + tt.k.EXPECT().GetClusters(tt.ctx, tt.c.Opts.WorkloadCluster).Return(nil, nil) + tt.k.EXPECT().ValidateClustersCRD(tt.ctx, tt.c.Opts.WorkloadCluster).Return(nil) + tt.k.EXPECT().ValidateEKSAClustersCRD(tt.ctx, tt.c.Opts.WorkloadCluster).Return(nil) + tt.k.EXPECT().GetEksaCluster(tt.ctx, tt.c.Opts.ManagementCluster, mgmtClusterName).Return(mgmt, nil) + tt.k.EXPECT().GetEksaCluster(tt.ctx, tt.c.Opts.ManagementCluster, mgmtClusterName).Return(mgmt, nil) + tt.k.EXPECT().GetBundles(tt.ctx, tt.c.Opts.ManagementCluster.KubeconfigFile, mgmt.Spec.BundlesRef.Name, mgmt.Spec.BundlesRef.Namespace).Return(mgmtBundle, nil) tt.Expect(validations.ProcessValidationResults(tt.c.PreflightValidations(tt.ctx))).To(Succeed()) } diff --git a/pkg/validations/kubectl.go b/pkg/validations/kubectl.go index ab5b4af08bd4..f3edad7432f1 100644 --- a/pkg/validations/kubectl.go +++ b/pkg/validations/kubectl.go @@ -11,6 +11,7 @@ import ( "github.com/aws/eks-anywhere/pkg/executables" mockexecutables "github.com/aws/eks-anywhere/pkg/executables/mocks" "github.com/aws/eks-anywhere/pkg/types" + releasev1alpha1 "github.com/aws/eks-anywhere/release/api/v1alpha1" ) type KubectlClient interface { @@ -22,6 +23,7 @@ type KubectlClient interface { Version(ctx context.Context, cluster *types.Cluster) (*executables.VersionResponse, error) GetClusters(ctx context.Context, cluster *types.Cluster) ([]types.CAPICluster, error) GetEksaCluster(ctx context.Context, cluster *types.Cluster, clusterName string) (*v1alpha1.Cluster, error) + GetBundles(ctx context.Context, kubeconfigFile, name, namespace string) (*releasev1alpha1.Bundles, error) GetEksaGitOpsConfig(ctx context.Context, gitOpsConfigName string, kubeconfigFile string, namespace string) (*v1alpha1.GitOpsConfig, error) GetEksaFluxConfig(ctx context.Context, fluxConfigName string, kubeconfigFile string, namespace string) 
(*v1alpha1.FluxConfig, error) GetEksaOIDCConfig(ctx context.Context, oidcConfigName string, kubeconfigFile string, namespace string) (*v1alpha1.OIDCConfig, error) diff --git a/pkg/validations/mocks/kubectl.go b/pkg/validations/mocks/kubectl.go index 79f1d6fbf895..2346c8e0cba0 100644 --- a/pkg/validations/mocks/kubectl.go +++ b/pkg/validations/mocks/kubectl.go @@ -11,6 +11,7 @@ import ( v1alpha1 "github.com/aws/eks-anywhere/pkg/api/v1alpha1" executables "github.com/aws/eks-anywhere/pkg/executables" types "github.com/aws/eks-anywhere/pkg/types" + v1alpha10 "github.com/aws/eks-anywhere/release/api/v1alpha1" gomock "github.com/golang/mock/gomock" runtime "k8s.io/apimachinery/pkg/runtime" ) @@ -38,6 +39,21 @@ func (m *MockKubectlClient) EXPECT() *MockKubectlClientMockRecorder { return m.recorder } +// GetBundles mocks base method. +func (m *MockKubectlClient) GetBundles(ctx context.Context, kubeconfigFile, name, namespace string) (*v1alpha10.Bundles, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "GetBundles", ctx, kubeconfigFile, name, namespace) + ret0, _ := ret[0].(*v1alpha10.Bundles) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// GetBundles indicates an expected call of GetBundles. +func (mr *MockKubectlClientMockRecorder) GetBundles(ctx, kubeconfigFile, name, namespace interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetBundles", reflect.TypeOf((*MockKubectlClient)(nil).GetBundles), ctx, kubeconfigFile, name, namespace) +} + // GetClusters mocks base method. 
func (m *MockKubectlClient) GetClusters(ctx context.Context, cluster *types.Cluster) ([]types.CAPICluster, error) { m.ctrl.T.Helper() diff --git a/pkg/validations/upgradevalidations/immutableFields.go b/pkg/validations/upgradevalidations/immutableFields.go index c8ddaa5ec956..c1eae38c0859 100644 --- a/pkg/validations/upgradevalidations/immutableFields.go +++ b/pkg/validations/upgradevalidations/immutableFields.go @@ -57,6 +57,15 @@ func ValidateImmutableFields(ctx context.Context, k validations.KubectlClient, c return fmt.Errorf("spec.clusterNetwork.CNI/CNIConfig is immutable") } + // We don't want users to be able to toggle off SkipUpgrade until we've understood the + // implications so we are temporarily disallowing it. + + oCNI := prevSpec.Spec.ClusterNetwork.CNIConfig + nCNI := spec.Cluster.Spec.ClusterNetwork.CNIConfig + if oCNI != nil && oCNI.Cilium != nil && !oCNI.Cilium.IsManaged() && nCNI.Cilium.IsManaged() { + return fmt.Errorf("spec.clusterNetwork.cniConfig.cilium.skipUpgrade cannot be toggled off") + } + if !nSpec.ProxyConfiguration.Equal(oSpec.ProxyConfiguration) { return fmt.Errorf("spec.proxyConfiguration is immutable") } diff --git a/pkg/validations/upgradevalidations/immutableFields_test.go b/pkg/validations/upgradevalidations/immutableFields_test.go index b449a6be09a3..e9c7bdfcd968 100644 --- a/pkg/validations/upgradevalidations/immutableFields_test.go +++ b/pkg/validations/upgradevalidations/immutableFields_test.go @@ -11,9 +11,12 @@ import ( "github.com/aws/eks-anywhere/internal/test" "github.com/aws/eks-anywhere/pkg/api/v1alpha1" "github.com/aws/eks-anywhere/pkg/cluster" + pmock "github.com/aws/eks-anywhere/pkg/providers/mocks" "github.com/aws/eks-anywhere/pkg/types" + "github.com/aws/eks-anywhere/pkg/utils/ptr" "github.com/aws/eks-anywhere/pkg/validations/mocks" "github.com/aws/eks-anywhere/pkg/validations/upgradevalidations" + releasev1alpha1 "github.com/aws/eks-anywhere/release/api/v1alpha1" ) func TestValidateGitOpsImmutableFieldsRef(t 
*testing.T) { @@ -242,3 +245,118 @@ func TestValidateGitOpsImmutableFieldsFluxConfig(t *testing.T) { }) } } + +func TestValidateImmutableFields(t *testing.T) { + tests := []struct { + Name string + ConfigureCurrent func(current *v1alpha1.Cluster) + ConfigureDesired func(desired *v1alpha1.Cluster) + ExpectedError string + }{ + { + Name: "Toggle Spec.ClusterNetwork.CNIConfig.Cilium.SkipUpgrade on", + ConfigureCurrent: func(current *v1alpha1.Cluster) { + current.Spec.ClusterNetwork.CNIConfig = &v1alpha1.CNIConfig{ + Cilium: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + } + }, + ConfigureDesired: func(desired *v1alpha1.Cluster) { + desired.Spec.ClusterNetwork.CNIConfig = &v1alpha1.CNIConfig{ + Cilium: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + } + }, + }, + { + Name: "Spec.ClusterNetwork.CNIConfig.Cilium.SkipUpgrade unset", + ConfigureCurrent: func(current *v1alpha1.Cluster) { + current.Spec.ClusterNetwork.CNIConfig = &v1alpha1.CNIConfig{ + Cilium: &v1alpha1.CiliumConfig{}, + } + }, + ConfigureDesired: func(desired *v1alpha1.Cluster) { + desired.Spec.ClusterNetwork.CNIConfig = &v1alpha1.CNIConfig{ + Cilium: &v1alpha1.CiliumConfig{}, + } + }, + }, + { + Name: "Toggle Spec.ClusterNetwork.CNIConfig.Cilium.SkipUpgrade off", + ConfigureCurrent: func(current *v1alpha1.Cluster) { + current.Spec.ClusterNetwork.CNIConfig = &v1alpha1.CNIConfig{ + Cilium: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(true), + }, + } + }, + ConfigureDesired: func(desired *v1alpha1.Cluster) { + desired.Spec.ClusterNetwork.CNIConfig = &v1alpha1.CNIConfig{ + Cilium: &v1alpha1.CiliumConfig{ + SkipUpgrade: ptr.Bool(false), + }, + } + }, + ExpectedError: "spec.clusterNetwork.cniConfig.cilium.skipUpgrade cannot be toggled off", + }, + } + + clstr := &types.Cluster{} + + for _, tc := range tests { + t.Run(tc.Name, func(t *testing.T) { + g := NewWithT(t) + ctrl := gomock.NewController(t) + + current := &cluster.Spec{ + Config: &cluster.Config{ + Cluster: 
&v1alpha1.Cluster{ + Spec: v1alpha1.ClusterSpec{ + WorkerNodeGroupConfigurations: []v1alpha1.WorkerNodeGroupConfiguration{{}}, + }, + }, + }, + VersionsBundle: &cluster.VersionsBundle{ + VersionsBundle: &releasev1alpha1.VersionsBundle{}, + KubeDistro: &cluster.KubeDistro{}, + }, + Bundles: &releasev1alpha1.Bundles{}, + } + desired := current.DeepCopy() + + tc.ConfigureCurrent(current.Config.Cluster) + tc.ConfigureDesired(desired.Config.Cluster) + + client := mocks.NewMockKubectlClient(ctrl) + client.EXPECT(). + GetEksaCluster(gomock.Any(), clstr, current.Cluster.Name). + Return(current.Cluster, nil) + + provider := pmock.NewMockProvider(ctrl) + + // The algorithm calls out to the provider to validate the new spec only if it finds + // no errors in the generic validation first. + if tc.ExpectedError == "" { + provider.EXPECT(). + ValidateNewSpec(gomock.Any(), gomock.Any(), gomock.Any()). + Return(nil) + } + + err := upgradevalidations.ValidateImmutableFields( + context.Background(), + client, + clstr, + desired, + provider, + ) + + if tc.ExpectedError == "" { + g.Expect(err).To(Succeed()) + } else { + g.Expect(err).To(MatchError(ContainSubstring(tc.ExpectedError))) + } + }) + } +} diff --git a/pkg/validations/upgradevalidations/preflightvalidations.go b/pkg/validations/upgradevalidations/preflightvalidations.go index 67c54900543d..d123cf479b3b 100644 --- a/pkg/validations/upgradevalidations/preflightvalidations.go +++ b/pkg/validations/upgradevalidations/preflightvalidations.go @@ -22,6 +22,13 @@ func (u *UpgradeValidations) PreflightValidations(ctx context.Context) []validat KubeconfigFile: u.Opts.ManagementCluster.KubeconfigFile, } upgradeValidations := []validations.Validation{ + func() *validations.ValidationResult { + return &validations.ValidationResult{ + Name: "validate OS is compatible with registry mirror configuration", + Remediation: "please use a valid OS for your registry mirror configuration", + Err: 
validations.ValidateOSForRegistryMirror(u.Opts.Spec, u.Opts.Provider), + } + }, func() *validations.ValidationResult { return &validations.ValidationResult{ Name: "validate certificate for registry mirror", @@ -94,5 +101,17 @@ func (u *UpgradeValidations) PreflightValidations(ctx context.Context) []validat } }, } + + if u.Opts.Spec.Cluster.IsManaged() { + upgradeValidations = append( + upgradeValidations, + func() *validations.ValidationResult { + return &validations.ValidationResult{ + Name: "validate management cluster bundle version compatibility", + Remediation: fmt.Sprintf("upgrade management cluster %s before upgrading workload cluster %s", u.Opts.Spec.Cluster.ManagedBy(), u.Opts.WorkloadCluster.Name), + Err: validations.ValidateManagementClusterBundlesVersion(ctx, k, u.Opts.ManagementCluster, u.Opts.Spec), + } + }) + } return upgradeValidations } diff --git a/pkg/validations/upgradevalidations/preflightvalidations_test.go b/pkg/validations/upgradevalidations/preflightvalidations_test.go index 646da419aae4..9ec50c93cb83 100644 --- a/pkg/validations/upgradevalidations/preflightvalidations_test.go +++ b/pkg/validations/upgradevalidations/preflightvalidations_test.go @@ -11,8 +11,10 @@ import ( "github.com/aws/eks-anywhere/internal/test" "github.com/aws/eks-anywhere/pkg/api/v1alpha1" + anywherev1 "github.com/aws/eks-anywhere/pkg/api/v1alpha1" "github.com/aws/eks-anywhere/pkg/cluster" "github.com/aws/eks-anywhere/pkg/config" + "github.com/aws/eks-anywhere/pkg/constants" "github.com/aws/eks-anywhere/pkg/executables" "github.com/aws/eks-anywhere/pkg/filewriter" filewritermocks "github.com/aws/eks-anywhere/pkg/filewriter/mocks" @@ -25,6 +27,7 @@ import ( "github.com/aws/eks-anywhere/pkg/validations" "github.com/aws/eks-anywhere/pkg/validations/mocks" "github.com/aws/eks-anywhere/pkg/validations/upgradevalidations" + releasev1alpha1 "github.com/aws/eks-anywhere/release/api/v1alpha1" ) const ( @@ -1135,6 +1138,10 @@ func TestPreflightValidationsVsphere(t *testing.T) { 
"noproxy2", }, } + s.Cluster.Spec.BundlesRef = &anywherev1.BundlesRef{ + Name: "bundles-28", + Namespace: constants.EksaSystemNamespace, + } s.GitOpsConfig = defaultGitOps s.OIDCConfig = defaultOIDC @@ -1177,6 +1184,11 @@ func TestPreflightValidationsVsphere(t *testing.T) { GitVersion: tc.clusterVersion, }, } + bundlesResponse := &releasev1alpha1.Bundles{ + Spec: releasev1alpha1.BundlesSpec{ + Number: 28, + }, + } provider.EXPECT().DatacenterConfig(clusterSpec).Return(existingProviderSpec).MaxTimes(1) provider.EXPECT().ValidateNewSpec(ctx, workloadCluster, clusterSpec).Return(nil).MaxTimes(1) @@ -1187,6 +1199,10 @@ func TestPreflightValidationsVsphere(t *testing.T) { k.EXPECT().ValidateClustersCRD(ctx, workloadCluster).Return(tc.crdResponse) k.EXPECT().GetClusters(ctx, workloadCluster).Return(tc.getClusterResponse, nil) k.EXPECT().GetEksaCluster(ctx, workloadCluster, clusterSpec.Cluster.Name).Return(existingClusterSpec.Cluster, nil) + if opts.Spec.Cluster.IsManaged() { + k.EXPECT().GetEksaCluster(ctx, workloadCluster, workloadCluster.Name).Return(existingClusterSpec.Cluster, nil) + k.EXPECT().GetBundles(ctx, workloadCluster.KubeconfigFile, existingClusterSpec.Cluster.Spec.BundlesRef.Name, existingClusterSpec.Cluster.Spec.BundlesRef.Namespace).Return(bundlesResponse, nil) + } k.EXPECT().GetEksaGitOpsConfig(ctx, clusterSpec.Cluster.Spec.GitOpsRef.Name, gomock.Any(), gomock.Any()).Return(existingClusterSpec.GitOpsConfig, nil).MaxTimes(1) k.EXPECT().GetEksaOIDCConfig(ctx, clusterSpec.Cluster.Spec.IdentityProviderRefs[1].Name, gomock.Any(), gomock.Any()).Return(existingClusterSpec.OIDCConfig, nil).MaxTimes(1) k.EXPECT().GetEksaAWSIamConfig(ctx, clusterSpec.Cluster.Spec.IdentityProviderRefs[0].Name, gomock.Any(), gomock.Any()).Return(existingClusterSpec.AWSIamConfig, nil).MaxTimes(1) diff --git a/release/pkg/test/testdata/main-bundle-release.yaml b/release/pkg/test/testdata/main-bundle-release.yaml index 3eae0ba609f3..c17279f3c269 100644 --- 
a/release/pkg/test/testdata/main-bundle-release.yaml +++ b/release/pkg/test/testdata/main-bundle-release.yaml @@ -37,18 +37,18 @@ spec: arch: - amd64 description: Container image for bottlerocket-admin image - imageDigest: sha256:5af314144e2e349b0fdb504dba5c3bcac6f7594ea4051ce1365dd69a923cb57f + imageDigest: sha256:994f332ac4b2fb2a5c3acca75501e5c5ef34cc1bfdfcffb3d0af16ceb1bdb1ca name: bottlerocket-admin os: linux - uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.9.4 + uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.10.0 control: arch: - amd64 description: Container image for bottlerocket-control image - imageDigest: sha256:d3dfdff919f32a5317ec82d06881c4151eaaf0d946112731674860a4229a66a2 + imageDigest: sha256:a91f9e9e7ab7056fab84c85c0d8d6ad38b16f9c9d3d01baecdcec856bdd19c8e name: bottlerocket-control os: linux - uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.0 + uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.1 kubeadmBootstrap: arch: - amd64 @@ -57,7 +57,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-21-26-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-22-23-eks-a-v0.0.0-dev-build.1 certManager: acmesolver: arch: @@ -244,799 +244,13 @@ spec: bottlerocket: arch: - amd64 - description: Bottlerocket Ami image for EKS-D 1-21-26 release - name: bottlerocket-v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.0-amd64.img.gz + description: Bottlerocket Ami image for EKS-D 1-22-23 release + name: bottlerocket-v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.0-amd64.img.gz os: linux osName: bottlerocket sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: 
https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-21/1-21-26/bottlerocket-v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.0-amd64.img.gz - channel: 1-21 - components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml - crictl: - arch: - - amd64 - description: cri-tools tarball for linux/amd64 - name: cri-tools-v0.0.0-dev-build.0-linux-amd64.tar.gz - os: linux - sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cri-tools/v1.26.0/cri-tools-v0.0.0-dev-build.0-linux-amd64.tar.gz - etcdadm: - arch: - - amd64 - description: etcdadm tarball for linux/amd64 - name: etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz - os: linux - sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/5b496a72af3d80d64a16a650c85ce9a5882bc014/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz - gitCommit: 0123456789abcdef0123456789abcdef01234567 - imagebuilder: - arch: - - amd64 - description: image-builder tarball for linux/amd64 - name: image-builder-v0.0.0-dev-build.0-linux-amd64.tar.gz - os: linux - sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/image-builder/0.1.2/image-builder-v0.0.0-dev-build.0-linux-amd64.tar.gz - kindNode: - arch: - - amd64 - - arm64 - description: Container image for kind-node image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef 
- name: kind-node - os: linux - uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.1 - kubeVersion: v1.21.14 - manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-21/kubernetes-1-21-eks-26.yaml - name: kubernetes-1-21-eks-26 - ova: - bottlerocket: - arch: - - amd64 - description: Bottlerocket Ova image for EKS-D 1-21-26 release - name: bottlerocket-v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.0-amd64.ova - os: linux - osName: bottlerocket - sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-21/1-21-26/bottlerocket-v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.0-amd64.ova - raw: - bottlerocket: - arch: - - amd64 - description: Bottlerocket Raw image for EKS-D 1-21-26 release - name: bottlerocket-v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.0-amd64.img.gz - os: linux - osName: bottlerocket - sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-21/1-21-26/bottlerocket-v1.21.14-eks-d-1-21-26-eks-a-v0.0.0-dev-build.0-amd64.img.gz - eksa: - cliTools: - arch: - - amd64 - - arm64 - description: Container image for eks-anywhere-cli-tools image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: eks-anywhere-cli-tools - os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.4-eks-a-v0.0.0-dev-build.1 - clusterController: - arch: - - amd64 - - arm64 - description: Container image for eks-anywhere-cluster-controller image - imageDigest: 
sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: eks-anywhere-cluster-controller - os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.4-eks-a-v0.0.0-dev-build.1 - components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.4/eksa-components.yaml - diagnosticCollector: - arch: - - amd64 - - arm64 - description: Container image for eks-anywhere-diagnostic-collector image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: eks-anywhere-diagnostic-collector - os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.4-eks-a-v0.0.0-dev-build.1 - version: v0.0.0-dev+build.0+abcdef1 - etcdadmBootstrap: - components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/bootstrap-components.yaml - controller: - arch: - - amd64 - - arm64 - description: Container image for etcdadm-bootstrap-provider image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: etcdadm-bootstrap-provider - os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-eks-a-v0.0.0-dev-build.1 - kubeProxy: - arch: - - amd64 - - arm64 - description: Container image for kube-rbac-proxy image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kube-rbac-proxy - os: linux - uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 - metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/metadata.yaml - version: v1.0.7+abcdef1 - etcdadmController: - components: - uri: 
https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/bootstrap-components.yaml - controller: - arch: - - amd64 - - arm64 - description: Container image for etcdadm-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: etcdadm-controller - os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-eks-a-v0.0.0-dev-build.1 - kubeProxy: - arch: - - amd64 - - arm64 - description: Container image for kube-rbac-proxy image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kube-rbac-proxy - os: linux - uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 - metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/metadata.yaml - version: v1.0.6+abcdef1 - flux: - helmController: - arch: - - amd64 - - arm64 - description: Container image for helm-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: helm-controller - os: linux - uri: public.ecr.aws/release-container-registry/fluxcd/helm-controller:v0.22.2-eks-a-v0.0.0-dev-build.1 - kustomizeController: - arch: - - amd64 - - arm64 - description: Container image for kustomize-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kustomize-controller - os: linux - uri: public.ecr.aws/release-container-registry/fluxcd/kustomize-controller:v0.26.3-eks-a-v0.0.0-dev-build.1 - notificationController: - arch: - - amd64 - - arm64 - description: Container image for notification-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: notification-controller - os: linux - uri: 
public.ecr.aws/release-container-registry/fluxcd/notification-controller:v0.24.1-eks-a-v0.0.0-dev-build.1 - sourceController: - arch: - - amd64 - - arm64 - description: Container image for source-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: source-controller - os: linux - uri: public.ecr.aws/release-container-registry/fluxcd/source-controller:v0.25.9-eks-a-v0.0.0-dev-build.1 - version: v0.31.3+abcdef1 - haproxy: - image: - arch: - - amd64 - - arm64 - description: Container image for haproxy image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: haproxy - os: linux - uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/haproxy:v0.17.0-eks-a-v0.0.0-dev-build.1 - kindnetd: - manifest: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/kind/manifests/kindnetd/v0.17.0/kindnetd.yaml - version: v0.17.0+abcdef1 - kubeVersion: "1.21" - nutanix: - clusterAPIController: - arch: - - amd64 - - arm64 - description: Container image for cluster-api-provider-nutanix image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: cluster-api-provider-nutanix - os: linux - uri: public.ecr.aws/release-container-registry/nutanix-cloud-native/cluster-api-provider-nutanix:v1.1.3-eks-a-v0.0.0-dev-build.1 - clusterTemplate: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-nutanix/manifests/infrastructure-nutanix/v1.1.3/cluster-template.yaml - components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-nutanix/manifests/infrastructure-nutanix/v1.1.3/infrastructure-components.yaml - kubeVip: - arch: - - amd64 - - arm64 - description: Container image for kube-vip image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kube-vip - os: linux - uri: 
public.ecr.aws/release-container-registry/kube-vip/kube-vip:v0.5.5-eks-a-v0.0.0-dev-build.1 - metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-nutanix/manifests/infrastructure-nutanix/v1.1.3/metadata.yaml - version: v1.1.3+abcdef1 - packageController: - helmChart: - description: Helm chart for eks-anywhere-packages - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: eks-anywhere-packages - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.6-eks-a-v0.0.0-dev-build.1 - packageController: - arch: - - amd64 - - arm64 - description: Container image for eks-anywhere-packages image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: eks-anywhere-packages - os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.6-eks-a-v0.0.0-dev-build.1 - tokenRefresher: - arch: - - amd64 - - arm64 - description: Container image for ecr-token-refresher image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: ecr-token-refresher - os: linux - uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.6-eks-a-v0.0.0-dev-build.1 - version: v0.3.6+abcdef1 - snow: - bottlerocketBootstrapSnow: - arch: - - amd64 - - arm64 - description: Container image for bottlerocket-bootstrap-snow image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: bottlerocket-bootstrap-snow - os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-21-26-eks-a-v0.0.0-dev-build.1 - components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/infrastructure-components.yaml - kubeVip: - arch: - - amd64 - - arm64 - description: Container image for kube-vip image - imageDigest: 
sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kube-vip - os: linux - uri: public.ecr.aws/release-container-registry/kube-vip/kube-vip:v0.5.5-eks-a-v0.0.0-dev-build.1 - manager: - arch: - - amd64 - - arm64 - description: Container image for cluster-api-snow-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: cluster-api-snow-controller - os: linux - uri: public.ecr.aws/release-container-registry/aws/cluster-api-provider-aws-snow/manager:v0.1.25-eks-a-v0.0.0-dev-build.1 - metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/metadata.yaml - version: v0.1.25+abcdef1 - tinkerbell: - clusterAPIController: - arch: - - amd64 - - arm64 - description: Container image for cluster-api-provider-tinkerbell image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: cluster-api-provider-tinkerbell - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/cluster-api-provider-tinkerbell:v0.4.0-eks-a-v0.0.0-dev-build.1 - clusterTemplate: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-tinkerbell/manifests/infrastructure-tinkerbell/v0.4.0/cluster-template.yaml - components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-tinkerbell/manifests/infrastructure-tinkerbell/v0.4.0/infrastructure-components.yaml - envoy: - arch: - - amd64 - - arm64 - description: Container image for envoy image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: envoy - os: linux - uri: public.ecr.aws/release-container-registry/envoyproxy/envoy:v1.22.2.0-prod-eks-a-v0.0.0-dev-build.1 - kubeVip: - arch: - - amd64 - - arm64 - description: Container image for kube-vip image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kube-vip 
- os: linux - uri: public.ecr.aws/release-container-registry/kube-vip/kube-vip:v0.5.5-eks-a-v0.0.0-dev-build.1 - metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-tinkerbell/manifests/infrastructure-tinkerbell/v0.4.0/metadata.yaml - tinkerbellStack: - actions: - cexec: - arch: - - amd64 - - arm64 - description: Container image for cexec image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: cexec - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-v0.0.0-dev-build.1 - imageToDisk: - arch: - - amd64 - - arm64 - description: Container image for image2disk image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: image2disk - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-v0.0.0-dev-build.1 - kexec: - arch: - - amd64 - - arm64 - description: Container image for kexec image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: kexec - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hub/kexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-v0.0.0-dev-build.1 - ociToDisk: - arch: - - amd64 - - arm64 - description: Container image for oci2disk image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: oci2disk - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hub/oci2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-v0.0.0-dev-build.1 - reboot: - arch: - - amd64 - - arm64 - description: Container image for reboot image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: reboot - os: linux - uri: 
public.ecr.aws/release-container-registry/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-v0.0.0-dev-build.1 - writeFile: - arch: - - amd64 - - arm64 - description: Container image for writefile image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: writefile - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-v0.0.0-dev-build.1 - boots: - arch: - - amd64 - - arm64 - description: Container image for boots image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: boots - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/boots:v0.8.1-eks-a-v0.0.0-dev-build.1 - hegel: - arch: - - amd64 - - arm64 - description: Container image for hegel image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: hegel - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hegel:v0.10.1-eks-a-v0.0.0-dev-build.1 - hook: - bootkit: - arch: - - amd64 - - arm64 - description: Container image for hook-bootkit image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: hook-bootkit - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hook-bootkit:03a67729d895635fe3c612e4feca3400b9336cc9-eks-a-v0.0.0-dev-build.1 - docker: - arch: - - amd64 - - arm64 - description: Container image for hook-docker image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: hook-docker - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hook-docker:03a67729d895635fe3c612e4feca3400b9336cc9-eks-a-v0.0.0-dev-build.1 - initramfs: - amd: - description: Tinkerbell operating system installation environment (osie) - component - name: initramfs-x86_64 - uri: 
https://release-bucket/artifacts/v0.0.0-dev-build.0/hook/03a67729d895635fe3c612e4feca3400b9336cc9/initramfs-x86_64 - arm: - description: Tinkerbell operating system installation environment (osie) - component - name: initramfs-aarch64 - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/hook/03a67729d895635fe3c612e4feca3400b9336cc9/initramfs-aarch64 - kernel: - arch: - - amd64 - - arm64 - description: Container image for hook-kernel image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: hook-kernel - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/hook-kernel:03a67729d895635fe3c612e4feca3400b9336cc9-eks-a-v0.0.0-dev-build.1 - vmlinuz: - amd: - description: Tinkerbell operating system installation environment (osie) - component - name: vmlinuz-x86_64 - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/hook/03a67729d895635fe3c612e4feca3400b9336cc9/vmlinuz-x86_64 - arm: - description: Tinkerbell operating system installation environment (osie) - component - name: vmlinuz-aarch64 - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/hook/03a67729d895635fe3c612e4feca3400b9336cc9/vmlinuz-aarch64 - rufio: - arch: - - amd64 - - arm64 - description: Container image for rufio image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: rufio - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.2.1-eks-a-v0.0.0-dev-build.1 - tink: - tinkController: - arch: - - amd64 - - arm64 - description: Container image for tink-controller image - imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - name: tink-controller - os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/tink/tink-controller:v0.8.0-eks-a-v0.0.0-dev-build.1 - tinkServer: - arch: - - amd64 - - arm64 - description: Container image for tink-server image - imageDigest: 
sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-            name: tink-server
-            os: linux
-            uri: public.ecr.aws/release-container-registry/tinkerbell/tink/tink-server:v0.8.0-eks-a-v0.0.0-dev-build.1
-          tinkWorker:
-            arch:
-            - amd64
-            - arm64
-            description: Container image for tink-worker image
-            imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-            name: tink-worker
-            os: linux
-            uri: public.ecr.aws/release-container-registry/tinkerbell/tink/tink-worker:v0.8.0-eks-a-v0.0.0-dev-build.1
-        tinkerbellChart:
-          description: Helm chart for tinkerbell-chart
-          imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          name: tinkerbell-chart
-          uri: public.ecr.aws/release-container-registry/tinkerbell/tinkerbell-chart:0.2.1-eks-a-v0.0.0-dev-build.1
-      version: v0.4.0+abcdef1
-    vSphere:
-      clusterAPIController:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cluster-api-provider-vsphere image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cluster-api-provider-vsphere
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/cluster-api-provider-vsphere/release/manager:v1.3.1-eks-a-v0.0.0-dev-build.1
-      clusterTemplate:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-vsphere/manifests/infrastructure-vsphere/v1.3.1/cluster-template.yaml
-      components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-vsphere/manifests/infrastructure-vsphere/v1.3.1/infrastructure-components.yaml
-      driver:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for vsphere-csi-driver image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: vsphere-csi-driver
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/vsphere-csi-driver/csi/driver:v2.2.0-eks-a-v0.0.0-dev-build.1
-      kubeProxy:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-rbac-proxy image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-rbac-proxy
-        os: linux
-        uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
-      kubeVip:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-vip image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-vip
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kube-vip/kube-vip:v0.5.5-eks-a-v0.0.0-dev-build.1
-      manager:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cloud-provider-vsphere image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cloud-provider-vsphere
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes/cloud-provider-vsphere/cpi/manager:v1.21.3-eks-d-1-21-eks-a-v0.0.0-dev-build.1
-      metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-vsphere/manifests/infrastructure-vsphere/v1.3.1/metadata.yaml
-      syncer:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for vsphere-csi-syncer image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: vsphere-csi-syncer
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/vsphere-csi-driver/csi/syncer:v2.2.0-eks-a-v0.0.0-dev-build.1
-      version: v1.3.1+abcdef1
-  - bootstrap:
-      components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/bootstrap-kubeadm/v1.3.5/bootstrap-components.yaml
-      controller:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kubeadm-bootstrap-controller image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kubeadm-bootstrap-controller
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/cluster-api/kubeadm-bootstrap-controller:v1.3.5-eks-a-v0.0.0-dev-build.1
-      kubeProxy:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-rbac-proxy image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-rbac-proxy
-        os: linux
-        uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
-      metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/bootstrap-kubeadm/v1.3.5/metadata.yaml
-      version: v1.3.5+abcdef1
-    bottlerocketHostContainers:
-      admin:
-        arch:
-        - amd64
-        description: Container image for bottlerocket-admin image
-        imageDigest: sha256:5af314144e2e349b0fdb504dba5c3bcac6f7594ea4051ce1365dd69a923cb57f
-        name: bottlerocket-admin
-        os: linux
-        uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.9.4
-      control:
-        arch:
-        - amd64
-        description: Container image for bottlerocket-control image
-        imageDigest: sha256:d3dfdff919f32a5317ec82d06881c4151eaaf0d946112731674860a4229a66a2
-        name: bottlerocket-control
-        os: linux
-        uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.0
-      kubeadmBootstrap:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for bottlerocket-bootstrap image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: bottlerocket-bootstrap
-        os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-22-22-eks-a-v0.0.0-dev-build.1
-    certManager:
-      acmesolver:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cert-manager-acmesolver image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cert-manager-acmesolver
-        os: linux
-        uri: public.ecr.aws/release-container-registry/cert-manager/cert-manager-acmesolver:v1.11.0-eks-a-v0.0.0-dev-build.1
-      cainjector:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cert-manager-cainjector image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cert-manager-cainjector
-        os: linux
-        uri: public.ecr.aws/release-container-registry/cert-manager/cert-manager-cainjector:v1.11.0-eks-a-v0.0.0-dev-build.1
-      controller:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cert-manager-controller image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cert-manager-controller
-        os: linux
-        uri: public.ecr.aws/release-container-registry/cert-manager/cert-manager-controller:v1.11.0-eks-a-v0.0.0-dev-build.1
-      ctl:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cert-manager-ctl image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cert-manager-ctl
-        os: linux
-        uri: public.ecr.aws/release-container-registry/cert-manager/cert-manager-ctl:v1.11.0-eks-a-v0.0.0-dev-build.1
-      manifest:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cert-manager/manifests/v1.11.0/cert-manager.yaml
-      version: v1.11.0+abcdef1
-      webhook:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cert-manager-webhook image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cert-manager-webhook
-        os: linux
-        uri: public.ecr.aws/release-container-registry/cert-manager/cert-manager-webhook:v1.11.0-eks-a-v0.0.0-dev-build.1
-    cilium:
-      cilium:
-        arch:
-        - amd64
-        description: Container image for cilium image
-        imageDigest: sha256:197b3706173bffc9563f7211788dfd326a36d1ac99d04c42357f80b7143da0d7
-        name: cilium
-        os: linux
-        uri: public.ecr.aws/isovalent/cilium:v1.11.10-eksa.2
-      helmChart:
-        description: Helm chart for cilium-chart
-        imageDigest: sha256:252241c7986b52a4bec31c50e31daf349374b730dcb8aaa6db3a2b1cb025ff7c
-        name: cilium-chart
-        uri: public.ecr.aws/isovalent/cilium:1.11.10-eksa.2
-      manifest:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cilium/manifests/cilium/v1.11.10-eksa.2/cilium.yaml
-      operator:
-        arch:
-        - amd64
-        description: Container image for operator-generic image
-        imageDigest: sha256:ec2677f33c9aa9b6a8f600314c718a913eb1f0d0006a991f69ee0a775bb29e14
-        name: operator-generic
-        os: linux
-        uri: public.ecr.aws/isovalent/operator-generic:v1.11.10-eksa.2
-      version: v1.11.10-eksa.2
-    cloudStack:
-      clusterAPIController:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cluster-api-provider-cloudstack image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cluster-api-provider-cloudstack
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/cluster-api-provider-cloudstack/release/manager:v0.4.9-rc4-eks-a-v0.0.0-dev-build.1
-      components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-cloudstack/manifests/infrastructure-cloudstack/v0.4.9-rc4/infrastructure-components.yaml
-      kubeRbacProxy:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-rbac-proxy image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-rbac-proxy
-        os: linux
-        uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
-      kubeVip:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-vip image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-vip
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kube-vip/kube-vip:v0.5.5-eks-a-v0.0.0-dev-build.1
-      metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-cloudstack/manifests/infrastructure-cloudstack/v0.4.9-rc4/metadata.yaml
-      version: v0.4.9-rc4+abcdef1
-    clusterAPI:
-      components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/cluster-api/v1.3.5/core-components.yaml
-      controller:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cluster-api-controller image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cluster-api-controller
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/cluster-api/cluster-api-controller:v1.3.5-eks-a-v0.0.0-dev-build.1
-      kubeProxy:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-rbac-proxy image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-rbac-proxy
-        os: linux
-        uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
-      metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/cluster-api/v1.3.5/metadata.yaml
-      version: v1.3.5+abcdef1
-    controlPlane:
-      components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/control-plane-kubeadm/v1.3.5/control-plane-components.yaml
-      controller:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kubeadm-control-plane-controller image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kubeadm-control-plane-controller
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/cluster-api/kubeadm-control-plane-controller:v1.3.5-eks-a-v0.0.0-dev-build.1
-      kubeProxy:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-rbac-proxy image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-rbac-proxy
-        os: linux
-        uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
-      metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/control-plane-kubeadm/v1.3.5/metadata.yaml
-      version: v1.3.5+abcdef1
-    docker:
-      clusterTemplate:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/infrastructure-docker/v1.3.5/cluster-template-development.yaml
-      components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/infrastructure-docker/v1.3.5/infrastructure-components-development.yaml
-      kubeProxy:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for kube-rbac-proxy image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: kube-rbac-proxy
-        os: linux
-        uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
-      manager:
-        arch:
-        - amd64
-        - arm64
-        description: Container image for cluster-api-provider-docker image
-        imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        name: cluster-api-provider-docker
-        os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/cluster-api/capd-manager:v1.3.5-eks-a-v0.0.0-dev-build.1
-      metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api/manifests/infrastructure-docker/v1.3.5/metadata.yaml
-      version: v1.3.5+abcdef1
-    eksD:
-      ami:
-        bottlerocket:
-          arch:
-          - amd64
-          description: Bottlerocket Ami image for EKS-D 1-22-22 release
-          name: bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.0-amd64.img.gz
-          os: linux
-          osName: bottlerocket
-          sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-22/1-22-22/bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-22/1-22-23/bottlerocket-v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.0-amd64.img.gz
       channel: 1-22
       components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml
       crictl:
@@ -1056,7 +270,7 @@ spec:
         os: linux
         sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/5b496a72af3d80d64a16a650c85ce9a5882bc014/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/f089d308442c18f487a52d09fd067ae9ac7cd8f2/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz
       gitCommit: 0123456789abcdef0123456789abcdef01234567
       imagebuilder:
         arch:
@@ -1075,32 +289,32 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.1
       kubeVersion: v1.22.17
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-22/kubernetes-1-22-eks-22.yaml
-      name: kubernetes-1-22-eks-22
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-22/kubernetes-1-22-eks-23.yaml
+      name: kubernetes-1-22-eks-23
       ova:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ova image for EKS-D 1-22-22 release
-          name: bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.0-amd64.ova
+          description: Bottlerocket Ova image for EKS-D 1-22-23 release
+          name: bottlerocket-v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.0-amd64.ova
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-22/1-22-22/bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.0-amd64.ova
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-22/1-22-23/bottlerocket-v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.0-amd64.ova
       raw:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Raw image for EKS-D 1-22-22 release
-          name: bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          description: Bottlerocket Raw image for EKS-D 1-22-23 release
+          name: bottlerocket-v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-22/1-22-22/bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-22/1-22-23/bottlerocket-v1.22.17-eks-d-1-22-23-eks-a-v0.0.0-dev-build.0-amd64.img.gz
     eksa:
       cliTools:
        arch:
@@ -1110,7 +324,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-build.1
       clusterController:
         arch:
         - amd64
@@ -1119,9 +333,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.4/eksa-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml
       diagnosticCollector:
         arch:
         - amd64
@@ -1130,11 +344,11 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-diagnostic-collector
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-build.1
       version: v0.0.0-dev+build.0+abcdef1
     etcdadmBootstrap:
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/bootstrap-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/bootstrap-components.yaml
       controller:
         arch:
         - amd64
@@ -1143,7 +357,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: etcdadm-bootstrap-provider
         os: linux
-        uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-rc1-eks-a-v0.0.0-dev-build.1
       kubeProxy:
         arch:
         - amd64
@@ -1154,11 +368,11 @@ spec:
         os: linux
         uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
       metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/metadata.yaml
-      version: v1.0.7+abcdef1
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/metadata.yaml
+      version: v1.0.7-rc1+abcdef1
     etcdadmController:
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/bootstrap-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/bootstrap-components.yaml
       controller:
         arch:
         - amd64
@@ -1167,7 +381,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: etcdadm-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-rc1-eks-a-v0.0.0-dev-build.1
       kubeProxy:
         arch:
         - amd64
@@ -1178,8 +392,8 @@ spec:
         os: linux
         uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
       metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/metadata.yaml
-      version: v1.0.6+abcdef1
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/metadata.yaml
+      version: v1.0.6-rc1+abcdef1
     flux:
       helmController:
        arch:
@@ -1264,7 +478,7 @@ spec:
         description: Helm chart for eks-anywhere-packages
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-packages
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.6-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.9-eks-a-v0.0.0-dev-build.1
       packageController:
         arch:
         - amd64
@@ -1273,7 +487,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-packages
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.6-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.9-eks-a-v0.0.0-dev-build.1
       tokenRefresher:
         arch:
         - amd64
@@ -1282,8 +496,8 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: ecr-token-refresher
         os: linux
-        uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.6-eks-a-v0.0.0-dev-build.1
-      version: v0.3.6+abcdef1
+        uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.9-eks-a-v0.0.0-dev-build.1
+      version: v0.3.9+abcdef1
     snow:
       bottlerocketBootstrapSnow:
        arch:
@@ -1293,7 +507,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap-snow
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-22-22-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-22-23-eks-a-v0.0.0-dev-build.1
       components:
         uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/infrastructure-components.yaml
       kubeVip:
@@ -1483,7 +697,7 @@ spec:
          imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
          name: rufio
          os: linux
-          uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.2.1-eks-a-v0.0.0-dev-build.1
+          uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.3.0-eks-a-v0.0.0-dev-build.1
        tink:
          tinkController:
            arch:
@@ -1609,18 +823,18 @@ spec:
        arch:
        - amd64
        description: Container image for bottlerocket-admin image
-        imageDigest: sha256:5af314144e2e349b0fdb504dba5c3bcac6f7594ea4051ce1365dd69a923cb57f
+        imageDigest: sha256:994f332ac4b2fb2a5c3acca75501e5c5ef34cc1bfdfcffb3d0af16ceb1bdb1ca
        name: bottlerocket-admin
        os: linux
-        uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.9.4
+        uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.10.0
       control:
        arch:
        - amd64
        description: Container image for bottlerocket-control image
-        imageDigest: sha256:d3dfdff919f32a5317ec82d06881c4151eaaf0d946112731674860a4229a66a2
+        imageDigest: sha256:a91f9e9e7ab7056fab84c85c0d8d6ad38b16f9c9d3d01baecdcec856bdd19c8e
        name: bottlerocket-control
        os: linux
-        uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.0
+        uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.1
       kubeadmBootstrap:
        arch:
        - amd64
@@ -1629,7 +843,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-23-17-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-23-18-eks-a-v0.0.0-dev-build.1
     certManager:
       acmesolver:
        arch:
@@ -1816,13 +1030,13 @@ spec:
       bottlerocket:
         arch:
         - amd64
-          description: Bottlerocket Ami image for EKS-D 1-23-17 release
-          name: bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          description: Bottlerocket Ami image for EKS-D 1-23-18 release
+          name: bottlerocket-v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.0-amd64.img.gz
         os: linux
         osName: bottlerocket
         sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-23/1-23-17/bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-23/1-23-18/bottlerocket-v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.0-amd64.img.gz
       channel: 1-23
       components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml
       crictl:
@@ -1842,7 +1056,7 @@ spec:
         os: linux
         sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/5b496a72af3d80d64a16a650c85ce9a5882bc014/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/f089d308442c18f487a52d09fd067ae9ac7cd8f2/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz
       gitCommit: 0123456789abcdef0123456789abcdef01234567
       imagebuilder:
         arch:
@@ -1861,32 +1075,32 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.1
       kubeVersion: v1.23.16
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-23/kubernetes-1-23-eks-17.yaml
-      name: kubernetes-1-23-eks-17
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-23/kubernetes-1-23-eks-18.yaml
+      name: kubernetes-1-23-eks-18
       ova:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ova image for EKS-D 1-23-17 release
-          name: bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.0-amd64.ova
+          description: Bottlerocket Ova image for EKS-D 1-23-18 release
+          name: bottlerocket-v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.0-amd64.ova
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-23/1-23-17/bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.0-amd64.ova
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-23/1-23-18/bottlerocket-v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.0-amd64.ova
       raw:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Raw image for EKS-D 1-23-17 release
-          name: bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          description: Bottlerocket Raw image for EKS-D 1-23-18 release
+          name: bottlerocket-v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-23/1-23-17/bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-23/1-23-18/bottlerocket-v1.23.16-eks-d-1-23-18-eks-a-v0.0.0-dev-build.0-amd64.img.gz
     eksa:
       cliTools:
        arch:
@@ -1896,7 +1110,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-build.1
       clusterController:
         arch:
         - amd64
@@ -1905,9 +1119,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.4/eksa-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml
       diagnosticCollector:
         arch:
         - amd64
@@ -1916,11 +1130,11 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-diagnostic-collector
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-build.1
       version: v0.0.0-dev+build.0+abcdef1
     etcdadmBootstrap:
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/bootstrap-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/bootstrap-components.yaml
       controller:
         arch:
         - amd64
@@ -1929,7 +1143,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: etcdadm-bootstrap-provider
         os: linux
-        uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-rc1-eks-a-v0.0.0-dev-build.1
       kubeProxy:
         arch:
         - amd64
@@ -1940,11 +1154,11 @@ spec:
         os: linux
         uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
       metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/metadata.yaml
-      version: v1.0.7+abcdef1
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/metadata.yaml
+      version: v1.0.7-rc1+abcdef1
     etcdadmController:
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/bootstrap-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/bootstrap-components.yaml
       controller:
         arch:
         - amd64
@@ -1953,7 +1167,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: etcdadm-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-rc1-eks-a-v0.0.0-dev-build.1
       kubeProxy:
         arch:
         - amd64
@@ -1964,8 +1178,8 @@ spec:
         os: linux
         uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1
       metadata:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/metadata.yaml
-      version: v1.0.6+abcdef1
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/metadata.yaml
+      version: v1.0.6-rc1+abcdef1
     flux:
       helmController:
        arch:
@@ -2050,7 +1264,7 @@ spec:
         description: Helm chart for eks-anywhere-packages
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-packages
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.6-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.9-eks-a-v0.0.0-dev-build.1
       packageController:
         arch:
         - amd64
@@ -2059,7 +1273,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-packages
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.6-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.9-eks-a-v0.0.0-dev-build.1
       tokenRefresher:
         arch:
         - amd64
@@ -2068,8 +1282,8 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: ecr-token-refresher
         os: linux
-        uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.6-eks-a-v0.0.0-dev-build.1
-      version: v0.3.6+abcdef1
+        uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.9-eks-a-v0.0.0-dev-build.1
+      version: v0.3.9+abcdef1
     snow:
       bottlerocketBootstrapSnow:
        arch:
@@ -2079,7 +1293,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap-snow
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-23-17-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-23-18-eks-a-v0.0.0-dev-build.1
       components:
         uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/infrastructure-components.yaml
       kubeVip:
@@ -2269,7 +1483,7 @@ spec:
          imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
          name: rufio
          os: linux
-          uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.2.1-eks-a-v0.0.0-dev-build.1
+          uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.3.0-eks-a-v0.0.0-dev-build.1
        tink:
          tinkController:
            arch:
@@ -2395,18 +1609,18 @@ spec:
        arch:
        - amd64
        description: Container image for bottlerocket-admin image
-        imageDigest: sha256:5af314144e2e349b0fdb504dba5c3bcac6f7594ea4051ce1365dd69a923cb57f
+        imageDigest: sha256:994f332ac4b2fb2a5c3acca75501e5c5ef34cc1bfdfcffb3d0af16ceb1bdb1ca
        name: bottlerocket-admin
        os: linux
-        uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.9.4
+        uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.10.0
       control:
        arch:
        - amd64
        description: Container image for bottlerocket-control image
-        imageDigest: sha256:d3dfdff919f32a5317ec82d06881c4151eaaf0d946112731674860a4229a66a2
+        imageDigest: sha256:a91f9e9e7ab7056fab84c85c0d8d6ad38b16f9c9d3d01baecdcec856bdd19c8e
        name: bottlerocket-control
        os: linux
-        uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.0
+        uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.1
       kubeadmBootstrap:
        arch:
        - amd64
@@ -2415,7 +1629,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-24-12-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-24-13-eks-a-v0.0.0-dev-build.1
     certManager:
       acmesolver:
        arch:
@@ -2602,13 +1816,13 @@ spec:
       bottlerocket:
         arch:
         - amd64
-          description: Bottlerocket Ami image for EKS-D 1-24-12 release
-          name: bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          description: Bottlerocket Ami image for EKS-D 1-24-13 release
+          name: bottlerocket-v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.0-amd64.img.gz
         os: linux
        osName: bottlerocket
        sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
        sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-24/1-24-12/bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ami/1-24/1-24-13/bottlerocket-v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.0-amd64.img.gz
       channel: 1-24
       components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml
       crictl:
@@ -2628,7 +1842,7 @@ spec:
         os: linux
         sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/5b496a72af3d80d64a16a650c85ce9a5882bc014/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz
+        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/f089d308442c18f487a52d09fd067ae9ac7cd8f2/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz
       gitCommit: 0123456789abcdef0123456789abcdef01234567
       imagebuilder:
         arch:
@@ -2647,32 +1861,32 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.1
       kubeVersion: v1.24.10
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-24/kubernetes-1-24-eks-12.yaml
-      name: kubernetes-1-24-eks-12
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-24/kubernetes-1-24-eks-13.yaml
+      name: kubernetes-1-24-eks-13
       ova:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ova image for EKS-D 1-24-12 release
-          name: bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.0-amd64.ova
+          description: Bottlerocket Ova image for EKS-D 1-24-13 release
+          name: bottlerocket-v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.0-amd64.ova
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-24/1-24-12/bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.0-amd64.ova
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/ova/1-24/1-24-13/bottlerocket-v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.0-amd64.ova
       raw:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Raw image for EKS-D 1-24-12 release
-          name: bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          description: Bottlerocket Raw image for EKS-D 1-24-13 release
+          name: bottlerocket-v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-24/1-24-12/bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-distro/raw/1-24/1-24-13/bottlerocket-v1.24.10-eks-d-1-24-13-eks-a-v0.0.0-dev-build.0-amd64.img.gz
     eksa:
       cliTools:
        arch:
@@ -2682,7 +1896,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-build.1
       clusterController:
         arch:
         - amd64
@@ -2691,9 +1905,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.4-eks-a-v0.0.0-dev-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.4/eksa-components.yaml
+        uri:
https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml diagnosticCollector: arch: - amd64 @@ -2702,11 +1916,11 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-diagnostic-collector os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-build.1 version: v0.0.0-dev+build.0+abcdef1 etcdadmBootstrap: components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/bootstrap-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/bootstrap-components.yaml controller: arch: - amd64 @@ -2715,7 +1929,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: etcdadm-bootstrap-provider os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-rc1-eks-a-v0.0.0-dev-build.1 kubeProxy: arch: - amd64 @@ -2726,11 +1940,11 @@ spec: os: linux uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/metadata.yaml - version: v1.0.7+abcdef1 + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/metadata.yaml + version: v1.0.7-rc1+abcdef1 etcdadmController: components: - uri: 
https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/bootstrap-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/bootstrap-components.yaml controller: arch: - amd64 @@ -2739,7 +1953,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: etcdadm-controller os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-rc1-eks-a-v0.0.0-dev-build.1 kubeProxy: arch: - amd64 @@ -2750,8 +1964,8 @@ spec: os: linux uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/metadata.yaml - version: v1.0.6+abcdef1 + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/metadata.yaml + version: v1.0.6-rc1+abcdef1 flux: helmController: arch: @@ -2836,7 +2050,7 @@ spec: description: Helm chart for eks-anywhere-packages imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-packages - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.9-eks-a-v0.0.0-dev-build.1 packageController: arch: - amd64 @@ -2845,7 +2059,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-packages os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.6-eks-a-v0.0.0-dev-build.1 + uri: 
public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.9-eks-a-v0.0.0-dev-build.1 tokenRefresher: arch: - amd64 @@ -2854,8 +2068,8 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: ecr-token-refresher os: linux - uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.6-eks-a-v0.0.0-dev-build.1 - version: v0.3.6+abcdef1 + uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.9-eks-a-v0.0.0-dev-build.1 + version: v0.3.9+abcdef1 snow: bottlerocketBootstrapSnow: arch: @@ -2865,7 +2079,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap-snow os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-24-12-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-24-13-eks-a-v0.0.0-dev-build.1 components: uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/infrastructure-components.yaml kubeVip: @@ -3055,7 +2269,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: rufio os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.2.1-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.3.0-eks-a-v0.0.0-dev-build.1 tink: tinkController: arch: @@ -3181,18 +2395,18 @@ spec: arch: - amd64 description: Container image for bottlerocket-admin image - imageDigest: sha256:5af314144e2e349b0fdb504dba5c3bcac6f7594ea4051ce1365dd69a923cb57f + imageDigest: sha256:994f332ac4b2fb2a5c3acca75501e5c5ef34cc1bfdfcffb3d0af16ceb1bdb1ca name: bottlerocket-admin os: linux - uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.9.4 + uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.10.0 control: arch: - amd64 description: Container image for bottlerocket-control image 
- imageDigest: sha256:d3dfdff919f32a5317ec82d06881c4151eaaf0d946112731674860a4229a66a2 + imageDigest: sha256:a91f9e9e7ab7056fab84c85c0d8d6ad38b16f9c9d3d01baecdcec856bdd19c8e name: bottlerocket-control os: linux - uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.0 + uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.1 kubeadmBootstrap: arch: - amd64 @@ -3201,7 +2415,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-25-8-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-25-9-eks-a-v0.0.0-dev-build.1 certManager: acmesolver: arch: @@ -3405,7 +2619,7 @@ spec: os: linux sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/5b496a72af3d80d64a16a650c85ce9a5882bc014/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/f089d308442c18f487a52d09fd067ae9ac7cd8f2/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz gitCommit: 0123456789abcdef0123456789abcdef01234567 imagebuilder: arch: @@ -3424,10 +2638,10 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: kind-node os: linux - uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.25.6-eks-d-1-25-8-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.25.6-eks-d-1-25-9-eks-a-v0.0.0-dev-build.1 kubeVersion: v1.25.6 - manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-25/kubernetes-1-25-eks-8.yaml - name: kubernetes-1-25-eks-8 + manifestUrl: 
https://distro.eks.amazonaws.com/kubernetes-1-25/kubernetes-1-25-eks-9.yaml + name: kubernetes-1-25-eks-9 ova: bottlerocket: {} raw: @@ -3441,7 +2655,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-cli-tools os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-build.1 clusterController: arch: - amd64 @@ -3450,9 +2664,9 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-cluster-controller os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-build.1 components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.4/eksa-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml diagnosticCollector: arch: - amd64 @@ -3461,11 +2675,11 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-diagnostic-collector os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-build.1 version: v0.0.0-dev+build.0+abcdef1 etcdadmBootstrap: components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/bootstrap-components.yaml + uri: 
https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/bootstrap-components.yaml controller: arch: - amd64 @@ -3474,7 +2688,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: etcdadm-bootstrap-provider os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-rc1-eks-a-v0.0.0-dev-build.1 kubeProxy: arch: - amd64 @@ -3485,11 +2699,11 @@ spec: os: linux uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/metadata.yaml - version: v1.0.7+abcdef1 + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/metadata.yaml + version: v1.0.7-rc1+abcdef1 etcdadmController: components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/bootstrap-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/bootstrap-components.yaml controller: arch: - amd64 @@ -3498,7 +2712,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: etcdadm-controller os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-rc1-eks-a-v0.0.0-dev-build.1 kubeProxy: arch: - amd64 @@ -3509,8 +2723,8 @@ spec: os: linux uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 metadata: - uri: 
https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/metadata.yaml - version: v1.0.6+abcdef1 + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/metadata.yaml + version: v1.0.6-rc1+abcdef1 flux: helmController: arch: @@ -3595,7 +2809,7 @@ spec: description: Helm chart for eks-anywhere-packages imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-packages - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.9-eks-a-v0.0.0-dev-build.1 packageController: arch: - amd64 @@ -3604,7 +2818,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-packages os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.9-eks-a-v0.0.0-dev-build.1 tokenRefresher: arch: - amd64 @@ -3613,8 +2827,8 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: ecr-token-refresher os: linux - uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.6-eks-a-v0.0.0-dev-build.1 - version: v0.3.6+abcdef1 + uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.9-eks-a-v0.0.0-dev-build.1 + version: v0.3.9+abcdef1 snow: bottlerocketBootstrapSnow: arch: @@ -3624,7 +2838,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap-snow os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-25-8-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-25-9-eks-a-v0.0.0-dev-build.1 
components: uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/infrastructure-components.yaml kubeVip: @@ -3814,7 +3028,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: rufio os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.2.1-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.3.0-eks-a-v0.0.0-dev-build.1 tink: tinkController: arch: @@ -3940,18 +3154,18 @@ spec: arch: - amd64 description: Container image for bottlerocket-admin image - imageDigest: sha256:5af314144e2e349b0fdb504dba5c3bcac6f7594ea4051ce1365dd69a923cb57f + imageDigest: sha256:994f332ac4b2fb2a5c3acca75501e5c5ef34cc1bfdfcffb3d0af16ceb1bdb1ca name: bottlerocket-admin os: linux - uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.9.4 + uri: public.ecr.aws/bottlerocket/bottlerocket-admin:v0.10.0 control: arch: - amd64 description: Container image for bottlerocket-control image - imageDigest: sha256:d3dfdff919f32a5317ec82d06881c4151eaaf0d946112731674860a4229a66a2 + imageDigest: sha256:a91f9e9e7ab7056fab84c85c0d8d6ad38b16f9c9d3d01baecdcec856bdd19c8e name: bottlerocket-control os: linux - uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.0 + uri: public.ecr.aws/bottlerocket/bottlerocket-control:v0.7.1 kubeadmBootstrap: arch: - amd64 @@ -3960,7 +3174,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-26-4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-26-5-eks-a-v0.0.0-dev-build.1 certManager: acmesolver: arch: @@ -4164,7 +3378,7 @@ spec: os: linux sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef sha512: 
0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/5b496a72af3d80d64a16a650c85ce9a5882bc014/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm/f089d308442c18f487a52d09fd067ae9ac7cd8f2/etcdadm-v0.0.0-dev-build.0-linux-amd64.tar.gz gitCommit: 0123456789abcdef0123456789abcdef01234567 imagebuilder: arch: @@ -4183,10 +3397,10 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: kind-node os: linux - uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.26.1-eks-d-1-26-4-eks-a-v0.0.0-dev-build.1 - kubeVersion: v1.26.1 - manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-26/kubernetes-1-26-eks-4.yaml - name: kubernetes-1-26-eks-4 + uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.26.2-eks-d-1-26-5-eks-a-v0.0.0-dev-build.1 + kubeVersion: v1.26.2 + manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-26/kubernetes-1-26-eks-5.yaml + name: kubernetes-1-26-eks-5 ova: bottlerocket: {} raw: @@ -4200,7 +3414,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-cli-tools os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-build.1 clusterController: arch: - amd64 @@ -4209,9 +3423,9 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-cluster-controller os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-build.1 
components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.4/eksa-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml diagnosticCollector: arch: - amd64 @@ -4220,11 +3434,11 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-diagnostic-collector os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-build.1 version: v0.0.0-dev+build.0+abcdef1 etcdadmBootstrap: components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/bootstrap-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/bootstrap-components.yaml controller: arch: - amd64 @@ -4233,7 +3447,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: etcdadm-bootstrap-provider os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/aws/etcdadm-bootstrap-provider:v1.0.7-rc1-eks-a-v0.0.0-dev-build.1 kubeProxy: arch: - amd64 @@ -4244,11 +3458,11 @@ spec: os: linux uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7/metadata.yaml - version: v1.0.7+abcdef1 + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-bootstrap-provider/manifests/bootstrap-etcdadm-bootstrap/v1.0.7-rc1/metadata.yaml + 
version: v1.0.7-rc1+abcdef1 etcdadmController: components: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/bootstrap-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/bootstrap-components.yaml controller: arch: - amd64 @@ -4257,7 +3471,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: etcdadm-controller os: linux - uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/aws/etcdadm-controller:v1.0.6-rc1-eks-a-v0.0.0-dev-build.1 kubeProxy: arch: - amd64 @@ -4268,8 +3482,8 @@ spec: os: linux uri: public.ecr.aws/release-container-registry/brancz/kube-rbac-proxy:v0.14.0-eks-a-v0.0.0-dev-build.1 metadata: - uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6/metadata.yaml - version: v1.0.6+abcdef1 + uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/etcdadm-controller/manifests/bootstrap-etcdadm-controller/v1.0.6-rc1/metadata.yaml + version: v1.0.6-rc1+abcdef1 flux: helmController: arch: @@ -4354,7 +3568,7 @@ spec: description: Helm chart for eks-anywhere-packages imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-packages - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.6-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:0.3.9-eks-a-v0.0.0-dev-build.1 packageController: arch: - amd64 @@ -4363,7 +3577,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-packages os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.6-eks-a-v0.0.0-dev-build.1 + uri: 
public.ecr.aws/release-container-registry/eks-anywhere-packages:v0.3.9-eks-a-v0.0.0-dev-build.1 tokenRefresher: arch: - amd64 @@ -4372,8 +3586,8 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: ecr-token-refresher os: linux - uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.6-eks-a-v0.0.0-dev-build.1 - version: v0.3.6+abcdef1 + uri: public.ecr.aws/release-container-registry/ecr-token-refresher:v0.3.9-eks-a-v0.0.0-dev-build.1 + version: v0.3.9+abcdef1 snow: bottlerocketBootstrapSnow: arch: @@ -4383,7 +3597,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap-snow os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-26-4-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-26-5-eks-a-v0.0.0-dev-build.1 components: uri: https://release-bucket/artifacts/v0.0.0-dev-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.25/infrastructure-components.yaml kubeVip: @@ -4573,7 +3787,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: rufio os: linux - uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.2.1-eks-a-v0.0.0-dev-build.1 + uri: public.ecr.aws/release-container-registry/tinkerbell/rufio:v0.3.0-eks-a-v0.0.0-dev-build.1 tink: tinkController: arch: diff --git a/release/pkg/test/testdata/release-0.14-bundle-release.yaml b/release/pkg/test/testdata/release-0.14-bundle-release.yaml index eab1a89258a9..5c50d695978b 100644 --- a/release/pkg/test/testdata/release-0.14-bundle-release.yaml +++ b/release/pkg/test/testdata/release-0.14-bundle-release.yaml @@ -324,7 +324,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-cli-tools os: linux - uri: 
public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1 clusterController: arch: - amd64 @@ -333,9 +333,9 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-cluster-controller os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1 components: - uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.3/eksa-components.yaml + uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml diagnosticCollector: arch: - amd64 @@ -344,7 +344,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: eks-anywhere-diagnostic-collector os: linux - uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1 + uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1 version: v0.0.0-dev-release-0.14+build.0+abcdef1 etcdadmBootstrap: components: @@ -843,7 +843,7 @@ spec: imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef name: bottlerocket-bootstrap os: linux - uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-22-22-eks-a-v0.0.0-dev-release-0.14-build.1 + uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-22-21-eks-a-v0.0.0-dev-release-0.14-build.1 certManager: acmesolver: arch: @@ -1030,13 +1030,13 @@ spec: bottlerocket: arch: - amd64 - description: 
Bottlerocket Ami image for EKS-D 1-22-22 release
-          name: bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          description: Bottlerocket Ami image for EKS-D 1-22-21 release
+          name: bottlerocket-v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ami/1-22/1-22-22/bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ami/1-22/1-22-21/bottlerocket-v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
       channel: 1-22
       components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml
       crictl:
@@ -1075,32 +1075,32 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.1
       kubeVersion: v1.22.17
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-22/kubernetes-1-22-eks-22.yaml
-      name: kubernetes-1-22-eks-22
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-22/kubernetes-1-22-eks-21.yaml
+      name: kubernetes-1-22-eks-21
       ova:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ova image for EKS-D 1-22-22 release
-          name: bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
+          description: Bottlerocket Ova image for EKS-D 1-22-21 release
+          name: bottlerocket-v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ova/1-22/1-22-22/bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ova/1-22/1-22-21/bottlerocket-v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
       raw:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Raw image for EKS-D 1-22-22 release
-          name: bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          description: Bottlerocket Raw image for EKS-D 1-22-21 release
+          name: bottlerocket-v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/raw/1-22/1-22-22/bottlerocket-v1.22.17-eks-d-1-22-22-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/raw/1-22/1-22-21/bottlerocket-v1.22.17-eks-d-1-22-21-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
     eksa:
       cliTools:
         arch:
@@ -1110,7 +1110,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       clusterController:
         arch:
         - amd64
@@ -1119,9 +1119,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.3/eksa-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml
       diagnosticCollector:
         arch:
         - amd64
@@ -1130,7 +1130,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-diagnostic-collector
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       version: v0.0.0-dev-release-0.14+build.0+abcdef1
     etcdadmBootstrap:
       components:
@@ -1293,7 +1293,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap-snow
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-22-22-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-22-21-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
         uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.24/infrastructure-components.yaml
       kubeVip:
@@ -1629,7 +1629,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-23-17-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-23-16-eks-a-v0.0.0-dev-release-0.14-build.1
     certManager:
       acmesolver:
         arch:
@@ -1816,13 +1816,13 @@ spec:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ami image for EKS-D 1-23-17 release
-          name: bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          description: Bottlerocket Ami image for EKS-D 1-23-16 release
+          name: bottlerocket-v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ami/1-23/1-23-17/bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ami/1-23/1-23-16/bottlerocket-v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
       channel: 1-23
       components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml
       crictl:
@@ -1861,32 +1861,32 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.1
       kubeVersion: v1.23.16
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-23/kubernetes-1-23-eks-17.yaml
-      name: kubernetes-1-23-eks-17
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-23/kubernetes-1-23-eks-16.yaml
+      name: kubernetes-1-23-eks-16
       ova:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ova image for EKS-D 1-23-17 release
-          name: bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
+          description: Bottlerocket Ova image for EKS-D 1-23-16 release
+          name: bottlerocket-v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ova/1-23/1-23-17/bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ova/1-23/1-23-16/bottlerocket-v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
       raw:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Raw image for EKS-D 1-23-17 release
-          name: bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          description: Bottlerocket Raw image for EKS-D 1-23-16 release
+          name: bottlerocket-v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/raw/1-23/1-23-17/bottlerocket-v1.23.16-eks-d-1-23-17-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/raw/1-23/1-23-16/bottlerocket-v1.23.16-eks-d-1-23-16-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
     eksa:
       cliTools:
        arch:
@@ -1896,7 +1896,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       clusterController:
         arch:
         - amd64
@@ -1905,9 +1905,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.3/eksa-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml
       diagnosticCollector:
         arch:
         - amd64
@@ -1916,7 +1916,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-diagnostic-collector
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       version: v0.0.0-dev-release-0.14+build.0+abcdef1
     etcdadmBootstrap:
       components:
@@ -2079,7 +2079,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap-snow
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-23-17-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-23-16-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
         uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.24/infrastructure-components.yaml
       kubeVip:
@@ -2415,7 +2415,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-24-12-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-24-11-eks-a-v0.0.0-dev-release-0.14-build.1
     certManager:
       acmesolver:
         arch:
@@ -2602,13 +2602,13 @@ spec:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ami image for EKS-D 1-24-12 release
-          name: bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          description: Bottlerocket Ami image for EKS-D 1-24-11 release
+          name: bottlerocket-v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ami/1-24/1-24-12/bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ami/1-24/1-24-11/bottlerocket-v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
       channel: 1-24
       components: https://distro.eks.amazonaws.com/crds/releases.distro.eks.amazonaws.com-v1alpha1.yaml
       crictl:
@@ -2647,32 +2647,32 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.1
       kubeVersion: v1.24.10
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-24/kubernetes-1-24-eks-12.yaml
-      name: kubernetes-1-24-eks-12
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-24/kubernetes-1-24-eks-11.yaml
+      name: kubernetes-1-24-eks-11
       ova:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Ova image for EKS-D 1-24-12 release
-          name: bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
+          description: Bottlerocket Ova image for EKS-D 1-24-11 release
+          name: bottlerocket-v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ova/1-24/1-24-12/bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/ova/1-24/1-24-11/bottlerocket-v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.ova
       raw:
         bottlerocket:
           arch:
           - amd64
-          description: Bottlerocket Raw image for EKS-D 1-24-12 release
-          name: bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          description: Bottlerocket Raw image for EKS-D 1-24-11 release
+          name: bottlerocket-v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
           os: linux
           osName: bottlerocket
           sha256: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
           sha512: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/raw/1-24/1-24-12/bottlerocket-v1.24.10-eks-d-1-24-12-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
+          uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-distro/raw/1-24/1-24-11/bottlerocket-v1.24.10-eks-d-1-24-11-eks-a-v0.0.0-dev-release-0.14-build.0-amd64.img.gz
     eksa:
       cliTools:
         arch:
@@ -2682,7 +2682,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       clusterController:
         arch:
         - amd64
@@ -2691,9 +2691,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.3/eksa-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml
       diagnosticCollector:
         arch:
         - amd64
@@ -2702,7 +2702,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-diagnostic-collector
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       version: v0.0.0-dev-release-0.14+build.0+abcdef1
     etcdadmBootstrap:
       components:
@@ -2865,7 +2865,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap-snow
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-24-12-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-24-11-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
         uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.24/infrastructure-components.yaml
       kubeVip:
@@ -3201,7 +3201,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-25-8-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap:v1-25-7-eks-a-v0.0.0-dev-release-0.14-build.1
     certManager:
       acmesolver:
         arch:
@@ -3424,10 +3424,10 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: kind-node
         os: linux
-        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.25.6-eks-d-1-25-8-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/kubernetes-sigs/kind/node:v1.25.6-eks-d-1-25-7-eks-a-v0.0.0-dev-release-0.14-build.1
       kubeVersion: v1.25.6
-      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-25/kubernetes-1-25-eks-8.yaml
-      name: kubernetes-1-25-eks-8
+      manifestUrl: https://distro.eks.amazonaws.com/kubernetes-1-25/kubernetes-1-25-eks-7.yaml
+      name: kubernetes-1-25-eks-7
       ova:
         bottlerocket: {}
       raw:
@@ -3441,7 +3441,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cli-tools
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cli-tools:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       clusterController:
         arch:
         - amd64
@@ -3450,9 +3450,9 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-cluster-controller
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-cluster-controller:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
-        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.3/eksa-components.yaml
+        uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/eks-anywhere/manifests/cluster-controller/v0.14.5/eksa-components.yaml
       diagnosticCollector:
         arch:
         - amd64
@@ -3461,7 +3461,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: eks-anywhere-diagnostic-collector
         os: linux
-        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.3-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/eks-anywhere-diagnostic-collector:v0.14.5-eks-a-v0.0.0-dev-release-0.14-build.1
       version: v0.0.0-dev-release-0.14+build.0+abcdef1
     etcdadmBootstrap:
       components:
@@ -3624,7 +3624,7 @@ spec:
         imageDigest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
         name: bottlerocket-bootstrap-snow
         os: linux
-        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-25-8-eks-a-v0.0.0-dev-release-0.14-build.1
+        uri: public.ecr.aws/release-container-registry/bottlerocket-bootstrap-snow:v1-25-7-eks-a-v0.0.0-dev-release-0.14-build.1
       components:
         uri: https://release-bucket/artifacts/v0.0.0-dev-release-0.14-build.0/cluster-api-provider-aws-snow/manifests/infrastructure-snow/v0.1.24/infrastructure-components.yaml
       kubeVip:
diff --git a/release/triggers/brew-version-release/CLI_RELEASE_VERSION b/release/triggers/brew-version-release/CLI_RELEASE_VERSION
index 6cf9fa53d64d..574be6000719 100644
--- a/release/triggers/brew-version-release/CLI_RELEASE_VERSION
+++ b/release/triggers/brew-version-release/CLI_RELEASE_VERSION
@@ -1 +1 @@
-v0.14.4
+v0.14.5
diff --git a/scripts/golden_create_pr.sh b/scripts/golden_create_pr.sh
index 41e329d0dbdc..2b5eebf3b036 100755
--- a/scripts/golden_create_pr.sh
+++ b/scripts/golden_create_pr.sh
@@ -22,11 +22,11 @@ REPO="eks-anywhere"
 ORIGIN_ORG="eks-distro-pr-bot"
 UPSTREAM_ORG="aws"
 
-PR_TITLE="Generate release test file"
-COMMIT_MESSAGE="[PR BOT] Generate release test file"
+PR_TITLE="Generate release testdata files"
+COMMIT_MESSAGE="[PR BOT] Generate release testdata files"
 
 PR_BODY=$(cat <