Document how providers should deploy shared CRDs #567

Closed
davidewatson opened this issue Oct 30, 2018 · 11 comments
Labels
  • kind/documentation: Categorizes issue or PR as related to documentation.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@davidewatson
Contributor

davidewatson commented Oct 30, 2018

With the move from aggregated API servers (AAs) to CRDs, there are now two sets of CRDs which must be deployed to create a functioning Cluster API cluster: the common Cluster API CRDs and the provider-specific CRDs.

One way this has been done is by constructing a providercomponents.yaml containing both sets of CRDs. For example:

https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/f5be8acd8abdd64c9a29a5a72156a871d6c4e7a1/Makefile#L122
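
For illustration, a rough sketch of that pattern, assuming the provider exposes a kustomize base at config/default and vendors Cluster API (the paths and output file below are placeholders, not the exact Makefile targets):

```sh
# Sketch only: render both sets of manifests and concatenate them into a
# single provider-components.yaml (paths are illustrative).
mkdir -p out
kustomize build config/default > out/provider-components.yaml
echo "---" >> out/provider-components.yaml
kustomize build vendor/sigs.k8s.io/cluster-api/config/default >> out/provider-components.yaml
```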

Since CRDs are not namespaced, only one provider can create the Cluster API CRDs, though multiple providers may apply the same CRDs. There are clusters which run multiple Cluster API providers, so we need to document a convention for how these CRDs should be deployed.

This issue was created from this comment.

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Dec 4, 2018
@roberthbailey roberthbailey added this to the v1alpha1 milestone Jan 11, 2019
@roberthbailey roberthbailey added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 11, 2019
@roberthbailey roberthbailey modified the milestones: v1alpha1, Next Jan 11, 2019
@detiber
Member

detiber commented Feb 28, 2019

/kind documentation

@k8s-ci-robot k8s-ci-robot added the kind/documentation Categorizes issue or PR as related to documentation. label Feb 28, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2019
@ncdc
Contributor

ncdc commented May 30, 2019

Assuming we proceed with removing actuators and move to a true split between generic cluster-api CRDs and controllers vs. provider CRDs and controllers, this issue probably becomes "document how to deploy cluster-api". Agree?

@ncdc
Contributor

ncdc commented May 30, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 30, 2019
@detiber
Member

detiber commented Jun 5, 2019

Assuming we proceed with removing actuators and move to a true split between generic cluster-api CRDs and controllers vs. provider CRDs and controllers, this issue probably becomes "document how to deploy cluster-api". Agree?

Yes

@timothysc timothysc modified the milestones: Next, v1alpha2 Jun 21, 2019
@asauber

asauber commented Jun 21, 2019

cc myself

@chuckha
Contributor

chuckha commented Aug 29, 2019

Agreed with the above points. No YAML should be deployed. The only thing providers (including CAPI) should do is make sure that the patches they provide at the tagged commit point to the correct image tag.

These can be consumed with kustomize's remote URLs feature.

This makes deploying a management cluster a three-kustomize-command operation. While this is a reasonable approach to getting started, clusterctl is probably the better thing to document to take a user from nothing to a pivoted management cluster on some cloud.
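
For illustration, a minimal sketch of that three-command flow, assuming each project publishes a tagged kustomize base at config/default (the repository paths and refs below are placeholders):

```sh
# Hypothetical deployment of a management cluster from tagged remote
# kustomize bases; refs and paths are illustrative only.
kustomize build "github.com/kubernetes-sigs/cluster-api//config/default/?ref=v0.2.0" | kubectl apply -f -
kustomize build "github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm//config/default/?ref=v0.1.0" | kubectl apply -f -
kustomize build "github.com/kubernetes-sigs/cluster-api-provider-aws//config/default/?ref=v0.4.0" | kubectl apply -f -
```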

But questions that pop up for me:

  • What's the scope of this documentation? Is it from nothing to CAPI cluster?
  • Is clusterctl the recommended production deployment tool?

@detiber
Member

detiber commented Aug 29, 2019

Another way that we can approach this is to publish the generated yaml as part of a release.

That would allow users to either:

  • Deploy the core provider components directly w/ kubectl create -f
  • Download the files and concatenate them with additional yaml to build a monolithic provider-components manifest for deploying with clusterctl

The benefit of this approach is that we do not require that the manager image patch file have any particular contents, and it would also insulate us from potential kustomize version skew requirements across multiple repos.
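
For illustration, a sketch of how users might consume such release artifacts (the URLs, file names, and versions below are placeholders rather than real published assets):

```sh
# Option 1: apply the released core components directly.
kubectl create -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.0/cluster-api-components.yaml

# Option 2: download the core components and concatenate them with a
# provider's yaml to build a monolithic provider-components.yaml for clusterctl.
curl -L -o core-components.yaml \
  https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.0/cluster-api-components.yaml
{ cat core-components.yaml; echo "---"; cat infrastructure-components.yaml; } > provider-components.yaml
```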

@vincepri
Member

+💯 on what @detiber suggested; a familiar approach for Kubernetes users is probably going to be the best one :)

@chuckha
Contributor

chuckha commented Aug 29, 2019

works for me.

@timothysc
Member

Closing this in favor of updated quickstart docs.

jayunit100 pushed a commit to jayunit100/cluster-api that referenced this issue Jan 31, 2020