
Update clusterctl create when machine objects have a provisioning indication #253

Closed
jessicaochen opened this issue May 30, 2018 · 9 comments
Labels: area/clusterctl (Issues or PRs related to clusterctl), priority/important-longterm (Important over the long term, but may not be staffed and/or may need multiple releases to complete)

Comments

@jessicaochen (Contributor)

The presence of a node in the machine status cannot be relied upon on its own, because the provisioned master may not register with the stack that provisioned it.
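
For context, here is a minimal sketch of the naive check this comment argues is insufficient on its own. The clusterv1 import path and the NodeRef field are assumed from the v1alpha1 API layout of the time.

package sketch

import (
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// hasNodeRef is the naive "is this Machine provisioned?" test, based solely on
// the node reference in the Machine status. As noted above, this is not a
// reliable signal when the provisioned master never registers with the stack
// that provisioned it.
func hasNodeRef(machine *clusterv1.Machine) bool {
	return machine.Status.NodeRef != nil
}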

@davidewatson (Contributor)

Currently clusterctl requires an annotation set by the provider to determine when nodes have been created.

Annotations are usually used to extend the spec, while conditions can be used to extend the status. If #483 is merged, the annotation that is currently required for this could perhaps be replaced by a condition.

Note that there would still be one place where an annotation is optionally used by the core controller code:

val, ok := node.ObjectMeta.Annotations[machineAnnotationKey]
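
For reference, a self-contained sketch of that lookup. The annotation key value shown here is illustrative only, not necessarily the constant the core controller defines.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// machineAnnotationKey is an illustrative value; the real key is defined by
// the core controller code, not by this sketch.
const machineAnnotationKey = "cluster.k8s.io/machine"

// machineNameForNode returns the Machine name recorded on the Node via the
// annotation, and whether the annotation was present at all.
func machineNameForNode(node *corev1.Node) (string, bool) {
	val, ok := node.ObjectMeta.Annotations[machineAnnotationKey]
	return val, ok
}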

@davidewatson (Contributor)

I originally thought it would be possible to reuse the Node condition KubeletReady to indicate that a Machine had been created, but now I am not so sure. One problem is that this coupling is not trivial when the corresponding Machine is in a different cluster than the Node.
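
For illustration, a minimal sketch of what reusing the kubelet-managed Ready condition (reason KubeletReady) would look like; this is not what clusterctl does today, and the caller would still need a client to whichever cluster the Node actually lives in.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady reports whether the Node's Ready condition is True. The kubelet
// sets this condition (with reason KubeletReady) once the node is healthy.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}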

When we were working on our ssh provisioner, we had to ssh from the manager cluster (where our controllers run) to the managed master in order to generate the kubeadm token. This was only necessary when we did not pivot. I think individual providers will have to do something similar if they choose to allow Cluster API resources to exist outside of the cluster they manage.

Will have to think more about this.

@timothysc timothysc added this to the v1alpha1 milestone Jan 10, 2019
@timothysc timothysc added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 10, 2019
@davidewatson (Contributor)

Propose we leave this as is for v1alpha1.

@vincepri (Member)

/milestone Next

@k8s-ci-robot k8s-ci-robot modified the milestones: v1alpha1, Next Feb 27, 2019
@roberthbailey roberthbailey added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Feb 27, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 28, 2019
@vincepri (Member)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 28, 2019
@ncdc (Contributor) commented May 30, 2019

I'm not entirely clear on what this issue is about. Could someone please clarify? Thank you!

@davidewatson (Contributor) commented Jun 7, 2019

A Ready "status" is only one of the states we need to expose for Bootstrapping. Therefore we'll close this issue and handle the general problem within the context of Bootstrapping.
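
As a rough illustration of "only one of the states", a hypothetical phase enumeration for bootstrapping might look like the sketch below; the names are invented here and are not taken from the Bootstrapping proposal.

package sketch

// MachinePhase is a hypothetical, illustrative set of bootstrapping states;
// a single Ready condition only captures the last healthy one of them.
type MachinePhase string

const (
	MachinePhasePending      MachinePhase = "Pending"      // object created, no infrastructure yet
	MachinePhaseProvisioning MachinePhase = "Provisioning" // infrastructure being created
	MachinePhaseProvisioned  MachinePhase = "Provisioned"  // infrastructure exists, node not yet registered
	MachinePhaseRunning      MachinePhase = "Running"      // node registered and Ready
	MachinePhaseFailed       MachinePhase = "Failed"       // unrecoverable provisioning error
)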

/close

@k8s-ci-robot (Contributor)

@davidewatson: Closing this issue.

In response to this:

A Ready "status" is only one of the states we need to expose for Bootstrapping. Therefore we'll close this and handle this there.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

chuckha pushed a commit to chuckha/cluster-api that referenced this issue Oct 2, 2019 (…lt-version: ✨ Notify when failing back to default container image)
chuckha pushed a commit to chuckha/cluster-api that referenced this issue Oct 2, 2019
jayunit100 pushed a commit to jayunit100/cluster-api that referenced this issue Jan 31, 2020 (…-sigs#253)

- This change adds new fields to the VsphereMachineProviderConfig CRD
  for NTP support
- It also fixes the CRD for the VsphereClusterProviderConfig as it was
  missing the vsphereCredentialSecret property recently added to the spec

Issue kubernetes-sigs#244

Change-Id: I0cb436d3e778686ab70ad01e119c94eb4407e28d
@killianmuldoon killianmuldoon added area/clusterctl Issues or PRs related to clusterctl and removed kind/clusterctl labels May 4, 2023
Projects: None yet
Development: No branches or pull requests
9 participants