Update clusterctl create when Machine objects have a provisioning indication #253
Comments
Currently, annotations are usually used to extend the spec, and conditions can be used to extend the status. If #483 is merged, then maybe the required annotation which is used for this now should be replaced by a condition. Note that there would still be one place where an annotation is optionally used by the core controller code.
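To make the distinction concrete, below is a minimal, self-contained sketch (not the actual cluster-api types) of the two extension points mentioned above: a well-known annotation on the Machine's metadata versus a condition in its status. The `Machine` and `Condition` types, the `cluster.k8s.io/provisioned` annotation key, and the `Provisioned` condition type are placeholders assumed for illustration, not names defined by this project.

```go
package main

import "fmt"

// Condition mirrors the usual Kubernetes condition shape: a type plus a status.
type Condition struct {
	Type   string
	Status string // "True", "False", or "Unknown"
}

// Machine is a hypothetical stand-in for the real Machine object, reduced to
// the two fields relevant to this discussion.
type Machine struct {
	Annotations map[string]string
	Conditions  []Condition
}

// provisionedByAnnotation reports the indication the way it is done today:
// the provisioner sets a well-known (here, hypothetical) annotation once the
// machine is ready.
func provisionedByAnnotation(m Machine) bool {
	_, ok := m.Annotations["cluster.k8s.io/provisioned"] // hypothetical key
	return ok
}

// provisionedByCondition reports the same indication through a status
// condition, the approach that would become possible if conditions land.
func provisionedByCondition(m Machine) bool {
	for _, c := range m.Conditions {
		if c.Type == "Provisioned" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	m := Machine{
		Annotations: map[string]string{"cluster.k8s.io/provisioned": "true"},
		Conditions:  []Condition{{Type: "Provisioned", Status: "True"}},
	}
	fmt.Println(provisionedByAnnotation(m), provisionedByCondition(m))
}
```

Either check gives the same answer to a caller; the difference is that the condition lives in status, which keeps the "this machine is provisioned" signal out of the spec and alongside the rest of the observed state.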
I originally thought it would be possible to reuse the …. When we were working on our ssh provisioner, we had to ssh from the manager cluster (where our controllers run) to the managed master in order to generate the …. Will have to think more about this.
Propose we leave this as-is for v1alpha1.
/milestone Next
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I'm not entirely clear on what this issue is about. Could someone please clarify? Thank you!
/close
@davidewatson: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
A node appearing in the machine status cannot be relied upon on its own, because the provisioned master may not register with the stack that provisions it.
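As a rough illustration of what a provisioning indication could buy clusterctl create, here is a minimal sketch of a wait loop that treats either an explicit provisioned signal or a populated node reference as "ready", instead of relying on the node reference alone. `MachineState`, `getMachine`, and `waitForMaster` are hypothetical stand-ins assumed for this sketch, not functions provided by clusterctl.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// MachineState is a hypothetical stand-in for the parts of Machine status
// relevant here.
type MachineState struct {
	Provisioned bool   // set from an annotation or condition by the provisioner
	NodeName    string // empty until the master registers with the cluster
}

// getMachine simulates fetching the Machine; a real implementation would read
// it through the API server.
func getMachine() (MachineState, error) {
	return MachineState{Provisioned: true}, nil
}

// waitForMaster returns once the master reports a provisioning indication or
// a node reference, or errors out after the timeout.
func waitForMaster(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		m, err := getMachine()
		if err == nil && (m.Provisioned || m.NodeName != "") {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the master machine to provision")
}

func main() {
	if err := waitForMaster(30*time.Second, 5*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("master provisioned")
}
```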