Sync with upstream v0.10 #303
Conversation
…g_ip_pool ✨ IPAM provider for floating ips
e2e: remove "allow-all-in-cluster-traffic" patch
🐛 controllers: do not return a RequeueAfter and an error at the same time
Signed-off-by: Max Rantil <max.rantil@est.tech>
…-capo/max 🌱 Enhance Tilt integration with CAPO using a ClusterClass template
🐛 Fill up OpenStack cluster ReferencedResources with Image ID
This adds a patch to set the image automatically based on the Kubernetes version. It also removes the requirement to have the CLUSTER_NAME variable set for the ClusterClass. Finally, the docs are updated to reflect this, with a few additions. Signed-off-by: Lennart Jern <lennart.jern@est.tech>
…-class-update 🌱 Update the dev-test ClusterClass
The e2e tests are currently very unstable. I suspect a lack of resources in the devstack to be the issue. In an attempt to solve this, the number of control-plane nodes is reduced in this commit. Now only the multi-az test will use 3 control-plane nodes; the rest will have only 1. Signed-off-by: Lennart Jern <lennart.jern@est.tech>
⚠️ Convert ServerMetadata from a map to a list
While we're changing it, we also add validation that keys and values don't exceed 255 characters.
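In the real API the 255-character limit is enforced by CRD validation markers; the sketch below shows the equivalent check in plain Go. The `ServerMetadataItem` field names are an approximation of the CRD types, not the exact shapes:

```go
package main

import "fmt"

// ServerMetadataItem mirrors the shape of the new list-based metadata
// API; the field names here are illustrative.
type ServerMetadataItem struct {
	Key   string
	Value string
}

const maxMetadataLen = 255

// validateServerMetadata enforces the 255-character limit on both keys
// and values, as described in the commit message.
func validateServerMetadata(items []ServerMetadataItem) error {
	for _, item := range items {
		if len(item.Key) > maxMetadataLen {
			return fmt.Errorf("metadata key %q exceeds %d characters", item.Key, maxMetadataLen)
		}
		if len(item.Value) > maxMetadataLen {
			return fmt.Errorf("metadata value for key %q exceeds %d characters", item.Key, maxMetadataLen)
		}
	}
	return nil
}

func main() {
	ok := []ServerMetadataItem{{Key: "role", Value: "worker"}}
	fmt.Println(validateServerMetadata(ok))
}
```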
…e-down 🌱 Decrease number of machines in e2e tests
Migrate Dockerfile to use golang 1.21 to match go.mod in project
🐛 Fix a stacktrace in LB logic by removing the listener name from an error message when it is not set
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
…r-update 🌱 test: bump Flatcar version
…block-device-volume-name fix: fix the block device type name in doc
Add --tls-min-version and --tls-max-version configuration flags. The same flags can be found in k8s, CAPI, CAPM3, etc. Co-authored-by: Jawad Zaheer <jawad.zaheer@est.tech> Signed-off-by: Tuomo Tanskanen <tuomo.tanskanen@est.tech>
When a reconcile loop for the bastion is requeued, we get this error: ``` OpenStackCluster.infrastructure.cluster.x-k8s.io "cluster-e2e-rha0r3" is invalid: ready: Required value ``` OpenStackCluster.Status.Ready now defaults to false, so if the status has not been set to anything, patching the object no longer fails with the error above.
We now have a webhook that checks that the bastion has been disabled before any change (update or delete) is made to the bastion field. We also document this better. In addition, we added some code to avoid a nil pointer dereference when Spec.Bastion or Status.Bastion is unset.
🐛 api: openstackcluster.status default to false
…tcleanups 🌱 Minor improvements to api validation tests
Previously, when the load balancer was created, it used the same network/subnet as the control plane nodes for the VIP. This was not always the right assumption, as some users might want to customize this according to their environment. This commit fixes the above by adding two fields, `network` and `subnets`, under `APIServerLoadBalancer` in OpenStackClusterSpec/Status, so that the user can define which network/subnet to use for allocation of the load balancer VIP. Signed-off-by: Ondrej Blazek <ondrej.blazek@firma.seznam.cz>
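The fallback behaviour described above can be sketched as follows. The `NetworkParam`/`SubnetParam` stand-ins and the `vipNetwork` helper are illustrative; the real CRD filter types have more fields:

```go
package main

import "fmt"

// NetworkParam and SubnetParam stand in for the CRD filter types.
type NetworkParam struct{ Name string }
type SubnetParam struct{ Name string }

// APIServerLoadBalancer gains optional network/subnets fields.
type APIServerLoadBalancer struct {
	Enabled bool
	// Network is the network the load balancer VIP is allocated on.
	// When unset, the control-plane network is used (the old behaviour).
	Network *NetworkParam
	// Subnets restricts which subnets of Network are used for the VIP.
	Subnets []SubnetParam
}

// vipNetwork returns the network to use for the VIP, falling back to
// the control-plane network when none is configured.
func vipNetwork(lb APIServerLoadBalancer, controlPlaneNet string) string {
	if lb.Network != nil {
		return lb.Network.Name
	}
	return controlPlaneNet
}

func main() {
	lb := APIServerLoadBalancer{Enabled: true, Network: &NetworkParam{Name: "lb-net"}}
	fmt.Println(vipNetwork(lb, "cp-net"))
}
```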
…n_instanceid 📖 Document the change of instanceID in v1beta1
This fixes a bug where, if we created a machine for a cluster which never became ready, we would never be able to 'resolve' the machine and therefore never delete it. We address this situation in several layers:

Firstly, we move the point in the machine controller at which we add the finalizer. We don't add the finalizer until we're writing resolved, so this situation can never occur for newly created machines. This makes sense because we don't create resources until we've observed that both the finalizer has been added and resolved is up to date, so we don't need the finalizer to protect resources which can't have been created yet.

Secondly, we shortcut the delete flow if the cluster is not ready. This is safe for the same reason as above, but is only relevant to machines created before v0.10.

Lastly, we surface and restrict the circumstances in which 'Resolved' is required on delete at all. On closer inspection, this is only required in the very specific circumstance that the machine has volumes defined and we are deleting it without the machine having been created. To make this more obvious we split volume deletion out of DeleteInstance and only resolve the machine spec when it's required.

Two other factors make this change larger than it might otherwise be: we hit a cyclomatic complexity limit in reconcileDelete(), requiring a refactor, and we remove the DeleteInstance tests which, after separating out DeleteVolumes, are quite trivial, replacing them with a much more comprehensive set of tests for reconcileDelete.
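The ordering of the delete flow described above can be modelled with a toy state machine. All types and helper names below are illustrative, not the controller's real API:

```go
package main

import "fmt"

// machine is a simplified model of the controller's view of a machine.
type machine struct {
	hasFinalizer bool
	resolved     bool
	hasVolumes   bool
	created      bool
}

// reconcileDelete mirrors the ordering in the commit: shortcut when the
// cluster never became ready, and only resolve the machine spec when
// volumes must be cleaned up for a machine that was never created.
func reconcileDelete(m *machine, clusterReady bool) string {
	if !clusterReady && !m.created {
		// Nothing can have been created; remove the finalizer and stop.
		m.hasFinalizer = false
		return "shortcut"
	}
	if !m.created && m.hasVolumes && !m.resolved {
		// The only case where 'Resolved' is still required on delete.
		return "resolve-for-volumes"
	}
	m.hasFinalizer = false
	return "deleted"
}

func main() {
	m := &machine{hasFinalizer: true}
	fmt.Println(reconcileDelete(m, false))
}
```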
…network feat: add configurable loadbalancer network
🐛 Include more device_owners when looking for a port for floating ip
🌱 Cleanup security group created by e2e test
🐛 Don't try to resolve machine on delete if cluster not ready
This deprecates k8s.io/utils/pointer, which we replace with k8s.io/utils/ptr to keep the linter happy.
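The migration replaces the per-type helpers (`pointer.Int32`, `pointer.String`, ...) with the generic `ptr.To`. The sketch below defines a local `To` that mimics `ptr.To` from k8s.io/utils/ptr so it stays dependency-free:

```go
package main

import "fmt"

// To mirrors ptr.To from k8s.io/utils/ptr: it returns a pointer to any value.
func To[T any](v T) *T { return &v }

func main() {
	replicas := To(int32(3)) // was: pointer.Int32(3)
	name := To("worker")     // was: pointer.String("worker")
	fmt.Println(*replicas, *name)
}
```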
Add option to use an AZ for APIServerLoadBalancer
Replaces the AvailabilityZone string for volumes with a VolumeAvailabilityZone struct, which allows more flexibility in defaulting behaviour. Specifically, it enables us to express both the current default behaviour, where we take the volume AZ from the Machine, and a new default behaviour where we don't specify a volume AZ at all. In making this change to both RootVolume and AdditionalBlockDevices, we use common code for both APIs. This has the result of updating RootVolume to be consistent with AdditionalBlockDevices.
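A sketch of what such a struct enables. The field and constant names below are illustrative stand-ins for the real API, but they capture the two defaulting behaviours the commit message describes:

```go
package main

import "fmt"

// VolumeAZSource names the two defaulting behaviours; illustrative only.
type VolumeAZSource string

const (
	AZFromMachine VolumeAZSource = "Machine" // take the AZ from the Machine
	AZFromName    VolumeAZSource = "Name"    // use an explicit AZ name
)

// VolumeAvailabilityZone replaces the old plain string field.
type VolumeAvailabilityZone struct {
	From VolumeAZSource
	Name string
}

// resolveAZ returns the AZ to request for the volume. An empty string
// means "don't specify an AZ at all", which the old string field could
// not express distinctly from an unset value.
func resolveAZ(v *VolumeAvailabilityZone, machineAZ string) string {
	if v == nil {
		return "" // new default: no volume AZ
	}
	if v.From == AZFromMachine {
		return machineAZ
	}
	return v.Name
}

func main() {
	fmt.Println(resolveAZ(&VolumeAvailabilityZone{From: AZFromMachine}, "az-1"))
}
```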
🌱 Bump k8s dep to 0.29.3
⚠️ Allow explicitly empty volume AZ
🐛 Fix idempotent restore when setting ControlPlaneEndpoint
🐛 Fix crash on delete with no bastion
Somehow the carry patches don't seem to have applied.
Force-pushed from f2dd4df to b1e4390
This is the result of:
@pierreprinetti: The following tests failed.
Pull the upstream tag v0.10