Developer News
We will be having a Contributor Summit at KubeCon EU on May 16th, mostly in an unconference format. Stay tuned for more information as we get closer.
Last week was Community Meeting week. Topics discussed included:
It’s been a year since we changed to three releases a year, so it’s time to evaluate whether the change should be permanent. Initial survey results suggest most people like it, but there’s still time to fill out the survey.
We will be disabling new Beta APIs by default in 1.24, since if an API is on by default it’s kinda sorta production and not really beta anymore. Old beta APIs will not be disabled, though; you’ll just need to explicitly enable any new ones you want to use (a sketch of what that might look like follows below).
Finally, we had a long discussion around how to improve Kubernetes reliability long-term. Many contributors shared ideas, including emulating SIG-Node’s CI project, more stringent enforcement of test requirements on new features, and making TestGrid easier to understand. Since declining reliability is a problem that has been five years in the making, we’ll need more than a few months of effort to reverse it. Share your thoughts on the KEP or the dev thread.
Video and transcripts for the Community Meeting will be up soon.
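For the beta-API change mentioned above, here’s a minimal sketch of what opting in to a new beta API could look like, assuming the usual kube-apiserver --runtime-config mechanism remains the route for this; the group/version shown is made up, and the exact policy details are still being settled in the KEP.

```yaml
# Hypothetical kube-apiserver static pod excerpt (e.g.
# /etc/kubernetes/manifests/kube-apiserver.yaml) showing how a new
# beta API that ships off by default could be enabled explicitly.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.24.0
    command:
    - kube-apiserver
    # "example.k8s.io/v1beta1" is a made-up group/version; substitute
    # whichever new beta API you actually need.
    - --runtime-config=example.k8s.io/v1beta1=true
```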
Release Schedule
Next Deadline: Code Freeze, March 30th
All of the enhancements, including ones with exceptions, are done. We’re now in the countdown to code freeze, so get your last bit of code tweaked and your tests written.
CI Signal for 1.24 is still not happy, with four flakes and a failure in Blocking, and lots more in Informing. Chief among these is the skew failure, which has now been failing for a month, and appears to be a test code issue. If you’re familiar with the skew tests, please pitch in. Folks also resolved a containerd issue in the scalability tests.
Patch releases 1.23.5, 1.22.8, and 1.21.11 are out. There are a lot of backported bugfixes in these updates, so please apply them as soon as you can.
Featured PRs
#108482: Add CEL runtime cost into CR validation
One of the reasons CEL was selected for the new expression-based validation feature is that, despite being a programming environment, CEL code can be statically analyzed to determine a worst-case runtime. As a bonus, the analysis process itself can be given a worst-case timeout. Put together, this means that even a hostile CustomResourceDefinition (which is still a very bad thing, so don’t go trying this in production) won’t drag down apiserver performance too much. This PR was one of several to set up the cost estimation feature. If you have a complex CEL program that you think might run into costing issues, now would be a good time to test out the current thresholds and see how well they work for your use cases.
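As a rough illustration (the field names here are invented for the example), this is the kind of x-kubernetes-validations rule the cost estimator looks at. A rule that only compares scalars is cheap; a rule that iterates over a list is costed against the list’s maximum size, so large or unbounded lists are where you’re most likely to bump into the new limits.

```yaml
# Hypothetical CRD schema excerpt using CEL validation rules.
# The first rule is cheap; the second iterates over a list whose
# size is bounded only by maxItems, so its estimated worst-case
# cost is much higher.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
      - rule: "self.minReplicas <= self.maxReplicas"
        message: "minReplicas must not exceed maxReplicas"
      - rule: "self.hosts.all(h, h.endsWith('.example.com'))"
        message: "all hosts must be in example.com"
      properties:
        minReplicas:
          type: integer
        maxReplicas:
          type: integer
        hosts:
          type: array
          maxItems: 1000
          items:
            type: string
```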
#107674: Add MinDomains API to TopologySpreadConstraints field
The Pod Topology Spread Constraints system is the latest iteration of “please don’t pack all my pods onto one node so they don’t all go down together”. It builds on the older inter-pod affinity system to allow more than one pod per node while still requiring an even spread, by configuring a “maximum skew” that limits how far out of balance the scheduling is allowed to get. That works great for a fixed number of running nodes, where the scheduler knows all the resources it has to play with. As a new feature, you can now also configure the minimum number of nodes (really, domains) that should exist before skew is even considered. All together, this allows for both complex IaaS autoscaling and keeping your AZs (or similar) in balance.
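Here’s a rough sketch of what the new field might look like in a pod spec; treat the details as illustrative, since the field is brand new (and landing as alpha behind a feature gate).

```yaml
# Illustrative pod spec using the new minDomains field. The scheduler
# treats the spread as unsatisfiable until at least three zones
# (domains) exist, even if the skew among the current zones is fine.
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: spread-demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3   # the new field from this PR
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: spread-demo
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.6
```

With a cluster autoscaler in play, marking the pod unschedulable this way is what nudges the infrastructure to bring up nodes in additional domains.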
Other Merges
kube-proxy stops holding open the node ports it uses, but you’ll still break stuff if you put something else on those ports
Promotions
Deprecated
The --deserialization-cache-size flag is removed