# CUJ: How to implement defaults for ResourceQuota and LimitRange in the namespace provisioning use case #3294
## Comments
Thanks. It's useful to make this concrete. Quick comment: Approach 2 is a generator: #2528. The behavior you describe is the same as for the upstream case, which is consistent with previous discussions. Supporting update-like merge logic for generators would be broadly useful. If done with plain `get`, it would use a patch-on-fork methodology.
Why would `set-default-limits` be invoked `onDeploy`? It seems simpler to invoke the function imperatively on the blueprint and commit the results. An updated function applied to the blueprint would then update it. Downstream packages would get updates if they hadn't overridden the defaults, which is the behavior we'd want. In general, I think of creation, such as via a UI or CLI, at blueprint-authoring time as an imperative act. The blueprint captures the desired defaults. I don't want to fall back to the approach where the default values are embedded only in monolithic generator implementations, and all changes are expected to be made to the exclusive, monolithic generator. That's where IaC tools are today. Maybe a case could be made for generators capturing default values of new attributes. In an API, default changes are considered non-backwards-compatible, requiring a major version bump. I definitely don't want changes to blueprint values to override values in downstream packages that have been changed, much as TurboTax allows overriding, in forms mode, values set during the step-by-step mode. But I wouldn't use a generator in a downstream package simply to generate defaults that should originate upstream. Also, our current scenario supports one level of derivation: abstract blueprint -> deployable instance. We KNOW we will need to support multiple levels of derivation, such as to support multiple clusters, multiple regions, and multiple environments.
Quick comments:

- The intention is to reduce the manual steps and knowledge required of the package consumer. Typically, if package consumers always have to execute …
- This works well for the blueprint-authoring part. I think it becomes an issue when creating a deployable instance involves creating resources: package consumers stamping out resources to make an instance deployable via CLI or UI becomes manual, then people look for ways to automate that, and generators start looking attractive.
- I think making …
Generation via UI or CLI isn't manual. As many attributes can be filled in automatically as we like. Boilerplate, YAML indentation, value validation, etc. can also be done by the tools. Think of them as generator functions. What information would need to be specified to a different type of generator? It sounds like the struct-constructor pattern. If every variant needs the resources, they belong in the blueprint.
Current package update behavior is just plain wrong. Before thinking about making it flexible, we should think about how to achieve the most desirable behavior.
The Kubernetes Resource Model is a serializable representation of declarative intent. Yet it needs to be authored somehow, to capture user intent, as well as whatever choices are made by automation. Blueprints are intended to capture such choices, which can then be inherited by downstream packages.
Ack. Currently the most obvious case is broken, so we need to prioritize that.

Note to self: I probably should have started a public Google Doc for this. It's very difficult to have a discussion in a GitHub issue.
Note: I am creating this issue/proposal to make some of our discussions in other issues more concrete (and I also think this is a good CUJ related to namespace provisioning that exposes gaps in the toolchain).

Platform teams want to implement best practices for CPU/memory/storage consumption by resources provisioned in a namespace. Kubernetes offers the ResourceQuota and LimitRange primitives to assist with that. The platform team would like the namespace provisioning solution to implement the following:

1. Sensible defaults for ResourceQuota and LimitRange are in place automatically when a deployable instance is stamped out.
2. Values customized in a downstream package are preserved across `kpt pkg update`, `fn render`, etc.
3. Unbounded or unreasonably high limits are caught by validation.

Please suggest other requirements if I missed any.
...
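To make the "sensible defaults" concrete, here is a sketch of the kind of ResourceQuota and LimitRange a platform team might ship. The specific quantities are illustrative assumptions, not recommendations from this issue:

```yaml
# Illustrative default values only -- tune per organization.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:            # applied to containers that omit limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied to containers that omit requests
      cpu: 250m
      memory: 256Mi
```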
Some proposals and solutions:

Note: We will refer to `basens` as the upstream package (aka abstract package); the terms *variant*, *downstream package*, and *deployable instance* are used interchangeably.

### Approach 1: Upstream package has defaults
One obvious way is for the upstream package `basens` to contain ResourceQuota and LimitRange resources with sensible defaults. Downstream packages get these defaults automatically when `kpt pkg get <basens> --for-deployment` is used to stamp out a variant of `basens`.

- `kpt pkg update <downstream-pkg>` will not preserve the values in the downstream package (the current update strategy is that upstream wins if it has different values). So this doesn't satisfy the 2nd requirement completely.
- To change the defaults, the platform team updates the `basens` package. Downstream packages using the default values will get the new defaults automatically today, because upstream wins during `kpt pkg update`.
- The `basens` package can include a validator function, invoked on each `fn render` run, to check for unbounded or unreasonably high limits.

So this approach doesn't work, because `kpt pkg update` overrides the downstream customization.

### Approach 2: function set-default-limits
There are two parts to this approach:

1. `basens` will contain ResourceQuota and LimitRange resources with the placeholder value `example` as the defaults; the upstream package is not complete in itself.
2. A function `set-default-limits` with the following behavior: it fills in the real default values, and it supports a `force-reset` flag that updates the values to the new defaults, ignoring the current values.

Let's analyze how well it satisfies our requirements:

- Downstream users stamp out a variant with `kpt pkg get <downstream-pkg> --for-deployment` and then run `kpt fn eval -i set-default-limits <downstream-pkg>` to set the defaults. So downstream users need to know that they must run this function to ensure the defaults are in place (not great).
- Customized values survive `kpt pkg update` and `kpt fn render`. This is primarily because the upstream package always has placeholder values.
- To roll out new defaults, the platform team publishes a new version of `set-default-limits` with the new default values for ResourceQuota and LimitRange; it is also aware of the previous default values. On a downstream package, running `kpt fn eval -i set-default-limits:v1.0.1 <downstream-pkg>` will apply the new defaults if the defaults have not been customized in the downstream package. Since the upstream package doesn't contain concrete ResourceQuota/LimitRange values, `kpt pkg update` doesn't interfere with these values.
- The `basens` package can have the `resource-limit-validator` function. This function ignores the placeholder value `example`.
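To make Approach 2 concrete, the placeholder-valued ResourceQuota in `basens` might look like the following sketch; only the `example` placeholder convention comes from this issue, the field choices are assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
spec:
  hard:
    # `example` is the placeholder; set-default-limits replaces it with
    # real defaults on the deployable instance.
    requests.cpu: example
    requests.memory: example
```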
### Approach 2.1: function set-default-limits with deployment hook
This approach builds on approach 2 and improves on our 1st requirement by removing the need for users to invoke `set-default-limits` manually after stamping out the downstream package. Currently `kpt` supports a mechanism for invoking a builtin function to generate the package context. We can make that mechanism flexible enough to invoke any function for deployment instances.

So the platform team, the author of `basens` in this case, can specify a hook in the package. With this, `set-default-limits` will be invoked when the downstream package is created by running `kpt pkg get basens <downstream-pkg> --for-deployment`, and the defaults are set for the new variant. This addresses the 1st requirement.

The rest of the steps remain the same as in approach 2.

cc: @bgrant0607 @justinsb
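A hypothetical shape for the Approach 2.1 deployment hook in the `basens` Kptfile might be the following. The `hooks`/`forDeployment` fields and the function image path are invented for illustration; kpt's current Kptfile schema defines a `pipeline` section, and the hook mechanism proposed above does not exist yet:

```yaml
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: basens
info:
  description: Abstract namespace blueprint
# Invented for illustration: run these functions once, when a deployable
# instance is stamped out with `kpt pkg get ... --for-deployment`.
hooks:
  forDeployment:
  - image: gcr.io/example/set-default-limits:v1.0.0
```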