Epic: WYSIWYG Kubernetes Application Configuration #3351
A few scenarios to try:
I agree that this is the direction that our users want us to go in, and I think this is a great list. We've started contributing some dogfooding PRs, which enable us to explore and prioritize the list of items you've identified here.
Not that we're lacking for example applications to try, but here's another: It uses plain yaml, has a skaffold config, and has separate directories for prod and dev configs. There's also:
I took another look at the kubernetes dashboard, portainer, k8syaml.com, k8od.io, lens, monokle, octant, the GKE UI, the Ambassador Labs clickops experience, and Humanitec. If starting from scratch, starting with required fields makes sense to me. For a blueprint, we can stick with "example" for the names. In most cases other than the container image we could probably provide some defaults, such as labels, selectors, and ports. If we want lots of defaults even in the blueprint authoring experience, I think the best way to provide those is with an upstream package.

After required fields, we should allow adding and editing of arbitrary fields, but we may want to group them by topic, such as scheduling or security, and sort them by frequency of use.

Dealing with multiple resources together should provide opportunities for autocompletion. For instance, the service selector could be the same as the deployment's selector by default, ports could be defaulted, etc., similar to kubectl run --expose. Adding a ConfigMap could optionally mount it as a volume or inject the contents as environment variables. We might be able to guess which by looking at its contents. We may also want to be able to upload files and convert them to ConfigMaps using a function (#3119).

Our rule of thumb is that single values could just be edited in place. We'll need to think about how to direct users to those values that may need to be modified (#3145). We should identify cases where values need to be propagated to multiple places, which may suggest we need functions.
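To make that cross-resource defaulting concrete, here is a minimal sketch of the kind of output such a flow might produce; all names, labels, and port numbers are placeholder assumptions, not a committed design:

```yaml
# Hypothetical blueprint output: the Service selector defaults to the
# Deployment's pod labels and the port is defaulted from the container,
# similar to `kubectl run --expose`. "example" is a placeholder throughout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example          # defaulted to match the pod template labels
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: example    # the one field the user must supply
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example            # autocompleted from the Deployment's selector
  ports:
    - port: 8080            # defaulted from the containerPort
      targetPort: 8080
```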
I thought about just optimizing for starting blueprints from blueprints, but that would create a chicken-and-egg scenario of how to create the base blueprints. We could potentially provide some, but it still feels unsatisfying. The k8syaml experience does feel like this, though. In combination with the ability to select resources to enable/disable, such as Service, Ingress, and HPA, it could enable selecting from base blueprints for stateless apps, stateful apps (StatefulSet, PVC, and headless Service), and daemons.
The GKE Deploy experience starts with the container image, which it can autocomplete from GCR or AR. It allows, but doesn't require, specification of the entry point and inline env vars. Next it supports other configuration, which all has default values: name, namespace, labels. The labels are used for the pod template and selector. It also creates an HPA, but not a Service or Ingress, which I'd add options for. The Services and Ingresses page provides an option to select LoadBalancer Services to create an Ingress for.

From the details of a specific Deployment in the GKE UI, once it has already been created, there is a list of actions: autoscale (creates an HPA), scale (edit replicas and resources), expose (create a Service), and knobs to update the image and update strategy. I'd also add advanced options to expose the rest of the attributes, rather than just falling back to yaml editing. We possibly could do this via a generic form editor, which we will need in order to handle arbitrary CRDs: GoogleContainerTools/kpt-backstage-plugins#68.

For organizing a large number of options, I like the way the GKE cluster creation page breaks down options into groups of related attributes. k8syaml.com does a flavor of this also. Here's an Openshift UI example:
Some projects have embraced the struct-constructor approach. gimlet.io builds a UI on top of this chart: The UI can support any other chart that provides JSON Schema for its values, though. kapitan similarly takes a one-generator approach, but uses jsonnet rather than helm templates, and supports multiple components, kind of like helmfile: Effectively these are both thin abstractions over Kubernetes types that enable representation of resources as maps of attributes. Kapitan supports some kustomize-like features, such as setting common values ("global defaults") across components. Kapitan normalizes the configuration experience across multiple different tools and generated artifacts, but AFAICT lacks an explicit schema for the inventory. It aims to simplify human authoring, but does not aim to support automation above Kapitan. https://github.com/zalando-incubator/stackset-controller does support automation on top because it's implemented using CRDs. https://github.com/clastix/capsule takes a similar all-in-one resource approach for Namespace provisioning.
There are certain common operations that you may want to do on an application deployment but not build into the base blueprint, for example creating a PodDisruptionBudget or HPA. It would be useful to have functions to generate these and other similar "decorations" on a deployment.
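As a sketch of what such a decoration function might emit next to an existing Deployment (the name, threshold, and selector are placeholder assumptions):

```yaml
# A hypothetical generated "decoration": a disruption budget whose selector
# is matched to the target Deployment's pod labels.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1          # a default the user would likely want to review
  selector:
    matchLabels:
      app: example         # copied from the target Deployment's pod labels
```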
Just to gather things in one place, a couple other examples for package dependencies:
One issue will be providing guidance on when we should just expect the user to add a resource without any particular help, when we should use a function to generate the resource, and whether that function should be run imperatively or added as a declarative function.
Deployment of packages across cloud providers is another common thing we may want to see if we can help with. In some cases, this can be handled with separate packages that are dependencies, as discussed above with cloud provider SAs, KSAs, and workload identity. But there are other provider-specific tweaks, often controlled by annotations. For example, creating a LoadBalancer service can be scoped to the local VPC in GKE with a special annotation (https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing). Presumably similar concepts exist in other cloud providers. Is this just left as an exercise for the author and/or deployer, or is there some assistance we can provide?
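For reference, a sketch of that GKE tweak (assuming a reasonably recent GKE version; the service name, selector, and ports are placeholders):

```yaml
# The provider-specific annotation scopes this LoadBalancer to the VPC
# (an internal load balancer) rather than exposing it externally.
apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
```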
There are a lot of best-practice validators as well as security policy enforcement tools. It would be great if we could show not just validating those practices, but applying them -- make it so. datree.io is one such tool, as shown in Viktor's video. Here's a video specifically about that: https://www.youtube.com/watch?v=3jZTqCETW2w. Obviously there's gatekeeper, which has a mutation mode now. And: Here's a list of tools:
Regarding "decorations": I agree that's a reasonable concept. kubectl has some operations, like expose (creates a Service) and autoscale (creates HPA), that are designed with that philosophy. Ingress could be similarly generated for a Service of the appropriate type. Other resources that may be referenced from a Pod could be similar, such as ServiceAccounts, ConfigMaps (probably via generation), Secrets (via external secrets), and PVCs. These could have full sub-flows or we could add the reference, generate a minimal resource, possibly using a function, then steer the user to go edit the generated resources after, such as with a mechanism similar to the flagging mechanism used in the prototype, if any required values need to be checked or provided. I'm pretty sure that resource creation should typically be imperative and interactive. |
The kubernetes dashboard asks for the name, image, replicas, service type (none, internal, external), and port (if a service is selected). Show advanced options expands: description (added as an annotation -- that's nice), labels, namespace, image pull secret, cpu and memory requests, command and args, environment variables, and whether to run as privileged. Still not all the options. It puts documentation next to each form field, which is nice. It also supports copy/pasting yaml and uploading from a file.
Discussion about a "default" pod spec: https://twitter.com/BretFisher/status/1550326044577730560
I could imagine wanting different UX for "create a blueprint for a specific off-the-shelf application", such as cert-manager, and "create a blueprint for a category of similar applications", such as Spring Boot apps, and maybe a "just deploy my app across multiple environments" scenario. We've observed that, with a few exceptions (e.g. Prometheus, ElasticSearch), helm charts (mainly) and other config formats are much more widely used than Operators for running off-the-shelf applications. Of course, the Operators themselves somehow also need to be installed, but this suggests that Operators are not the main alternative.
We'll also want the UI guidance for creating and editing deployments to be different from that for creating a blueprint.
This post describes a couple concrete application change scenarios:
I would go so far as to say that the UI should focus on the "light authoring" workflows that are consumption-oriented. For "from scratch" package authoring, I think lower-level, CLI-based tooling will allow authors to use the IDEs and other tools of their choice.
Serious question: In the consumption-oriented UX, what is the delightful "wow" experience we'd be aiming for, compared to a UI form for entering helm chart values? Also:
I think the "wow" is captured in "light authoring". Taking an off-the-shelf package, tweaking it with the "decorators" we have been discussing - like adding a PDB, or enabling/disabling TLS on an Ingress, adding an HPA, or even just tweaking a few fields here and there without any need for "package inputs" or similar rigidity. The magic to me is "I can make a change without changing the code of the upstream templates" - because of course we don't have "templates". The ability to diverge from upstream but still maintain the connection is powerful. Full package authoring is done more rarely, by a smaller set of users. Those are also users more deeply soaked in config management, etc. Deriving, tweaking, and otherwise customizing packages is done by many users and we should have a much lower threshold of knowledge needed to perform those actions. That light authoring generally won't require creation of new functions, for example, but instead the discovery and execution of existing functions. I think we can get a significant "wow" from a broader audience by focusing on those.
There won't be until we (the kpt community) produce probably at least two dozen good examples and show the power.
I think so, but not necessarily "general purpose". What I mean here is that the UX for "full package authoring" should allow those authors to come with their existing tools and integrate with our tools that focus on the kpt package authoring aspects. For example, @justinsb and @droot mentioned yesterday wanting some automated support in "dehydrating" packages. So, they can work on manifests in their test clusters, then run the tool to make it back into an abstract package. These are
Yeah, this one I am more ambivalent on. I do hear it from Nephio folks too. And given the massive investment in Helm charts it makes sense. But I still would focus on the previous point first, as then the workflow can be "render helm chart, then use those tools from the previous point".
We need distinct off-the-shelf package and bespoke app tracks. (I defined these terms in https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/declarative-application-management.md) Examples of bespoke app deployment: kubectl run, GKE UI, Kubernetes dashboard, Skaffold, Openshift UI (https://www.youtube.com/watch?v=jBDmX85IjLM), Ambassador Labs ClickOps over GitOps, Gimlet's OneChart, Kapitan, tanka, cdk8s, etc. Probably most Kubernetes deployment and CI/CD tools. In kpt, a blueprint is likely needed to promote across environments, not unlike a kustomize base or helm chart.

For off-the-shelf apps/components, those are predominantly helm charts. They may be deployed via app catalogs, such as artifacthub.io, kubeapps.dev, plural.sh, Rancher's app marketplace, Lens, etc., or by just specifying the chart and values, as in the ArgoCD UI. The end consumer, the deployer, typically would just provide values. What I showed in my talk was a mix of the platform team adaptation experience and the deployer experience. The "light authoring" would likely happen at the adaptation stage -- bringing an off-the-shelf component into an org and operationalizing it. A lot of charts are already prepared for that. For instance:
For the bespoke track, we previously used Spring Boot as a canonical class of application. Here's a trivial tutorial that uses kubectl to deploy: Here's one that has yaml to copy/paste: Here's one that also uses copy/paste, and includes istio: Here's an example using helm and argocd: Here's an example that I don't think has deployment config: There is a tool that generates basic deployment and service configs from source code annotations: Typical application config (hundreds of knobs):
Speaking of bespoke apps, Jenkins X has ways to promote across environments and such, and is using kpt in some capacity in jx project import and jx gitops upgrade. They are working to migrate to kpt v1. It also uses kyaml in some commands to modify configuration in WYSIWYG style, which is nice, though it doesn't use KRM functions:
In the bespoke app track, we may want to try kubevela.io and knative, for comparison. Installing those could serve as examples of off-the-shelf apps.
Generation of skaffold.yaml:
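For illustration, a minimal skaffold.yaml of the sort such a generator might emit (the apiVersion, image name, and manifest path are assumptions):

```yaml
# Build one image and deploy the raw manifests with kubectl.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: example        # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml          # placeholder manifest location
```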
Though we are using off-the-shelf apps as example app configurations, an argument in favor of starting with bespoke apps for the UI-based demo is that we wouldn't need to build up a huge catalog of apps, helm is less dominant there, the solution space is more fragmented, and UIs are at least sometimes used to author resources, which maybe has some benefit of familiarity. Obviously that's also where some users want higher-level abstractions, PaaSes, and so on. And CI/CD. It also helps me to think about the user journeys, UX approach, and separation of concerns.
I started to sort out how I would manage the large surface area in the UI in the blueprint creation flow for a single bespoke application. There is more to do, but this is the working doc. More guidance and structure will be needed than just forms that mirror the API spec. Pods have a large number of attributes. We may also want flows that involve multiple resource types. Several are involved here that build on each other and reference each other. And we'll need to decide where to leverage functions, as opposed to implementing functionality directly in the UI, and which functions should be used imperatively rather than being added to the Kptfile pipeline. The UI could depend on some well known functions, such as set-namespace.
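For instance, a Kptfile pipeline depending on the well-known set-namespace function might look like the following sketch (the function version tag and namespace value are assumptions):

```yaml
# kpt v1 Kptfile with a declarative mutator the UI could manage.
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: example
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/set-namespace:v0.4.1   # assumed tag
      configMap:
        namespace: example    # placeholder namespace
```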
vscode plugins we could compare with: |
Most Kubernetes users are interested in configuring applications. That's the original primary purpose of Kubernetes: running containerized workloads. Even cluster services / add-ons are applications. GitOps is primarily focused on deploying applications as well. Obviously it's the core use case for Helm.
So, what do we need to address in order to be able to handle applications in kpt?
My current opinion is that multi-cluster specialization and multi-cluster rollout is somewhat independent, but I may change that opinion as we dig into this more.
We plan to look at these common cluster services / add-ons as test cases:
We should also try deploying all our own components: porch server and controllers, config sync, resource group controller, backstage.
At some point, we should also try the ghost application (chart, rendered) we looked at previously. That involved multiple components, so that's another case for dependencies and/or app of apps or static subpackages or dynamic dependencies. It's kind of unusual in that it's an off-the-shelf app rather than a bespoke app or off-the-shelf cluster component or off-the-shelf app platform like knative, kubevela, spark, kubeflow, etc.
Once we figure out how to natively handle applications, we can look into automating helm chart import, rendering and patching helm charts, and so on.
@selfmanagingresource @justinsb @droot