Support for Kubernetes ApplySets #5

Open
NiklasRosenstein opened this issue Jul 31, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@NiklasRosenstein
Collaborator

Kubernetes "ApplySets" are objects that group resources that are applied together, allowing resources that are removed in a subsequent apply to be pruned or even pruning the entire group of objects.

It would be nice if Nyl supported ApplySets in such a fashion that they are straight forward to use when applying manifests with kubectl, allowing users to easily upgrade applications and pruning removed resources.

In an ideal scenario, one would simply run something like:

$ nyl template . | KUBECTL_APPLYSET=true kubectl apply -f - --prune

Though it may be necessary to reference the apply set with --applyset=kind/name. If the invocation of kubectl is error-prone, we could also consider adding a command or option to Nyl to actually run kubectl for you.

$ nyl template . --apply

Nyl can provide its own parent resource (e.g. ApplySet.nyl.io/v1). It can automatically detect the presence of the apply set resource in the file, and for manifests loaded from the same source file, Nyl can automatically assign the part-of labels (although that may not be what the ApplySet spec intended/how kubectl implements it, I'm not 100% sure yet).

apiVersion: nyl.io/v1
kind: ApplySet
metadata:
  name: my-applyset
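
If Nyl manages the parent resource itself, it also needs to compute the applyset.kubernetes.io/id label. A minimal sketch, assuming the V1 ID format from the ApplySet KEP (an unpadded URL-safe base64 of the SHA-256 of the parent's identity; `applyset_id` is a hypothetical helper, not part of Nyl):

```python
import base64
import hashlib

def applyset_id(name: str, namespace: str, kind: str, group: str) -> str:
    # Per the ApplySet spec (KEP-3659), the V1 ID is
    # "applyset-" + base64url(sha256("<name>.<namespace>.<kind>.<group>")) + "-v1",
    # with the base64 padding stripped.
    digest = hashlib.sha256(f"{name}.{namespace}.{kind}.{group}".encode()).digest()
    encoded = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"applyset-{encoded}-v1"
```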

Relevant resources:

@NiklasRosenstein
Collaborator Author

NiklasRosenstein commented Jul 31, 2024

The current implementation is a bit bumpy.

  • Nyl detects if there is an ApplySet.nyl.io/v1 resource in a manifest source file. If yes:

    • It ensures that its applyset.kubernetes.io/id label is set correctly.

    • It sets the annotations

      • applyset.kubernetes.io/tooling to kubectl/1.30
      • applyset.kubernetes.io/contains-group-kinds to the group kinds of all resources that belong to the set.
    • That is because kubectl can't create the ApplySet custom resource, and complains if the resource does not have these annotations:

      error: ApplySet parent object "applysets.nyl.io/argocd" already exists and is missing required annotation "applyset.kubernetes.io/tooling"
      error: parsing ApplySet annotation on "applysets.nyl.io/argocd": kubectl requires the "applyset.kubernetes.io/contains-group-kinds" annotation to be set on all ApplySet parent objects
      
    • On a subsequent run with --applyset, it complains that the field manager for these annotations doesn't match:

      WARNING: failed to update ApplySet: Apply failed with 2 conflicts: conflicts with "kubectl-client-side-apply" using nyl.io/v1:
      - .metadata.annotations.applyset.kubernetes.io/contains-group-kinds
      - .metadata.annotations.applyset.kubernetes.io/tooling
      ApplySet field manager kubectl-applyset should own these fields. Retrying with conflicts forced.
      
  • Because kubectl can't create the custom ApplySet resource, it must first be applied manually. Otherwise, it complains:

    error: custom resource ApplySet parents cannot be created automatically

    • But Nyl outputs the ApplySet resource along with the rest, thus the first --applyset-less apply will contain all other resources as well (and no resource gets the applyset.kubernetes.io/part-of label assigned by kubectl).

      • We could set the applyset.kubernetes.io/part-of label on all resources before spitting them out. This works on a normal apply, but with --applyset, kubectl will complain that the input resources already have the label defined.

        error: ApplySet label "applyset.kubernetes.io/part-of" already set in input data

Presumably this works a bit better with a Secret or ConfigMap as the apply set parent resource, because kubectl can create those. I will have to try it, but they are both namespaced, and I think having cluster-scoped apply sets could be quite useful.
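
One way around the "already set in input data" error above is to strip the part-of label from the rendered manifests whenever kubectl is going to manage it via --applyset. A hedged sketch (`strip_part_of` is a hypothetical helper, not part of Nyl):

```python
PART_OF = "applyset.kubernetes.io/part-of"

def strip_part_of(manifests: list[dict]) -> list[dict]:
    # Remove the part-of label so that `kubectl apply --applyset` can own it;
    # kubectl rejects input that already carries the label.
    for manifest in manifests:
        labels = manifest.get("metadata", {}).get("labels", {})
        labels.pop(PART_OF, None)
    return manifests
```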

@NiklasRosenstein
Collaborator Author

I've added a nyl template --apply option that does the following (assuming there is an ApplySet in the manifest source file):

  • It implies the new --no-applyset-part-of option, which will have Nyl not add the applyset.kubernetes.io/part-of label to the generated resources.
  • It ensures the ApplySet resource exists with kubectl apply --server-side --force-conflicts.
  • It applies all generated manifests via kubectl apply --server-side --force-conflicts --applyset=... --prune
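
The two kubectl invocations above can be sketched as argv lists (a sketch only; `kubectl_apply_commands` is a hypothetical helper, and the alpha --applyset flag additionally requires KUBECTL_APPLYSET=true in the environment):

```python
def kubectl_apply_commands(applyset_ref: str) -> list[list[str]]:
    # Sketch of the two kubectl invocations; manifests are piped on stdin ("-f -").
    # Note: --applyset is alpha and needs KUBECTL_APPLYSET=true in the environment.
    ensure_parent = ["kubectl", "apply", "--server-side", "--force-conflicts", "-f", "-"]
    apply_all = [
        "kubectl", "apply", "--server-side", "--force-conflicts",
        f"--applyset={applyset_ref}", "--prune", "-f", "-",
    ]
    return [ensure_parent, apply_all]
```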

What's not so pretty yet:

  • In the beginning, you always get this warning from kubectl:

    W0801 02:36:26.632531   85080 applyset.go:447] WARNING: failed to update ApplySet: Apply failed with 1 conflict: conflict with "kubectl": .metadata.annotations.applyset.kubernetes.io/tooling
    ApplySet field manager kubectl-applyset should own these fields. Retrying with conflicts forced.
    
  • We need to use --force-conflicts, otherwise we get conflicts with the ApplySet annotations:

    error: Apply failed with 1 conflict: conflict with "kubectl-applyset": .metadata.annotations.applyset.kubernetes.io/tooling
    

@NiklasRosenstein
Collaborator Author

I've added a nyl template --diff option in #7, but again it doesn't work well with apply sets and deleted resources. I was expecting that kubectl diff -f <(echo) -l applyset.kubernetes.io/part-of=... would show all resources that belong to the apply set as deleted, but it doesn't.

@NiklasRosenstein
Collaborator Author

There's also an issue that may stem from generating a wrong kind in the contains-group-kinds annotation for Argo CronWorkflows:

2025-01-07 11:35:16.894 | INFO     | Kubectl-apply ApplySet resource applysets.nyl.io/s3-backup from s3-backup.yaml.
applyset.nyl.io/s3-backup serverside-applied
2025-01-07 11:35:17.240 | INFO     | Kubectl-apply 3 manifest(s) from 's3-backup.yaml'
error: parsing ApplySet annotation on "applysets.nyl.io/s3-backup": could not find mapping for kind in "applyset.kubernetes.io/contains-group-kinds" annotation: no matches for kind "cronworkflows" in group "argoproj.io"
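
The error suggests the annotation must list "Kind.group" entries using the Kind as registered with the API server (e.g. CronWorkflow.argoproj.io), not a lowercase plural resource name like "cronworkflows". A hypothetical sketch of building the annotation value from rendered manifests:

```python
def contains_group_kinds(resources: list[dict]) -> str:
    # Build the applyset.kubernetes.io/contains-group-kinds value: a sorted,
    # comma-separated list of "Kind.group" entries. Core-group resources
    # (apiVersion "v1") have no group suffix.
    entries = set()
    for res in resources:
        api_version = res["apiVersion"]
        group = api_version.split("/")[0] if "/" in api_version else ""
        entries.add(f"{res['kind']}.{group}".rstrip("."))
    return ",".join(sorted(entries))
```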
