We run all yaml changes through a git repo, which is then applied via CI/CD running `kubectl apply -k .`
Unfortunately, when a yaml file is removed, kubectl will not delete the corresponding resource: the resource becomes orphaned, and the yaml is no longer declarative. Contrast this with tools like Terraform and AWS CloudFormation - both of those will remove resources that are no longer declared.
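A minimal sketch of the orphaning behaviour described above (the file and resource names are illustrative):

```shell
# First apply: creates everything declared in the kustomization
kubectl apply -k .

# Remove one manifest from the repo
git rm my-app.yaml
git commit -m "remove my-app"

# Second apply: my-app is NOT deleted; it is now orphaned in the cluster
kubectl apply -k .
kubectl get pod my-app   # still exists
```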
The approach of kubectl seems more in line with e.g. Puppet file handling, where you have managed files (some Puppet code exists) and unmanaged files (no Puppet code exists). When converting from managed to unmanaged, the file still exists on the system; it is just no longer manipulated by Puppet. But Puppet also has a mechanism for explicitly stating that a file should not exist.
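For comparison, Puppet's explicit mechanism is `ensure => absent` (the path here is illustrative):

```puppet
# Declares that this file must NOT exist; Puppet removes it if present.
file { '/etc/myapp/old.conf':
  ensure => absent,
}
```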
I propose that kubectl have a similar mechanism for identifying resources that SHOULD NOT exist. Proposed syntax:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
lifecycle: destroy
```
The prune option is nice, but it makes a lot of assumptions and requires a lot of overhead to set up correctly. In particular, it moves filtering out of the resource-specific yamls and into the general-purpose command line. That tightly couples the kubectl command line to the yamls, which seems like a bad idea.
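For reference, the existing mechanism being discussed looks roughly like this; the label selector is illustrative, and every resource must carry a matching label for pruning to work:

```shell
# --prune deletes previously-applied resources that match the selector
# but are no longer present in the current configuration.
kubectl apply -k . --prune -l app.kubernetes.io/managed-by=my-ci
```

This is where the coupling shows up: the selector lives on the command line, while the labels it depends on live in each yaml.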