add kep for better pruning in apply #810
Conversation
/assign @pwittrock @soltysh @seans3
The conceptualized root object seems to be a grouping construct that is similar to the Application object (https://github.com/kubernetes-sigs/application), albeit with a different purpose. I would prefer it to be the Application object.
Consider injecting a unique label when applying. Say apply introduces a "label-applied-context" UUID. It's the same on all objects in that apply command. This could help with tracking objects that are removed from the directory but still exist on the server side. If the label exists for even one object server side, it is reused for all new objects as well.
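A minimal sketch of what that could look like on each applied object, assuming a hypothetical label key and a made-up UUID value (neither is defined in this KEP):
```
metadata:
  labels:
    # hypothetical key; every object in the same apply invocation gets the same generated UUID
    kubectl.kubernetes.io/apply-context: 3f8e7c2a-91d4-4b6e-a0f5-2c7d8e9b1a42
```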
`kubectl apply` operates on an aggregation of Kubernetes objects. While `kubectl apply` is widely used in different workflows, users expect it to be able to sync the objects on the cluster with configuration saved in files, directories or repos. That means, when `kubectl apply` is run on those configurations, the corresponding objects are expected to be correctly created, updated and pruned.
Currently `kubectl apply --prune` is still in Alpha. It is based on label matching. Users need to be really careful with this command so that they don't delete objects unintentionally. There is a list of [issues](https://github.com/kubernetes/kubectl/issues/572) observed by users. The most noticeable issue happens when changing directories or working with directories that have the same labels. When two directories share the same label, `kubectl apply --prune` from one directory will delete objects from the other directory.
The link to issues is broken. Would you please add the issues you think need to be addressed?
Here is the issue: kubernetes/kubectl#572. I'll update the link.
`kubectl apply` operates on an aggregation of Kubernetes objects. While `kubectl apply` is widely used in different workflows, users expect it to be able to sync the objects on the cluster with configuration saved in files, directories or repos. That means, when `kubectl apply` is run on those configurations, the corresponding objects are expected to be correctly created, updated and pruned.
Currently `kubectl apply --prune` is still in Alpha. It is based on label matching. Users need to be really careful with this command so that they don't delete objects unintentionally. There is a list of [issues](https://github.com/kubernetes/kubectl/issues/572) observed by users. The most noticeable issue happens when changing directories or working with directories that have the same labels. When two directories share the same label, `kubectl apply --prune` from one directory will delete objects from the other directory.
Is supporting working across directories an explicit goal? If a user has conflicting labels across directories, isn't that a user error?
> Is supporting working across directories an explicit goal?

Yes.

> If a user has conflicting labels across directories, isn't that a user error?

It could be, but conflicting labels are sometimes not easy to discover. The point here is that we need to provide a solution in kubectl that avoids errors which are easy to make and hard to detect.
User error can be challenging to avoid. It is hard to differentiate whether the error is really an error or user intent. IMO, we should at least express clearly how `--prune` works for some popular user errors.
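A hedged illustration of the hazard discussed here, using made-up directory names and a shared `app=web` label; this shows the current alpha label-based `--prune` behavior, not the proposed one:
```
# both directories happen to label their objects app=web
kubectl apply --prune -f team-a/ -l app=web
kubectl apply --prune -f team-b/ -l app=web   # may prune objects that were created from team-a/
```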
### Root Object
To solve the first challenge, a concept of a root object is proposed. A root object is a ConfigMap object that contains the GVKN information for all the objects that are to be applied. The root ConfigMap objects should be created by other tools such as Kustomize or Helm. When generating resources to be consumed by `kubectl apply`, those tools are responsible for generating the associated root ConfigMap object to utilize better pruning.
Could we leverage https://github.com/kubernetes-sigs/application for this?
I would prefer a resource 100% of clusters are guaranteed to have installed.
ack
Another issue is that apply isn't just for applications. It's for arbitrary resource types and groupings.
The presence of a root object indicates that apply needs to handle pruning. When `kubectl apply` is given such a root object, it finds the live root object with the same name.
- If a live root object is not found, kubectl will create the root object and the corresponding set of objects.
- If a live root object is found, then the set of live objects will be found by reading the data field of the live root object. kubectl compares the live set with the new set of objects and takes the following steps:
  - set the root object data to the union of the two sets and update the root object
Is this a client-side operation (kubectl) or a server-side one?
This is kubectl. There are no server-side actions.
### Root Object
To solve the first challenge, a concept of a root object is proposed. A root object is a ConfigMap object that contains the GVKN information for all the objects that are to be applied. The root ConfigMap objects should be created by other tools such as Kustomize or Helm. When generating resources to be consumed by `kubectl apply`, those tools are responsible for generating the associated root ConfigMap object to utilize better pruning.
ConfigMap is namespace-scoped; does that mean `--prune` can only be used for deleting namespace-scoped objects?
No, `--prune` will be used for all objects. ConfigMap is chosen since it has a `data` field which can hold arbitrary data.
I think the question is, what namespace will the ConfigMap live in for non-namespaced objects?
```
data:
  object1: namespace1:group1:version1:kind1:somename
  object2: namespace1:group2:version2:kind2:secondname
  object3: namespace3:group3:version3:kind3:thirdname
```
As ConfigMap is namespace-scoped, is the namespace still required in each entry? From the example, is it allowed to specify objects in a different namespace from the ConfigMap? If we allow the root object to specify resources across multiple namespaces, or even cluster-scoped resources, it would be better to use a cluster-scoped CRD instead of a namespace-scoped ConfigMap. But using a cluster-scoped CRD requires additional care in setting up the correct RBAC to allow user tools to work properly (limiting access at the cluster level is more complicated and needs more care than inside a namespace).
The namespace of the root object itself can be different from the namespaces listed in `data`, so `namespace` is required in each entry. Using a CRD requires that the CRD is installed in the cluster. In addition, a CRD-based approach doesn't work with old versions of clusters.
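A hedged sketch of what such a root ConfigMap might look like; all names, namespaces and entries below are illustrative, and the entry format follows the `namespace:group:version:kind:name` scheme shown in the proposal:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: guestbook-root        # illustrative name
  namespace: team-a           # the root object's own namespace
data:
  # child objects may live in namespaces other than the root object's
  object1: team-a:apps:v1:Deployment:frontend
  object2: team-a:apps:v1:StatefulSet:cache
  object3: team-b:apps:v1:Deployment:backend
```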
The presence of a root object indicates that apply needs to handle pruning. When `kubectl apply` is given such a root object, it finds the live root object with the same name.
- If a live root object is not found, kubectl will create the root object and the corresponding set of objects.
Does this imply the existing behavior of `--prune` when the root object is missing? Would it be safer if the root object were always required?
> Does this imply the existing behavior of --prune when the root object is missing?

When the root object is missing, kubectl will skip the pruning part and no object is deleted. Since the proposed solution is quite different from the existing behavior, I don't think it's good to keep both of them.

> Would it be safer if the root object were always required?

It could be. If the root object is not present, no object is deleted at apply time, and eventually the cluster will need a manual clean-up. If the user has a good strategy for cleaning up, it is safe for them to do so. Otherwise, using a root object every time is better.
- set the root object data to the union of the two sets and update the root object
- create new objects
- prune old objects
- remove the pruned objects from the root object's data field
The new objects and old objects are confusing to me (sorry if I missed something). My understanding is that the actual objects are used to construct one set, and those defined in the root object's data field are used to construct the other set. Which one is treated as the new set and which as the old set?
The new set is the set of objects to be applied. The old set is the set of objects running on the cluster, which were applied previously. The new objects are the child objects only in the new set; the old objects are the child objects only in the old set. For example, if the old set is {A, B, C} and the new set is {B, C, D}, then D is a new object to create, A is an old object to prune, and B and C are updated in place.
I see why the resource name is needed for combining the new and old sets. If we want to integrate with the App CRD, would the extra listing operation (based on `GK` details, assuming all underlying resources are within the same namespace as the App CR) be expensive? As synced in person, users may not know the names of all underlying resources, but we can leverage controllers to generate App CRs, so supporting names in the App CRD is a viable option too (i.e., I agree with you).
```
labels:
  kubectl.kubernetes.io/applyOwner: v1:ConfigMap:predictable-name-foo
```
The presence of this label confirms that a live object is safe to delete when it is to be pruned.
So every object must have this label to be associated with the root object, and thus the tool is able to construct the new set and old set? I guess I understand how the new set and old set are constructed. But I still feel it would be safer if the root object had to be consistent with the actual objects in the system before `--prune` is allowed to proceed. The tools creating child objects are responsible for updating the root object to make it consistent. Otherwise it's still possible to make mistakes.
> But I still feel it would be safer if the root object had to be consistent with the actual objects in the system before --prune is allowed to proceed. The tools creating child objects are responsible for updating the root object to make it consistent. Otherwise it's still possible to make mistakes.

In this proposal, kubectl assumes the root object is consistent with the actual objects. I agree with you that the tools are responsible for making sure this is true.
The objects passed to apply will have labels or annotations pointing back to the root object for the same apply.
```
labels:
  kubectl.kubernetes.io/applyOwner: v1:ConfigMap:predictable-name-foo
```
Won't work: ConfigMap names can be longer than the allowed label length. You'll need to use an annotation if you want to do data like this.
IIUC, the plan would be to auto-generate a fixed-length name for the "root object" based on (I'm guessing) the hash of its contents and deal with collisions the same way we do with ReplicaSets. We could mandate that implementations ensure that the name of the ConfigMap for a "root object" is no greater than the allowable 63 chars.
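A hedged sketch of that naming scheme; the base name and the hash suffix are made up, and the exact hashing algorithm is not specified in this KEP:
```
apiVersion: v1
kind: ConfigMap
metadata:
  # "guestbook-root" plus a short content hash, analogous to the pod-template hash
  # in ReplicaSet names, with the total length kept under 63 characters
  name: guestbook-root-7f9c4b6d5
```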
Thank you for pointing that out. I'll update it to use an annotation.
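A hedged sketch of the annotation-based variant being agreed to here, reusing the key and value from the label example purely for illustration:
```
metadata:
  annotations:
    kubectl.kubernetes.io/applyOwner: v1:ConfigMap:predictable-name-foo
```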
- Find the correct objects to delete. When a directory is applied after some updates, kubectl loses track of the objects that were previously applied from the same directory.
- Delete objects at the proper time. Assuming the set of objects to be deleted is available, when to delete them is another challenge. Those objects could still be needed while other objects are rolling out.
In this design, we focus on a solution for the first challenge. For the second challenge, we can blacklist certain types such as ConfigMaps and Secrets from deletion. Generally, deleting objects at the proper time can be better addressed by garbage collection.
There are plenty of other extension types now like configmap and secret. How do they get blacklisted?
IIUC, the objects that will participate in the prune must also be whitelisted, by having them select the root object and by including them in the root object by GVKN? Does this proposal allow users to create objects (e.g. ConfigMaps and Secrets) that will not participate in pruning? If so, are you really saying that you will (in addition to requiring whitelisting) allow for the implementation of explicit blacklisting in the future?
We can also add the blacklist types into `data` to allow customizing the blacklist.
```
data:
  object1: ...
  object2: ...
  object3: ...
  blacklist1: namespace1:group1:version1:kind1
  blacklist2: namespace2:group2:version2:kind2
```
Instead of blacklisting, would it work to only perform deletion after the rollout has completed, if we could figure out how to detect that? I can imagine other resources being dependencies that shouldn't be deleted until a rollout is complete.

> There are plenty of other extension types now like configmap and secret. How do they get blacklisted?

I believe the issue with ConfigMaps and Secrets that is being addressed is due to two factors: 1) some tools generate them based on the contents of another file, so they are likely to be replaced by a new instance without the user explicitly deleting them, and 2) the old instances will be referenced by Pods that may not have been rolled out during a rolling update. Pruning would then delete them before they stopped being used by these Pods.

Are you thinking that this issue or other issues might apply to extension or other types?
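A hedged sketch of how the "prune only after the rollout completes" idea could be approximated today, with an illustrative Deployment name and label; this is not part of the proposal:
```
kubectl apply -f dir/                        # update objects first, without pruning
kubectl rollout status deployment/my-app     # blocks until the rollout finishes (or fails)
kubectl apply --prune -f dir/ -l app=my-app  # prune only once the rollout has succeeded
```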
nm, I think we could do this without whitelisting.
If someone has multiple applications (think of it as a term for grouping) in a namespace, will a prune of application A cause the assets for application B to be deleted? Unless they know about each other and have them blacklisted from pruning?
Ah, never mind... I see this is discussed elsewhere in the diff. Ignore the previous comment.
### Risks and Mitigations
I think this operation is idempotent. If it is, you should call that out. If it isn't, you should explain where multiple applications may lead to a different result. I can't, off the top of my head, think of any circumstance where an intermittent failure can't be corrected by subsequent invocations of the command on the same input. This seems to be the desired intent of the design. If so, you should call this out specifically. If not, it deserves careful thought.
I was considering a scenario where an object belonging to some root object was updated/applied by other means (either manually, or because its name conflicts with an object belonging to another root object). In that case, the label/annotation will be different. This gives us a signal that this object has been touched by others, or is owned by others. It may not be safe to delete it.
Would a UUID be a solution here?
### Root Object
To solve the first challenge, a concept of a root object is proposed. A root object is a ConfigMap object that contains the GVKN information for all the objects that are to be applied. The root ConfigMap objects should be created by other tools such as Kustomize or Helm. When generating resources to be consumed by `kubectl apply`, those tools are responsible for generating the associated root ConfigMap object to utilize better pruning.
You don't need V. GKN is sufficient. The version of the resource can be selected by the client and available versions can be discovered dynamically.
Thank you for pointing that out. I'll update it.
### Challenges
The challenges of pruning mainly come from two aspects.
- Find the correct objects to delete. When a directory is applied after some updates, kubectl loses track of the objects that were previously applied from the same directory.
- Delete objects at the proper time. Assuming the set of objects to be deleted is available, when to delete them is another challenge. Those objects could still be needed while other objects are rolling out.
Could you give an example of this scenario?
One example is a ConfigMap object being referenced in a Deployment object. When a rolling update happens, new pods refer to the new ConfigMap while old pods are still running and referring to the old ConfigMap. The old ConfigMap can't be deleted until the rolling update successfully finishes.
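A hedged sketch of that situation with illustrative names: old pods keep mounting `app-config-v1` until the rolling update completes, so pruning it early would break those pods.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:2.0        # new revision rolling out
        volumeMounts:
        - name: config
          mountPath: /etc/web
      volumes:
      - name: config
        configMap:
          name: app-config-v2             # new pods use v2; old pods still reference app-config-v1
```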
### Child Objects
The objects passed to apply will have labels or annotations pointing back to the root object for the same apply.
This seems to require higher-level tools to generate a unique string to use as the ConfigMap name. If you assume that such a string has already been generated and given to kubectl, couldn't you then add that unique string in a label/annotation on every object created during `kubectl apply`, and then do label-based pruning like in alpha, except that the user does not manually provide a label selector? That is, we instead generate a selector based on this unique string we've assumed to exist, and which identifies a particular "apply group" as defined by the user or the user's higher-level tool. What are the additional benefits of maintaining an explicit, denormalized list of live objects in a ConfigMap? Is it just concern over selecting matching objects across all resources and namespaces? That may be enough justification, but it would be good to make that trade-off clear if so.
> What are the additional benefits of maintaining an explicit, denormalized list of live objects in a ConfigMap? Is it just concern over selecting matching objects across all resources and namespaces? That may be enough justification, but it would be good to make that trade-off clear if so.

If we use a unique label to find the objects as in alpha `--prune`, we have to query all namespaces and all types to find the matching objects. With a list of objects, the query step is no longer necessary.
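A hedged sketch of the difference, using an illustrative label and the ConfigMap name used elsewhere in this thread:
```
# alpha --prune: discover candidates by querying every whitelisted type in every namespace
kubectl get deployments,statefulsets,services --all-namespaces -l example.com/release=my-release

# root-object approach: read the live root ConfigMap and look up each listed object directly
kubectl get configmap predictable-name-foo -o jsonpath='{.data}'
```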
- it can find the correct set of objects to delete
- it can avoid a mess when changing directories
### Non-Goals
For people who currently use `apply` without `--prune`, I imagine it might be common to do something like:
```
kubectl apply -f dir/
# later ...
cd dir
edit fileA.yaml
kubectl apply -f fileA.yaml
```
If the user wants to start using the new prune feature, is this workflow still supported? Or is it a non-goal to support this, and the user will need to always `kubectl apply -f dir` (the whole directory)?
It will be supported. By `kubectl apply -f fileA.yaml`, I guess the user only wants to update the resources in fileA.yaml. Let's consider apply with pruning:
```
kubectl apply --prune -f dir/       # if there is a root object, it prunes obsolete objects
# later ...
cd dir
edit fileA.yaml
kubectl apply --prune -f fileA.yaml # if there is a root object, it triggers pruning actions; assume the root object doesn't change, no object is deleted
```
Thanks for all your comments. I'd like to address a common one here: why not use an Application object as the root object? There are some overlaps between the purpose of a root ConfigMap object and an Application object.
Regardless of the difference between a core type and a CRD, or the existence of other spec fields or status, the major difference between a root ConfigMap object and an Application object is how they are connected to their child objects.
In this design, we avoid selecting objects by labels when dealing with prune. It relies on users/tools to get the explicit list of child objects and write it to a root ConfigMap object. So using an Application object with a label selector doesn't fit this purpose. However, if Application objects can list the child objects explicitly, they can act the same as a ConfigMap root object. The API would need to be updated to include the child object names, along the lines of the sketch below.
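A hedged sketch of one possible shape for that API; the `components` field and everything under it are hypothetical and not part of the current Application CRD:
```
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: guestbook
spec:
  # hypothetical field: explicit child objects by group/kind/name instead of a label selector
  components:
  - group: apps
    kind: Deployment
    name: frontend
  - group: ""
    kind: Service
    name: frontend-svc
```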
By looking at the Application object itself, kubectl can get the set of child objects. I noticed that the Application CRD may need owner references assigned to make sure garbage collection can clean up child objects, but that part needs to be handled by the controller. If the Application CRD can include child object names, it can serve as the root object. @barney-s, @kow3ns let me know what you think about this. Thank you.
- add functions to identify the root object
- add a different code path to handle the case when the root object is present
- add unit tests
- add integration tests
Should we prioritize end-to-end tests rather than integration tests? The integration tests are a must-have, but I think they do not test the real-world situation.
Yes, you're right. I'll update it.
The presence of a root object indicates that apply needs to handle pruning. When `kubectl apply` is given such a root object, it finds the live root object with the same name.
Do you mean: "When kubectl apply is given the name of a root object"? For example: `kubectl apply -f --prune -r <some-root-object-name>` (adding a `-r` flag for inputting the root object name).
We don't introduce new flags for this. It will be `kubectl apply --prune -f`. As long as a ConfigMap object is present in the resources passed by `-f` with a proper label, kubectl will treat it as a root object.
Got it, thanks!
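A hedged sketch of how that might look on the root ConfigMap itself; the marker label key below is hypothetical, since the excerpt doesn't name the label kubectl would look for:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: guestbook-root                             # illustrative name
  labels:
    kubectl.kubernetes.io/is-root-object: "true"   # hypothetical marker label
data:
  object1: namespace1:group1:version1:kind1:somename
```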
Deciding the correct set of objects to change is hard. Alpha `--prune` tried to solve that problem in kubectl itself. In this new approach, kubectl is not going to decide such a set. Instead, it delegates the job to users or tools. Those who need to use `--prune` will be responsible for creating a root object as a record of the correct set of objects. kubectl consumes this root object and triggers the pruning action.
This approach requires users/tools to do more work than just specifying a label, which might discourage the usage of `--prune`. On the other hand, existing tools such as Kustomize and Helm can utilize this approach by generating a root object when expanding or generating the resources.
nit: `uage` to `usage`
IMO, requiring creation of a root object may potentially discourage most users. Furthermore, users may set up the root object wrongly (e.g., missing some intended objects). Do Kustomize and/or Helm support old clusters? If so, we may direct users to them when they want to perform `--prune`.
Kustomize doesn't talk to the server. It only expands the resources and passes the output YAMLs to `kubectl apply`. Helm has its own CLI, and I believe it supports old versions of clusters.
Noted. Should we also consider integrating `--prune` with Helm? If not, is there any reason why not? Thanks!
Not direct integration. We should provide a pruning solution in kubectl that can be easily leveraged by other tools such as Helm.
We may need to restrict this to cases where all the resources are in the same namespace, or create one ConfigMap per namespace. An alternative to using a ConfigMap is to store the same data in an annotation on one of the objects. This object would need to be the "root object", and couldn't be deleted without changing the root object first.
@pwittrock Creating one ConfigMap per namespace seems unnecessary. Is it to keep different namespaces isolated? Considering a
@pwittrock Storing data in annotations looks good, but I don't think we can drop the root ConfigMap object completely. It helps us find the live root object. This alternative works for Application, since the Application CR serves as the root object and it's not likely to be deleted while there are still other resources. For manifests without Application, we still need an extra object, such as a ConfigMap object, to act as the root object. If the user randomly picks one object from the resources as the root, it's easy to forget about reassigning the root object label.
We're currently using prune in combination with some simple internal tooling for generating resources. Our tool automatically labels the resources with a unique label associated with that group of generated resources (similar to a Helm "release name"), then uses `kubectl apply --prune` to update existing resources and delete any with the same release label that don't exist in the updated set of resources.

Under this proposal, if I understand correctly, we would need to additionally create a root object ConfigMap listing all our resources, and to get the set of existing objects we would have to parse the server's current copy of that same ConfigMap and then look up each object explicitly. This isn't unreasonable, but it's a much more complex implementation than the previous "add a label", and it is harder for operators to inspect by hand using kubectl commands (as they can no longer simply search on a label).

For example, suppose I have two root ConfigMaps A and B, and they both list the same object. Perhaps due to user error, or perhaps because the object was previously generated in release A and we wish to migrate it to release B (a scenario that is admittedly rare, but one we have faced in practice). If I first apply B, so the object's root-object label now points to ConfigMap B, what happens if I subsequently apply A with that object removed? Under the old system, the answer is easy: the labels on the object have changed, so it's no longer part of the A release, so it's unaffected. But under the new system, I have no idea; it depends on the implementation. You could get back the same behaviour as the old system by always checking that the object points at the root ConfigMap you're applying, but my point is that now we're dealing with duplicated state that can get out of sync, and you need to start adding caveats and explanations to what should be a simple operation: delete any matching things that I don't explicitly list.

I realise that there's a desire to make pruning more useful when kubectl-applying whole directories, but I don't understand the benefit of the ConfigMap approach when we could backwards-compatibly tie it into the existing system simply by generating a unique label (which we would need to do anyway to produce a unique root ConfigMap object name) and matching on it, as suggested in one of the comments above. The cost of doing a label query over all namespaces and types is mentioned, but I don't think the drastically increased complexity and number of edge cases is worth the minor performance benefit.
/remove-lifecycle stale
/remove-lifecycle rotten
In case someone needs an alternative to the current pruning implementation, I made a kubectl plugin that prunes built-in resources as well as custom resources: https://github.com/stefanprodan/kustomizer
What's the latest on kubectl --prune? We'd like to use this in combination with kustomize.
/remove-lifecycle rotten
@fejta-bot: Closed this PR. In response to this:
This KEP proposes a different approach to `kubectl apply --prune`. Fixes kubernetes/kubectl#572.