Ability to suspend an argo application #3039
Hm. You could have your application definition residing in a dedicated definition repository, modeling the app-of-apps pattern with auto-sync disabled on the parent. Then you could cascade-delete the application in question (the "child" app), and if you want it back, you just re-sync the parent application. |
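For context, a minimal sketch of what such a parent Application might look like (the name, repo URL, and path are placeholders, not from this thread); omitting `syncPolicy.automated` means deleted children stay deleted until someone syncs the parent deliberately:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: parent-apps                 # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/app-definitions.git  # placeholder repo
    path: apps                      # directory holding the child Application manifests
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  # No syncPolicy.automated block: children only change
  # when the parent is re-synced manually.
```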
@jannfis, thanks for the follow-up. I'll have a think on this for a few days and we'll maybe have a discussion internally re your suggestion. Doesn't feel quite right though - I'll get back to you. |
This "status" of disabled application could be really useful for development. I would be able (on my development machine) to stop working on project1 and start working on project2 with 2 clicks (or one commit) |
+1 - I'd love to see a stop or suspend button for each application in the UI. It would make regular-but-intermittent workloads much easier to manage. To extend the notion, a suspend/resume schedule that only runs workloads during certain hours and then auto-shuts them down would be especially nice. Autoscalers would then reduce nodes, and thus running costs. |
+1 |
+1 - people with app of apps pattern with auto sync on need a way to pause both the parent and child application. |
+1 |
+1 |
+1 |
+1 |
+1 |
+1 |
+1 |
+1 |
+1 |
+1 Not for a prod deployment but for when debugging a dev cluster, pausing deployments would be super useful |
+1 ❤️ |
+1 |
+1 |
+1 would be useful |
+1 very useful. |
+1, would love to see this feature! |
+1 |
+1, it would be helpful in production for the DR-cluster use case: we could disable the application in the production cluster and start it in the DR cluster on DR failover. That would reduce downtime. |
Can you not spam +1s? Just leave a thumbs up reaction on the issue. |
+1 |
I want to make sure we're all +1'ing the same feature request. Please react to indicate which feature you want:
Please no more +1s without describing your use case. The enthusiasm is appreciated, but keeping the thread tidy is good too. 😄 |
@canghai118 for me both options would be useful, actually. |
I think this should already be possible in the meantime, although it's not exposed through the UI or CLI (yet): https://argo-cd.readthedocs.io/en/stable/user-guide/skip_reconcile/ This should effectively implement the requested feature? |
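For anyone wanting to try that, the linked page describes an annotation set on the Application resource itself; a minimal sketch (the app name is a placeholder, spec omitted):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                     # placeholder
  namespace: argocd
  annotations:
    # Argo CD skips reconciling this Application while the annotation is present;
    # remove it (or set it to "false") to resume.
    argocd.argoproj.io/skip-reconcile: "true"
```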
@jannfis I think the issue title is too vague. Seems like most folks on the thread are interested in a per-app scale-down mechanism. |
Ooh. With the new actions enhancements in 2.8, we should do something like this:

```lua
-- Loop over resources from the Application status.
-- Scale down Deployments and StatefulSets by patching replicas to 0.
actions = {}
if obj.status == nil then
  return actions
end
for _, resource in ipairs(obj.status.resources or {}) do
  -- Skip resources with no health info at all, and anything that
  -- isn't a scalable workload.
  if resource.health ~= nil and resource.health.status ~= "Missing"
      and (resource.kind == "Deployment" or resource.kind == "StatefulSet") then
    action = {}
    action.operation = "patch"
    action.resource = {}
    -- Core-group resources have no group prefix in apiVersion.
    if resource.group == nil or resource.group == "" then
      action.resource.apiVersion = resource.version
    else
      action.resource.apiVersion = resource.group .. "/" .. resource.version
    end
    action.resource.kind = resource.kind
    action.resource.metadata = {}
    action.resource.metadata.name = resource.name
    action.resource.metadata.namespace = resource.namespace
    action.resource.spec = {}
    action.resource.spec.replicas = 0
    table.insert(actions, action)
  end
end
return actions
```
|
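If someone wants to experiment with this, custom resource actions are registered in the argocd-cm ConfigMap; a sketch of the wiring, assuming the documented discovery.lua/action.lua layout (the action name "scale-down" is made up here):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Custom actions for the Application CRD itself (group_Kind key format).
  resource.customizations.actions.argoproj.io_Application: |
    discovery.lua: |
      actions = {}
      actions["scale-down"] = {}   -- always offer the action in the UI/CLI
      return actions
    definitions:
    - name: scale-down
      action.lua: |
        -- paste the Lua script from the comment above here
```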
Is anyone up for trying to implement the actions-based solution? I'd be happy to help. |
For my use case: I work at a utility, and they're heavy-handed with passwords, so rotation is a pain. I used to just kubectl scale --replicas=0, but with Argo in control it self-heals. Which I love, but now I have to remove my deployment, update the password, and then redeploy. Not a huge lift, but cumbersome for prod-facing apps. This is specific to the external-dns plugin connecting to Infoblox. I manage external-dns via an app-of-apps deploy, so I just have to remove one line entry and have Argo sync, so again not a huge lift. It would just be so much nicer to have a scale-down feature. A similar thread led me here. |
Sounds like in your case you'd need to both scale down and disable self-heal. |
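For reference, both of those live under the Application's syncPolicy; a minimal sketch of the relevant fragment:

```yaml
spec:
  syncPolicy:
    automated:
      prune: false
      selfHeal: false   # lets manual changes like `kubectl scale --replicas=0` stick
    # ...or drop the `automated` block entirely to disable auto-sync as well
```

With selfHeal off, drift in the live state (such as a manual scale-down) is not reverted until the next change lands in git.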
Thank you I will go that route, still fairly new in the argo game. |
Hi, how is this issue going? It would also be awesome to be able to scale scalable resources down and up around office hours, or via a scale-down/up cron. It could be configurable through a completely new Argo CD resource, or directly in the Application resource. |
@MioOgbeni you might wanna have a look at https://keda.sh/ and its cron scaler. |
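As a sketch of that suggestion (all names are placeholders), a KEDA cron trigger can scale a workload up during office hours and back to zero outside them:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: office-hours-scaler   # placeholder
spec:
  scaleTargetRef:
    name: my-app              # placeholder Deployment
  minReplicaCount: 0          # scale to zero outside the window
  triggers:
  - type: cron
    metadata:
      timezone: Europe/Prague
      start: 0 8 * * 1-5      # scale up weekdays at 08:00
      end: 0 18 * * 1-5       # scale back down at 18:00
      desiredReplicas: "2"
```

Note that if Argo CD manages the Deployment's replicas with self-heal on, you'd also need to leave replicas unset in git (or use ignoreDifferences) so the two controllers don't fight.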
An alternative, and maybe better, solution is to use a generator. With a generator you can enable/disable an application with your own rules: https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Plugin/ |
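As a sketch of that idea (names and repo URL are placeholders), even a plain list generator gives you an on/off switch, since removing an element makes the ApplicationSet controller delete the generated Application:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-apps               # placeholder
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - appName: project1     # delete this element (via commit) to remove the app
      # - appName: project2   # commented out = "disabled"
  template:
    metadata:
      name: '{{appName}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/org/apps.git  # placeholder
        path: '{{appName}}'
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{appName}}'
```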
I found a workaround if you need a temporary solution: set the application to manual sync, select the Deployment object, edit it, and change the replicas value to 0. This effectively scales the application to 0 pods/replicas. When you want to scale back up, either do a manual sync (it will restore the previous replica count), or update the code, then refresh and sync, and the pods/replicas will come back up. It's janky, but it gives you a temporary scale-down method. |
How to disable applications, lol. Ensure that your ApplicationSet
Note that this may cause gaps in your git history if you move files back and forth. Maybe a smarter person would use symlinks? This is just a joke/hack, but it works. Let's get a real feature added to do this! |
Yes, I need an equivalent feature. For Flux apps, I'm using KEDA to launch a job to issue commands & patch ScaledObjects in the K8s cluster. |
@jmichaelthurman wouldn't that just be "disable self-heal" and maybe "disable auto-sync"? If you need to prevent even imperative syncs, you could also add a sync window. |
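For completeness, sync windows live on the AppProject; a deny window that is always active would block syncs for the listed apps until it's removed (project and app names are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  syncWindows:
  - kind: deny
    schedule: '* * * * *'   # a window starting every minute...
    duration: 24h           # ...lasting 24h, i.e. effectively always active
    applications:
    - my-app                # placeholder
    manualSync: false       # block manual syncs too
```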
Hi, another use case: a test/validation environment. When we are not using our test/validation environment, the "powers that be" delete every application to free up resources. So when we need the environment again, we have to redeploy everything, even if nothing has changed since the last deployment. Having a way to restart the applications from the Argo CD UI without having to redeploy them would be great. |
I'm not sure what you mean. If the powers-that-be have deleted your resources, the only way to get them back is to redeploy them. Or are you suggesting that, if a "suspend" feature existed, you could ask the powers-that-be to use that feature instead of deleting your stuff? |
Yes! Something that allows the "powers that be" to free resources when the applications are not in use, and the developers to restart the applications easily when needed. |
Are you mostly concerned about the Pods' resource usage? If so, one workaround could be to use KEDA for autoscaling and set its pausing annotations: https://keda.sh/docs/2.16/concepts/scaling-deployments/#pausing-autoscaling |
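For reference, the pausing mechanism described on that page is an annotation on the ScaledObject; a minimal fragment:

```yaml
metadata:
  annotations:
    # pause KEDA autoscaling and pin the scale target at 0 replicas;
    # delete the annotation to resume normal scaling
    autoscaling.keda.sh/paused-replicas: "0"
```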
Our solution has been to add a |
Firstly, thanks for a great tool. It's made life much easier for us!
There are times when I want to 'suspend' an application. That is, I want to temporarily delete it from the cluster and then re-add it later. Currently I have to delete the Argo application completely and then re-add it into Argo. It would be useful if there were a feature where I could suspend it in Argo, which would delete the app in the cluster BUT keep the Argo application definition ready to re-sync.