Implement autoscaling/autoidling on Kubernetes #2471
See also the discussion at kedacore/keda#538.
Following a conversation with @shreddedbacon: we could implement a controller in our clusters that monitors a status source (Elasticsearch, uptime checks, or Prometheus) and scales a deployment down to zero (saving the previous replica spec) after a configurable period of inactivity. The controller would check the services in each environment for a custom resource that defines its idle behaviour (on/off, or a predefined period).

This functionality may be replicable with the other tools, but they may not be appropriate for our needs: Osiris has been archived, and the equivalent functionality is still under development in KEDA.

Where our proposal differs is that the controller would also run a pod in the cluster to which the ingress services of any scaled-down environment are redirected, serving a user-friendly holding message. On a request to one of these routes, it would trigger a scale-up (back to the original replica spec), watch the progress, and switch the routes back once ready via a meta refresh.
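A rough, hedged sketch of the scale-down step described above, assuming a Go controller built on client-go with in-cluster credentials. The annotation key `idling.amazee.io/unidle-replicas`, the namespace, and the deployment name are illustrative assumptions, not an existing Lagoon API.

```go
// Minimal sketch: record the current replica count in an annotation, then
// scale the deployment to zero so a later unidle can restore the same size.
package main

import (
	"context"
	"fmt"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Hypothetical annotation key used to remember the pre-idle replica spec.
const replicaAnnotation = "idling.amazee.io/unidle-replicas"

// idleDeployment saves the current replica count on the deployment itself and
// then scales it to zero.
func idleDeployment(ctx context.Context, clientset kubernetes.Interface, namespace, name string) error {
	deployments := clientset.AppsV1().Deployments(namespace)

	dep, err := deployments.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}

	replicas := int32(1)
	if dep.Spec.Replicas != nil {
		replicas = *dep.Spec.Replicas
	}

	// Save the previous replica spec so a later unidle can restore it.
	if dep.Annotations == nil {
		dep.Annotations = map[string]string{}
	}
	dep.Annotations[replicaAnnotation] = strconv.Itoa(int(replicas))

	zero := int32(0)
	dep.Spec.Replicas = &zero

	_, err = deployments.Update(ctx, dep, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and deployment name are placeholders for illustration.
	if err := idleDeployment(context.Background(), clientset, "example-project-dev", "nginx"); err != nil {
		panic(err)
	}
	fmt.Println("deployment idled")
}
```

Storing the previous replica count as an annotation on the deployment keeps the state with the workload itself, so the unidle step needs no external store.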
@tobybellwood can this be closed?
Now that https://github.com/amazeeio/lagoon-idler exists, this can be closed.
In Kubernetes, Lagoon should be able to idle non-production environments (scale them to zero) after a defined period of inactivity, and re-activate them on a subsequent visit (or deploy).
Potential options for consideration:
KEDA
Osiris
Implement a tailor-made service that reads ingress requests and handles the intermediate state (see the sketch after this list).
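As a companion sketch for the third option, under the same assumptions as above: a minimal placeholder service that receives requests redirected from the ingress of an idled environment, restores any deployments carrying the saved-replica annotation, and serves a holding page that refreshes while the environment starts. The `X-Lagoon-Namespace` header used to pick the namespace is a hypothetical stand-in for whatever host-to-namespace mapping a real controller would use.

```go
// Hedged sketch of the "unidler" pod: wake an idled environment on request
// and show a meta-refresh holding page until it is ready.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Hypothetical annotation key written by the idling controller.
const replicaAnnotation = "idling.amazee.io/unidle-replicas"

type unidler struct {
	clientset kubernetes.Interface
}

// unidle restores every deployment in the namespace that carries the saved
// replica annotation back to its previous size.
func (u *unidler) unidle(ctx context.Context, namespace string) error {
	deployments := u.clientset.AppsV1().Deployments(namespace)
	list, err := deployments.List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range list.Items {
		dep := &list.Items[i]
		saved, ok := dep.Annotations[replicaAnnotation]
		if !ok {
			continue
		}
		n, err := strconv.Atoi(saved)
		if err != nil {
			continue
		}
		replicas := int32(n)
		dep.Spec.Replicas = &replicas
		delete(dep.Annotations, replicaAnnotation)
		if _, err := deployments.Update(ctx, dep, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}

func (u *unidler) handler(w http.ResponseWriter, r *http.Request) {
	// Hypothetical header; a real controller would derive the namespace from
	// the requested host instead.
	namespace := r.Header.Get("X-Lagoon-Namespace")
	if namespace == "" {
		http.Error(w, "unknown environment", http.StatusNotFound)
		return
	}
	if err := u.unidle(r.Context(), namespace); err != nil {
		http.Error(w, "failed to wake environment", http.StatusInternalServerError)
		return
	}
	// Friendly holding page that refreshes while the environment starts.
	w.Header().Set("Content-Type", "text/html")
	fmt.Fprint(w, `<html><head><meta http-equiv="refresh" content="10"></head>
<body>Your environment is starting; this page will refresh shortly.</body></html>`)
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	u := &unidler{clientset: clientset}
	http.HandleFunc("/", u.handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```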