
datahub-gms autoscaling #11761

Open
7onn opened this issue Oct 30, 2024 · 3 comments

@7onn
Contributor

7onn commented Oct 30, 2024

I'm running DataHub via the Helm chart, and during some large ingestion jobs resources are consumed so heavily that usage hits its limit and the pod starts throttling, until it can't even answer the health check and is terminated.

To fix this, I suppose I could simply raise the resource limits and let it fly. But in moments of heavy traffic like this, I think we could also benefit from a second GMS instance sharing the load. So I ask: would I run into any problems with multiple GMS replicas? Does it support running in parallel? If it does, I'd be interested in contributing to the Helm chart to enable autoscaling via an HPA :)

@david-leifker
Collaborator

david-leifker commented Oct 31, 2024

GMS does run with multiple replicas. Additionally, the primary consumers mce-consumer and mae-consumer can run as separate deployments (each of which can be run with multiple replicas).
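To illustrate, a minimal sketch of a Helm values override that scales GMS and splits out the consumers; the exact key names (`replicaCount`, the consumer `enabled` flags) are my assumptions and should be checked against the chart's `values.yaml`:

```yaml
# values.yaml override (sketch -- field names are assumptions,
# verify against the datahub-helm chart's values.yaml)
datahub-gms:
  replicaCount: 2          # multiple GMS replicas sharing load

datahub-mae-consumer:
  enabled: true            # run mae-consumer as its own deployment
  replicaCount: 2

datahub-mce-consumer:
  enabled: true            # run mce-consumer as its own deployment
  replicaCount: 2
```

With the consumers deployed standalone, each piece can be scaled independently of the GMS REST workload.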

@david-leifker
Collaborator

I'd be interested in updates adding HPA support to the Helm charts, for GMS and/or the standalone consumer groups!

@7onn
Contributor Author

7onn commented Nov 3, 2024

Opened a PR for datahub-gms: https://github.com/acryldata/datahub-helm/pull/517/files

I could do the same for the standalone consumers too, if I get a thumbs up on this approach from a DataHub maintainer.
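For context, a sketch of what an HPA-style values block for GMS might look like; the key names here are hypothetical and not necessarily what the PR above uses:

```yaml
# Hypothetical autoscaling values (sketch only -- see the linked PR
# for the actual field names adopted by the chart)
datahub-gms:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 75
```

A values block like this would typically render into a standard `autoscaling/v2` HorizontalPodAutoscaler targeting the GMS Deployment, so heavy ingestion traffic adds replicas instead of throttling a single pod.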
