[BUG] - Increase general node default size #1111
Labels
needs: investigation 🔍 · provider: GCP · type: bug 🐛 · type: sprint-candidate 🏃
Operating system and architecture in which you are running QHub
macOS / GCP
Expected behavior
General node should not reach 'critical memory level' after initial deployment.
Actual behavior
General node has 93% mem usage after deployment, according to K9s.
How to Reproduce the problem?
Run qhub init gcp, then deploy the generated configuration to GCP.
Command output
No response
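The reproduction steps above can be sketched as a shell session. This is a minimal sketch, assuming the qhub CLI is installed and GCP credentials are configured; the project, domain, and node-name values are illustrative placeholders, not taken from the report:

```shell
# Sketch of reproducing the report (assumes qhub CLI + GCP credentials).
# --project and --domain values below are illustrative placeholders.
qhub init gcp --project my-gcp-project --domain qhub.example.com
qhub deploy --config qhub-config.yaml

# Inspect memory pressure on the general node (the same view k9s shows):
kubectl top nodes
kubectl describe node <general-node-name> | grep -A 5 "Allocated resources"
```

The last two commands are the quickest way to confirm whether the general node is near its memory limit without opening k9s.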
Versions and dependencies used.
QHub (main branch)
Compute environment
GCP
Integrations
No response
Anything else?
This may be the same on the other clouds - I think conda-store has pushed it over the edge!