No space left on device blocks Provider upgrades #554
pulumi/pulumi-aws@0301de4 fixed the build error. This is a bandaid more than a cure, but it will get us going again.
Unfortunately the K8S repo now fails in the test(go) target with the same out-of-disk error. This is in the way of my P1 fixes, so I'd like to take this and chase it down a bit deeper.
We've leaned heavily into scheduling workloads on the pulumi-ubuntu-8core runner. This seems OK for now but may cause problems if the custom runner runs out of capacity; in that case, the recommendation is to use the GitHub-hosted runners with more disk space. I was not able to fully root-cause this for lack of time, but I was able to measure the K8s disk draw in the Go test job:
This runs out of the 14G available on stock runners. There are multiple reasons Go is so resource-hungry here, though I don't have exact data:
1. The Azure SDK, two AWS SDKs, and the GCP SDKs are pulled into the compilation unit via ProgramTest's pulumi/pkg spurious dependencies on Pulumi state backends.
2. When the tests run, more disk space is used by ProgramTest creating project copies; there may be some cleanup opportunity that is being missed.
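For reference, a minimal way to measure the disk draw inside the job is to snapshot free space around the test step. This is a sketch only; the step names, the `make test_go` target, and the cache paths are assumptions for illustration, not the exact targets used in this repo:

```yaml
# Hypothetical GitHub Actions steps bracketing the Go test target with disk measurements.
# `make test_go` and the cache paths are illustrative assumptions.
- name: Disk usage before tests
  run: df -h / && du -sh ~/go/pkg/mod ~/.cache/go-build || true

- name: Run Go tests
  run: make test_go

- name: Disk usage after tests
  if: always()
  run: df -h / && du -sh ~/go/pkg/mod ~/.cache/go-build || true
```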
I'll close this for now, as I'm not sure it still affects us with the workarounds in place.
Cannot close issue:
Please fix these problems and try again.
It's worth noting that pulumi-aws fails when building the complete Go SDK, before it even reaches the Go integration tests.
Yes, most providers just fail to build, but K8S also fails to test. In both cases, I think the compilation burden of either the SDK or the tests, with all the transitive dependencies, is what sinks the runner.
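One way to confirm which heavyweight SDKs end up in the test compilation unit is to ask the Go tooling directly. A sketch, assuming the job runs from the SDK or test module root; the grep patterns and the module path passed to `go mod why` are illustrative guesses, not confirmed dependencies:

```yaml
# Hypothetical step to list heavy transitive dependencies of the test packages.
- name: Inspect heavy transitive dependencies
  run: |
    # List every package compiled for the tests and filter for known large SDKs.
    go list -deps -test ./... | grep -E 'azure-sdk|aws-sdk-go|cloud.google.com' | sort -u
    # Explain why one suspect module is pulled in at all (module path is an assumption).
    go mod why -m github.com/aws/aws-sdk-go-v2 || true
```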
Some recent changes in Go SDK generation pushed the builds over the limit of disk space.
I've not investigated deeply, but this may also be related to Go build dependency caching. If changes in dependencies invalidate the cache but we don't track the cache key accurately, the job can download the previous cache and then download the new packages again anyway during `go build`, which can push things over the line. A workaround to try here is tinkering with the cache keys to force a miss. Or it could be excessively chatty logging that we might need to compress.
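A sketch of that cache-key workaround, assuming the job uses actions/cache for the Go module and build caches; the key scheme and the "-v2" suffix are illustrative, not the keys actually used in these workflows:

```yaml
# Hypothetical actions/cache step; bumping the key suffix (e.g. "-v2") forces a cache miss,
# so the runner does not restore a stale cache and then download new modules on top of it.
- uses: actions/cache@v4
  with:
    path: |
      ~/go/pkg/mod
      ~/.cache/go-build
    key: ${{ runner.os }}-go-v2-${{ hashFiles('**/go.sum') }}
    restore-keys: |
      ${{ runner.os }}-go-v2-
```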