-
Hi, I’ve created an Actions workflow that builds our Docker image for distribution. The image requires TensorFlow, CUDA, and a number of other libraries, so it’s reasonably weighty, coming in at about 18GB. During a build run, my workflow fails part way through with “##[error]No space left on device”. I think I read somewhere that GitHub runners have about 16GB of space, so my question is: is that right, and is there any way to get more? Happy to explore alternative solutions to this… just not sure why there seems to be so little content when I search for this specific issue. Thanks.
-
Hi @xerxesb, welcome to the GitHub Support Community! Yeah, GitHub-hosted runners only have 14GB of free space available, and I’m afraid there isn’t a way to increase that: https://docs.github.com/en/actions/reference/specifications-for-github-hosted-runners

The best solution I can think of, if you need more space or a more powerful machine, would be to host your own runner: https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners

You should be able to host this wherever you want (on a local machine, a VM in the cloud), and doing so should give you the flexibility to configure the specs you need for your workflows.
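If it helps, once a self-hosted runner is registered, routing a job to it is just a matter of changing the runs-on labels. A minimal sketch, assuming a Linux x64 machine registered with the default labels (the image name and step names are placeholders):

    jobs:
      build:
        # Target the registered self-hosted runner instead of a GitHub-hosted one
        runs-on: [self-hosted, linux, x64]
        steps:
          - uses: actions/checkout@v3
          - name: Build the Docker image
            run: docker build -t my-image .   # "my-image" is a placeholder tag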
-
You don’t need to go the self-hosted runner route. You can free enough space for a Docker image by using this script from Apache Flink: free_disk_space.sh. I’m starting with 48GB available after running it; just make sure to also add monodoc-http before mono-devel in it, as the script is probably a bit outdated.

The bigger problem I see is that when this error does happen, no logs are available for the step that failed, so there is no way to investigate why you actually ran over the allocated space. GitHub should at least preserve the on-screen logs of the step. I regularly output data there to figure out what exactly ate up all the space, but without that log I’m just poking at lines of code in the dark.
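On the missing-logs point, one workaround is to print disk usage yourself between steps, so the last successful step’s output shows roughly where the space went. A small sketch of such a step (the paths and depth are just a starting point, not an exhaustive survey):

    - name: Report disk usage
      run: |
        df -h /
        sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 20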
-
That’s fantastic! I ran it on our repo and free space went from 19% to 53% (I also removed some other packages in the top 100 that weren’t required for our use). Thanks, this has definitely helped!
-
Where could I run the free_disk_space.sh shell script?
-
You run it in your workflow. I have it defined as one of the first steps in a job.
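For example, assuming the script has been copied into the repo (the ./ci/ path below is only an assumption for illustration), the start of the job could look roughly like this:

    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Free up disk space on the runner
        run: bash ./ci/free_disk_space.sh   # hypothetical location of the Flink script in this repo
      - name: Build the Docker image
        run: docker build -t my-image .     # "my-image" is a placeholder tag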
-
I freed 10 GB by deleting foo:

    runs-on: ubuntu-latest
    steps:
      - name: Delete huge unnecessary tools folder
        run: rm -rf /opt/hostedtoolcache
      - uses: actions/checkout@v3
      - name: Run foo
        uses: docker://foo-image
        with:
          args: /foo-script

FYI, these are the contents of the folder:
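If you want to reproduce that listing on your own runner before deleting anything, a step like this (just a sketch) prints the per-tool sizes:

    - name: Show what's in the hosted tool cache
      run: du -sh /opt/hostedtoolcache/* || true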
-
This is awesome -- got me unstuck on building my CUDA images <3
-
This worked like a charm, thanks!
-
Thanks for this!
-
Went with this successfully and reclaimed over 12GB:

    cd /opt
    find . -maxdepth 1 -mindepth 1 '!' -path ./containerd '!' -path ./actionarchivecache '!' -path ./runner '!' -path ./runner-cache -exec rm -rf '{}' ';'

Figured I’d keep those 4 items in the list as they had potential for my purposes. You can always spin up a tmate server in an action and hop into the runner to inspect what’s going on in the land of Denmark: https://github.com/josephcopenhaver/gha-debian-tmate
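If it’s useful, here is roughly how that cleanup can sit in a workflow step, with df -h before and after so the reclaimed space shows up in the log (a sketch reusing the same find command as above):

    - name: Prune /opt, keeping the runner bits
      run: |
        df -h /
        cd /opt
        find . -maxdepth 1 -mindepth 1 '!' -path ./containerd '!' -path ./actionarchivecache '!' -path ./runner '!' -path ./runner-cache -exec rm -rf '{}' ';'
        df -h /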