com.docker.osxfs memory leak? #1815
The problem is definitely there: after restarting Docker and re-running the build, within several hours the memory grows again. Since my builds consist of a series of runs with different images, could you suggest a scriptable workaround to restart Docker between runs?
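A scriptable sketch of such a restart (the function name, timings, and polling loop are illustrative, not an official Docker API; it assumes Docker Desktop's standard macOS app name):

```shell
#!/usr/bin/env bash
# restart_docker: quit Docker Desktop, relaunch it, and wait until the
# daemon answers again. A community workaround sketch, not an official API.
restart_docker() {
  osascript -e 'quit app "Docker"'   # graceful quit releases osxfs memory
  sleep 5
  open -a Docker                     # relaunch Docker Desktop
  # poll until the daemon is reachable again (up to ~2 minutes)
  local i
  for i in $(seq 1 60); do
    if docker info >/dev/null 2>&1; then
      echo "docker is back"
      return 0
    fi
    sleep 2
  done
  echo "docker did not come back" >&2
  return 1
}
```

Calling `restart_docker` between the individual image runs of a long build would reclaim the leaked memory at the cost of a short pause.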
As for restart, @yallop: what do you think?
Thanks for the report, @ilg-ul. Could you post the script, along with other details needed to reproduce the problem (e.g. the images involved)?
The script is public, but is quite elaborate: https://github.com/gnu-mcu-eclipse/riscv-none-gcc-build/blob/master/scripts/build.sh

There are several images, each with its own Dockerfile, all available from https://github.com/ilg-ul/docker. However, the main script is still work in progress, and might need some small adjustments if you want to run it. I'll update the script and let you know.
FYI, one run of the script, started yesterday, just completed and took around 20 hours. Given I have four such runs, I would say that this workaround is mostly useless :-(
I upgraded one Mac mini to 16 GB and installed Docker stable; the behaviour is the same: the osxfs process slowly grows and occasionally shrinks; sometimes it reaches several GB, sometimes only a few hundred MB.

I guess the build script should now be functional; you can try it. I suggest you start it with:

```
$ git clone https://github.com/gnu-mcu-eclipse/riscv-none-gcc-build.git ~/Downloads/riscv-none-gcc-build.git
$ caffeinate bash
$ exec bash ~/Downloads/riscv-none-gcc-build.git/scripts/build.sh --without-pdf --deb64 --win64
```

On the first run the script will create a custom Homebrew instance and a build subfolder. I started a new build with all libraries; I'll see in a few hours if the machine ran out of memory or not.
About 24 hours later, and almost half into the build, Docker is still running. :-) It slowly grew to more than 5.5 GB; then I had to manually test some docker commands. As for the slowness, I don't have a way to measure it, but apparently when the osxfs memory consumption is high, even without using any swap, Docker seems to slow down; probably accessing the large cache takes some time.
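One way to put numbers on the growth is to sample the process's resident set size with `ps` (a sketch; the function names and the 60 s interval are illustrative — the process name `com.docker.osxfs` is the one Activity Monitor shows):

```shell
# rss_kb_to_mb: convert a ps-style RSS value (KB) to MB.
# Splitting the arithmetic into its own function keeps it testable;
# the sampling loop below only works on a machine actually running osxfs.
rss_kb_to_mb() {
  echo $(( $1 / 1024 ))
}

# sample_osxfs: log com.docker.osxfs memory use once a minute.
sample_osxfs() {
  while true; do
    local kb
    kb=$(ps -axo rss,comm | awk '/com.docker.osxfs/ {print $1; exit}')
    [ -n "$kb" ] && echo "$(date '+%H:%M:%S') osxfs rss: $(rss_kb_to_mb "$kb") MB"
    sleep 60
  done
}
```

Redirecting `sample_osxfs` to a file during a long build would give a rough growth curve to attach to reports like this one.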
Unfortunately it didn't last much longer. At a certain point it reached more than 8 GB, swap was used, and the entire machine became almost unresponsive, so I had to restart it. :-( It seems Docker for macOS is not yet ready for prime time... I'll try switching to running Docker inside an Ubuntu virtual machine...
Out of interest: if I understand your script correctly, all sources (and some packages) are installed on the host, but execution is done inside a container. The script is quite complex, so I may not fully grasp it, but is there a reason you're not performing the actual build/execution steps in a Dockerfile? When building in a Dockerfile, the build cache could possibly help bring down build times, and it would definitely save the time spent on continuously syncing filesystem changes between the Docker VM and the OS X host (multi-stage builds could help as well, and this proposal may be helpful: moby/moby#32507). Docker 17.07 will also have a new feature for sending the build context incrementally to the daemon (moby/moby#32677).

This doesn't take away that this should definitely be looked into; I'm just looking at this use case from a slightly different perspective. Also, if you have thoughts on the proposal I linked, or other changes to the builder that are being worked on (see moby/moby#32925, and the https://github.com/moby/buildkit repository), input is really appreciated 👍
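A minimal sketch of this "build inside the image" approach (the file name `Dockerfile.build`, the tag `my-build`, and the paths are illustrative, not taken from the thread):

```shell
#!/usr/bin/env bash
# Sketch: generate a throwaway Dockerfile that copies the sources into the
# image and runs the whole compile there, on the VM's native filesystem,
# so no source tree is synced over osxfs during the build.
cat > Dockerfile.build <<'EOF'
FROM ubuntu:16.04
COPY . /src
WORKDIR /src
RUN ./scripts/build.sh
EOF

# Build the image, then copy only the finished artifacts back to the host
# once, instead of continuously syncing filesystem changes.
build_and_extract() {
  docker build -f Dockerfile.build -t my-build .
  local cid
  cid=$(docker create my-build)         # create (not run) a container
  docker cp "$cid":/src/artifacts ./artifacts
  docker rm "$cid"
}
```

Retried builds would then benefit from the layer cache, since unchanged `COPY`/`RUN` steps are reused.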
Yes, because the script is currently under active development, and during development the build crashes quite often and I need to resume it. But the suggestion is valid; I'll consider adding an option to keep the build folder locally.
As suggested, I changed the script to place the build folder inside the container. However, the problem might still be there; the osxfs process keeps growing. With 16 GB of RAM this isn't unbearable, but if I move the build folder to a shared folder, sometimes the memory is no longer freed and the system starts to swap to disk, which is catastrophic in terms of speed.
As suggested, I generally run the builds entirely in the Docker image and only copy the result to the host, and generally the builds are OK. Today I needed the build in development mode, i.e. running the build inside the container but with the build folders on the host, so I can restart the build if needed. The memory leak is still there; the osxfs process keeps growing. I'm currently using 17.09.0-ce-mac35 (19611).
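A named Docker volume can keep the build tree restartable without putting it on an osxfs share, since named volumes live in the Linux VM's native filesystem (a sketch; the volume name, image, and paths are illustrative, not from the thread):

```shell
# Sketch: keep the build tree in a named volume that survives container
# restarts, while mounting the sources read-only. Only the named volume
# path avoids the osxfs translation layer; the /src bind mount does not.
run_build_with_volume() {
  docker volume create build-cache
  docker run --rm \
    -v build-cache:/build \
    -v "$PWD":/src:ro \
    ubuntu:16.04 /src/scripts/build.sh
}
```

The trade-off is that the build tree is no longer directly browsable from the macOS side; it has to be inspected from a container that mounts the same volume.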
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so. Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
Any process taking 18 GB of RAM on an 8 GB RAM system will render it unusable.
I also noticed the same behaviour. It is worth noting that I have disabled swap on my MacBook.
I moved away from running Docker on macOS, since it is not usable for large builds. :-( I currently use Docker on an Ubuntu VM running on VirtualBox. Faster and more stable.
The issue still exists today in the latest Docker Desktop stable (2.0.0.3, 31259). I run a couple of GitLab instances via docker, which, over a matter of days, leaked 12 GB of memory in com.docker.osxfs. Considering moving away from Docker for Mac, too, because of this.
VirtualBox 6.0.6, Ubuntu 18 LTS, Docker for Linux.
I'm having this problem as well. After several days, osxfs has ballooned to over 38 GB and hyperkit to nearly 13 GB. This is on a 2018 i7 Mac mini with 16 GB of RAM and 7 containers running.
For best results, I moved my production builds to a separate physical box where Docker runs like a dream.
Same problem.
For very long builds (like the GCC builds that I do from time to time), an Intel NUC8i7BEH with 32 GB RAM and a 512 GB M.2 SSD is a relatively affordable alternative to running Docker on macOS. It is incredibly small, yet well designed from a thermal point of view. Highly recommended!
This should be fixed, not just closed |
Yep, it should be fixed, not just closed. Docker is not usable in production on Macs this way. And even just for local testing, one must quit and restart Docker on a regular basis to reclaim memory. 🙁
Fully agree. And it is not only the memory leak: volumes mounted from macOS into Linux, which require this filesystem translation layer, are inherently slower, so even local testing should be done with caution, mostly inside the container and avoiding volumes as much as possible.
Closed issues are locked after 30 days of inactivity. If you have found a problem that seems similar to this, please open a new issue. Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
Expected behavior
Actual behavior
Information
Steps to reproduce the behavior
This might also explain the slowness reported as #894.