
com.docker.osxfs memory leak? #1815

Closed
ilg-ul opened this issue Jul 3, 2017 · 29 comments

Comments

@ilg-ul

ilg-ul commented Jul 3, 2017

Expected behavior

  • process memory consumption to remain more or less constant

Actual behavior

  • process memory increases

Information

  • Full output of the diagnostics from "Diagnose & Feedback" in the menu: see D71F5326-0D23-4376-8A06-67DBD61AF38C
  • A snapshot with the Activity Monitor showing the memory consumption that reaches 8+ GB:
    [screenshot: Activity Monitor, 2017-07-03 20:02]

Steps to reproduce the behavior

  1. run a very long build, which creates tens of thousands of files on the host filesystem; I use a script that builds GCC with multilib
  2. the initial memory usage is around 140 MB, but over time it slowly increases; after about a day, the memory usage reached 8+ GB, the system went 4+ GB into swap, and performance degraded dramatically

This might also explain the slowness reported in #894.

@ilg-ul
Author

ilg-ul commented Jul 4, 2017

The problem is definitely there: after restarting Docker and re-running the build, within several hours com.docker.osxfs reached 2.5 GB.

Since my builds consist of a series of runs with different images, could you suggest a scriptable workaround to restart com.docker.osxfs between runs?

@thaJeztah
Member

ping @dsheets @djs55 PTAL

@ilg-ul
Author

ilg-ul commented Jul 4, 2017

As for restarting, a killall com.docker.osxfs seems to do the job, but perhaps there is a less brutal method.
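
In a script, the restart step would look something like this; the wait loop is only my own assumption about how to let the daemon come back before the next run, not an officially supported procedure:

# restart osxfs, then wait until the Docker daemon answers again
$ killall com.docker.osxfs
$ until docker info > /dev/null 2>&1; do sleep 5; done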

@djs55
Contributor

djs55 commented Jul 4, 2017

@yallop: what do you think?

@yallop
Contributor

yallop commented Jul 4, 2017

Thanks for the report, @ilg-ul. Could you post the script, along with other details needed to reproduce the problem (e.g. the Dockerfile or details of the image)?

@ilg-ul
Author

ilg-ul commented Jul 4, 2017

post the script

The script is public, but it is quite elaborate:

https://github.com/gnu-mcu-eclipse/riscv-none-gcc-build/blob/master/scripts/build.sh

details of the image

There are several images, each with its own Dockerfile, all available from https://github.com/ilg-ul/docker

However, the main script is still a work in progress and might need some small adjustments if you want to run it. I'll update the script and let you know.

@ilg-ul
Author

ilg-ul commented Jul 4, 2017

FYI, one run of the script, started yesterday, just completed and took around 20 hours. Given that I have 4 such runs, I would say that this solution is mostly useless :-(

@ilg-ul
Author

ilg-ul commented Jul 5, 2017

I upgraded one Mac mini to 16 GB and installed Docker stable; the behaviour is the same: the osxfs process slowly grows and occasionally shrinks; sometimes it reaches several GB, sometimes only a few hundred MB.

The idea of using killall to restart osxfs is not realistic; the process is killed, but it takes forever to restart, and attempts to start Docker during this period will fail, crashing the entire script.

I guess the build script should now be functional, so you can try it. I suggest you start it with --without-pdf, otherwise you'll need TeX, and the script to install TeX is no longer functional (TeX 2017 was recently released and I need to update the script).

$ git clone https://github.com/gnu-mcu-eclipse/riscv-none-gcc-build.git ~/Downloads/riscv-none-gcc-build.git
$ caffeinate bash
$ exec bash ~/Downloads/riscv-none-gcc-build.git/scripts/build.sh --without-pdf --deb64 --win64

On the first run, the script will create a custom Homebrew instance (${HOME}/opt/homebrew-gme), then download a large Docker image.

The script will create a subfolder in ${HOME}/Work, and all builds will happen there. Depending on your machine, each platform may take several hours. If the machine does not have enough RAM and swap is used, the script will probably take forever.


I started a new build with all libraries; I'll see in a few hours whether the machine runs out of memory or not.

@ilg-ul
Author

ilg-ul commented Jul 6, 2017

About 24 hours later, and almost halfway through the build, Docker is still running. :-)

It slowly grew to more than 5.5 GB. Then I had to manually test some Docker commands (like docker run hello-world), and, after a while, when I checked again, the osxfs memory consumption was down to around 600 MB. I have no idea whether the commands I used cleared the memory, or whether the 6 GB limit I set in the Docker preferences page was reached.

As for the slowness, I don't have a way to measure it, but apparently when the osxfs memory consumption is high, even without any swap being used, Docker seems to slow down; probably accessing the large cache takes some time.

@ilg-ul
Author

ilg-ul commented Jul 6, 2017

Unfortunately it didn't last that long. At a certain point it reached more than 8 GB, swap was used, and the entire machine became almost unresponsive, so I had to restart it. :-(

It seems Docker for macOS is not yet ready for prime time... I'll try switching to running Docker inside an Ubuntu virtual machine...

@thaJeztah
Member

Out of interest: if I understand your script correctly, all sources (and some packages) are installed on the host, but execution is done inside a container. The script is quite complex, so I may not fully grasp it, but is there a reason you're not performing the actual build/execution steps in a Dockerfile?

When building in a Dockerfile, the build cache could possibly help bring down build times, and it would definitely save the time spent continuously syncing filesystem changes between the Docker VM and the OS X host (multi-stage builds could help as well, and this proposal may be helpful: moby/moby#32507). Docker 17.07 will also have a new feature for sending the build context incrementally to the daemon (moby/moby#32677).
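
As a rough illustration (purely hypothetical; the image names and paths below are made up, not taken from your script), a multi-stage Dockerfile could run the whole build inside the image and keep only the resulting artifacts in the final stage:

# hypothetical multi-stage build: all intermediate files stay inside the image,
# so nothing is synced back and forth through osxfs during the build
FROM ubuntu:16.04 AS builder
RUN apt-get update && apt-get install -y build-essential
COPY . /src
RUN /src/scripts/build.sh

FROM ubuntu:16.04
# keep only the build results in the final image
COPY --from=builder /src/install /opt/riscv-none-gcc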

This doesn't take away from the fact that this should definitely be looked into; I'm just looking at this use case from a slightly different perspective.

Also, if you have thoughts on the proposal I linked, or on other changes to the builder that are being worked on (see moby/moby#32925 and the https://github.com/moby/buildkit repository), input is really appreciated 👍

@ilg-ul
Author

ilg-ul commented Jul 6, 2017

is there a reason you're not performing the actual build/execution steps in a Dockerfile?

Yes, because the script is currently under active development, and during development the build crashes quite often and I need to resume it.

But the suggestion is valid; I'll consider adding an option to keep the build folder locally.

@ilg-ul
Author

ilg-ul commented Sep 14, 2017

As suggested, I changed the script to place the build folder inside the container; this improved things, and at least the build script is now able to complete (it takes more than 30 hours to build a multilib GCC for 5 platforms!). The install folder, where the build results are stored, remained on a shared folder.
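
Roughly, the new arrangement looks like this (the image name and paths here are only illustrative, not the exact ones from my script):

# the build tree stays inside the container's own filesystem;
# only the install folder is a shared (osxfs) mount
$ docker run -it --rm \
    -v "${HOME}/Work/install":/Host/install \
    my-build-image \
    bash /opt/scripts/build.sh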

However, the problem might still be there: the com.docker.osxfs memory slowly increases and is sometimes partly freed; I saw values in the 500-1500 MB range.

With 16 GB of RAM this isn't unbearable, but if I move the build folder onto a shared folder, sometimes the memory is no longer freed and the system starts swapping to disk, which is catastrophic in terms of speed.

I'm using 17.06.2-ce-mac27 (19124).

@ilg-ul
Author

ilg-ul commented Nov 27, 2017

As suggested, I generally run the builds entirely in the Docker image and only copy the results to the host, and generally the builds are OK.
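
Copying the results out at the end of a run is just a docker cp; the container and path names below are placeholders, not necessarily what my script uses:

$ docker cp build-container:/opt/install/riscv-none-gcc "${HOME}/Work/deploy/"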

Today I needed the build in development mode, i.e. running the build inside the container but with the build folders on the host, so I can restart the build if needed.

The memory leak is still there.

The com.docker.osxfs process started at a few tens of MB, slowly grew to some hundreds of MB, then something went very wrong and it grew to several GB. By that time everything had become very slow, and I had to cancel the build, restart the Docker process, and resume.

I'm currently using 17.09.0-ce-mac35 (19611).

@docker-robott
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@ghost

ghost commented Apr 22, 2018

I guess I ran into the same issue with a totally different use case:

In my case, a 'borgbackup client container' causes the issue on macOS (High Sierra 10.13.4), 8 GB RAM, 1 TB SSD. (There's a borg server instance running elsewhere on an RPi.)

Yesterday around 2 p.m. I started a full backup of my Mac (~800 GB of data). In the evening, my system was not usable anymore (e.g. web surfing was a real pain). This morning, while I tried to use some applications, macOS crashed.

Therefore, this morning (~11 a.m.) I started the backup again. (Borgbackup has the ability to continue a previously failed backup.) Today around 2 p.m. my system was completely unusable again. I started to investigate a bit...

In short: Within the Docker container, everything looked quite "ok" (low CPU usage, low memory usage). But I noticed the huge memory usage of com.docker.osxfs:
[screenshot: Activity Monitor, 2018-04-22 14:59]
This remains even after having stopped the borgbackup docker container.

I'm using Version 18.03.0-ce-mac60 (23751), officially installed (actually: upgraded). Thus: no self-compiled binaries, no hacks, just vanilla Docker CE.

This seems to be a severe bug, as it really renders the system unusable and I guess it'll damage my SSD over time as well due to the required swapping...

If it's of help, I could provide all the files and info for setting up the borgbackup client. Unfortunately, to reproduce it you'd also need the corresponding server, which also runs under Docker; I can provide that information as well - but you'd need additional time to set up the server and the whole infrastructure.

@ilg-ul
Author

ilg-ul commented Apr 22, 2018

a severe bug, as it really renders the system unusable

Any process taking 18 GB of RAM on an 8 GB RAM system will render it unusable.

@lwouis

lwouis commented Jul 31, 2018

I also noticed com.docker.hyperkit reaching 12 GB of RAM on my machine after a few hours of building C++ code with gcc.

It is worth noting that I have disabled swap on my MacBook.

@ilg-ul
Author

ilg-ul commented Jul 31, 2018

I moved away from running Docker on macOS, since it is not usable for large builds. :-(

I currently use Docker on an Ubuntu VM running on VirtualBox. Faster and more stable.

@felix-schwarz

felix-schwarz commented May 3, 2019

The issue still exists today in the latest Docker Desktop stable (2.0.0.3, 31259). I run a couple of GitLab instances via Docker, which, over a matter of days, accumulated 12 GB of leaked memory in com.docker.osxfs.

I'm considering moving away from Docker for Mac, too, because of this.

@ilg-ul
Author

ilg-ul commented May 4, 2019

VirtualBox 6.0.6, Ubuntu 18 LTS, Docker for Linux.

@glc650

glc650 commented Jun 18, 2019

I'm having this problem as well. After several days, osxfs has ballooned to over 38 GB and hyperkit to nearly 13 GB. This is on a 2018 i7 Mac mini with 16 GB of RAM and 7 containers running.

@ilg-ul
Author

ilg-ul commented Jun 18, 2019

For best results, I moved my production builds to a separate physical box where Docker runs like a dream.

@drexlma

drexlma commented Aug 13, 2019

Same problem.

@ilg-ul
Author

ilg-ul commented Aug 13, 2019

For very long builds (like the GCC builds that I do from time to time), an Intel NUC8i7BEH with 32 GB RAM and a 512 GB M.2 SSD is a relatively affordable alternative to running Docker on macOS. It is incredibly small, yet well designed from a thermal point of view. Highly recommended!

@lwouis

lwouis commented Apr 4, 2020

This should be fixed, not just closed

@felix-schwarz

Yep, it should be fixed, not just closed. Docker is not usable in production on Macs this way. And even just for use in local testing, one must quit and restart Docker on a regular basis to reclaim memory. 🙁

@ilg-ul
Author

ilg-ul commented Apr 4, 2020

Docker is not usable in production on Macs

Fully agree.

And it is not only the memory leak: using volumes mounted from macOS into Linux, which requires this filesystem translation layer, is inherently slower, so even local testing should be done with caution, mostly inside the container and avoiding volumes as much as possible.

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Jun 24, 2020