Automatically clean up old install bases #2109
The output base does not contain the install directory, only a symlink to it. The actual install directory is not removed by …
Oh, that's true. I guess we could add an option, but you can just delete the directories, too.
An option doesn't really help. If you know to use the option, you know enough to delete the directories manually. The problem is that we leave behind a separate installation directory for every single build of Blaze that the user has ever used, and they never get cleaned up until the user starts running low on space and goes looking for things to delete. We should not burden the user with that; we should just clean up old installations periodically.
That doesn't seem like Bazel's responsibility: if you wanted, you could put the bazel directories on a filesystem that deletes things that haven't been used in a while. "We now slow down your build to delete some files taking up a little disk space" doesn't seem like a tradeoff most developers would want to make.
Is it safe to delete the entire user tree, i.e. …
It's perfectly safe to delete the install directories (as long as you aren't running Bazel in parallel). Bazel just re-creates them on the next run if necessary. Deleting the entire bazel tree is also safe, but it will cause Bazel to rebuild everything on the next run, and the bazel-* symlinks will all be dead links.
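The manual cleanup described above can be sketched as a small shell function. This is a hypothetical helper under the assumptions stated in its comments (the function name and example path are illustrative, not an official Bazel tool), and, as noted, it is only safe while no Bazel server is running:

```shell
# prune_install_bases: hypothetical helper, not an official Bazel command.
# Deletes install-base subdirectories of $1 whose mtime is older than
# $2 days (default 30). Only safe while no Bazel server is running
# (e.g. after `bazel shutdown`).
prune_install_bases() {
  root="$1"
  days="${2:-30}"
  # -mindepth/-maxdepth 1 restricts matching to direct children of $root.
  find "$root" -mindepth 1 -maxdepth 1 -type d -mtime "+$days" -exec rm -rf {} +
}

# Example, using the macOS output_user_root path seen later in this thread:
# prune_install_bases "/private/var/tmp/_bazel_$USER/install" 60
```

Bazel re-extracts a deleted install base on the next run, so the worst case is a slower first invocation after cleanup.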
@kchodorow: in general, ensuring that temporary files get deleted is the responsibility of whoever creates them. If this were a 10 MB cache, you could say "eh, whatever, it's not going to hurt to just leave it there", but dumping 14 GB of old temporary files on @pcj's disk is not a reasonable thing to do. We don't need to slow down builds at all, either: Bazel runs as a daemon, so it can easily do the cleanup as an idle task when it's not building anything. BTW, if you are not seeing this problem on your own machine, it's probably because your company has set up a cron job to clean up old bazel directories automatically. (Which could, in theory, slow down your build if it runs at the same time as bazel... have you ever noticed that issue?) But this is really a responsibility that Bazel should take on, so that it works by default for everyone.
+1 to this. I ran into the same thing recently myself: Bazel had used 18 GB of my disk for its caching, on a VM with 60 GB, which caused me to run out of space and go hunting for my gigabytes. If Bazel is going to cache gigabytes of files, it should be responsible for doing some basic tracking of their usage and deleting them when they haven't been accessed in a while. I don't mind giving a few GB to Bazel to use as a cache, but it needs to be respectful of my disk and not cause me to run out of space.
@camillol I disagree. Google has a separate tool that basically takes care of this problem for you, which is why it isn't built into Bazel. Bazel is huge and complicated; we'd like to keep it focused on doing one thing (building) well.
That's just the thing, though. Dumping tens of gigabytes of stale temp files in the course of normal operation is not doing things "well". And Bazel is an open source project. We can't say "well, to use Bazel properly you need to get a copy of this internal tool, which we haven't released, and install it".
Even if it were released, it still doesn't make sense for a Blaze installation to behave incorrectly by default, and to require the installation of a separate program to clean up after it. Things should work out of the box. Defaults should be reasonable. This is a basic product excellence issue.
I think this should be done: either have the launcher clean the install dir, or have the installer install such a service. It's probably better to have this simple code as part of the launcher. I agree with @camillol that the defaults of Bazel should give an awesome user experience, and this is not it.
I just stumbled on this issue and would like to vote in favor of what @camillol is saying: Bazel created the garbage, so it's Bazel's responsibility to clean it up automatically. These whole "installation directories" are a very strange concept after all, so when Bazel abandons them, Bazel has to destroy them. And as @damienmg says, the current behavior is far from a great user experience.
Gentle ping. I keep pruning it, but it grows back quickly. This is mostly within the last two months:
$ du -sh /private/var/tmp/_bazel_camillol/install/
I would suggest reclassifying this from "feature request" to "bug".
I'd also like to bump this... our build agents have output bases of over 100 GB pretty quickly. Some notion of being more careful about leaving behind garbage would be great.
@jgavris The output base is different from the install base. The output base is specific to your project, and you are responsible for getting rid of it via …
@jmmv my bad... you're right. We actually quickly hit 100 GB of artifacts in CI in one day doing about 10 variant builds of a medium/large codebase (debug and release configs for 5 different architectures).
What's the actual proposal here? I suppose it should still be possible to use several Bazel versions on one machine without having to extract them on each new invocation.
One option is to handle this in https://github.com/philwo/bazelisk and look at the timestamp of the install base.
Any work on this? I was wondering why my … I think even a warning message would be a good start so users don't need to debug issues like this.
Hi there! We're doing a clean-up of old issues and will be closing this one. Please reopen if you'd like to discuss anything further. We'll respond as soon as we have the bandwidth/resources to do so.
@sgowroji Please note that we (people outside of the Bazel org) don't have permissions to reopen issues. This is still a real usability problem that should be addressed. It may not be a visible problem within Google due to how this is handled there (cron job... as someone mentioned above years ago), but we need a real solution for the general public.
[Edit: moved to caching issues. Thanks @jmmv]
@MilesCranmer Let's not conflate issues, though. From your claim of "millions of files", I'm pretty sure you are referring to output trees, not the install directory. The install directory, which is covered by this issue, is truly a cache that needs automatic cleanup. But this directory grows much more slowly than everything else and doesn't take "a lot" of space (though that's subjective). For other types of outputs, see this other issue (and my comment), which tracks this problem more broadly: #1035 (comment)
I think maybe we are speaking about the same thing? From what I recall, my cache (I think the install directory?) was only ~20-50 GB in terms of space, but the number of files was truly massive, in the few millions (of tiny files), which slows down file indexing. This is why I have avoided using Bazel on my institutional cluster: I would hit the hard file limit very quickly. I could be misremembering, though, as I haven't used Bazel in a while (though I would like to, once this issue gets fixed!).
We aren't. The install directory for a release takes ~170 MB and contains ~700 files (today). If all you had in those 50 GB were Bazel installs, you could fit 300 installs, which amount to 210k files. And you wouldn't end up with 300 different installs unless you were developing Bazel itself, because there aren't that many releases out there. What you are talking about is the space used by output directories, which are stored in …
Ah, I see, thanks! Indeed I guess I got confused because they are in the …
I'm going to repurpose this issue for the "automatically clean up old install bases" project, which I intend to work on at some point before Bazel 8. The preliminary plan is to check for install bases that haven't been touched for a long time upon server startup and delete them. (We already update the mtime on the install base directory when the server starts up, so we can use that as the signal.)
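The mtime-based staleness check in that plan can be sketched as follows. This is an illustrative sketch only, not Bazel's actual implementation; the function name and the 30-day threshold in the example are assumptions. Since the server refreshes the install base directory's mtime at startup, an old mtime means no recent server start used that install base:

```shell
# is_stale_install_base: illustrative sketch, not part of Bazel.
# Succeeds (exit 0) if directory $1 was last touched more than $2 days ago,
# i.e. no recent server startup refreshed its mtime.
is_stale_install_base() {
  dir="$1"
  days="$2"
  # find prints the directory itself only when its mtime exceeds the cutoff.
  [ -n "$(find "$dir" -maxdepth 0 -type d -mtime "+$days")" ]
}

# A server-startup sweep would then delete each stale sibling install base,
# e.g.: is_stale_install_base "$base" 30 && rm -rf "$base"
```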
Status update: this will not make it into 8.0, but the work is planned for 8.1.
…cks.
Currently, we rely on CreateFile to effectively obtain an exclusive (write) lock on the entire file, which makes the later call to LockFileEx redundant. This CL makes it so that we open the file in shared mode, and actually use LockFileEx to lock it. This makes a client-side lock compatible with a server-side one obtained through the JVM (which defaults to opening files in shared mode and uses LockFileEx for locking).
Even though this doesn't matter for the output base lock, which is only ever obtained from the client side (the server side doesn't use filesystem-based locks), it will be necessary to implement install base locking (as part of fixing #2109).
Note that this means an older Bazel might immediately exit instead of blocking for the lock, if the latter was previously acquired by a newer Bazel (since the older Bazel will always CreateFile successfully, but treat the subsequent LockFileEx failure as an unrecoverable error). However, this only matters during the very small window during which the client-side lock is held (it's taken over by the server-side lock in very short order), so I believe this is a very small price to pay to avoid adding more complexity.
RELNOTES[INC]: On Windows, a change to the output base locking protocol might cause an older Bazel invoked immediately after a newer Bazel (on the same output base) to error out instead of blocking for the lock, even if --block_for_lock is enabled.
PiperOrigin-RevId: 692973056
Change-Id: Iaf1ccecfb4c138333ec9d7a694b10caf96b2917b
They are currently only used to acquire a lock on the output base, but a future change will use them to lock the install base as well. Human-readable output is also amended to refer to the "output base lock" instead of the "client lock", as the latter term becomes ambiguous once multiple locks exist.
Progress on #2109.
PiperOrigin-RevId: 693354279
Change-Id: I2b39e6f5ddb83bbc2be15a31d7de9655358776c5
Progress on #2109.
PiperOrigin-RevId: 700006410
Change-Id: Ifd0cfdca6d4124addfecb99b0dec5f488e3ffedd
$ du -sh /private/var/tmp/_bazel_camillol/install
2.3G /private/var/tmp/_bazel_camillol/install
There are 50 directories in there. The oldest dates back to November 24, 2015; the newest is from September 1, 2016. May 12 has five different folders.
Even though I ran "blaze clean" in all of my blaze clients, lots of these folders are left behind. They should be cleaned up somehow.