
Mark "Remote Output Service..." as approved. #311

Merged: 2 commits, Jun 12, 2023

Conversation

@lberki (Contributor) commented Apr 3, 2023

The associated pull request
bazelbuild/bazel#12823 still seems to see some activity, but Ed and Chi are not responsive, so I suppose this is the right thing to do.

@lberki lberki requested a review from coeuvre April 3, 2023 07:06
@lberki lberki requested a review from jin as a code owner April 3, 2023 07:06
@coeuvre (Member) commented Apr 3, 2023

IIRC, we never had an agreement on how Bazel should interact with FUSE and since we are going to enable BwoB by default for 7.0, maybe this can be dropped. But I would like to hear from Ed.

@lberki (Contributor, Author) commented Apr 3, 2023

Yep, I wouldn't merge this before I hear from @EdSchouten (unless I don't hear from him for a few days, for the show must go on!)

@lberki lberki removed the request for review from jin April 3, 2023 13:40
@EdSchouten (Contributor) commented Apr 4, 2023

When I initially implemented bb_clientd and the accompanying patchset for Bazel, getting this all to work was my highest priority project. I had a full quarter to make it work. Unfortunately, that quarter is long finished. Nowadays my time is limited to just addressing any of the lingering issues we have with it (e.g., bazelbuild/bazel#17968), as in addition to bb_clientd I also need to:

  • maintain the rest of Buildbarn,
  • provide internal and Open Source community support, and
  • carry the pager of our production environment that's based on it, essentially 24/7.

What also doesn't help is that I only rarely work on Java projects, so the amount of guidance I'd need to make this upstreamable is large. This means that without more active participation from someone on the Bazel team, the window of opportunity has sort of slipped. This is a bit of a shame, as bb_clientd works great at this point. All of our builds use it. It literally saves us a terabyte of disk space on each of our CI workers.

This feature also no longer really aligns with how the Bazel team envisions things, meaning the probability of getting this landed is even smaller. The focus has shifted towards perfecting Builds without the Bytes. Though typical users may find this sufficient, I'm pretty sure that there is a silent minority of large/intensive users of Bazel that tends to disagree. Needing to constantly switch between --remote_download_minimal, --remote_download_toplevel and what have you is just poor UX. This is already demonstrated by the fact that companies like Aspect ship copies of Bazel that have Remote Output Service support included, even though it is not part of upstream Bazel:

https://github.com/aspect-build/bazel/commits/6.0.0_aspect

One big limitation of Bazel, even with Remote Output Service, is that only bazel-out/ is virtualized. Most of the time/bandwidth in our builds is nowadays wasted on dealing with externals (e.g., needlessly fetching SDKs onto the client that are only used remotely, SHA summing full copies of SDKs every time Bazel restarts). The OutputService API doesn't help with that, nor does the Remote Asset API. So in that sense the Remote Output Service proposal only addresses Bazel's performance bottlenecks partially. I really hope the Bazel team is going to provide a vision at some point on how they attempt to tackle this in the grand scheme of things.

The question is where this specific proposal should go from here. As mentioned previously, I currently don't have the time to drag this across the finish line. If anyone else wants to take this over, I'd fully support it. If not, then there is only one logical solution: drop it, as @lberki suggested.

@lberki (Contributor, Author) commented Apr 4, 2023

At the very least, the involvement of aspect.dev and the activity on #12823 indicate that there is a significant need in the community for functionality like this, so I don't want to summarily pull the plug.

I have a few questions to determine what should happen:

  • @EdSchouten: What does the interaction between the FUSE daemon and Bazel look like? Is it limited to the RPC service described in the design doc, i.e. does it work without Bazel ever calling file system operations on the FUSE file system? How and when does the FUSE file system actually get mounted? (The design doc doesn't quite mention this.)
  • @coeuvre : How much work would it take to merge #12823 ?
  • If you can't contribute, maybe someone from aspect.dev could?

The reason I'm asking these is that, given the demand and the limitations of BwoB, I'm amenable to merging this, but I'm also worried about potential interactions with the rest of Bazel (e.g. BwoB, tree artifacts, whether this works on Linux/macOS/Windows), and the safe thing to do would be not to merge it.

@EdSchouten (Contributor)

Hey @lberki,

What does the interaction between the FUSE daemon and Bazel look like? Is it limited to the RPC service described in the design doc, i.e. does it work without Bazel ever calling file system operations on the FUSE file system?

In my opinion we should eventually aim to get to a point where all communication is done purely through gRPC, the reason being that RPCs make it easy to batch requests, which cuts down on the number of context switches between Bazel and bb_clientd. gRPC also has richer error semantics than POSIX, of course. It would also significantly reduce the pressure on the kernel's inode/directory cache.

Right now it's basically a hybrid model, where Bazel uses regular POSIX file operations for the things it can do efficiently (e.g., stat() a file to check its existence), while falling back to gRPC to do things that are either impossible (e.g., create a CAS backed file) or inefficient (e.g., compute the digest of a file).

We also have bazelbuild/bazel#14878, which moves the creation of runfiles directories from POSIX file system calls to gRPC.

How and when does the FUSE file system actually get mounted? (the design doc doesn't quite mention this)

Even though the protocol does provide some flexibility here, bb_clientd currently creates a single mount on startup that can be used by n copies of Bazel running on the same system. Bazel is thus not responsible for creating/attaching the mount. This model is pretty flexible, as it allows you to do things like:

  • Let CI run bb_clientd and Bazel as sidecar containers in a pod on a Kubernetes cluster.
  • Have a single copy of bb_clientd installed on your workstation that has a single cache for all of the builds taking place on your system.
    • Need to run Bazel in a container on your workstation? No worries, just use docker run -v ~/bb_clientd.
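To make the container scenario above concrete, here is a minimal sketch of the wiring, assuming bb_clientd's default mount point of ~/bb_clientd and the --remote_output_service flag from the patchset; the configuration file path, the gRPC address, and the container image name are illustrative, not taken from the proposal.

    # Start bb_clientd once per machine; it creates the FUSE mount at startup.
    # (Config file path and mount location are illustrative.)
    bb_clientd ~/.config/bb_clientd.jsonnet &

    # Any number of Bazel invocations on the same machine can share that mount.
    # Running Bazel in a container only requires bind-mounting the directory in.
    docker run -v ~/bb_clientd:/home/user/bb_clientd my-build-image \
      bazel build //... --remote_output_service=grpc://localhost:8980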

The reason I'm asking these is that, given the demand and the limitations of BwoB, I'm amenable to merging this, but I'm also worried about potential interactions with the rest of Bazel (e.g. BwoB, tree artifacts, whether this works on Linux/macOS/Windows), and the safe thing to do would be not to merge it.

I get that. To comment specifically on the topics you raised:

  • BwoB: --remote_output_service cannot be used in conjunction with --remote_download_{minimal,toplevel}. But the patchset I have adds some logic to explicitly reject that.
  • Tree artifacts work just fine. We use them quite a lot, and some of them are also pretty big. That's why we also made some optimizations to REv2/bb_clientd to work well with them (see Replace message Tree with a topologically sorted varint delimited stream of Directory messages remote-apis#229).
  • OS support:
    • On Linux it works just great. We've been using it there in production for about two years now.
    • On macOS it can be made to work if you know what to do. macFUSE has some deadlock/correctness issues that prevent it from working reliably. As macFUSE is no longer Open Source, we are not in a position to contribute our fixes back to its author. We are pursuing alternatives, such as NFSv4.
    • With regards to Windows support: I think the protocol would in theory allow it, but I can't share any experiences there.

@lberki (Contributor, Author) commented Apr 4, 2023

Ack. I looked at the change and it's not too invasive at first glance, although it would be strange to check in code that accesses an interface that doesn't have any implementation in the Bazel source tree.

Who should I contact from aspect.dev to ask them if they would be willing to contribute the labor of merging this patch?

@brentleyjones

@alexeagle

@f0rmiga commented Apr 5, 2023

I'm the original author of the Aspect CLI. I'm actively looking into it. I'll post back as soon as I have an answer.

@meisterT (Member) commented Apr 5, 2023

@EdSchouten and others: one thing I am concerned about is ongoing maintenance. If we add this functionality to Bazel, people will come to us and report bugs or ask for features in relation to it and we currently don't have the bandwidth or use-case to staff it.

What is the level of commitment from your side to help out with maintaining the functionality?

@meisterT (Member) commented Apr 5, 2023

Since this proposal was created, a few things have changed. The proposal lists these three downsides of BwoB:

Bazel's memory usage has increased significantly. The ActionInputMap that gets created during build may easily consume multiple gigabytes of space for a sufficiently large project.

This should be reduced significantly by bazelbuild/bazel#17120

Incremental builds after a Bazel server restart are significantly slower. Because the ActionInputMap does not outlive the Bazel server, a new server has no output metadata, and thus has to regenerate all outputs from scratch via a clean build.

This should have been addressed by bazelbuild/bazel#13604

Users now need to make a conscious choice whether they want to download output files or not. This makes it harder to do ad hoc exploration.

This is not fully addressed; however, we merged bazelbuild/bazel#15638 a while back.

I would argue that in most cases users don't need access to additional artifacts after the build.

In theory, we could either add another command to bazel that allows downloading intermediate artifacts or change the behavior of --experimental_remote_download_regex= to allow downloading after the invocation with an appropriate regular expression (or .* to materialize everything).
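For illustration, a rough sketch of what that could look like from the command line; the target label is made up, and the follow-up command in the second half does not exist today, so its name and syntax are purely hypothetical:

    # Today: selectively materialize matching outputs at build time.
    bazel build //app:server \
      --remote_download_minimal \
      --experimental_remote_download_regex='.*/server(\.dwp)?$'

    # Hypothetical follow-up command for downloading intermediate artifacts
    # after the invocation (invented for illustration only).
    bazel download --experimental_remote_download_regex='.*'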


In summary, I am wondering what the additional value of this is, especially given that not all remote execution service providers offer this functionality.

@alexeagle (Contributor) commented Apr 5, 2023

The "remote output service" in google3 is ObjFS, right? If I understand correctly this issue is about how other large companies need the same thing. I do see clients who have this need, in addition to Ed's employer.

Funding (a.k.a. "commit to maintain") has come up here. It feels to me that the reason we're lacking funding for this project is that the implementations diverged: it's hard to convince Google managers to spend time maintaining a thing Google doesn't need, yet Google's solution to this problem isn't available to others.
Is it feasible to converge this?

  1. If BwoB really does solve the use cases, could google3 switch to that from ObjFS? (At least in theory; I'm not suggesting that migration would make sense.)
  2. If not, it suggests others need an ObjFS equivalent as well. Could ObjFS be refactored to use this same gRPC service so that the maintenance work you already do would support community needs as well?

@meisterT (Member) commented Apr 5, 2023

If BwoB really does solve the use cases, could google3 switch to that from ObjFS? (At least in theory; I'm not suggesting that migration would make sense.)

Some of our developer builds are indeed on BwoB and not using ObjFS.

I do see clients who have this need, in addition to Ed's employer.

What drives this need mostly?

@alexeagle (Contributor) commented Apr 5, 2023

What drives this need mostly?

The scenario we run into is exactly what @EdSchouten wrote:

Needing to constantly switch between --remote_download_minimal, --remote_download_toplevel

At the time you run a build, you may not know yet which files from bazel-out you want to use. You might have an existing shell script which does a build and then executes a program from bazel-out, or you might have a wiki document that tells engineers how to find files in that folder after a build. Such a user is currently forced to download the entire build output, even though the files they need are trivially fetchable from the CAS in the remote cache, if only they knew the key used to fetch them.

A specific case of wanting to read arbitrarily from bazel-out is mentioned above: in the Aspect CLI we support continuous delivery with an outputs command which supports a pseudo-mnemonic (ExecutableHash) to compute a hash of a binary and its runfiles. This tends to hit the worst case, because CD references the deployable artifacts, which are the largest files in the CAS (e.g. docker container image layers). This use case doesn't even want the bytes, just the hash (I recall that in Sponge we used the extended filesystem attributes exposed by ObjFS FUSE for a similar requirement). So we are using a fork of Bazel with Ed's patches.
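As a sketch of the "just the hash, not the bytes" pattern: with a virtualized bazel-out/, tooling could read an artifact's digest from an extended attribute instead of downloading the file. The attribute name and path below are illustrative and not what any particular filesystem actually exposes.

    # Read a digest from an extended attribute of a lazily materialized output
    # (attribute name and path are hypothetical).
    getfattr -n user.cas.digest bazel-bin/app/server_image/layer.tar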

@meisterT (Member) commented Apr 5, 2023

What drives this need mostly?

The scenario we run into is exactly what @EdSchouten wrote:

Needing to constantly switch between --remote_download_minimal, --remote_download_toplevel

At the time you run a build, you may not know yet which files from bazel-out you want to use. You might have an existing shell script which does a build and then executes a program from bazel-out, or you might have a wiki document that tells engineers how to find files in that folder after a build. Such a user is currently forced to download the entire build output, even though the files they need are trivially fetchable from the CAS in the remote cache, if only they knew the key used to fetch them.

Right, my suggestion above was to improve Bazel's UX to give users a way to download the files they need instead of everything.

Improving this would help everyone instead of just users of remote execution services that support FUSE.

A specific case of wanting to read arbitrarily from bazel-out is mentioned above: in the Aspect CLI we support continuous delivery with an outputs command which supports a pseudo-mnemonic (ExecutableHash) to compute a hash of a binary and its runfiles. This tends to hit the worst case, because CD references the deployable artifacts, which are the largest files in the CAS (e.g. docker container image layers). This use case doesn't even want the bytes, just the hash (I recall that in Sponge we used the extended filesystem attributes exposed by ObjFS FUSE for a similar requirement). So we are using a fork of Bazel with Ed's patches.

So, you are requesting a way to access the metadata that Bazel has in RAM, right?

To make it explicit, I'm undecided what to do with this proposal and want to make sure we have discussed potential alternative ways to achieve the same.

@EdSchouten (Contributor)

Hi there! Let me try to respond to all of your messages in one go.

@EdSchouten and others: one thing I am concerned about is ongoing maintenance. If we add this functionality to Bazel, people will come to us and report bugs or ask for features in relation to it and we currently don't have the bandwidth or use-case to staff it.

What is the level of commitment from your side to help out with maintaining the functionality?

We are strongly dependent on having this feature working, as we are using it intensively. I therefore think it's also in our interest to ensure it keeps working reliably. So with regards to bugs, we can surely help out. With regards to adding new features to it, I can't make any commitments, as that would also depend on whether those features are realistic and to a certain extent align with our use case.

Bazel's memory usage has increased significantly. The ActionInputMap that gets created during build may easily consume multiple gigabytes of space for a sufficiently large project.
This should be reduced significantly by bazelbuild/bazel#17120

No, this is different as far as I know. The ActionInputMap is specific to BwoB bookkeeping, whereas the PR you linked was about reducing memory usage while computing Merkle trees of actions when doing cache lookups.

Right, my suggestion above was to improve Bazel's UX to give users a way to download the files they need instead of everything.

The point is that users don't always know what they need. For example, in our case it's not uncommon for people to launch GDB/LLDB sessions against some kind of executable that they built. How would a user know which files under bazel-bin/ need to be present to do that?

  • On macOS you need to make sure to download the dSYM directories.
  • What if some parts of the executable consist of generated code? In that case you may also need to have some of that generated code available locally to generate proper source listings. But you don't need to download all of it, only the parts that are relevant to the stack frames you're debugging.

Using a FUSE file system keeps those kinds of use cases working as they otherwise would, without requiring the user to jump through additional hoops.
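For example, with a virtualized output tree a debugging session needs no special preparation; the debugger's reads fault in only the files and byte ranges it actually touches. The paths here are illustrative:

    # The executable, its dSYM bundle (on macOS), and any generated sources are
    # materialized lazily as the debugger reads them.
    gdb bazel-bin/app/server
    lldb bazel-bin/app/server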

@meisterT (Member) commented Apr 6, 2023

Thanks for your reply, @EdSchouten!

The point is that users don't always know what they need.

I think this is the strongest argument in favor of this proposal. Do you have data on how often users actually access intermediate files?

I am interested to learn about more scenarios from you and others to see whether we can improve the UX for BwoB - which is a useful thing to do even if it might not solve all the pain points listed in your proposal.

My understanding is that your client daemon is Linux only, right? Do you plan to extend it to other platforms?

I would also like to hear from other remote execution service owners (cc @bazelbuild/remote-execution and @brentleyjones, also cc @jmmv who commented on the PR) whether they would implement a similar daemon once this proposal is implemented - and whether they would like to see any changes to the proposal.

@brentleyjones

The point is that users don't always know what they need.

This partially falls under "Work seamlessly with IDEs" in bazelbuild/bazel#6862. The new regex stuff helps, for sure, but it can be a pain to get right (I've fixed many bugs in rules_xcodeproj around this), and the requirements can change over time, making this hard to keep right. Also, if a user doesn't need those files every session, they are unnecessarily downloading them, when it would be better to download them on demand.

Also, if someone isn't using an IDE, or tooling that gets all of those flags right, then they are currently out of luck.

I would also like to hear from other remote execution service owners whether they would implement a similar daemon once this proposal is implemented.

When I was testing bb_clientd back when the implementation PR first appeared, the performance wasn't great for me. I'm using macOS and the FUSE performance was suboptimal; by this I mean that a normal build was slower using it than using BwtB (Builds with the Bytes). I haven't tested it again since, but I've been hesitant because of the reliance on FUSE. AFAIK though, this proposal works with any REAPI-compliant service, so we wouldn't need to implement our own daemon, and thus wouldn't want to block this proposal.

@EdSchouten (Contributor) commented Apr 6, 2023

I think this is the strongest argument in favor of this proposal. Do you have data on how often users actually access intermediate files?

The use case that I sketched is used by us on a near daily basis.

My understanding is that your client daemon is Linux only, right? Do you plan to extend it to other platforms?

It also works on macOS, if you make sure you get the stars to align. bb_clientd supports both FUSE and NFSv4. This means that on macOS, you can either use it in combination with macFUSE or with the NFS client that's part of the kernel. Both approaches have their shortcomings, though:

  • As mentioned above, macFUSE has some bugs that cause it to lock up. People who run into these issues will need to work with the maintainer of macFUSE to get those fixed.
  • NFSv4 support only works reasonably well on macOS Ventura 13.3.

I would also like to hear from other remote execution service owners (cc https://github.com/orgs/bazelbuild/teams/remote-execution and @brentleyjones, also cc @jmmv who commented on the PR) whether they would implement a similar daemon once this proposal is implemented - and whether they would like to see any changes to the proposal.

Note that bb_clientd is part of the Buildbarn project, but not coupled against it. The goal is that you can also use it in combination with any other remote execution service.

@coeuvre (Member) commented Apr 6, 2023

The ActionInputMap is specific to BwoB bookkeeping,

Can you elaborate more on this? I don't think ActionInputMap is specific to BwoB; it's used for all the modes. Memory-wise, I believe BwoB should be about the same compared to other modes.

@qc-benjamin commented Apr 6, 2023

We felt the need to give the feedback that this feature is significant to Qualcomm for much the same reasons as have been brought up by others, and we would much prefer it be a mainline feature rather than one maintained in several unofficial forks.

The feature especially shines when building large monorepos with a wide net of targets (e.g. //...): the build can then be performed once, and the CI pipeline and/or the developer workflow can continue transparently, regardless of whether the build was done locally or remotely.

In such situations remote_download_all is infeasible for obvious reasons, and listing all inputs possibly required by later steps in the pipeline/workflow is also unrealistic.

We also feel the need to clarify that, while bb-clientd is a good implementation of something that consumes the Remote Output Service APIs, and most users of bb-clientd are likely to be using Buildbarn, the actual pull request is independent of bb-clientd, and bb-clientd itself is independent of Buildbarn.

In other words, any REAPI-compatible environment would benefit from the improved traffic flow of using bb-clientd, and the Remote Output Service can be consumed by things other than bb-clientd; it could, for example, be used to integrate with some other virtual filesystem. Without the Remote Output Service, the information it communicates is simply lost inside of Bazel.

@lberki (Contributor, Author) commented Apr 11, 2023

From this thread, it looks like there is a lot of demand for the output tree to be on a FUSE file system of some sort. With that in mind, I'm fine with marking this design doc as approved and merging bazelbuild/bazel#12823, as long as @EdSchouten (or someone else, but he is the most obvious candidate) can commit to bringing it up to the standards of the Bazel code base and putting it behind an experimental flag. WDYT?

@lberki (Contributor, Author) commented Jun 1, 2023

Friendly ping! I'd like to decide one way or the other; to reiterate: I'm fine with merging #12823 as long as someone can commit to rebasing it and bringing it up to our quality standards.

@sluongng commented Jun 1, 2023

My take: It seems like we need a new sponsor for the proposal to move forward.

Let's put a deadline (1 month?) on this proposal to find a new sponsor to (a) rebase + fix up the PR and (b) help maintain it. I could ask during the https://github.com/bazelbuild/remote-apis/ meeting to see if any of the BuildStream/BuildBox maintainers are willing to pick it up. There has been some similar interest expressed in https://groups.google.com/g/remote-execution-apis/c/qOSWWwBLPzo and in a similar proposal, https://docs.google.com/document/d/1SYialcjncU-hEWMoaxvEoNHZezUf5jG9DR-i67ZhWVw/edit#, which could take the proto API design down a slightly different route.

I don't think dropping the proposal needs to be a "final" state. A new sponsor could put forward a smaller proposal referencing the dropped proposal to continue the work. It's not like the code in the PR is getting deleted.

@lberki (Contributor, Author) commented Jun 1, 2023

Yeah, I don't think dropping a proposal needs to mark the "final death" of said proposal, but it's a pretty strong signal.

I wanted to keep this open for a while because @EdSchouten has invested a lot of energy into it and IIUC they maintain a fork of Bazel with that patch anyway, so I thought it would be a net win for them to not have to maintain their patch at the cost of some upfront investment.

@lberki (Contributor, Author) commented Jun 1, 2023

I'll keep this open for a while (say, a month, assuming I actually manage to remember the deadline) to signal that we are looking for volunteers and if there are none, I'll mark it as "dropped".

@lberki lberki force-pushed the lberki-drop-remote-service branch from d028031 to 864aaaa on June 12, 2023 09:14
@lberki lberki changed the title from "Mark "Remote Output Service..." as dropped." to "Mark "Remote Output Service..." as approved." on Jun 12, 2023
@lberki (Contributor, Author) commented Jun 12, 2023

At the last moment, @stagnation and @Gormo stepped up. So I'll update this pull request to say "move to approved" and will ask @coeuvre to review it since he's the most likely candidate for the eventual code review.

@lberki lberki merged commit ae11d51 into main Jun 12, 2023
@lberki lberki deleted the lberki-drop-remote-service branch June 12, 2023 12:11