[RFC 0119] Formalize testing for nixpkgs packages #119

Merged: 14 commits, Jun 30, 2022
114 changes: 114 additions & 0 deletions rfcs/0119-testing-conventions.md
---
feature: Defined conventions around testing of official Nixpkgs packages.
start-date: 2021-09-03
author: Jonathan Ringer
co-authors:
shepherd-team:
shepherd-leader:
related-issues:
- [RFC 0088 - Nixpkgs Breaking Change Policy](https://github.com/NixOS/rfcs/pull/88)
---

# Summary
[summary]: #summary

When updating or modifying packages, several conventions for testing regressions
have been adopted. However, these practices are not standard, and generally it's not well
defined how each testing method should be implemented. It would be beneficial to have
an unambiguous way to say that a given package, and all downstream dependencies, have
had as many automated tests run as possible. This will give a high degree of certainty that
a given change is less likely to manifest regressions once introduced on a release
channel.

Another goal of this RFC is to give various review tools
(e.g. ofborg, hydra, nixpkgs-review) a standard way to determine whether a
package has additional tests which can help verify its correctness.

# Motivation
[motivation]: #motivation

Breakages are a constant pain point for nixpkgs. It is a very poor user experience to
have a configuration broken because one or more packages fail to build. Often when
these breakages occur, it is because the change had a large impact on the entirety
of nixpkgs; and unless there's a dedicated hydra jobset for the pull request, it's
infeasible to expect pull request authors to verify every package affected
by a change they are proposing. However, it is feasible to specify packages that
are very likely to be affected by changes in another package, and use this information
to help mitigate regressions from appearing in release channels.

# Detailed design
[design]: #detailed-design

Member:

Maybe expand on what the intended changes to the status quo are?

passthru.tests is a name documented inside the manual; however, nixosTests are recommended to also be put there.

(also, if sorting by resource consumption, maybe this split is not needed?)

Are we encouraged to compromise on something in the name of more test coverage?

Contributor Author:

Well, I wasn't sure what the status quo should be. My current thoughts are, "here is some additional metadata you can add to ensure that people know how your package may break. Or add your package to the tests of other packages to ensure it's not broken."


Member (@Artturin, Apr 4, 2022):

Recently people have started adding sensitive downstream dependencies to passthru.tests.
Example: https://github.com/NixOS/nixpkgs/pull/167092/files

Cc @risicle

I propose adding a new section called testReverseDeps or so

Contributor:

Think this technique was @jonringer 's idea in the first place 😉

The main thing I'm slightly uncomfortable with is adding the reverse dependencies as arguments - I imagine it causing weirdness for people who do a lot of overriding. But I can't think of a sensible way it could be avoided.

Contributor Author:

Ideally the benefit is for nixpkgs testing. For the overriding case, an override should not trigger a downstream dependency to influence the build, as passthru gets pruned before instantiation.

I agree it's odd that a package is aware of its downstream dependencies, but I'm not sure of another way to make it aware of them.

Member:

I imagine it causing weirdness for people who do a lot of overriding.

A potential mitigation is to source these from a pkgs argument, so they're all bundled up. Not sure if there's a problem in the first place though.

Contributor Author:

Usage of pkgs is frowned upon in nixpkgs from a hygiene perspective; I would rather not involve it.

Member:

nixpkgs-review builds all reverse dependencies. Could we just use this mechanism in a testers.testBuildReverseDeps? But there can be thousands 😬

cc @Mic92

Standardize `passthru.tests.<name>` and `passthru.nixosTests.<name>` as a mechanism for
more expensive but automatic testing in nixpkgs, and encourage the use of
`checkPhase` or `installCheckPhase` when packaging within nixpkgs.
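
A minimal sketch of the proposed convention, using a hypothetical package `example`
(the `testers.testVersion` helper and the `finalAttrs` pattern are assumptions drawn
from current nixpkgs practice, not mandated by this RFC):

```nix
{ lib, stdenv, fetchurl, testers, nixosTests }:

stdenv.mkDerivation (finalAttrs: {
  pname = "example";
  version = "1.2.3";

  src = fetchurl {
    url = "https://example.org/example-${finalAttrs.version}.tar.gz";
    hash = lib.fakeHash; # placeholder
  };

  # Cheap, derivation-level tests: built on demand, never as part of
  # the package build itself.
  passthru.tests = {
    version = testers.testVersion { package = finalAttrs.finalPackage; };
  };

  # VM-based tests, reserved for the heavier nixosTest machinery
  # (assumes a matching test exists under nixos/tests).
  passthru.nixosTests = {
    inherit (nixosTests) example;
  };
})
```

With this layout, `nix-build -A example.tests` builds the cheap tests, and
`nix-build -A example.nixosTests.example` the VM test.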


I'm confused about where to draw the line between what should be done in installCheckPhase and in non-VM passthru.tests. When reviewing, this has been a source of confusion; see NixOS/nixpkgs#153496 as an example. (Reading this, I'm no longer convinced the decision to move it from installCheckPhase was the right one :>)

cc @thiagokokada so that you can add your thoughts


IMO, installCheckPhase is similar to checkPhase, in that both mean that if this test fails, I don't even want this package to be built (as far as I understand, that was the case in NixOS/nixpkgs#153496, since it was a very basic functionality check for that package).

passthru.tests, AFAIK, allows the package to be built regardless and would only fail if you try to run the package tests themselves. That may be an advantage for packages that have too many tests or whose tests use too many resources to run (e.g. maybe the tests use much more memory than the build phase). Neither is the case in NixOS/nixpkgs#153496 though, since the check had a minimal impact on total build time.

So yeah, between this and the fact that to use passthru.tests we had to resort to a clear hack, I would much prefer that installCheckPhase was kept.

@thiagokokada (Jan 13, 2022):

Also, IMO, VM-less passthru.tests makes more sense for tests like the one described in this document: maybe we want to see how this particular package being built interacts with other packages, or some other kind of integration testing.

But if the idea is to test only the current package, installCheckPhase generally makes more sense to me (of course, there could be exceptions).

Member:

I think ideally we want expensive build steps spread among different derivations, but that's hard. Expensive tests are sometimes feasible to split, so splitting them is good when it doesn't create a mess.

(Cheap checks with no new deps do not follow this logic, of course)

Also, separated tests make it more practical to support packages with different levels of «works» on different platforms, because we do not need to touch the main build, causing expensive rebuilds on the «happy» platforms, to add partial support (and its tests) on the more exotic ones. There is no single «package is broken»; in the black-and-white view we have never had a non-broken LibreOffice package, for example.

Member:

Separated tests also make it easier to debug failures, since you can easily access the package (which may build successfully but then fail the test). I think we should follow something like a "principle of least privilege", where it is best practice to always use the test that makes the fewest assumptions about its environment.

checkPhase assumes it is run within the build environment. That's a lot of assumptions. This increases build time, hindering quick iteration; makes it harder to debug failures; and makes the test results less meaningful, since the environment differs from the end-user's environment.

installCheckPhase has somewhat fewer assumptions than checkPhase, but still many of the same disadvantages. The results are more meaningful though, since the environment is more similar to the end-user's environment.

passthru.tests makes the fewest assumptions. If the tests pass in this "clean environment", they will likely pass in the user's environment as well (unless tests are actually broken by some environment variable or something similar). They can be run as-needed (but should be run (semi-)automatically in some reliable fashion). They make debugging easier.

passthru.nixosTests assumes access to a nixos VM, but does not assume a special build environment.

As a result, I'd suggest to pick passthru.tests > passthru.nixosTests > installCheckPhase > checkPhase. Practicality is king of course: if upstream assumptions make it trivial to run the test suite in checkPhase but hard to run it in passthru.tests, then it's better to have some tests in checkPhase than to have no tests at all.

Member:

I don't have much ~ecosystem perspective on this, but I do have a few scattered thoughts from authoring and maintaining a small number of packages (broadly, my thinking has been aligning with timokau over time):

  1. I like passthru.tests at the front of the list. I can think of at least a few times I've been in the middle of some meaningful work and ended up derailed by an immediate or remote test or test-dependency failure:

    • I've started to see a check/test dependency on a linter or formatter as an antipattern in my own work. At least twice I've had a linter/formatter package turn up broken, or a bump in nixpkgs trigger a rebuild that fails due to a lint/format check.

    • Recently I noticed that resholve's CLI arg parser dependency wasn't building on aarch64-darwin due to some failing tests. This was a pain since it was a platform I don't have readily available. I ended up overriding the dependency to disable checks for expediency, since resholve's own test suite (which I factored out into passthru.tests a while back) already exercises the CLI and should fail on borg/hydra if it did ultimately depend on whatever caused the test breaks.

  2. I've thought for a while that it would be nice to have a low-friction mechanism for collecting example uses from package users (say, a nudge to submit them + a CLI/form/template for doing so? Examples themselves might just be a valid invocation of some CLI, a sample file that exercises some parser, etc.). It occurs to me now that if there was a ~standard passthru.tests.x attr for these, it could at least drive the nudge (e.g., if you open a nix-shell with something that has fewer than N?)

  3. I like prioritizing installCheckPhase over checkPhase. I have some minor insecurity about checkPhase providing false-confidence about something that installPhase or fixupPhase then breaks (especially since resholve presently runs in fixup).

Member:

I've avoided installCheckPhase for fear of corrupting the outputs. NixOS/nixpkgs#143862 should counter that.


Usage for `passthru.nixosTests.<name>`:
- Reserved for tests utilizing the nixosTest utilities.

Contributor:

This doesn't seem a meaningful distinction to me. Maybe we should flip this.

  • Reserved for tests that are more resource intensive or that require additional system features such as kvm.
    • Generally these are tests using the nixosTest utilities.

Of course this makes the name less meaningful, but we can bikeshed that later.

Contributor Author:

I was trying to keep it consistent in plurality with the already used passthru.tests. Also pkgs.nixosTests is how it's exposed in nixpkgs.

Member:

The separate name makes testing everything slightly harder. Today, we can do nix-build -A postgresql.tests and expect everything to run. This won't be the case anymore.

This also has a runtime cost, whereas standardizing on pkg.tests.nixos does not, as the tests attr value won't be evaluated in normal use as a dependency of something. A separate attr has the overhead of an extra attr and an extra thunk, for each package. We've decided against a pkg.exe alias for this reason (abs path to mainProgram).

- Generally these are more resource intensive, and may require additional system features
such as kvm

Member:

We could have an automatic attribute like pkg.tests.light for this that filters out tests based on kvm in requiredFeatures and/or some other attribute. Heavy evaluations can be avoided since #92.

Contributor Author:

We decided in the meeting to use requiredSystemFeatures to influence the intensity of a review.

It re-uses existing conventions, and largely achieves the same goal. The burden of filtering the tests will be on the tooling side, not the maintainer side.
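
A sketch of what that tooling-side filtering could look like (illustrative, not part of the RFC; `lib.filterAttrs` and `requiredSystemFeatures` are existing nixpkgs/Nix concepts, the function itself is an assumption):

```nix
# Select only "light" tests: those that do not require special system
# features such as kvm.
{ lib, pkg }:

lib.filterAttrs
  (name: test: !(lib.elem "kvm" (test.requiredSystemFeatures or [ ])))
  (pkg.tests or { })
```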

Member:

Some in-nixpkgs provisions may still be good, as they can shrink the interface between the tooling and nixpkgs.


Usage for `passthru.tests.<name>`:
- Running tests which include downstream dependencies.
- This avoids cyclic dependency issues for test suites.
- Running lengthy or more resource expensive tests.
- There should be a priority on making package builds as short as possible.

Member:

I hear you! 😄

Contributor Author:

I think I originally wrote this about sage, and sageWithTests. Which took even longer :)

Member:

(looks sadly at NixOS/nix#3600 being unmerged) although for a given situation with a package, speeding up separate tests on their own would also not hurt

- This reduces the amount of compute required for everyone reviewing, building, or iterating on packages.
- Referencing downstream dependencies which are most likely to experience regressions
(see the sketch after this list).
- Most applicable to [RFC 0088 - Nixpkgs Breaking Change Policy](https://github.com/NixOS/rfcs/pull/88),
as this will help define what breakages a pull request author should take ownership of.
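
A hedged sketch of the downstream-dependency idea (package names purely illustrative; `curl` and `git` stand in for regression-prone dependents): a library takes a few of its most sensitive dependents as arguments and exposes them under `passthru.tests`, so tooling rebuilds them whenever the library changes.

```nix
{ lib, stdenv, fetchurl, curl, git }:

stdenv.mkDerivation rec {
  pname = "libexample";
  version = "0.1.0";

  src = fetchurl {
    url = "https://example.org/libexample-${version}.tar.gz";
    hash = lib.fakeHash; # placeholder
  };

  # Rebuilding known-sensitive dependents surfaces API/ABI regressions
  # without rebuilding the entire reverse-dependency closure.
  passthru.tests = {
    inherit curl git;
  };
}
```

As noted in the discussion above, `passthru` is pruned before instantiation, so overriding `libexample` does not pull these dependents into its build.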

Member:

Is this the only place corresponding to «tests have to pass»?

(BTW what about a test removal procedure? Explicit maintainer approval?)

Contributor Author:

In the future work section, there's a mention of adding it to the PR template, and it's already part of ofborg.

Member:

I think all of this is vague enough that it remains unclear whether the RFC establishes a norm that tests should be run completely, and have to pass (hmmm, what about platform discrepancies…)

Contributor Author:

This is more meant to set the expectation. There's already somewhat of a convention, and I would like for that to be more explicit and expected.

nixpkgs-review goes a long way to finding regressions, but the problem is totality. For some changes, there may only be a few packages which may realistically be affected by the change, so I don't want to build 500+ packages to review the build implications of a change; I just want the relevant package and the few direct downstream dependencies. Even better if there's a test case for the package itself.

Member (@7c6f434c, May 2, 2022):

So basically you do not want to implement something with the full extent of the point from the meeting notes?

Tests of the updated package should become mandatory for acceptance of a PR

(Here my question can be treated as procedural: for the text «updated according to the meeting notes» I am fine either with text that clearly claims what the point in the meeting notes claims, or with an explicit comment in the discussion that the meeting note point as written is too far-reaching, maybe because it is too brief of a summary of an agreed position; I just want clarity on what interpretation of the RFC gets covered by the eventual decision)

Contributor Author:

So basically you do not want to implement something with the full extent of the point from the meeting notes?

ofborg already runs passthru.tests if the commit message is formatted correctly. I think I'm misunderstanding what you're trying to say. My opinion is that the existing process already enforces the opt-in testing behavior. The main issue right now is that the usage of passthru.tests is the exception, not the norm.

BTW what about a test removal procedure?

This is a non-goal for the RFC. Personally, I believe this should be left up to the maintainer(s) to decide. The additional tests should be providing value to the review process; if they don't, then they should probably be removed, but this can be decided by the maintainers.

Explicit maintainer approval?

Also a non-goal. Just concerned with the ability to test a PR, which may aid in the decision-making process to merge a PR.

Member:

Well, do we ask that tests are run or that tests pass fully at least on one platform (maybe not the first try etc.…)?

Contributor Author (@jonringer, May 2, 2022):

Well, do we ask that tests are run or that tests pass fully at least on one platform (maybe not the first try etc.…)?

I think the tests should be run; and if any fail, it should be an indicator that the package is unhealthy in the context of the PR.

Member:

I agree with the position, and I think it could be made less vague in the RFC. Maybe put that statement that «tests should be run and a failing test is an indicator that the package is unhealthy in the context of the PR.» as a top-level statement in the detailed design?


Usage for mkDerivation's `checkPhase`:
- Quick "cheap" tests, which run unit tests and maybe some additional scenarios.

Member:

Not all test suites are quick or cheap, but running them should be a priority over quickness. If we can make running them in a separate derivation easy, that's worth considering, but it seems that the human overhead would not be worth it in the general case.
A lot could factor into this, so I think we should make this less prescriptive.

Suggested change:
- Quick "cheap" tests, which run unit tests and maybe some additional scenarios.
+ Preferably quick "cheap" tests, which run unit tests and maybe some additional scenarios.

- Since this contributes to the "build time" of a package, there should be some
emphasis on ensuring this phase isn't bloated.

Usage for mkDerivation's `installCheckPhase` (both check phases are sketched after this list):
- A quick trivial example (e.g. `<command> --help`) to demonstrate that one (or more)
of the programs were linked correctly.

Member:

There is also testVersion introduced in NixOS/nixpkgs#121896.


- Assert behavior post installation (e.g. python's native extensions only get installed

Contributor:

The "only" in this sentence is confusing me.

Contributor Author:

Native extensions only get installed. However, most test suites will consume the code in the build directory. So tests will fail because the compiled extensions will not be present.

I'm not sure how to word this better.

Member:

Maybe start with the phrase about the build directory?

and are not present in a build directory)
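
A minimal sketch of the split described in these bullets, for a hypothetical package: `checkPhase` runs the upstream unit tests against the build tree, while `installCheckPhase` asserts that the installed output actually runs.

```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;

  # Quick unit tests, run against the build directory.
  doCheck = true;
  checkPhase = ''
    runHook preCheck
    make test
    runHook postCheck
  '';

  # Trivial post-install assertion: the installed binary runs and is
  # linked correctly.
  doInstallCheck = true;
  installCheckPhase = ''
    runHook preInstallCheck
    $out/bin/example --help > /dev/null
    runHook postInstallCheck
  '';
}
```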

Member:

Some programs are hard to test automatically, so how about creating a new meta attribute like testingInstructions?

Contributor (@kevincox, Apr 16, 2022):

I strongly oppose this idea.

IIUC the goal of this RFC is to make it easier for changes to upstream packages to be tested. The end goal is that we have automatic tooling that can test packages, notify maintainers of breakages, and eventually mark packages as broken if the maintainers are unable to fix them in time. Adding required manual testing puts unacceptable levels of burden on core package maintainers (that are depended on by hundreds or thousands of other packages).

I think a testingInstructions attribute may be an interesting and useful idea, but I think it would serve a different purpose than the formalized testing specified by this RFC. If you want to create a different RFC for an informational attribute, I would support it.

TL;DR I don't want to require people to manually test loads of packages, if you want your package not to break due to changes in dependencies you need automated tests.

Member:

I guess testingInstructions would fit here if it came with «replacing this with a test that looks robust will be accepted even if the test implementation is ugly», but I do not believe such a commitment would be accepted into the attribute semantics…

Member:

Perhaps we could record merge conditions instead?
Some packages don't need manual testing, so all that's needed for a merge is a review of the changelog, but this information has not been recorded yet.

Member:

I would like to have a CONTRIBUTING.md in a package's directory with documentation for package maintainers and contributors, like how to test, how to update...
In addition to that, a README.md with documentation for users.

# Examples and Interactions
[examples-and-interactions]: #examples-and-interactions

This section illustrates the detailed design. This section should clarify all
confusion the reader has from the previous sections. It is especially important
to counterbalance the desired terseness of the detailed design; if you feel
your detailed design is rudely short, consider making this section longer
instead.

# Drawbacks
[drawbacks]: #drawbacks

None? This is opt-in behavior for package maintainers.

# Alternatives
[alternatives]: #alternatives

Contributor:

One alternative is that we consider all dependent packages as tests. We can have dependent packages that are just tests, for example a testFoobar package to test foobar.

Then a PR author would be responsible for ensuring that all dependents build (aka pass) or are marked broken.

The obvious issue here is that for packages with lots of dependents it becomes infeasible for the average author to run a tool that builds everything and marks failures as broken. I think it is worth mentioning this alternative because this RFC demonstrates a clean way to define an appropriate sample size. Then it is expected that nixpkgs-provided build resources can be used for the full build + mark as broken.

Contributor Author:

Yes, and this is the compromise with passthru.tests listing downstream dependencies. The idea is to list packages which are "likely to break with breaking changes".

For example, some packages may make use of many of systemd's features; however, other packages only really use libudev, which is much more stable. We could probably forego the libudev packages, and just list the packages which are using systemd's more advanced features.


Continue to use current ad-hoc conventions.

# Unresolved questions
[unresolved]: #unresolved-questions

How far should testing go?
- What constitutes that "enough testing" was done to a package before a change was merged?

Should `<package>.passthru.tests` be flat?

Contributor:

I vote yes. If you need some sort of differentiation you can use a naming convention such as ownerSomething. For now these conventions can be per-package and if we see enough of these arise we can consider unifying them or adding more structure once we have clear use cases.

Member:

I assume this means: use a list and not a set? I guess two reasons not to flatten:

  1. If the tests are slow, I'm sure it'll occasionally help someone to be able to target a specific one?
  2. If there's enough speculative value in having some ~well-known attrs dedicated to a specific purpose (as in bullet 2 of https://github.com/NixOS/rfcs/pull/119/files#r805415330)?

Contributor Author:

I meant for it to be a single attr set. I wanted to avoid: passthru.tests.scenarioA.variantB.test

For packages which have extremes in resource usage when testing (e.g. pytorch), it may
be beneficial to have additional structure for the tests to denote expectations of resources
and ownership of testing for upstream packages.

Contributor:

I think this differentiation can be solved by nixosTests vs tests.


# Future work
[future]: #future-work

A problem with onboarding more tests into nixpkgs CI and processes is the increased
need for compute, storage, and RAM resources. Therefore, future work should
take into consideration how much testing is feasible for a given change.

Onboarding of CI tools to support testing paradigms:
- ofborg
- Testing of <package>.passthru.tests is already done.
- Testing of downstream dependencies and their tests when minimal (e.g. <10 rebuilds?)
- hydra
- Allow for derivations exposed to hydraJobs to also probe for `<drv>.passthru.tests`?

Member:

This could be implemented in release.nix files.
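
A hedged sketch of that suggestion (attribute names illustrative; `recurseIntoAttrs` is the existing nixpkgs helper that lets hydra descend into nested job sets):

```nix
{ nixpkgs ? <nixpkgs> }:

let
  pkgs = import nixpkgs { };
in
{
  postgresql = pkgs.postgresql;
  # Probe for passthru.tests and expose them as additional jobs.
  postgresql-tests = pkgs.recurseIntoAttrs (pkgs.postgresql.tests or { });
}
```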

Contributor Author:

I removed this line and added it to the unresolved section, as it's more of a hydra concern than anything.

Also, the ultimate goal of hydra is to provide channel updates for nixpkgs, for which these tests are more "nice to have" than anything else.

Also, I'm not sure if the official hydra should be building all of the tests, if the tests are only meant to influence whether a change should be merged. The official hydra could be populating the cache with a lot of never-to-be-used-again builds.