
Cargo does not rebuild if project got changed in the same second #5918

Open

RalfJung opened this issue Aug 21, 2018 · 8 comments · Fixed by #5919
Labels
A-caching (Area: caching of dependencies, repositories, and build artifacts)
A-rebuild-detection (Area: rebuild detection and fingerprinting)
S-triage (Status: This issue is waiting on initial triage.)

Comments

@RalfJung
Member

To reproduce, take a small crate (I am using https://github.com/rust-lang-nursery/rustc-perf/tree/master/collector/benchmarks/coercions), and run

```
cargo +nightly build # fill incremental cache
for i in $(seq 1 10); do touch src/main.rs && cargo +nightly build; done
```

These should give fairly consistent timings. However, they do not:

```
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.72s
    Finished dev [unoptimized + debuginfo] target(s) in 0.00s
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.70s
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.71s
    Finished dev [unoptimized + debuginfo] target(s) in 0.00s
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.70s
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.69s
    Finished dev [unoptimized + debuginfo] target(s) in 0.00s
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.70s
   Compiling issue-32278-big-array-of-strings v0.1.0 (file:///home/r/src/rust/rustc-perf/collector/benchmarks/coercions)
    Finished dev [unoptimized + debuginfo] target(s) in 0.70s
```

As you can see, it does not even recompile the project each time! It seems that Cargo is (a) using one-second mtime granularity, and (b) assuming that if the last change and the last build happened in the same second, the build came first. The latter assumption is wrong, and it leads to this bug.
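For illustration, here's a minimal sketch (not Cargo's actual code; the helper name is made up) of the kind of freshness check that produces this behavior:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Buggy freshness check: with one-second mtime granularity, an input
/// edited in the same second the output was written compares as "not
/// newer", so the rebuild is skipped.
fn is_fresh(input: &Path, output: &Path) -> io::Result<bool> {
    let input_mtime = fs::metadata(input)?.modified()?;
    let output_mtime = fs::metadata(output)?.modified()?;
    // Bug: `>=` treats equal timestamps as "output is up to date", even
    // though the edit may have happened *after* the output was written,
    // within the same second.
    Ok(output_mtime >= input_mtime)
}
```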

@dwijnand
Member

Related: #2426 "cargo build does not rebuild if a source file was modified during a build"

Personally I think cargo should only rebuild a project if the contents change, not if the timestamp changes, so I feel compelled to say I disagree that it's a bug that it doesn't "even" recompile the project each time. 😀

But more to your point, perhaps cargo could trigger a rebuild if the current time and the last build time match.

@RalfJung
Member Author

That's what we have incremental compilation for. Please don't make it even harder to test that.^^ ;)

I proposed a fix for this at #5919

@dwijnand
Member

I think I see what you mean: cargo should be dumb and use file timestamps, and let the incremental compilation within rustc shortcut the work when no recompilation is necessary? TIL!

bors added a commit that referenced this issue Aug 22, 2018
fix cargo not doing anything when the input and output mtimes are equal

That's a problem as the input may have changed in that same second but after the output got generated!

Fixes #5918
@alexcrichton alexcrichton reopened this Jan 4, 2019
@alexcrichton
Member

We've ended up deciding to revert the fix for this in #6484, so I'm gonna reopen this to track a fix again. From the discussion there, here are some points:

  • Primarily, we want to fix #2426 ("cargo build does not rebuild if a source file was modified during a build"): if a file is modified while rustc is running, we'll rebuild the project.
  • This means that we track when rustc started, and when build scripts started, moving the mtime of the relevant files to before either was invoked (see the sketch after this list). This moves the mtime closer to the input files' mtimes.
  • This in turn narrows the window between an edit and a rebuild, meaning that if we keep this fix in, spurious rebuilds start to happen more often because the filesystem timestamps don't have the right resolution.
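Roughly, that mtime bookkeeping looks like the following sketch (using the `filetime` crate; the helper name and paths are made up, this is not Cargo's actual code):

```rust
use std::path::Path;
use std::process::Command;
use std::time::SystemTime;

use filetime::{set_file_mtime, FileTime};

/// Sketch: record when rustc was started, run it, then move the
/// output's mtime back to that instant. An edit made *while* rustc was
/// running then compares as newer than the recorded build time.
fn build_and_backdate(output: &Path) -> std::io::Result<()> {
    let started = SystemTime::now();
    let status = Command::new("rustc").arg("src/main.rs").status()?;
    assert!(status.success());
    // Backdate the output so it doesn't mask edits made during the build.
    set_file_mtime(output, FileTime::from_system_time(started))?;
    Ok(())
}
```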

It was decided to back the fix for this out because this use case seemed a bit more niche than editing a file during a build (which typically takes much longer). A long-term fix for this will probably involve cryptographically hashing files. My thinking for this is:

  • Whenever Cargo actually executes a build, it should cryptographically hash all known inputs once the rustc invocation (or build script invocation) finishes.
  • When testing whether something needs to be rebuilt, a cheap mtime check is performed first (see the sketch below).
    • If the files were modified in the past relative to the last build, there's nothing to do.
    • If the files were modified in the future relative to the last build, a rebuild should happen.
    • If the files were modified at the exact same moment in time as the last build, then all cryptographic hashes are checked. If anything changed we do a rebuild; otherwise we update the mtime of the last build to the current time (hopefully far enough in the future to enable fast mtime checks from then on).

Historically there have also been bugs filed against Cargo requesting that cryptographic hashes always be used, but we have been wary of doing that for performance reasons. This would naturally open up the possibility of a config option to always use cryptographic checks and never use fast mtime checks.
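A sketch of that decision procedure (illustrative only; the names are made up):

```rust
use std::cmp::Ordering;
use std::time::SystemTime;

enum Freshness {
    Fresh,
    Dirty,
}

/// Cheap mtime comparison first; only on an exact tie fall back to the
/// cryptographic hashes recorded when the last build finished.
fn check(
    input_mtime: SystemTime,
    last_build_mtime: SystemTime,
    input_hash: &[u8],
    recorded_hash: &[u8],
) -> Freshness {
    match input_mtime.cmp(&last_build_mtime) {
        // Modified in the past relative to the last build: nothing to do.
        Ordering::Less => Freshness::Fresh,
        // Modified in the future relative to the last build: rebuild.
        Ordering::Greater => Freshness::Dirty,
        // Exact tie: the mtime alone can't tell us; compare contents.
        Ordering::Equal if input_hash == recorded_hash => {
            // Unchanged: here Cargo would also bump the recorded build
            // mtime forward so later checks stay on the fast path.
            Freshness::Fresh
        }
        Ordering::Equal => Freshness::Dirty,
    }
}
```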

@RalfJung
Member Author

RalfJung commented Jan 5, 2019

I don't understand why spurious rebuilds are traded for spuriously failing to notice changes -- IOW, spuriously doing too much work is traded for spuriously doing the wrong thing. :(

You talk about `touch src/lib.rs && cargo build` (in a loop) being niche. That's as close to a standard benchmark setup (for incremental rebuilds) as I can imagine. In comparison, editing a file during the build seems extremely niche to me -- in fact it seems like a bug. It's all a matter of perspective, of course.

EDIT: Hm, okay, #2426 does list some realistic scenarios where this might occur. And it's a correctness issue there as well, not just missing an optimization. :/

Can you at least provide guidance about how to run benchmarks for rebuilds after a NOP edit, now that the obvious way got broken again? I mean, I can hope I never have to debug rustc incremental perf regressions again, but somebody will, and then this landmine will be right there waiting for them.

it should cryptographically hash all known inputs once the rustc invocation (or build script invocation) finishes.

Won't that hash the wrong thing if the file changed during the rustc invocation?

@dwijnand
Member

dwijnand commented Jan 5, 2019

Yeah you have to hash first.

But with a long build or tests I've often changed some sources mid build and then had to touch files to get the edited files to actually recompile.

No disrespect intended, but benchmarking incremental builds seems like the niche case. One we'd want to find a way to support, but via some dedicated way to subvert the no-op'ing (`--rebuild` or `--assume-stale` or something).

@RalfJung
Member Author

RalfJung commented Jan 5, 2019

But with a long build or tests I've often changed some sources mid build and then had to touch files to get the edited files to actually recompile.

Fair enough. I have a reflex pretty deep in my muscle memory where I don't hit Ctrl-S while a build is ongoing; I guess that's why I never saw that.^^

@ehuss ehuss added the A-caching Area: caching of dependencies, repositories, and build artifacts label Feb 21, 2019
@ehuss ehuss added the A-rebuild-detection Area: rebuild detection and fingerprinting label Mar 21, 2019
bors added a commit that referenced this issue Feb 5, 2020
…e, r=alexcrichton

Fix rebuild_sub_package_then_while_package on HFS.

This test was flaky on HFS ([azure failure](https://dev.azure.com/rust-lang/cargo/_build/results?buildId=20144&view=logs&j=a5e52b91-c83f-5429-4a68-c246fc63a4f7&t=d4864165-4be3-5e34-b483-a6b05303aa68&l=2018)), resulting in this error:

```
   Compiling foo v0.0.1 (/Users/runner/runners/2.164.7/work/1/s/target/cit/t750/foo)
error[E0460]: found possibly newer version of crate `b` which `a` depends on
 --> src/lib.rs:1:1
  |
1 | extern crate a; extern crate b; pub fn toplevel() {}
  | ^^^^^^^^^^^^^^^
  |
  = note: perhaps that crate needs to be recompiled?
  = note: the following crate versions were found:
          crate `b`: /Users/runner/runners/2.164.7/work/1/s/target/cit/t750/foo/target/debug/deps/libb-98160c67a5811c37.rlib
          crate `b`: /Users/runner/runners/2.164.7/work/1/s/target/cit/t750/foo/target/debug/deps/libb-98160c67a5811c37.rmeta
          crate `a`: /Users/runner/runners/2.164.7/work/1/s/target/cit/t750/foo/target/debug/deps/liba-7d2b9ccd932a36e9.rmeta
```

There are two race-condition bugs here.

Race 1: The second cargo build command (`cargo build -pb`) would sometimes not build, because it thought `b` was fresh. This can happen when the first build finishes and the change to `b/src/lib.rs` happens within the same second (#5918). The test silently ignored this failure; it is not the cause of the CI failure, though.

Race 2: The first and second build commands work as expected. The third build command fails because it thinks both `a` and `b` are "fresh". However, `b` was just rebuilt, and `a` depends on it, so `a` should have been rebuilt. It thinks `a` is fresh because `a`'s mtime is equal to `b`'s when `b` finishes compiling within the same second that the first build finished.

The solution here is to make sure the second step happens well after the first.  The underlying problem is #5918.
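For reference, the "well after" ordering can be enforced in a test with a sketch like this (hypothetical helper, not cargo's actual test support code):

```rust
use std::fs;
use std::path::Path;
use std::thread::sleep;
use std::time::Duration;

/// Repeatedly rewrite `probe` until its mtime is strictly newer than
/// `reference`'s; on filesystems with one-second resolution (e.g. HFS)
/// this can take up to a full second.
fn wait_until_newer_mtime(reference: &Path, probe: &Path) {
    let built = fs::metadata(reference).unwrap().modified().unwrap();
    loop {
        fs::write(probe, b"probe").unwrap();
        let now = fs::metadata(probe).unwrap().modified().unwrap();
        if now > built {
            return;
        }
        sleep(Duration::from_millis(50));
    }
}
```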
@epage epage added the S-triage Status: This issue is waiting on initial triage. label Oct 23, 2023
@epage
Contributor

epage commented Oct 23, 2023

#6529 would fix this.
