@CleanCut is it because it doesn't run with the new [ $default-branch ]?
This should run for pushes and PRs against the master branch
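For reference, the kind of trigger block being described looks roughly like this (a sketch; the branch name and file location are assumptions, not copied from the actual workflow):

```yaml
# Sketch of the trigger block being described; branch name assumed.
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
```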
@Blisto91 Woot! That did it. I must have grabbed that macro feature out of the template before it was supported. Or maybe we've moved off the legacy billing plan and gotten Actions enabled between now and then. Either way, we can now move forward. 🎉
Would this help with allowing it to fail on nightly? |
Extra note: if they aren't available on the latest release, the toolchain action will downgrade until it finds the latest version that includes the components.
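The thread doesn't name the action, but assuming something like actions-rs/toolchain, requesting components looks roughly like this (a sketch, not the actual workflow):

```yaml
# Sketch: request rustfmt/clippy through the toolchain action.
# If a given toolchain lacks the components, the action falls back to
# the newest one that ships them.
# matrix.toolchain is assumed to exist in the job's matrix.
- uses: actions-rs/toolchain@v1
  with:
    toolchain: ${{ matrix.toolchain }}
    components: rustfmt, clippy
```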
I came to the same conclusion as you. GitHub Actions doesn't yet seem to have any way to let something fail without it looking like a big, red failure. I don't know what else we can do other than not require the nightly checks to pass, cross our fingers, and hope they don't fail often enough to make it look like we can't merge even though we can.
Good catch! I had no idea. I modified the setup so 1) we don't use the action to install the components, and 2) we only install the components (and use them) on stable, in later steps. I've put a lot of effort over the last couple of days into getting sccache working on macOS/Linux using "local" storage and GitHub Actions' cache. If I can get that working, I'll talk to @fhaynes about getting access to an Amethyst S3 bucket to use for sccache for all OSes. I feel like we're getting a lot more traction with GitHub Actions vs. GitLab CI -- though admittedly the lower friction of direct integration with GitHub is a huge part of that.
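The rough shape of that "local storage + Actions cache" setup, as a hedged sketch (the paths, cache keys, and the assumption that the sccache binary is already installed are mine, not taken from the actual workflow):

```yaml
# Sketch: sccache in "local" disk mode, persisted with actions/cache.
env:
  RUSTC_WRAPPER: sccache                        # cargo invokes rustc through sccache
  SCCACHE_DIR: ${{ github.workspace }}/.sccache # local cache directory (assumed)
  SCCACHE_CACHE_SIZE: 2G

steps:
  - uses: actions/checkout@v2
  - uses: actions/cache@v2
    with:
      path: ${{ github.workspace }}/.sccache
      key: sccache-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
      restore-keys: |
        sccache-${{ runner.os }}-
  - run: cargo test --workspace
```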
Looking good! Your clean cut on this CI stuff is God's work! Regarding sccache, how will it work with cleaning or pruning it over time? Not that it would be a problem in the short or maybe even mid term. |
@Blisto91 Per the sccache readme
Since we're using an FS cache, it looks like it's doing LRU eviction of a fashion, probably by checking mtimes. Ejection of the least recently used artifacts would happen as you approach whatever the limit is set to. Since you're relying on GitHub's cache for your disk, you also have whatever the TTL is there as an extra knob you can turn. I'm not sure what GitHub's cache policies are like, but if they have a hard limit for artifacts you may be looking at a cold start from time to time whenever they dump the volume out from under you.
Ah, I see. GitHub's own cache storage has a hard limit of 5 GB, so even with the impressive Zstd compression you quickly hit a wall where it will start evicting caches until it is under the limit. It will also evict caches that haven't been used for a week. Edit: If Amethyst at some point decides it is too much hassle maintaining their own self-hosted runner, then I think acceptable performance could still be achieved by using the shared runners in combination with sccache and cloud storage.
@Blisto91 @onelson I haven't been able to get sccache to actually run as far as I can tell. That's my next task, but if you can see what I'm doing wrong please point it out! Theoretically, everything is set up perfectly...except sccache is not actually being run. The GitHub cache of local files side of things is sort of irrelevant, because I'm switching to using S3 for sccache storage as soon as I can verify that I can actually run sccache at all and it results in a speedup like we expect. Though it was very interesting to learn about the GitHub cache side of things. |
Wondering how you were checking this. There's no additional output when used the way you are using it now, so I guess you'd either need to check the cache dir to see if there were files in there, or reason about the overall build times. If the "wrapper" env var is set, then the server is started and stopped around each build. I would recommend bookending your commands with `sccache --show-stats`. It'll report overall cache size and hit/miss ratios.
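In workflow terms, that bookending could look something like this (a sketch; it assumes sccache is already on the PATH):

```yaml
# Sketch: reset sccache stats before the build, print them after.
- run: sccache --zero-stats
- run: cargo test --workspace
- run: sccache --show-stats
```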
I've been observing cache creation time. "1 second" to create and upload a tarfile of all dependencies? No cache created. Also cache restore time shows a 22-byte tarfile. Using |
So the problem seems to be that the variables in the os matrix don't get set as environment variables, and you can't set them directly per os matrix entry as far as I can see.
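Mapping them into a job-level env: from the matrix does seem to work, though -- roughly like this (a sketch; MATRIX_OS is a placeholder name, not from the actual workflow):

```yaml
# Sketch: expose matrix values as job-level environment variables.
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    env:
      MATRIX_OS: ${{ matrix.os }}  # placeholder variable name
    steps:
      - run: echo "running on $MATRIX_OS"
        shell: bash
```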
This will add them on top of the global env:, so nothing gets overwritten or anything. Note that the job env: doesn't have to be at the bottom.
@fhaynes For the first time ever, GitHub has released a public product roadmap. On that roadmap, there is an item in the Jul-Sep column for Actions: Pull requests from private forks run Actions workflows. I think that alleviates the last concern I had about using GitHub Actions going forward, and my disposition is now to focus on making the GitHub Actions workflow work well.
Sigh. I discovered that I had omitted the "--workspace" option, so I was only running tests against the top-level amethyst crate. Now that I've enabled all tests, they aren't passing. Naturally. Tomorrow's task. Sheesh. Seemed like the mdbook part was nearly there. |
The failures in the latest workflow are because the Linux virtual environment doesn't have any sound devices. Edit:
Was https://github.com/peaceiris/actions-mdbook considered for getting the book to build? It can also auto-publish to GitHub Pages.
@fosskers The PR currently uses it to fetch the latest mdbook. |
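Usage is roughly the following (a sketch based on that action's README; the version pin and the build step are assumptions, not the PR's exact steps):

```yaml
# Sketch: install mdbook via the action, then build the book.
- uses: peaceiris/actions-mdbook@v1
  with:
    mdbook-version: 'latest'
- run: mdbook build book
```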
It's weird that the timing test seems to be randomly failing. Haven't seen it before during my own testing. In the tests where we check that the stopwatch reports the correct numbers we use a `thread::sleep`:

```rust
watch.start();
thread::sleep(Duration::from_secs(DURATION));
watch.stop();
```

This seems to fluctuate somewhere in the range 1.003 - 1.130 ish seconds on mac. It does not seem to be related to any amethyst code, because using std timing methods shows the same weirdness:

```rust
let now = Instant::now();
thread::sleep(Duration::from_secs(DURATION));
let elapsed = now.elapsed().as_secs() as f64 + now.elapsed().subsec_nanos() as f64 * 1e-9;
```

Using spin-sleep yielded the same results. Some extra weirdness is that the fluctuation seems to be consistent within the same run. Investigation continues.
So the timing issue seems to be with `thread::sleep` specifically. I looked more into spin-sleep, and the default native accuracy it sets outside Windows is 125us. Which means it will do a thread::sleep until 125us before the target and spin the rest.

```rust
let sleeper = spin_sleep::SpinSleeper::new(1_000_000_000);
watch.start();
sleeper.sleep(Duration::from_secs(DURATION));
watch.stop();
```

Using the spinning part directly also gives the same picture-perfect timing:

```rust
let duration = Duration::new(DURATION, 0);
let now = Instant::now();
watch.start();
while now.elapsed() < duration {
    thread::yield_now();
}
watch.stop();
```

Trusting the thread::sleep until 150ms before the target by setting the native accuracy seemed to also yield the same results. I'm still not sure why the normal sleep function is behaving this way, and I don't own a mac myself to test on.
@Blisto91 I think the mac runners are more oversubscribed and prone to have spiky performance. I see three viable options: 1) stop asserting on the timing so the test can't fail, 2) increase the allowed uncertainty in the test (at least on mac), or 3) switch the test to spin-based sleeping.

I hesitate to do the first, because then if we get some performance regression, we won't notice it. I hesitate to do the third via spinning, because that can really affect CPU usage. My vote is the second option. What are your thoughts?
I think upping the uncertainty on just Mac is fine for now. I would also not be happy about a general increase. |
❓❓❓ The latest run failed to compile an mdbook test from a book file that doesn't exist in this branch. Maybe GitHub Actions messed up and checked out the wrong branch? Or didn't clean up from an earlier run? I can't think of anything to do other than make a new change and see if it happens again.
We're about ready to merge! I'm excited to get CI working for folks in general again. I split out all the to-do items I think can be done later and copied them into this issue: #2407
Is it weird that I'm just as excited about the ci work as I am with the legion port? 😁 |
@Blisto91 I'm excited too! All we've got blocking us that I know of is these |
Thinking out loud here, so if I misunderstand something please correct me. The CI with the Checkout action doesn't run on your PR as it exists in your branch. It runs on a new virtual version where your changes are already merged into the target branch (master in this case). So the version we test here is actually the current master branch, including new book pages and all, with your branch changes merged into it. The reason some projects use Bors is that it puts all PRs waiting for merge in a queue and merges them one by one. When one in the queue has been merged, the next one is then tested against the new target version before merging. Now why this specifically fails with the new tiles examples I don't know.
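(For reference, a sketch of the two checkout modes -- an illustration of the behaviour described above, not something this workflow actually configures:)

```yaml
# Default behaviour: checks out the virtual merge of the PR into its target branch.
- uses: actions/checkout@v2

# Sketch: checking out the PR branch head as-is instead (not used here).
- uses: actions/checkout@v2
  with:
    ref: ${{ github.event.pull_request.head.sha }}
```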
Canceled. |
bors r+ |
2382: GitHub Actions CI, Take 3 r=CleanCut a=CleanCut

## To Do:

- Matrix: `[mac, linux, windows] * [stable, beta, nightly]`
- [x] `cargo fmt --check` -- only on stable
- [x] `cargo clippy` -- only on stable
- [x] `cargo test`
  - [x] code compiles
  - [x] tests & API doctests run
- [x] `mdbook build book`
- [x] `mdbook test -L ./target/debug/deps book` -- only on stable

## To Do Later separate from this PR:

- [ ] Use [`mdbook-linkcheck`](https://github.com/Michael-F-Bryan/mdbook-linkcheck) to detect broken book links
- [ ] Use [`cargo-deadlinks`](https://github.com/deadlinks/cargo-deadlinks) to detect broken links
- [ ] Speed up checks. Goal: 10 minutes or less
  - [ ] use `sccache` with S3 for `cargo test`
  - [ ] use self-hosted runners
  - [ ] Experiment with various compiler settings for reducing compile times and/or sizes
- [ ] Automatically push rendered docs to website
  - [ ] master merges
  - [ ] releases
- [ ] Automatically push rendered book to website
  - [ ] master merges
  - [ ] releases
- [ ] Find way to post comments to PRs with guidance for figuring out how to fix some CI failures. "Clippy failed, please run `cargo clippy` and fix all errors..." -- _I don't know if this is feasible_
- [ ] Find some way to indicate failures on `nightly` without making the build look like a red X. (You _can_ let people merge by not requiring a specific check to pass, but the build will be a big red failure even if the failing check wasn't required. That's not a good experience. ☹️)

Co-authored-by: Nathan Stocks <cleancut@github.com>
Adding I'm going to run through the following status values again in this order to see if any of them work: |
Canceled. |
bors r+ |
Canceled. |
bors r+ |
Canceled. |
bors r+ |
Canceled. |
bors r+ |
I don't know anything about bors and the documentation I've found isn't that great. |
Timed out. |
Ahh, nice find. Okay, I'm going to revert to what I think ought to work for bors and manually merge this, and then I'll open a separate PR if bors still needs to be fixed. Hopefully that won't be necessary 🤞, but I'll make sure not to touch the workflow file in that PR if it is. |
...which _should_ work. We can't use bors to merge this particular branch because we've edited a workflow. See https://forum.bors.tech/t/resource-not-accessible-by-integration/408/7