
cargo +nightly fmt #1017

Merged: 17 commits merged into tracel-ai:main from fmt_100 on Dec 12, 2023
Conversation

AlexErrant
Contributor

Pull Request Template

Checklist

  • Confirmed that run-checks all script has been executed.
    nnnnope still broken on my box
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Instigating comment.

Changes

Formatting silently gives up on lines longer than the default max_width of 100. I enable two nightly rustfmt features, error_on_line_overflow and format_strings. Despite the features being nightly-only, we can run cargo +nightly fmt while keeping the project itself on the stable toolchain.
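For reference, a minimal rustfmt.toml for this setup might look like the sketch below. The repository's actual config isn't reproduced in this thread, so treat the exact contents as an assumption; error_on_line_overflow and format_strings are nightly-only rustfmt options, and max_width = 100 is rustfmt's default.

    # Sketch of a rustfmt.toml for this setup (not the exact file from the PR).
    max_width = 100                  # rustfmt's default, shown for clarity

    # Nightly-only options: applied by `cargo +nightly fmt`; a plain `cargo fmt`
    # on the stable toolchain does not apply them.
    error_on_line_overflow = true
    format_strings = true

With a file like that at the workspace root, cargo +nightly fmt enforces the stricter rules while day-to-day formatting on stable keeps working.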

  • format_strings is useful for breaking up long lines as mentioned here.
  • error_on_line_overflow is useful for making over-long lines fail loudly. You may still have to format some items manually and annotate them with #[rustfmt::skip]; otherwise you get errors like:
error[internal]: line formatted, but exceeded maximum width (maximum: 100 (see `max_width` option), found: 105)
    --> /home/alex/burn/burn-autodiff/src/ops/tensor.rs:1088:1088:101
     |
1088 |         impl<B: Backend, const D: usize> Backward<B::FullPrecisionBackend, D, 1> for ToFullPrecision<B> {
     |                                                                                                     ^^^^^
     |

Grep this PR for #[rustfmt::skip] for example usage.
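For readers not looking at the diff, here is a minimal hypothetical illustration of the workaround (the function and parameter names are invented, not code from this PR):

    // Hypothetical example: rustfmt copies the annotated item through verbatim,
    // so its over-long signature no longer trips `error_on_line_overflow`.
    #[rustfmt::skip]
    fn a_function_with_a_deliberately_long_signature(first_very_long_parameter_name: usize, second_very_long_parameter_name: usize) -> usize {
        first_very_long_parameter_name + second_very_long_parameter_name
    }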

Interestingly, setting max_width = 200, formatting, then setting max_width = 100 and formatting again seemed to shake loose some additional changes.

Testing

I made some changes to runchecks.rs; let's see what CI thinks.

@Luni-4
Collaborator

Luni-4 commented Nov 29, 2023

Personally, I would not add nightly formatting to CI because, over time, it can cause more breakage than anything else. A solution could be to pin a specific nightly version, but a better solution would be to fix the format problems using the stable version. I'm not the one who should decide the fate of this PR, though; just giving advice.

@AlexErrant
Contributor Author

a better solution would be to fix format problems using the stable version

Not possible, since error_on_line_overflow and format_strings are both nightly-only. Good point on pinning nightly, though, so I pinned it to the same version as here. Should I use a newer version? IDK.

Note that the formatting changes made here are more or less the same as what you'd get without error_on_line_overflow or format_strings. Nightly just makes formatting holler at you when those rules are violated.

@louisfd
Member

louisfd commented Nov 30, 2023

@nathanielsimard would depending on nightly affect the release to crates.io?

@nathanielsimard
Member

@AlexErrant Since most of the development is done on stable, I feel it's better to stick with stable rustfmt instead of nightly. One of the main reasons to use a formatter is to eliminate the conflicts generated when different people use different formatters. However, I'm not against accepting a pull request that reformats the code using nightly fmt and then re-reformats it using the stable version. This could improve the formatting without creating conflicts with the stable formatter, simply improving the code style where stable falls short.

@AlexErrant
Contributor Author

I'm not against accepting a pull request that reformats the code using fmt nightly and then re-reformats it using the stable version

Done!
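Concretely, the nightly-then-stable pass described above boils down to something like the following; the exact flags used for this PR aren't shown in the thread, so treat them as an assumption.

    # Pass 1: format with nightly rustfmt so the nightly-only options are applied.
    cargo +nightly fmt --all

    # Pass 2: re-run the stable formatter so the committed result never conflicts
    # with a plain `cargo fmt` on the stable toolchain.
    cargo fmt --all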

@@ -1085,7 +1085,9 @@ impl<B: Backend> TensorOps<Self> for Autodiff<B> {
 phantom: PhantomData<B>,
 }

 impl<B: Backend, const D: usize> Backward<B::FullPrecisionBackend, D, 1> for ToFullPrecision<B> {
+#[rustfmt::skip]
Member

I would avoid skipping fmt

@@ -1113,7 +1115,9 @@ impl<B: Backend> TensorOps<Self> for Autodiff<B> {
 phantom: PhantomData<B>,
 }

 impl<B: Backend, const D: usize> Backward<B, D, 1> for FromFullPrecision<B::FullPrecisionBackend> {
+#[rustfmt::skip]
Member

I would avoid skipping fmt

-if std::env::var("DISABLE_WGPU").is_err() {
-    cargo_test(["-p", "burn-core", "--features", "test-wgpu"].into());
-}
+cargo_test(["-p", "burn-core", "--features", "test-wgpu"].into());
Member

I don't think we should remove that if

-} else {
-    cargo_build(["--workspace", "--exclude=xtask"].into());
-}
+cargo_build(["--workspace", "--exclude=xtask"].into());
Member

Same


codecov bot commented Dec 9, 2023

Codecov Report

Attention: 169 lines in your changes are missing coverage. Please review.

Comparison is base (3066196) 85.57% compared to head (3072806) 85.54%.

Files Patch % Lines
burn-autodiff/src/ops/module.rs 40.57% 82 Missing ⚠️
burn-tensor/src/tensor/api/check.rs 73.63% 29 Missing ⚠️
burn-train/src/renderer/tui/renderer.rs 0.00% 26 Missing ⚠️
burn-ndarray/src/ops/base.rs 0.00% 9 Missing ⚠️
burn-core/src/nn/norm/batch.rs 0.00% 7 Missing ⚠️
burn-core/src/nn/conv/checks.rs 0.00% 4 Missing ⚠️
burn-wgpu/src/graphics.rs 0.00% 4 Missing ⚠️
burn-core/src/nn/loss/binary_cross_entropy.rs 33.33% 2 Missing ⚠️
burn-core/src/nn/loss/cross_entropy.rs 33.33% 2 Missing ⚠️
burn-train/src/learner/early_stopping.rs 33.33% 2 Missing ⚠️
... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1017      +/-   ##
==========================================
- Coverage   85.57%   85.54%   -0.03%     
==========================================
  Files         508      508              
  Lines       53912    53993      +81     
==========================================
+ Hits        46133    46191      +58     
- Misses       7779     7802      +23     


@AlexErrant
Contributor Author

AlexErrant commented Dec 9, 2023

Whoops, I went a little too delete-happy in runchecks.rs. Rejected all those changes!

@nathanielsimard nathanielsimard merged commit 610d640 into tracel-ai:main Dec 12, 2023
10 of 11 checks passed
@AlexErrant AlexErrant deleted the fmt_100 branch February 5, 2024 23:53