Add support for cargo test #39

Merged

Conversation

@luca-della-vedova (Member) commented Jul 9, 2024

Closes #3
This PR improves support for cargo test by adding test result generation, as well as support for doc tests and cargo fmt tests.

I made the following choices:

  • Collate all results from each command (cargo test and cargo fmt) into a single test case, reporting whether the whole command succeeded or failed. This is different from the "grep approach", but imho that approach is not very reliable: there is no guarantee that the human-readable formatting will stay the same, and it might break without notice. The alternative of using JSON output requires nightly, so that is also out of the question (until it is stabilized, at which point we should revisit this implementation). A sketch of this collation approach follows this list.
  • Export all cargo test results into a single cargo_test.xml file, similar to what we do for Python packages (pytest.xml) but different from what we do for C++ packages. Happy to revisit this if there are strong feelings.
  • By default, run unit tests and fmt. I haven't implemented command line arguments to select which tests to run yet; that can be done either here or in a follow-up PR.
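
To make the first bullet concrete, here is a minimal Python sketch of the collation approach. The helper names and failure-message wording are illustrative assumptions, not the PR's actual code:

import subprocess
from xml.etree import ElementTree


def run_collated_case(name, cmd, cwd='.'):
    # Run one cargo command and collate its entire output into a single
    # test case: the case fails iff the command exits non-zero.
    result = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    case = ElementTree.Element('testcase', name=name)
    if result.returncode != 0:
        failure = ElementTree.SubElement(
            case, 'failure', message='{} failed'.format(' '.join(cmd)))
        failure.text = result.stdout + result.stderr
    return case, result.returncode != 0


def write_report(path, cases, num_failures):
    # Emit a JUnit-style cargo_test.xml with one suite holding all cases.
    suites = ElementTree.Element('testsuites')
    suite = ElementTree.SubElement(
        suites, 'testsuite', name='cargo_test', errors='0',
        failures=str(num_failures), skipped='0', tests=str(len(cases)))
    suite.extend(cases)
    ElementTree.ElementTree(suites).write(
        path, encoding='utf-8', xml_declaration=True)


unit, unit_failed = run_collated_case('unit', ['cargo', 'test'])
fmt, fmt_failed = run_collated_case('fmt', ['cargo', 'fmt', '--check'])
write_report('cargo_test.xml', [unit, fmt],
             int(unit_failed) + int(fmt_failed))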

What's next:

  • Add a parameter to enable / disable fmt and unit tests.
  • Integrate with colcon-ros-cargo. Add colcon test verb colcon-ros-cargo#19
  • Add time to the test result file. I didn't add it yet because cargo test already reports time but we are not parsing it, and reimplementing a stopwatch in Python feels unnecessary.
  • Test granularity (i.e. number of tests, what failed and what didn't). This depends on a larger refactor once JSON or XML output is stabilized.
  • Figure out how to add doctests (pure binary packages fail doctests with the same exit code as a failing test).

@codecov-commenter commented Jul 9, 2024

Codecov Report

Attention: Patch coverage is 77.55102% with 11 lines in your changes missing coverage. Please review.

Project coverage is 66.02%. Comparing base (9fdb14f) to head (b3ef3cd).

Files                            | Patch % | Lines
colcon_cargo/task/cargo/test.py  | 80.00%  | 4 Missing and 4 partials ⚠️
colcon_cargo/task/cargo/build.py | 66.66%  | 1 Missing and 2 partials ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main      #39       +/-   ##
===========================================
+ Coverage   46.98%   66.02%   +19.04%     
===========================================
  Files           6        6               
  Lines         166      209       +43     
  Branches       24       30        +6     
===========================================
+ Hits           78      138       +60     
+ Misses         74       50       -24     
- Partials       14       21        +7     

☔ View full report in Codecov by Sentry.

@esteve (Contributor) commented Jul 9, 2024

@luca-della-vedova would it make sense to switch to https://nexte.st/? It'd mean depending on the cargo-nextest crate, but it provides output in the JUnit format, which colcon already has parsers for.

@luca-della-vedova (Member Author) replied:

> @luca-della-vedova would it make sense to switch to https://nexte.st/? It'd mean depending on the cargo-nextest crate, but it provides output in the JUnit format, which colcon already has parsers for.

I'm mostly neutral about this. While it would make the integration a lot easier and more feature-complete (i.e. no need to write our own XML generation, and we'd get per-test output), it has two main drawbacks I can think of:

  • It would require a custom cargo install from users of colcon-cargo, which right now doesn't need any external crate.
  • It doesn't support doctests, for which we would still need to fall back to normal cargo test (as documented here), which brings us back to "how do we format the output of doctests if we don't have machine-readable output?".

Whichever approach we take, imho it would just be a stopgap until cargo test can output a machine-readable format, but that seems to be progressing very slowly.

@esteve (Contributor) commented Jul 9, 2024

> It doesn't support doctests, for which we would still need to fall back to normal cargo test (as documented nextest-rs/nextest#16), which brings us back to "how do we format the output of doctests if we don't have machine-readable output?".

I agree, not having support for doctests is unfortunate. If we can support plain cargo test, that'd be great, even if it's just minimal (pass/fail).

@luca-della-vedova luca-della-vedova marked this pull request as ready for review July 12, 2024 06:29
@luca-della-vedova (Member Author) commented Jul 12, 2024

This should now be ready for review.

I added unit tests to the sample package, so the CI should prove that it works, but if you want to try it yourself:

Install this package (and the matching colcon-ros-cargo if you want to try a ROS package):

pip install git+https://github.com/luca-della-vedova/colcon-cargo.git@luca/cargo_test_support --force
pip install git+https://github.com/luca-della-vedova/colcon-ros-cargo.git@luca/add_test --force

Then do a colcon build and colcon test for your package!

I tested it on ros2-rust and rmf_site, for example (current main):

$ colcon test --packages-select rclrs
Starting >>> rclrs   
--- stderr: rclrs                   
Warning: can't set `imports_granularity = Crate`, unstable features are only available in nightly channel.
Warning: can't set `imports_granularity = Crate`, unstable features are only available in nightly channel.
---
Finished <<< rclrs [4.32s]

Summary: 1 package finished [4.59s]
  1 package had stderr output: rclrs

$ cat build/rclrs/cargo_test.xml
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite name="cargo_test" errors="0" failures="0" skipped="0" tests="2">
        <testcase name="unit"/>
        <testcase name="fmt"/>
    </testsuite>
</testsuites>

On the other hand, if I mess up the formatting a bit and make a unit test fail, the output becomes:

$ colcon test --packages-select rclrs
Starting >>> rclrs   
--- stderr: rclrs                   
error: test failed, to rerun pass `--lib`
Warning: can't set `imports_granularity = Crate`, unstable features are only available in nightly channel.
Warning: can't set `imports_granularity = Crate`, unstable features are only available in nightly channel.
---
Finished <<< rclrs [3.94s]	[ with test failures ]

Summary: 1 package finished [4.21s]
  1 package had stderr output: rclrs
  1 package had test failures: rclrs

$ colcon test-result
build/rclrs/cargo_test.xml: 2 tests, 0 errors, 2 failures, 0 skipped

Summary: 13319 tests, 0 errors, 2 failures, 4130 skipped
lucadv@noble:~/ws_rclrs$ cat build/rclrs/cargo_test.xml
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite name="cargo_test" errors="0" failures="2" skipped="0" tests="2">
        <testcase name="unit">
            <failure message="cargo test failed">
running 52 tests
......................F.............................
failures:

---- time::tests::test_conversion stdout ----
thread 'time::tests::test_conversion' panicked at src/time.rs:115:9:
assertion `left == right` failed
  left: 1
 right: 2
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    time::tests::test_conversion

test result: FAILED. 51 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.35s

</failure>
        </testcase>
        <testcase name="fmt">
            <failure message="cargo fmt failed">Diff in /usr/local/google/home/lucadv/ws_rclrs/src/ros2_rust/rclrs/src/time.rs at line 110:
         let msg = time.to_ros_msg().unwrap();
         assert_eq!(msg.nanosec, 100);
 
-
-
         assert_eq!(msg.sec, 2);
     }
 }
</failure>
        </testcase>
    </testsuite>
</testsuites>

@esteve I can't seem to request a review from you, but I'm happy to have your input.

@maspe36 left a comment

I don't think it's required to merge this PR, but we really need some documentation in the README for this repo.

It'd be nice to have some kind of translation: this colcon command translates to this cargo command. Especially now that this isn't 1:1 (e.g. colcon test translates to cargo test ... & cargo fmt ...).

if CARGO_EXECUTABLE is None:
    # TODO(luca) log this as error in the test result file
    raise RuntimeError("Could not find 'cargo' executable")

nit: Any reason we shouldn't just log this and return early like we do with the RuntimeError directly above?
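
For clarity, the log-and-return alternative being suggested would look roughly like this; a sketch mirroring the pattern used elsewhere in the task, not the PR's actual code:

if CARGO_EXECUTABLE is None:
    logger.error("Could not find 'cargo' executable")
    return 1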

Member Author replied:

Fair point, I just left it unchanged to avoid touching too much. This is also how the build task behaves:

try:
    env = await get_command_environment(
        'build', args.build_base, self.context.dependencies)
except RuntimeError as e:
    logger.error(str(e))
    return 1
self.progress('prepare')
rc = self._prepare(env, additional_hooks)
if rc:
    return rc
# Clean up the build dir
build_dir = Path(args.build_base)
if args.clean_build:
    if build_dir.is_symlink():
        build_dir.unlink()
    elif build_dir.exists():
        shutil.rmtree(build_dir)
# Invoke build step
if CARGO_EXECUTABLE is None:
    raise RuntimeError("Could not find 'cargo' executable")
So imho if we change it here, we should also change it there for consistency, and then a PR about cargo test would end up touching the build task as well!
What the TODO documents is that I was pondering whether we should create a test result file for this case or just exit early with an error. I suspect we should still create a test result file, hence I noted that down as a TODO.

For reference making it a return code would produce this result:

[screenshot: concise error message when returning an error code]

While currently (RuntimeError) this is the output:

[screenshot: full backtrace from the raised RuntimeError]

My recommendation would be to decide which looks best and do a follow-up PR that fixes it for both the build and test tasks. It is a choice between verbosity (raise an exception, get a full backtrace) and simplicity (return an error code, get a simple and concise error message). But as a spoiler, I do agree with you that the concise error code looks a lot better; most people probably don't want a backtrace of colcon's task spawning.

Contributor replied:

I think raising the runtime error is fine. It's what colcon-cmake does if cmake isn't found. Personally I like the distinction between "an error in the test run" which is indicated by the test result output containing error listings and an "an error running the tests" which is indicated by the colcon exception.

Member Author replied:

Removed the TODO in b3ef3cd.

        nargs='*', metavar='*', type=str.lstrip,
        help='Pass arguments to Cargo projects. '
             'Arguments matching other options must be prefixed by a space,\n'
             'e.g. --cargo-args " --help"')

    async def test(self, *, additional_hooks=None):  # noqa: D102

Can we actually add a docstring here? Perhaps just mentioning at a high level the two cargo commands being run and why?

Member Author replied:

I'm not sure what to say about why we run tests / fmt 😅 but it's a very valid point; I added a docstring saying what we run (test and fmt) and what we don't run and why (docs): bf3637e
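
For reference, the docstring added in bf3637e would read something like the following; the wording here is assumed, not verbatim from the commit:

async def test(self, *, additional_hooks=None):
    """Run tests for a Cargo package.

    Runs `cargo test` for unit tests and `cargo fmt --check` for
    formatting, each collated into a single test case. Doctests are
    not run because pure binary packages fail them with the same
    exit code as a failing test.
    """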

logger.error(str(e))
return 1

# Disable color to avoid escape sequences in test result file
env['NO_COLOR'] = '1'

Does --color never in the cargo commands not work? Or is this colored output from somewhere else?

Member Author replied:

From my experiments, not really:

Stock invocation:

[screenshot: fully colored output]

Only color=never, note how it removes most of the colors:

[screenshot: most colors removed]

Only NO_COLOR=1, removes the other set of colors:

[screenshot: the other set of colors removed]

Both flags, all colors are gone:

[screenshot: all colors gone]


cargo test --color never -- --color never seems to work
[screenshot: uncolored output from cargo test --color never -- --color never]

@luca-della-vedova (Member Author) commented Jul 22, 2024

Actually I noticed that the output without the NO_COLOR=1 environment variable (or, as you mentioned, with cargo [cmd] -- --color never [args]) is OK. The color is present in the console output, but the stdout of the test that is logged to the test result file is not colored, so it's actually not harmful (and maybe even a bit helpful?), so I took it out in d2ea503.
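
For reference, the fully uncolored invocation from the screenshot above would be built like this; a hypothetical sketch (the merged change, d2ea503, instead dropped the extra color handling):

# Hypothetical: '--color never' is passed both to cargo and, after '--',
# to the libtest harness, matching the invocation shown above.
cmd = ['cargo', 'test', '--color', 'never', '--', '--color', 'never']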

testsuite.attrib['errors'] = str(0)
testsuite.attrib['failures'] = str(failures)
testsuite.attrib['skipped'] = str(0)
testsuite.attrib['tests'] = str(2)

Perhaps I'm misunderstanding, but it seems like we're hard-coding the report to always list 2 tests run?

Member Author replied:

Yes, that is the case: all the output from cargo test is collated into a single result, and the result of cargo fmt --check goes into another result. This is because it's not currently possible to separate the result of cargo test by test case (unless we migrate to nextest or try to parse the human-readable output). It's part of the "what's next" in the PR description (test granularity).
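
In other words, the attributes above always describe exactly the two collated cases, and only the failures count varies. A minimal illustration, where unit_failed and fmt_failed are assumed booleans:

# Assumed booleans for whether `cargo test` / `cargo fmt --check`
# exited non-zero; only the failures attribute varies between runs.
failures = int(unit_failed) + int(fmt_failed)
testsuite.attrib['failures'] = str(failures)
testsuite.attrib['tests'] = str(2)  # always the two collated cases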

@luca-della-vedova (Member Author) commented Jul 16, 2024

> I don't think it's required to merge this PR, but we really need some documentation in the README for this repo.
>
> It'd be nice to have some kind of translation: this colcon command translates to this cargo command. Especially now that this isn't 1:1 (e.g. colcon test translates to cargo test ... & cargo fmt ...).

Thanks for all the feedback! For this I added a root-level example of colcon test to the README.

Edit: For the translation, it's kinda noted under the "What's next" section in "Add parameter to enable / disable fmt, unit tests".

Ideally I would like some command line parameter where users can specify what kinds of tests they want, i.e. enable / disable format, doc tests, etc. The risk there, however, is the good old "bikeshedding", where I feel we might get stuck on details like what to call the CLI parameter, what it should look like, etc. For this reason I thought a first implementation of cargo test and cargo fmt would be good enough. We are lucky in a way that Rust seems to have a single formatter of choice, compared to C++ / Python, which have a lot of them, so I think it's safe to assume that most people will want to run a simple cargo fmt.
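
One possible shape for that future parameter; the flag names here are invented purely for illustration and are not part of this PR:

# Hypothetical follow-up flags (names are not from this PR):
parser.add_argument(
    '--cargo-skip-fmt', action='store_true',
    help='Do not run `cargo fmt --check` during colcon test')
parser.add_argument(
    '--cargo-skip-unit-tests', action='store_true',
    help='Do not run `cargo test` during colcon test')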

@maspe36 left a comment

I agree that for now it's better to have something than to bikeshed. The last thing I want is for this PR to stall out over something benign.

This is a good first pass at getting colcon test functionality for rclrs packages.

LGTM. Thanks for putting this together!


@mxgrey (Contributor) commented Aug 22, 2024

A pre-existing problem with colcon-cargo is that colcon build emits a build error when a library-only package exists in the workspace. I rediscovered this problem while testing this PR.

The good news is that the fix is fairly straightforward, and I've opened a PR to fix this that targets this branch: luca-della-vedova#2

        nargs='*', metavar='*', type=str.lstrip,
        help='Pass arguments to Cargo projects. '
             'Arguments matching other options must be prefixed by a space,\n'
             'e.g. --cargo-args " --help"')

    async def test(self, *, additional_hooks=None):  # noqa: D102
Contributor commented:

Suggested change:
-    async def test(self, *, additional_hooks=None):  # noqa: D102
+    async def test(self, *, additional_hooks=None):

I don't think we need the # noqa: D102 anymore since the docstring has been added.

@mxgrey (Contributor) left a comment

This looks quite good 👍

I recommend merging luca-della-vedova#2 before we merge this so that library-only packages work right away, but that could also be left for a follow-up PR.

@luca-della-vedova (Member Author) commented:

Since luca-della-vedova#2 is a bit tangential to this PR and hasn't been widely reviewed yet (it's a new feature to allow building pure library targets, which is not currently supported in colcon-cargo), I'll merge this as is and open a follow-up for it.

@luca-della-vedova luca-della-vedova merged commit 48cd670 into colcon:main Aug 23, 2024
17 checks passed

Successfully merging this pull request may close these issues.

Generate test result from cargo test
6 participants