
Execute UniFFI bindings via a simple Python test #87

Merged
1 commit merged into main on Feb 26, 2024

Conversation

mgeisler
Contributor

Issues:

Addresses #81

Description of changes:

This uses the UniFFI test helpers to compile the bindings and make the `libmls_rs_uniffi.so` file available to the Python script.

The test here is very simple: it executes the Python script, which must exit with a zero exit code. If it fails, the output is printed and you then have to debug the Python code by hand.

There is no facility to pass data back to the Rust unit test. It would be great to have this, but right now the communication is one-way only.

Call-outs:

I would have liked to use `$CARGO_TARGET_TMPDIR` to place the temporary files in a stable place under `target/`. However, this variable is only set for integration tests, not unit tests. So the script now ends up somewhere in your system temp directory.

We could decide to only use integration tests for these tests; let me know what you think! We could also write a bit more code to find a good place under `target/` without relying on the environment variable.
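The fallback discussed above could be sketched roughly like this (an assumption, not code from this PR): prefer `CARGO_TARGET_TMPDIR` when Cargo sets it, and fall back to the system temp directory for unit tests.

```rust
use std::env;
use std::path::PathBuf;

// Hypothetical sketch: Cargo sets CARGO_TARGET_TMPDIR only for
// integration tests (a stable location under target/), so fall back
// to the system temp directory when it is absent, e.g. in unit tests.
fn scratch_dir() -> PathBuf {
    env::var_os("CARGO_TARGET_TMPDIR")
        .map(PathBuf::from)
        .unwrap_or_else(env::temp_dir)
}

fn main() {
    let dir = scratch_dir();
    println!("scratch dir: {}", dir.display());
    assert!(dir.is_absolute());
}
```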

Testing:

Use cargo test!

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 and MIT licenses.

@mgeisler mgeisler requested a review from a team as a code owner February 25, 2024 22:39
@mgeisler
Contributor Author

@mulmarta, I think this more or less implements your idea of testing using Python. You're right: it's super easy to get going with just a few lines of code! This also very quickly shows that we need a lot more fields or getters on the types to actually make them usable from any of the high-level target languages 😄

@tomleavy, this should also help with testing #86.

@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 88.89%. Comparing base (13eba2c) to head (4c90b3e).

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #87   +/-   ##
=======================================
  Coverage   88.88%   88.89%           
=======================================
  Files         171      172    +1     
  Lines       31194    31217   +23     
=======================================
+ Hits        27727    27750   +23     
  Misses       3467     3467           


@mgeisler
Contributor Author

> Codecov Report
>
> All modified and coverable lines are covered by tests ✅

It would be amazing to have this integrate with the coverage reports. I think that would require making the Python code call into the coverage-instrumented `.so` file somehow.

@mulmarta merged commit d34b971 into awslabs:main on Feb 26, 2024
12 checks passed
@mgeisler mgeisler deleted the test-uniffi-with-python branch February 27, 2024 09:32