
Add first graph and target search test #115

Merged: 3 commits, Aug 22, 2024

Conversation

AdamKorcz
Contributor

Adds the first graph and target search test, as mentioned in #101.

This also adds the get_targetinfo client API for both clients. It might need more work for tests that actually use the output; the test in this PR does not use it.

Interestingly, go-tuf fails nondeterministically in this test: different test cases fail on each run. I have tried removing all the test cases except "max-number-of-delegations", and it still fails sometimes. @rdimitrov, could you check this? You can reproduce it by cloning my branch and running make test-go-tuf. I have left some print statements in for debugging assistance.

@AdamKorcz AdamKorcz requested review from rdimitrov and jku August 6, 2024 21:25
@AdamKorcz
Contributor Author

I am converting this to a draft. I am not seeing the same failures locally and will need to debug this. Please hold off on the review.

@AdamKorcz AdamKorcz marked this pull request as draft August 6, 2024 21:28
@segiddins
Contributor

I think it might be cleaner instead to have the get_targetinfo entry point write the target info to a provided path, rather than relying on STDOUT having nothing else printed?
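A minimal sketch of this suggestion: the entry point writes its result to a caller-provided path instead of stdout, so unrelated prints cannot corrupt the output. All names here are illustrative, not the real conformance-client API.

```python
import json
import os
import tempfile


def write_targetinfo(info: dict, out_path: str) -> None:
    """Write target info as JSON to a caller-provided path (illustrative)."""
    with open(out_path, "w") as f:
        json.dump(info, f)
    # Debug output on stdout no longer corrupts the machine-readable result.
    print("some unrelated debug output")


out_path = os.path.join(tempfile.mkdtemp(), "targetinfo.json")
write_targetinfo({"path": "file1.txt", "length": 31}, out_path)

with open(out_path) as f:
    info = json.load(f)
assert info["length"] == 31
```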

@AdamKorcz
Contributor Author

@jku I am unable to reproduce the same outcome as the CI for python-tuf locally. Could you check if you can reproduce it?

@jku
Member

jku commented Aug 7, 2024

I'll have a look at the failures, but as a first comment:
Let's not expand the CLI if at all possible -- I'm not sure we can assume that all libraries even expose an API like this. Can we not use the download command to achieve similar testable situations? I especially don't want any kind of structured-output requirement for the CLI: it is just a pain for client developers.


@jku jku left a comment


The issue, at least with python-tuf, can be reproduced with --repository-dump-dir /tmp/dumps. There are two issues that I can see:

  1. The code added to clients/python-tuf assumes get_targetinfo() always returns TargetFile: this is untrue (maybe we should enable linting in clients; mypy would have complained about this: lint: Add the python client to linted files #118)

  2. The test also fails because the dumping code invoked by --repository-dump-dir uses fetch_metadata(), which ends up breaking FetchTracker.

If you use the already existing metadata_statistics and artifact_statistics instead of FetchTracker, this should not be an issue (those are bumped in fetch(), which is only called by the http handler, not other code).
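On point 1, a self-contained sketch of the missing None handling. The TargetFile stub and lookup helper below are illustrative; what is real is that python-tuf's Updater.get_targetinfo() returns Optional[TargetFile], where None means the target is not described by the trusted metadata.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TargetFile:
    path: str
    length: int


# Illustrative stand-in for Updater.get_targetinfo(): like the real
# python-tuf API, it returns None when the target is not in metadata.
def get_targetinfo(path: str, known: dict) -> Optional[TargetFile]:
    return known.get(path)


def cli_lookup(path: str, known: dict) -> int:
    info = get_targetinfo(path, known)
    if info is None:  # the case the reviewed client code did not handle
        return 1      # nonzero exit: target not found
    print(f"{info.path} {info.length}")
    return 0


known = {"file1.txt": TargetFile("file1.txt", 31)}
assert cli_lookup("file1.txt", known) == 0
assert cli_lookup("missing.txt", known) == 1
```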

tuf_conformance/repository_simulator.py (outdated review thread, resolved)
tuf_conformance/test_updater_delegation_graphs.py (2 outdated review threads, resolved)
@AdamKorcz AdamKorcz force-pushed the graphs-test branch 2 times, most recently from 1622969 to 7630ce6 Compare August 8, 2024 11:39
@jku
Member

jku commented Aug 8, 2024

Is this the flaky failure for go-tuf:

FAILED tuf_conformance/test_updater_delegation_graphs.py::test_graph_traversal[two-level-delegations] - AssertionError: assert [('B', 1), ('C', 1), ('A', 1)] == [('A', 1), ('B', 1), ('C', 1)]

Very interesting if it is really flaky... It could of course be a bug in our request tracking, but that code is pretty simple and should not lead to ordering issues.

@AdamKorcz
Contributor Author

AdamKorcz commented Aug 8, 2024

> Is this the flaky failure for go-tuf:
>
> FAILED tuf_conformance/test_updater_delegation_graphs.py::test_graph_traversal[two-level-delegations] - AssertionError: assert [('B', 1), ('C', 1), ('A', 1)] == [('A', 1), ('B', 1), ('C', 1)]
>
> Very interesting if it is really flaky... It could of course be a bug in our request tracking, but that code is pretty simple and should not lead to ordering issues.

In this run, another test (three-level-terminating-ignores-all-but-roles-descendants) fails. I believe this is the same flakiness I described in the initial PR message.
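For what it's worth, the failing assertion is order-sensitive: it compares (rolename, version) request tuples in the order the client fetched them. If the delegation-traversal order is not something the test intends to pin down, a multiset comparison avoids this class of flakiness. A sketch using the tuples from the failure above, not necessarily what this test should do:

```python
from collections import Counter

# The (rolename, version) tuples from the failing assertion.
observed = [("B", 1), ("C", 1), ("A", 1)]
expected = [("A", 1), ("B", 1), ("C", 1)]

# Order-sensitive comparison: differs, so the test fails.
assert observed != expected

# Order-insensitive (multiset) comparison: same requests, same counts.
assert Counter(observed) == Counter(expected)
```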


@jku jku left a comment


I know you said not to review yet... I'm doing it anyway since this is blocking follow-up work like #140.

All comments are minor -- the only issue preventing merging is the go-tuf flakiness. I don't have great ideas for dealing with that. We would have to either

  • support skipping in the action API like we do expected failures

  • or add a hard coded special case in the test until go-tuf is fixed

      if "clients/go-tuf/go-tuf" in client._cmd:
          pytest.skip("skip for flakiness")


Maybe the hard-coded skip (plus filing an issue to remind us to remove it) makes sense, so we can move on and merge this?

tuf_conformance/test_updater_delegation_graphs.py (4 outdated review threads, resolved)
@@ -62,7 +62,7 @@
 SPEC_VER = ".".join(SPECIFICATION_VERSION)

 # Generate some signers once (to avoid all tests generating them)
-NUM_SIGNERS = 8
+NUM_SIGNERS = 9
@AdamKorcz
Contributor Author

For info: The cyclic-graph test fails because there are not enough signers.

@AdamKorcz AdamKorcz force-pushed the graphs-test branch 2 times, most recently from dacf9f9 to 7d25dcf Compare August 21, 2024 18:13
Signed-off-by: Adam Korczynski <adam@adalogics.com>
Signed-off-by: Adam Korczynski <adam@adalogics.com>
@AdamKorcz AdamKorcz marked this pull request as ready for review August 21, 2024 18:31
@AdamKorcz
Contributor Author

if "clients/go-tuf/go-tuf" in client._cmd:
    pytest.skip("skip for flakiness")

Added in 990c5e9

@AdamKorcz AdamKorcz requested a review from jku August 21, 2024 18:53

@jku jku left a comment


Thanks, looks good.

  • The 150 lines of setup data are not amazing, but I don't see meaningful improvements (and some of it, like artifact support, is for future tests)
  • Left one comment on return values

tuf_conformance/test_updater_delegation_graphs.py (outdated review thread, resolved)
Signed-off-by: Adam Korczynski <adam@adalogics.com>