Add first graph and target search test #115
Conversation
I am putting this in draft: I am not getting the same failures locally and will need to debug this. Please wait with the review.
I think it might be cleaner instead to have the
@jku I am unable to reproduce the same outcome as the CI for python-tuf locally. Could you check if you can reproduce it?
I'll have a look at the failures, but as a first comment:
The issue at least with python-tuf can be reproduced with --repository-dump-dir /tmp/dumps. There are two issues that I can see:
- The code added to clients/python-tuf assumes get_targetinfo() always returns TargetFile: this is untrue (maybe we should enable linting in clients; mypy would have complained about this: lint: Add the python client to linted files #118)
- The test also fails because the dumping code invoked by --repository-dump-dir uses fetch_metadata(), which ends up breaking FetchTracker. If you use the already existing metadata_statistics and artifact_statistics instead of FetchTracker, this should not be an issue (because those are bumped in fetch(), which is only called by the HTTP handler, not other code)
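The counting scheme described above (statistics bumped only inside fetch(), the single entry point the HTTP handler uses, so out-of-band code paths like --repository-dump-dir cannot perturb the counts) can be sketched roughly as follows. The class and method names here are illustrative stand-ins, not the repository's actual API:

```python
from collections import Counter


class SimulatorStats:
    """Illustrative stand-in for the test repository's statistics.

    metadata_statistics counts (rolename, version) requests. It is
    incremented only in fetch(), which only the HTTP handler calls,
    so code that reads metadata directly (e.g. a dump) leaves the
    counts untouched.
    """

    def __init__(self) -> None:
        self.metadata_statistics: Counter = Counter()

    def fetch(self, role: str, version: int) -> bytes:
        # Only the HTTP handler reaches this path: count the request.
        self.metadata_statistics[(role, version)] += 1
        return b"{}"  # placeholder metadata payload

    def dump(self, role: str, version: int) -> bytes:
        # Dumping bypasses fetch(), so statistics are unaffected.
        return b"{}"
```

With this split, a test can assert on metadata_statistics regardless of whether dumping was enabled for the run.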
Is this the flaky failure for go-tuf:
Very interesting if it is really flaky... It could of course be a bug in our request tracking, but I think that is pretty simple and should not lead to ordering issues.
In this run another test - three-level-terminating-ignores-all-but-roles-descendants - fails. I believe this is similar to the flakiness I described in the initial PR message.
I know you said don't review yet... I'm doing it anyway since this is kind of blocking continuation work like #140.
All comments are minor -- the only issue preventing merging is the go-tuf flakiness. I don't have great ideas for dealing with that. We would have to either:
- support skipping in the action API, like we do for expected failures, or
- add a hard-coded special case in the test until go-tuf is fixed:
  if "clients/go-tuf/go-tuf" in client._cmd: pytest.skip("skip for flakiness")
Maybe the hard-coded skip (and filing an issue to remind us to remove it) makes sense so we can move on and merge this?
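A minimal version of that hard-coded special case might look like this. The "clients/go-tuf/go-tuf" substring and the skip message come from the suggestion above; the helper name and the assumption that client._cmd is a string are hypothetical:

```python
def is_flaky_go_tuf(client_cmd: str) -> bool:
    # Hypothetical helper: detect the go-tuf client under test so the
    # test body can call pytest.skip() until the upstream flakiness
    # is fixed (and a tracking issue reminds us to remove this).
    return "clients/go-tuf/go-tuf" in client_cmd


# In the test body (sketch):
#     if is_flaky_go_tuf(client._cmd):
#         pytest.skip("skip for flakiness")
```

Keeping the substring check in one helper makes the eventual removal a one-line change.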
@@ -62,7 +62,7 @@
 SPEC_VER = ".".join(SPECIFICATION_VERSION)

 # Generate some signers once (to avoid all tests generating them)
-NUM_SIGNERS = 8
+NUM_SIGNERS = 9
For info: the cyclic-graph test fails because there are not enough signers.
Signed-off-by: Adam Korczynski <adam@adalogics.com>
Added in 990c5e9
Thanks, looks good.
- The 150 lines of setup data are not amazing, but I don't see meaningful improvements (and some of it, like artifact support, is for future tests)
- Left one comment on return values
Signed-off-by: Adam Korczynski <adam@adalogics.com>
Adds the first graph and target search test, as mentioned in #101.
This adds the get_targetinfo client API for both clients too. This might need more work for tests that actually use the output; the test in this PR does not use it.
Interestingly, go-tuf fails nondeterministically on this test: different test cases fail each time you run it. I have tried removing all the test cases except for the "max-number-of-delegations" one, and it still fails sometimes. @rdimitrov, could you check this? You can test it by cloning my branch and running make test-go-tuf. I have left some print statements for debugging assistance.
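Since python-tuf's Updater.get_targetinfo() returns Optional[TargetFile] (None when no target with that path exists), any client code built on it has to handle both outcomes, as the review above points out. A minimal sketch of that check, using a stand-in class so the example stays self-contained (the real TargetFile lives in tuf.api.metadata):

```python
from typing import Optional


class TargetFile:
    # Stand-in for tuf.api.metadata.TargetFile; only the path
    # attribute matters for this sketch.
    def __init__(self, path: str):
        self.path = path


def describe_target(targetinfo: Optional[TargetFile]) -> str:
    # get_targetinfo() may return None for an unknown target,
    # so never assume a TargetFile is always returned.
    if targetinfo is None:
        return "target not found"
    return f"found {targetinfo.path}"
```

Running mypy over the client would flag any call site that dereferences the result without this None check, which is why enabling linting for clients (#118) would have caught the bug.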