TST: Reorganizing tests and moving to pytest-style #103
Conversation
OK, I decided to touch the …
Codecov Report

Patch coverage:

Additional details and impacted files:

@@            Coverage Diff             @@
##           development     #103       +/-   ##
================================================
- Coverage      86.49%    68.14%    -18.35%
================================================
  Files             21        23         +2
  Lines            955       945        -10
================================================
- Hits             826       644       -182
- Misses           129       301       +172

☔ View full report in Codecov by Sentry.
class TestQueries(unittest.TestCase):
    def setUp(self):
        # Opensearch client Params
        os.environ[
            "OS_DOMAIN"
-       ] = "search-sds-metadata-uum2vnbdbqbnh7qnbde6t74xim.us-west-2.es.amazonaws.com"
+       ] = "search-sds-metadata-uum2vnbdbqbnh7qnbde6t74xim.us-east-1.es.amazonaws.com"
You could consider doing this. You would still have to hard-code the DomainName, though, so maybe not worth the change:

import os

import boto3

session = boto3.Session()
# get the opensearch client
opensearch_client = session.client('opensearch')
# describe the OpenSearch domain and get its endpoint
domain_description = opensearch_client.describe_domain(DomainName='sdsmetadatadomain-tlcs-dev')
os.environ["OS_DOMAIN"] = domain_description['DomainStatus']['Endpoint']
That is a really good idea @laspsandoval! Right now I don't think this test even passes, because it requires the OpenSearch stack to already be set up and running, so we are testing against deployed resources. @sdhoyt is that right, or is there a different way to test this?

Also, I just found out about vcrpy, which may be an interesting thing to investigate for these remote resources, so we can test against them without requiring them to be available 24/7:
https://vcrpy.readthedocs.io/en/latest/

There is also a pytest plugin that we could make use of:
https://pytest-vcr.readthedocs.io/en/latest/
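As a rough sketch of how that could look (the test name and URL below are made up; pytest-vcr's documented @pytest.mark.vcr marker does the recording):

import pytest
import requests


@pytest.mark.vcr()
def test_query_metadata_endpoint():
    # First run: performs the real HTTP call and records it to a YAML
    # cassette stored next to the test. Later runs: replays the cassette,
    # so the remote service does not need to be up.
    response = requests.get("https://example.com/api/metadata")
    assert response.status_code == 200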
For another time though...
Looks good to me. I'll submit my PR with the change for test_data_manager_stack.py
assert result["Buckets"][0]["Name"] == BUCKET_NAME

# upload a file
local_filepath = f"tests/test-data/{TEST_FILE}"
Make this change and the test will pass:

local_filepath = os.path.join(os.path.dirname(__file__), '..', 'test-data', TEST_FILE)
This test should be passing already? But I agree that in general we should move towards pathlib for path joining, for our friends using Windows ;) I've made a similar change, but with pathlib instead.
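Roughly along these lines (a sketch; the exact directory layout is assumed from the os.path.join suggestion above):

from pathlib import Path

# Resolve the data file relative to this test module instead of the
# current working directory, so the test passes no matter where
# pytest is invoked from.
local_filepath = Path(__file__).parent.parent / "test-data" / TEST_FILE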
Made updates to infrastructure tests. Note that I left some comments in to explain the changes. I will remove those comments before we merge.
The tests you added don't pass... Can we move that to a separate branch that you rebase off of this, to keep them as separate PRs? I think that is actually the more complicated piece you're tackling, and I'd like to keep incrementally making updates rather than intimidating people with large PRs to review.
@laspsandoval, I just removed your commit, rebased, and added the additional change you suggested. Let me know if this messed up your workflow at all; I'm happy to try and help out on that front. @maxinelasp I also added the …
@pytest.fixture()
def s3_client(_aws_credentials):
    with mock_s3():
Does this mean all the tests in lambda_endpoints are using a mock for all the S3 calls?
Yes, any tests that have def test_name(s3_client): in them would use this fixture and therefore get the mock. I also added another "autouse" fixture that relies on this one, so that all of the lambda_endpoints tests get this during the setup step without needing to think about it. It is perhaps hiding a bit too much, but with the number of tests we had defined, it seemed nicer to do this and make sure we were getting mocks all the time rather than a potential mix in the future.
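For reference, a rough sketch of how that fixture chain can fit together (the _aws_credentials body, bucket name, and sample test here are illustrative, not copied from the branch):

import boto3
import pytest
from moto import mock_s3


@pytest.fixture()
def _aws_credentials(monkeypatch):
    """Fake credentials so boto3 can never reach real AWS from a test."""
    monkeypatch.setenv("AWS_ACCESS_KEY_ID", "testing")
    monkeypatch.setenv("AWS_SECRET_ACCESS_KEY", "testing")
    monkeypatch.setenv("AWS_DEFAULT_REGION", "us-east-1")


@pytest.fixture()
def s3_client(_aws_credentials):
    """Yield a boto3 S3 client backed by moto's in-memory mock."""
    with mock_s3():
        yield boto3.client("s3")


@pytest.fixture(autouse=True)
def setup_s3(s3_client):
    """Autouse: every test in the module starts with the bucket created."""
    s3_client.create_bucket(Bucket="test-bucket")


def test_bucket_exists(s3_client):
    # The autouse fixture already created the bucket in the mock.
    result = s3_client.list_buckets()
    assert result["Buckets"][0]["Name"] == "test-bucket"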
I think leaving the poetry change here is fine; it's not really a major difference. It does look like poetry.lock has some problems merging - I assume you can just keep the changes from this branch and that will be fine.

It looks like the code coverage bot is adding warnings to the tests saying the lines aren't tested - is that expected behavior? Seems odd to me.

Overall it looks good!
We will need to make some updates to make codecov happy later, but it is a bit of a misnomer for now, since we are skipping entire chunks of tests in this and I'm purposefully just doing it incrementally. @laspsandoval will be updating some more after this to hopefully get us a bit further along, but I suspect we still won't be at 100%, and then we can start digging more into it. In general, though, I think codecov warning about not hitting tests is actually important and telling us that our tests aren't running :) We should have 100% coverage of tests; otherwise, we aren't testing what we think we're testing!
@pytest.fixture(autouse=True)
def setup_s3(s3_client):
I'm not sure how much we should care about this for test functions, but it looks like some docstrings are missing here (as well as on the test_object_exists_with_s3_uri function).
Yep, good catch here! I think if something isn't immediately obvious, we should have a docstring or comment letting us know what it is. I've also added a few others.

Specifically, though, I'd say that fixtures definitely should have some comments, since they are meant for re-use; tests I don't mind as much if they don't have docstrings, as long as they are named well and obvious about what they do. So, up to the developer on those.
I'd also like to see a specific note on whether the tests are using mocked clients or real ones - unless it's obvious. So I think the existing comment is good!
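For instance, something like this on the autouse fixture (illustrative wording, reusing the mocked s3_client sketch above) makes that explicit:

import pytest


@pytest.fixture(autouse=True)
def setup_s3(s3_client):
    """Upload the test data file to the bucket before each test.

    Uses the mocked s3_client fixture (moto), so no real AWS
    resources are created or touched.
    """
    s3_client.put_object(Bucket="test-bucket", Key="test-file.txt", Body=b"data")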
Change Summary
Overview
This is a major reorganization and refactor of the test suite to try and align more closely with pytest-style tests, using fixtures instead of setup/teardown methods.
The tests have also been reorganized into folders whose names align more closely with the modules being tested.
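As an illustrative sketch of that conversion pattern (a made-up test, not taken from this diff):

# Before: unittest-style, state built in setUp for every test.
import unittest


class TestBucketName(unittest.TestCase):
    def setUp(self):
        self.bucket_name = "test-bucket"

    def test_name(self):
        self.assertEqual(self.bucket_name, "test-bucket")


# After: pytest-style, state comes from a fixture requested by name.
import pytest


@pytest.fixture()
def bucket_name():
    return "test-bucket"


def test_name(bucket_name):
    assert bucket_name == "test-bucket"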
NOTE: The infrastructure tests will still be failing. The lambda_endpoints test_indexer and test_queries rely on the actual server running, so I did not update those tests.

New Dependencies
N/A
New Files
N/A
Deleted Files
Several unused test data files
Updated Files
All test files other than the infrastructure portion (that will be handled in a separate PR).
Renamed test_true to test_expected to more clearly explain that it is an expected value (some of these were set to False, which confused me when looking at the assertions).

Testing
See above for updated files