Issue 10 pytest #24
Conversation
Way to go moving all this from Robot Framework! Now that the tests are easier to read, I have two thoughts. Neither should be seen as blocking - the PR is an improvement on the state of the code as-is, so you should feel comfortable merging it rather than dragging out the review/edit process. Nevertheless, you might still want to consider them before checking off whatever internal ticketing system you have:
- This test code seems very nested, with ~50 files of 10-50 lines each. That means a lot of repeated code across files. When the code is flattened and similar code sits next to each other, the repetition becomes obvious and the important differences stand out: e.g.
`scalar_data.test_scalardata_device.test_no_data` vs `scalar_data.test_scalardata_location.test_no_data`
behave very differently, even though they're called in the same way: one returns an HTTP error, the other an object with an empty container. That they can be called the same way yet behave differently suggests the ONC API might be confusing for users and could be fixed - but the comparison only becomes clear when repetition is removed and the code is flattened.
More simply, there's no need for folders with only one file, e.g. discover_properties/test_properties.py could just be test_properties.py. The other folders with multiple files (archive_file, data_product_delivery) probably could also be combined to both shorten the overall SLOC and make it clearer.
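To make the flattening concrete, here is a hedged sketch of how the two `test_no_data` variants could live side by side in one parametrized test. The fetch functions below are stand-ins for the real ONC client calls, and the empty-container/HTTP-error behaviours are only illustrative of the contrast described above:

```python
import pytest


def fetch_scalardata_by_device(params):
    # Stand-in for the device-based client call, which (per the review
    # comment) returns an object with an empty container when there is no data.
    return {"sensorData": None}


def fetch_scalardata_by_location(params):
    # Stand-in for the location-based client call, which instead surfaces
    # an HTTP error; ValueError is a placeholder for the real exception type.
    raise ValueError("HTTP 404: no data found")


@pytest.mark.parametrize(
    "fetch, expect_error",
    [
        (fetch_scalardata_by_device, False),
        (fetch_scalardata_by_location, True),
    ],
)
def test_no_data(fetch, expect_error):
    params = {"dateFrom": "2000-01-01", "dateTo": "2000-01-02"}
    if expect_error:
        with pytest.raises(ValueError):
            fetch(params)
    else:
        assert fetch(params)["sensorData"] is None
```

With the two cases in one table, the behavioural asymmetry the reviewer points out is visible at a glance rather than buried in separate files.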
- Runtime. Have you checked the durations of individual tests (`pytest --durations=100`)? Consider:
Test durations
============================================================================================== slowest 100 durations ==============================================================================================
67.78s call tests/data_product_delivery/test_data_product_delivery_order.py::test_valid_default
38.08s call tests/data_product_delivery/test_data_product_delivery_order.py::test_valid_manual
36.96s call tests/data_product_delivery/test_data_product_delivery_order.py::test_valid_no_metadata
31.54s call tests/data_product_delivery/test_data_product_delivery_order.py::test_valid_results_only
25.62s call tests/archive_file/test_archivefile_direct_download.py::test_valid_params_one_page
19.98s call tests/archive_file/test_archivefile_download.py::test_valid_params
17.50s call tests/archive_file/test_archivefile_direct_download.py::test_valid_params_multiple_pages
9.93s call tests/raw_data/test_rawdata_device.py::test_valid_params_one_page
9.82s call tests/scalar_data/test_scalardata_device.py::test_valid_params_one_page
7.68s call tests/scalar_data/test_scalardata_location.py::test_valid_params_one_page
4.47s call tests/raw_data/test_rawdata_location.py::test_valid_params_one_page
3.22s call tests/archive_file/test_archivefile_location.py::test_valid_params_one_page
3.18s call tests/archive_file/test_archivefile_device.py::test_valid_params_one_page
2.25s call tests/scalar_data/test_scalardata_device.py::test_valid_params_multiple_pages
2.24s call tests/raw_data/test_rawdata_device.py::test_valid_params_multi_pages
2.06s call tests/archive_file/test_archivefile_device.py::test_invalid_param_value
1.95s call tests/discover_data_products/test_data_products.py::test_valid_params
1.94s call tests/discover_data_products/test_data_products.py::test_no_data
1.83s call tests/scalar_data/test_scalardata_location.py::test_valid_params_multiple_pages
1.73s call tests/discover_properties/test_properties.py::test_valid_params
1.54s call tests/discover_locations/test_locations.py::test_valid_params
1.54s call tests/scalar_data/test_scalardata_device.py::test_no_data
1.53s call tests/discover_locations/test_locations.py::test_no_data
1.44s call tests/raw_data/test_rawdata_device.py::test_no_data
1.37s call tests/discover_properties/test_properties.py::test_no_data
1.32s call tests/discover_locations/test_locations_tree.py::test_no_data
1.18s call tests/archive_file/test_archivefile_location.py::test_valid_params_multiple_pages
1.06s call tests/discover_deployments/test_deployments.py::test_valid_params
1.04s call tests/raw_data/test_rawdata_location.py::test_valid_params_multiple_pages
1.03s call tests/discover_device_categories/test_device_categories.py::test_no_data
1.02s call tests/discover_locations/test_locations_tree.py::test_valid_params
1.01s call tests/raw_data/test_rawdata_location.py::test_invalid_param_value
1.01s call tests/discover_device_categories/test_device_categories.py::test_valid_params
0.95s call tests/scalar_data/test_scalardata_location.py::test_no_data
0.93s call tests/discover_device_categories/test_device_categories.py::test_invalid_param_name
0.93s call tests/discover_locations/test_locations_tree.py::test_invalid_time_range_future_start_time
0.92s call tests/data_product_delivery/test_data_product_delivery_run.py::test_invalid_request_id
0.92s call tests/scalar_data/test_scalardata_location.py::test_invalid_param_value
0.92s call tests/data_product_delivery/test_data_product_delivery_order.py::test_invalid_param_value
0.92s call tests/raw_data/test_rawdata_device.py::test_invalid_param_value
0.92s call tests/archive_file/test_archivefile_location.py::test_invalid_param_name
0.92s call tests/archive_file/test_archivefile_download.py::test_invalid_params_missing_required
0.92s call tests/data_product_delivery/test_data_product_delivery_download.py::test_invalid_run_id
0.92s call tests/discover_deployments/test_deployments.py::test_invalid_param_value
0.92s call tests/archive_file/test_archivefile_location.py::test_no_data
0.92s call tests/discover_locations/test_locations.py::test_invalid_param_name
0.92s call tests/archive_file/test_archivefile_device.py::test_invalid_params_missing_required
0.92s call tests/archive_file/test_archivefile_device.py::test_no_data
0.92s call tests/raw_data/test_rawdata_device.py::test_invalid_param_name
0.92s call tests/discover_locations/test_locations_tree.py::test_invalid_param_value
0.91s call tests/raw_data/test_rawdata_location.py::test_no_data
0.91s call tests/discover_locations/test_locations.py::test_invalid_time_range_greater_start_time
0.91s call tests/discover_deployments/test_deployments.py::test_invalid_time_range_future_start_time
0.91s call tests/discover_deployments/test_deployments.py::test_invalid_time_range_greater_start_time
0.91s call tests/discover_devices/test_devices.py::test_invalid_time_range_future_start_time
0.91s call tests/discover_devices/test_devices.py::test_invalid_param_value
0.91s call tests/scalar_data/test_scalardata_device.py::test_invalid_param_name
0.91s call tests/raw_data/test_rawdata_location.py::test_invalid_param_name
0.91s call tests/discover_locations/test_locations_tree.py::test_invalid_param
0.91s call tests/discover_deployments/test_deployments.py::test_no_data
0.91s call tests/archive_file/test_archivefile_device.py::test_valid_params_multiple_pages
0.91s call tests/discover_devices/test_devices.py::test_invalid_param_name
0.91s call tests/discover_locations/test_locations.py::test_invalid_time_range_future_start_time
0.91s call tests/scalar_data/test_scalardata_device.py::test_invalid_param_value
0.91s call tests/archive_file/test_archivefile_location.py::test_invalid_params_missing_required
0.91s call tests/discover_data_products/test_data_products.py::test_invalid_param_value
0.91s call tests/discover_devices/test_devices.py::test_invalid_time_range_greater_start_time
0.91s call tests/discover_locations/test_locations_tree.py::test_invalid_time_range_greater_start_time
0.91s call tests/discover_deployments/test_deployments.py::test_invalid_param_name
0.91s call tests/archive_file/test_archivefile_device.py::test_invalid_param_name
0.90s call tests/archive_file/test_archivefile_download.py::test_invalid_param_value
0.88s call tests/discover_locations/test_locations.py::test_invalid_param_value
0.88s call tests/discover_data_products/test_data_products.py::test_invalid_param_name
0.88s call tests/discover_properties/test_properties.py::test_invalid_param_name
0.88s call tests/discover_properties/test_properties.py::test_invalid_param_value
0.87s call tests/discover_device_categories/test_device_categories.py::test_invalid_param_value
0.84s call tests/scalar_data/test_scalardata_location.py::test_invalid_param_name
0.82s call tests/discover_devices/test_devices.py::test_no_data
0.81s call tests/discover_devices/test_devices.py::test_valid_params
0.80s call tests/archive_file/test_archivefile_location.py::test_invalid_param_value
0.04s setup tests/discover_data_products/test_data_products.py::test_invalid_param_name
0.04s setup tests/discover_locations/test_locations.py::test_invalid_param_value
0.04s setup tests/scalar_data/test_scalardata_location.py::test_invalid_param_name
0.04s setup tests/discover_properties/test_properties.py::test_invalid_param_value
0.01s setup tests/raw_data/test_rawdata_location.py::test_valid_params_one_page
The longest test runs for >60 sec, around 20% of the total runtime, and the top ten tests account for over 80%. This obviously depends on the behavior of the server, and maybe QA is faster. If so, it might make sense to mark these tests as slow and only run them in CI. Or, if QA is much faster, to allow CI (and maybe me?) to use the QA server.
Regardless, ordering data products seems to be responsible for most of the runtime. Maybe that indicates that, as a higher-level function, it doesn't deserve four different tests for parameter combinations, and the parameters should instead be varied on the lower-level (requesting, running, and downloading) functionality? Or maybe the ordered data products are too large/complex for the server?
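The "mark these tests as slow" suggestion could be sketched like this; the marker name `slow` is an assumption and would need to be registered (in `pyproject.toml` or `conftest.py`) to avoid an unknown-mark warning:

```python
import pytest

# Hypothetical marker for the long-running data-product-delivery tests,
# so a local run can skip them with `pytest -m "not slow"` while CI
# (invoked without the -m filter) still runs everything.
slow = pytest.mark.slow


@slow
def test_valid_default():
    # Placeholder body standing in for the ~67 s order test.
    assert True
```

Local developers would then run `pytest -m "not slow"` and cut the wall-clock time dramatically, while CI keeps full coverage.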
def pytest_configure():
    print("========== ONC config ==========")
    print(f"Testing environment: {'PROD' if is_prod else 'QA'}")
Nothing calls this function - is it needed?
This is a hook function. It prints something like this (taken from the GitHub Actions console output) whenever pytest is invoked:
py312: commands[0]> pytest --cov=onc
========== ONC config ==========
Testing environment: PROD
============================= test session starts ==============================
platform linux -- Python 3.12.2, pytest-8.0.2, pluggy-1.4.0
cachedir: .tox/py312/.pytest_cache
rootdir: /home/runner/work/api-python-client/api-python-client
configfile: pyproject.toml
testpaths: tests
plugins: cov-4.1.0
collected 80 items
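For context, pytest discovers `pytest_configure` automatically when it is defined in `conftest.py` and calls it once before collection, which is why the banner appears ahead of the "test session starts" header. A minimal self-contained reconstruction follows; the `ONC_ENV` variable name is an assumption, and the real project may derive `is_prod` differently (e.g. from a token or a command-line option):

```python
import os

# Assumed way of selecting the environment; not necessarily the project's.
is_prod = os.environ.get("ONC_ENV", "PROD").upper() == "PROD"


def pytest_configure(config=None):
    # pytest invokes this hook after command-line parsing and before test
    # collection; pytest prunes unused hook arguments, so the zero-argument
    # form seen in the PR also works.
    print("========== ONC config ==========")
    print(f"Testing environment: {'PROD' if is_prod else 'QA'}")
```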
For the lower-level (requesting, running, and downloading) and higher-level (order) functions of data product delivery, I have to say the code is not that intuitive to me. I initially thought orderDataProduct was just the combination of the three methods, like what I did in test_valid_manual, but the code does something more complex. Maybe this method needs some refactoring first.
Fix #10
Completely replaced Robot Framework with pytest. Most of the test cases use the params from the OpenAPI page examples.
I deleted some tests that just used different params for the same web service (for example, multiple getLocations tests with locationCode, locationName, deviceCategoryCode, ...), because the client library simply passes the params through to the request, so those tests wouldn't increase the test coverage. They belong in the internal test framework for the backend.
I also dropped support for Python 3.8, which Spencer agreed to after we fixed the Python version issue on the task machine (previous discussion). It should probably have been a separate issue, but I used the dictionary union operator in the tests, which is a feature new in Python 3.9.
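For reference, the dictionary union operator mentioned above (new in Python 3.9) might be used in the tests to merge base request params with a time range; the keys here are illustrative ONC-style parameters, not taken from the actual test code:

```python
# Dict union (PEP 584): `|` merges two dicts into a new one, with the
# right-hand operand winning on key clashes. Requires Python >= 3.9,
# which is why 3.8 support had to be dropped.
base_params = {"locationCode": "BACAX", "deviceCategoryCode": "CTD"}
time_range = {
    "dateFrom": "2019-11-23T00:00:00.000Z",
    "dateTo": "2019-11-23T00:01:00.000Z",
}

params = base_params | time_range  # four keys; neither input is mutated
```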