
Reducing testing time #1625

Merged: 5 commits into master, Mar 19, 2023
Conversation

BlazejNowicki (Contributor)

Fixes #1530.

Summary

The objective of this PR is to reduce local testing time. The proposed solution marks the longest-running tests so that they can be skipped, cutting overall test time.

Other information

List of tests that take more than 10 seconds

191.36s call     darts/tests/models/forecasting/test_historical_forecasts.py::HistoricalforecastTestCase::test_torch_auto_start_multiple_no_cov
112.07s call     darts/tests/datasets/test_dataset_loaders.py::DatasetLoaderTestCase::test_ok_dataset
90.57s call     darts/tests/models/forecasting/test_historical_forecasts.py::HistoricalforecastTestCase::test_torch_auto_start_with_cov
65.20s call     darts/tests/models/forecasting/test_regression_models.py::ProbabilisticRegressionModelsTestCase::test_fit_predict_determinism
64.22s call     darts/tests/models/forecasting/test_regression_models.py::RegressionModelsTestCase::test_fit
51.69s call     darts/tests/models/forecasting/test_local_forecasting_models.py::LocalForecastingModelsTestCase::test_statsmodels_future_models
44.24s call     darts/tests/models/forecasting/test_regression_models.py::ProbabilisticRegressionModelsTestCase::test_probabilistic_forecast_accuracy
37.57s call     darts/tests/models/forecasting/test_probabilistic_models.py::ProbabilisticTorchModelsTestCase::test_fit_predict_determinism
31.03s call     darts/tests/models/forecasting/test_TCN.py::TCNModelTestCase::test_coverage
29.90s call     darts/tests/models/forecasting/test_local_forecasting_models.py::LocalForecastingModelsTestCase::test_models_performance
28.24s call     darts/tests/models/forecasting/test_regression_models.py::RegressionModelsTestCase::test_models_accuracy_multivariate
22.67s call     darts/tests/models/forecasting/test_local_forecasting_models.py::LocalForecastingModelsTestCase::test_models_runnability
21.48s call     darts/tests/models/forecasting/test_probabilistic_models.py::ProbabilisticTorchModelsTestCase::test_probabilistic_forecast_accuracy
19.29s call     darts/tests/models/forecasting/test_global_forecasting_models.py::GlobalForecastingModelsTestCase::test_same_result_with_different_n_jobs
18.00s call     darts/tests/models/forecasting/test_regression_models.py::RegressionModelsTestCase::test_models_accuracy_multiseries_multivariate
17.43s call     darts/tests/models/forecasting/test_historical_forecasts.py::HistoricalforecastTestCase::test_regression_auto_start_multiple_no_cov
16.26s call     darts/tests/models/forecasting/test_historical_forecasts.py::HistoricalforecastTestCase::test_historical_forecasts
15.72s call     darts/tests/models/forecasting/test_local_forecasting_models.py::LocalForecastingModelsTestCase::test_exogenous_variables_support
14.90s call     darts/tests/models/forecasting/test_dlinear_nlinear.py::DlinearNlinearModelsTestCase::test_multivariate_and_covariates
13.14s call     darts/tests/models/forecasting/test_probabilistic_models.py::ProbabilisticTorchModelsTestCase::test_likelihoods_and_resulting_mean_forecasts
12.36s call     darts/tests/models/forecasting/test_regression_models.py::RegressionModelsTestCase::test_models_accuracy_univariate
11.49s call     darts/tests/models/forecasting/test_backtesting.py::BacktestingTestCase::test_backtest_forecasting

PyTest allows applying markers to tests so that they can be skipped or executed selectively.
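As a minimal sketch of the mechanism (the marker name `slow` matches the one used in this PR, but the test body below is a placeholder, not one of the actual darts tests), a marker is applied with a decorator and recorded on the function's pytest metadata:

```python
import pytest

# Hedged sketch: a long-running test is tagged so it can be
# deselected with `pytest -m "not slow"`. The body here is a
# stand-in for a slow model-fitting routine.
@pytest.mark.slow
def test_expensive_training_run():
    assert sum(range(100)) == 4950

# The decorator stores the mark on the function's `pytestmark`
# attribute, which pytest later uses for `-m` selection.
mark_names = [m.name for m in test_expensive_training_run.pytestmark]
print(mark_names)
```

During collection, pytest evaluates the `-m` expression against these recorded mark names to decide which tests to run.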

Marked tests

$ pytest -m "slow" --collect-only
...
<Package datasets>
  <Module test_dataset_loaders.py>
    <UnitTestCase DatasetLoaderTestCase>
      <TestCaseFunction test_ok_dataset>
<Package forecasting>
  <Module test_TCN.py>
    <UnitTestCase TCNModelTestCase>
      <TestCaseFunction test_coverage>
  <Module test_historical_forecasts.py>
    <UnitTestCase HistoricalforecastTestCase>
      <TestCaseFunction test_torch_auto_start_multiple_no_cov>
      <TestCaseFunction test_torch_auto_start_with_cov>
  <Module test_local_forecasting_models.py>
    <UnitTestCase LocalForecastingModelsTestCase>
      <TestCaseFunction test_statsmodels_future_models>
  <Module test_probabilistic_models.py>
    <UnitTestCase ProbabilisticTorchModelsTestCase>
      <TestCaseFunction test_fit_predict_determinism>
      <TestCaseFunction test_likelihoods_and_resulting_mean_forecasts>
      <TestCaseFunction test_probabilistic_forecast_accuracy>
      <TestCaseFunction test_stochastic_inputs>
  <Module test_regression_models.py>
    <UnitTestCase RegressionModelsTestCase>
      <TestCaseFunction test_fit>
    <UnitTestCase ProbabilisticRegressionModelsTestCase>
      <TestCaseFunction test_fit_predict_determinism>
      <TestCaseFunction test_probabilistic_forecast_accuracy>
...

To run the remaining (non-slow) tests

$ pytest -m "not slow"

Running time of tests marked "slow": ~11.5 min
Running time of tests not marked "slow": ~7 min
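As a side note (not shown in this PR's excerpt), custom markers like `slow` are typically registered in the project's pytest configuration so that pytest does not emit `PytestUnknownMarkWarning`. A hypothetical `pytest.ini` fragment:

```ini
# Hypothetical pytest.ini fragment; registering the marker
# silences PytestUnknownMarkWarning for @pytest.mark.slow.
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
```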

@BlazejNowicki BlazejNowicki changed the title Marking slow tests Reducing testing time Mar 6, 2023
codecov-commenter commented Mar 6, 2023

Codecov Report

Patch coverage is unchanged; project coverage changes by -0.08% ⚠️

Comparison is base (f7c292a) 94.13% compared to head (fc50fdc) 94.05%.


Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1625      +/-   ##
==========================================
- Coverage   94.13%   94.05%   -0.08%     
==========================================
  Files         125      125              
  Lines       11340    11326      -14     
==========================================
- Hits        10675    10653      -22     
- Misses        665      673       +8     

see 9 files with indirect coverage changes


@BlazejNowicki BlazejNowicki marked this pull request as ready for review March 6, 2023 13:57
@BlazejNowicki BlazejNowicki requested a review from hrzn as a code owner March 7, 2023 13:07
@dennisbader (Collaborator) left a comment

Looks good, thanks a lot @BlazejNowicki :)
Quick question, just by accumulating the times of all the skipped unit tests, shouldn't the reduced testing time be lower than 7 minutes? (or does it mean in total the testing time will be 11.5 minutes less? :) )

@dennisbader dennisbader merged commit 95f4d4f into master Mar 19, 2023
@dennisbader dennisbader deleted the feat/reduce-testing-time branch March 19, 2023 12:03
alexcolpitts96 pushed a commit to alexcolpitts96/darts that referenced this pull request May 31, 2023
* Marking slow tests

* Group torch tests

---------

Co-authored-by: Blazej Nowicki <blazej.nowicki@unit8.co>
Labels: none yet
Projects: none yet
Development: successfully merging this pull request may close "Try to find ways to reduce testing time."
4 participants