
Add Features: absolute_maximum; mean_n_absolut_max | Add install_requirements.sh script #833

Merged · 21 commits · Apr 1, 2021

Conversation

@OliEfr (Contributor) commented Mar 28, 2021:

Added:

F1. I added an install_requirements.sh script for convenience.
F2. I added an absolute_maximum feature. It calculates the absolute maximum of the time series.
F3. I added a mean_n_absolute_max feature. It calculates the arithmetic mean of the n largest absolute values of the time series. (A rough sketch of both calculators follows below.)
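
For concreteness, a minimal numpy-only sketch of what the two calculators could compute. This is illustrative only: it omits the decorators and docstring conventions of feature_calculators.py, the parameter name number_of_maxima is assumed, and the merged implementation may differ in detail.

```python
import numpy as np


def absolute_maximum(x):
    # Highest absolute value of the time series.
    x = np.asarray(x)
    return np.max(np.absolute(x)) if x.size else np.nan


def mean_n_absolute_max(x, number_of_maxima):
    # Arithmetic mean of the number_of_maxima largest absolute values.
    x = np.asarray(x)
    if number_of_maxima <= 0 or x.size < number_of_maxima:
        # Invalid parameter or too few values; see Q1 below.
        return np.nan
    return np.mean(np.sort(np.absolute(x))[-number_of_maxima:])
```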

Tests:

T1: I added unit tests for mean_n_absolute_max.
T2: I did not add unit tests for absolute_maximum, because there is also no unit test for maximum.

T3: Some tests fail. Most of them are not caused by my changes, I think. However, I think the following two are, and I don't know how to resolve them: test_contains_all_non_high_comp_features and test_default_calculates_all_features. In particular, I don't know how to add my newly created features there.

Questions:

Q1: The feature mean_n_absolute_max takes an additional input parameter (the number of maxima to consider). Where and how should I check that this input parameter is valid, e.g. not smaller than 0, and so on? Should it be checked in the pytest unit tests, or should I raise an error within the feature calculator function itself? For now, I just return np.nan if the input parameter is not valid. (The two options are sketched below.)
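
To make the two alternatives concrete, a small hedged sketch; the _tolerant/_strict names are only illustrative, and the sketch above uses the first option.

```python
import numpy as np


def mean_n_absolute_max_tolerant(x, number_of_maxima):
    # Option A (current approach): silently return NaN for an invalid parameter.
    if number_of_maxima <= 0:
        return np.nan
    return np.mean(np.sort(np.absolute(np.asarray(x)))[-number_of_maxima:])


def mean_n_absolute_max_strict(x, number_of_maxima):
    # Option B: fail loudly inside the calculator, so a bad setting surfaces
    # immediately and can be covered by an assertRaises unit test.
    assert number_of_maxima > 0, "number_of_maxima has to be > 0"
    return np.mean(np.sort(np.absolute(np.asarray(x)))[-number_of_maxima:])
```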

Others

  • build the docs
  • Sorry for committing multiple changes at once and for not creating a new branch. I think I should have done that; next time I will.

@MaxBenChrist (Collaborator) left a comment:

Good first draft, btw. What are those lock files in the dask-worker-space for?

(Resolved review threads on tsfresh/feature_extraction/feature_calculators.py, install_requirements.sh, and .gitignore.)
OliEfr and others added 2 commits March 29, 2021 16:00
Co-authored-by: Maximilian Christ <max.christ@me.com>
@OliEfr (Contributor, author) commented Mar 29, 2021:

I noticed that you don't test for the AssertionError in test_estimate_friedrich_coefficients. Is there a reason for this?

I added assertRaises tests for test_mean_n_absolute_max.
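
For context, a minimal sketch of what such an assertRaises check could look like in plain unittest. The test class name and values are illustrative, and it assumes the calculator asserts on an invalid number_of_maxima (Option B above); the actual tests live in the existing feature-calculator test module and may be structured differently.

```python
import unittest

from tsfresh.feature_extraction.feature_calculators import mean_n_absolute_max


class MeanNAbsoluteMaxTestCase(unittest.TestCase):
    def test_value(self):
        # Mean of the 3 largest absolute values of [-5, 4, -3, 2, 1]: (5 + 4 + 3) / 3 = 4.0
        self.assertAlmostEqual(mean_n_absolute_max([-5, 4, -3, 2, 1], 3), 4.0)

    def test_invalid_parameter_raises(self):
        # number_of_maxima <= 0 should be rejected by the calculator itself.
        self.assertRaises(AssertionError, mean_n_absolute_max, [1, 2, 3], 0)


if __name__ == "__main__":
    unittest.main()
```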

@OliEfr (Contributor, author) commented Mar 30, 2021:

@MaxBenChrist Could you please respond to T3 from my PR description? What about test_contains_all_non_high_comp_features and test_default_calculates_all_features?

@nils-braun (Collaborator) left a comment:

Also, thanks from my side. I left a few comments in the code.

Some general points:

  • as mentioned by Max, please remove the dask-worker-space files from your PR.
  • one of your new features (mean_n_absolute_max) requires a parameter. This means you need to add it to the feature calculator settings with a reasonable default. Specifically, you need to add an entry in the ComprehensiveFCParameters dict (have a look at the other features which are already there); a sketch of such an entry follows below. I would assume reasonable values would be small numbers, probably below 10.
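
A rough sketch of the kind of entry meant here. It is illustrative only: it patches a settings instance instead of editing ComprehensiveFCParameters.__init__ in tsfresh/feature_extraction/settings.py, the default values are made-up examples of "small numbers, probably below 10", and the parameter name number_of_maxima follows the sketches above.

```python
from tsfresh.feature_extraction import ComprehensiveFCParameters

settings = ComprehensiveFCParameters()

# Parameterized calculator: the setting is a list of parameter dicts,
# one dict per feature that should be extracted.
settings["mean_n_absolute_max"] = [{"number_of_maxima": n} for n in (3, 5, 7)]

# Parameterless calculator: the entry is simply None,
# like the other simple calculators.
settings["absolute_maximum"] = None
```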

To your questions:

  • Q1: checking for < 0 in your code is fine. Users can change the parameters, so checking only in the unit tests is not enough.
  • no, there is no reason we do not test the assertion error. That could be added, good point.
  • to T3: to fix the two remaining failing tests, you need to add your new feature to the settings (see above), as mentioned in the exception message of those tests.

(Resolved review threads on install_requirements.sh and tsfresh/feature_extraction/feature_calculators.py.)
OliEfr and others added 2 commits April 1, 2021 11:00
Co-authored-by: Nils Braun <nils-braun@users.noreply.github.com>
Co-authored-by: Nils Braun <nils-braun@users.noreply.github.com>
@OliEfr (Contributor, author) commented Apr 1, 2021:

Check. All done.

For testing the install_requirements.sh file, I created another issue, because I am not sure yet how to implement it.
I understand that it makes sense to include it in the workflow; however, no workflow so far requires this exact install_requirements.sh file.

@OliEfr requested a review from @nils-braun on Apr 1, 2021, 11:14.
@nils-braun (Collaborator) commented:
I will have a look this evening, but I think I am still seeing the dask worker files :-)
Thanks already for your time!

@nils-braun (Collaborator) commented:
Thanks!

@nils-braun merged commit 99b4478 into blue-yonder:main on Apr 1, 2021.