
Add root mean square feature #813

Merged · 6 commits · Mar 3, 2021

Conversation

OliEfr
Contributor

@OliEfr OliEfr commented Feb 24, 2021

#811

I added the feature and unit tests, and I built the docs.

Before I changed anything in the code, four unit tests were already failing. I am not sure how to handle this.

After I made my changes, some more tests failed in tests.integrations.test_feature_extraction.
I assume this is because adding a new feature changes the DataFrame dimensions. Should I adjust them?
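For context, a root-mean-square feature calculator along these lines might look like the following sketch (an illustration of the idea, not necessarily the exact code merged in this PR):

```python
import numpy as np

def root_mean_square(x):
    """Return the root mean square (RMS) of the time series x.

    Sketch of an RMS feature calculator; np.asarray lets the
    function accept plain lists as well as pandas Series.
    """
    x = np.asarray(x)
    if x.size == 0:
        return np.nan
    return np.sqrt(np.mean(x ** 2))
```

For example, `root_mean_square([1, 2, 2])` computes sqrt((1 + 4 + 4) / 3) = sqrt(3).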

Collaborator

@nils-braun nils-braun left a comment


First of all: thanks a lot for your PR :-) In general I am quite ok with the changes in the feature calculators. I will now check the failing tests!

You did commit a few files that should not be there (the dask-worker space files and the pipeline file). If you want you can also add some entries in the .gitignore file to prevent this from happening!

@nils-braun
Collaborator

So, concerning the test failures:

  • tests/integrations/test_feature_extraction.py::FeatureExtractionTestCase::test_dask and tests/integrations/test_feature_extraction.py::FeatureExtractionTestCase::test_dask_no_pivot: that might be because you accidentally committed the two dask worker files
  • tests/integrations/test_feature_extraction.py::FeatureExtractionTestCase::test_pandas, tests/integrations/test_feature_extraction.py::FeatureExtractionTestCase::test_pandas_no_pivot and tests/units/feature_extraction/test_settings.py::TestMinimalSettingsObject::test_extraction_runs_through: as your feature is also part of the simple and minimal features, you will need to add your feature to the output results in these tests (as you have pointed out)
  • tests/units/feature_extraction/test_feature_calculations.py::FeatureCalculationTestCase::test_root_mean_square
    that is your own new feature :-) You need to convert the input to a numpy array before you can use it (as ** is only defined on numpy arrays, not on lists). You can use np.asarray for example, as in longest_strike_below_mean.
  • tests/units/transformers/test_relevant_feature_augmenter.py::test_relevant_augmentor_cross_validated: also affected by your new feature :-)
  • tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_warning_on_non_uniform_time_steps: that one needs fixing from my side. I will open a new PR to fix this.
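To illustrate the point about `**` (with hypothetical values, just to show the behavior): a plain Python list does not support elementwise exponentiation, while a numpy array does:

```python
import numpy as np

values = [1, 2, 3]

# A plain list raises TypeError on ** (no elementwise power).
try:
    values ** 2
    raised = False
except TypeError:
    raised = True

# np.asarray converts the list (or a pandas Series) to an array,
# on which ** applies elementwise.
squared = np.asarray(values) ** 2
```

This is why converting the input with `np.asarray` at the top of the feature calculator fixes the failing test.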

Which other tests did fail for you before you added the feature?

@OliEfr
Contributor Author

OliEfr commented Feb 25, 2021

Thanks, I'll keep working on it!

When I ran the tests and committed the changes, all the .coverage files were added as well. Obviously I don't want that.
Why aren't these files included in the .gitignore? And how do I know which file types I should exclude?

@nils-braun
Collaborator

There is no good reason not to have those files added to .gitignore (other than: we just forgot to do it).
In general, it is a good idea not to add any new files to your PR - if you did not create these files intentionally, the chances are high that you do not want to commit them.
Or, put the other way round: only add files to your commits and PRs that you edited intentionally.
But yes, let's add those files to the .gitignore. Would you like to do a PR or shall I do it?
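For reference, entries covering the files mentioned in this thread could look like this in .gitignore (the exact patterns are an assumption):

```
# local artifacts from running dask examples and coverage runs
dask-worker-space/
.coverage
.coverage.*
```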

@OliEfr
Contributor Author

OliEfr commented Feb 26, 2021

That's a good point.
Let me do the PR :)

@OliEfr
Contributor Author

OliEfr commented Feb 26, 2021

Okay, I think I resolved all the failing tests that were on my watch - thanks for the help - it was some good learning for me :)

@nils-braun
Collaborator

Thanks! I have fixed the errors also on the main branch. Feel free to merge it back in.

@codecov-io

codecov-io commented Feb 27, 2021

Codecov Report

Merging #813 (c53e6f4) into main (5dd1b31) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##             main     #813   +/-   ##
=======================================
  Coverage   95.91%   95.92%           
=======================================
  Files          18       18           
  Lines        1837     1841    +4     
  Branches      362      362           
=======================================
+ Hits         1762     1766    +4     
  Misses         37       37           
  Partials       38       38           
Impacted Files Coverage Δ
tsfresh/feature_extraction/feature_calculators.py 97.29% <100.00%> (+0.01%) ⬆️


@OliEfr
Contributor Author

OliEfr commented Feb 27, 2021

Done.
It fails in test_relevant_augmentor_cross_validated.
I don't know how to resolve this, especially because it only fails for certain versions of Python.

Out of interest: what are you using to view the test results of code pushed to GitHub? Something like Travis CI? Reviewing them on the GitHub website doesn't work well for me, and reviewing from bash on my local clone doesn't work too well either. Is there maybe a nice GUI?

@nils-braun
Collaborator

Depending on which IDE you are using, most modern IDEs come with nice support for showing test results. For example I am using VSCode together with the "Python Test Explorer for Visual Studio Code" extension - that works quite well if you want a "GUI".

Concerning the failing tests: I think we are triggering an incompatibility in scikit-learn here. I have created #822 to bump the scikit-learn version to 0.22 (which is ok, as this is already some years old). At least for me this solves the issue. Would you like to test again once the branch is merged?

By the way: I am planning to do a new release of tsfresh at the end of this week. There is no need to hurry (as there will definitely be another release), just for your information!

@OliEfr
Contributor Author

OliEfr commented Mar 3, 2021

I installed the extension and will try it next time :)

Thanks for fixing the version issue - I would never have had this idea! It works now.

@nils-braun
Collaborator

Cool! Thanks for your contribution!

@nils-braun nils-braun merged commit 4fb8967 into blue-yonder:main Mar 3, 2021