
[MNT] Avoid CI fail on deep test by generating at least 2 classes in random data #485

Merged · 5 commits · Jun 13, 2023

Conversation

hadifawaz1999
Member

Compliment to #473

@hadifawaz1999 hadifawaz1999 added classification Classification package regression Regression package maintenance Continuous integration, unit testing & package distribution testing Testing related issue or pull request deep learning Deep learning related labels Jun 12, 2023
@TonyBagnall
Contributor

It might be worth printing a message when catching the exception, then skipping the print in CI? I get a bit nervous doing nothing on a catch; I drum it into my students not to do this :)

@hadifawaz1999
Member Author

> It might be worth printing a message when catching the exception, then skipping the print in CI? I get a bit nervous doing nothing on a catch; I drum it into my students not to do this :)

Yeah, you're right, that makes sense :p

TonyBagnall
TonyBagnall previously approved these changes Jun 13, 2023
@MatthewMiddlehurst
Member

Just to confirm: this test is expected to work as intended the majority of the time, but in the case of weird IO magic we don't want to raise a failure?

The pytest functionality for expected failures may be a better fit, i.e.
pytest.xfail("message")

@MatthewMiddlehurst
Member

This would end the test though, so stuff after would not be run.
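A minimal sketch (hypothetical helper, not aeon code) of the behaviour being described: pytest.xfail raises as soon as it is called, so any statements after it in the test body never execute.

```python
import pytest


def check_after_save(model_saved_ok):
    # Hypothetical test helper, only to illustrate control flow.
    if not model_saved_ok:
        # pytest.xfail raises immediately, ending the test as "expected fail".
        pytest.xfail("model saving is flaky on CI")
    # This line is never reached when xfail fires above.
    return "post-save checks ran"
```

Because xfail raises, any assertions placed after the failing save would be skipped too, which is the drawback noted above.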

@hadifawaz1999
Member Author

@MatthewMiddlehurst I don't think we should fail anything, given that it's a CI problem and not ours. The idea is to skip the CI magic failing, to avoid people getting random failures in their PRs even when they have nothing to do with DL.

@MatthewMiddlehurst
Member

xfail does not cause the full test set to fail. Ideally we would just fix the cause, as just skipping may cause actual issues from changes to also be skipped. Based on the error in #473 it doesn't seem to be IO related? self.model_ is None.

If you just want to skip for now, I won't block.

@hadifawaz1999
Member Author

hadifawaz1999 commented Jun 13, 2023

> xfail does not cause the full test set to fail. Ideally we would just fix the cause, as just skipping may cause actual issues from changes to also be skipped. Based on the error in #473 it doesn't seem to be IO related? self.model_ is None.
>
> If you just want to skip for now, I won't block.

I agree @MatthewMiddlehurst that the issue should be fixed rather than ignored, but I'm not sure it is fixable, given that it only fails on CI and never locally; I tried it multiple times. I think it comes from saving to file on CI, which the runner may not like.
I'm not sure what you mean by self.model_ is None; it is set when calling the fit function.

@MatthewMiddlehurst
Member

Error log is here: https://github.com/aeon-toolkit/aeon/actions/runs/5186055397/jobs/9346945374?pr=473

Function call to

dummy_deep_clf.save_last_model_to_file()

results in

self.model_.save(file_path + self.last_file_name + ".hdf5")
AttributeError: 'NoneType' object has no attribute 'save'

so it is not even reaching the file-saving part. Could it be related to the random data used? I'm really not sure. The numpy random seed used was 379863931.

@hadifawaz1999
Member Author

Hmm, probably better to fix a seed for the test, you think?

@MatthewMiddlehurst
Member

I'm not sure. It would probably help stop random failures, but the fact that this happened with any data is a little concerning. No idea if something happened in all the tensorflow stuff.

The way the data is randomly generated, there's a small chance it becomes a one-class problem, maybe? For other tests we use utils._testing.collection.make_3d_test_data
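The fix the new PR title describes can be sketched like this (helper name is hypothetical; aeon's actual utility is utils._testing.collection.make_3d_test_data): seed one example of every class before filling the rest randomly, so a one-class draw becomes impossible.

```python
import numpy as np


def make_labels_with_min_classes(n_cases, n_classes=2, seed=None):
    """Random integer labels guaranteed to contain every class at least once.

    Hypothetical sketch of the idea discussed in this PR, not aeon's
    actual data generator.
    """
    rng = np.random.default_rng(seed)
    # One guaranteed example per class, then uniform random fill.
    y = np.concatenate([
        np.arange(n_classes),
        rng.integers(0, n_classes, size=n_cases - n_classes),
    ])
    rng.shuffle(y)
    return y


y = make_labels_with_min_classes(20, n_classes=2, seed=0)
```

This removes the small-probability failure mode without fixing a seed, so the test still exercises varied random data.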

@hadifawaz1999 hadifawaz1999 changed the title [MNT] Avoid CI fail on DL saving with try and catch [MNT] Avoid CI fail on deep test by generating at least 2 classes in random data Jun 13, 2023
@TonyBagnall
Contributor

great spot @MatthewMiddlehurst

@MatthewMiddlehurst
Member

LGTM

@TonyBagnall TonyBagnall merged commit 254758e into main Jun 13, 2023
@TonyBagnall TonyBagnall deleted the aif/try_catch_test_DL branch June 13, 2023 20:45