Avoid replicated tests for lists of algorithms #136


Open
3 tasks
wdevazelhes opened this issue Dec 5, 2018 · 0 comments
As suggested by @bellet in #117 (review), we should avoid, as much as possible, having different lists of metric learners in different places in the code to parametrize tests. This problem was in the scope of PR #117, but more generally, any test that should be run on a list of estimators should take that list from a common place.
This will make it easier, when we add more algorithms, to just add them to one master list.
We could also create this list by automatically discovering estimators, inspecting modules, etc.
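The auto-discovery idea could look something like this minimal sketch, which inspects a module for classes exposing a `fit` method (the classes and the `discover_estimators` helper below are illustrative stand-ins, not actual metric-learn code):

```python
import inspect
import sys

# Illustrative stand-ins for estimator classes; the real code would
# inspect the metric_learn package instead of this module.
class FakeLMNN:
    def fit(self, X, y):
        return self

class NotAnEstimator:
    pass

def discover_estimators(module):
    """Collect every class in `module` that exposes a fit method."""
    return [cls for _, cls in inspect.getmembers(module, inspect.isclass)
            if hasattr(cls, "fit")]

# Picks up FakeLMNN but not NotAnEstimator.
estimators = discover_estimators(sys.modules[__name__])
```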
Currently, some code outside the scope of PR #117 contains duplicated logic and could benefit from using a common list:

  • everything that is inside test_fit_transform
  • the beginning of test_sklearn_compat, where all estimators are listed in their deterministic form
  • all tests in test_transformer_metric_conversion

Currently, the list of metric learners to use is at the beginning of test_utils.
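A minimal sketch of the "one master list" pattern with pytest parametrization (the class names and the `METRIC_LEARNERS` name are hypothetical stand-ins for the actual metric-learn estimators and the shared list in test_utils):

```python
import pytest

# Stand-ins for metric-learn estimators; the real list would import
# the actual estimator classes from the package.
class FakeLMNN:
    def fit(self, X, y):
        return self

class FakeNCA:
    def fit(self, X, y):
        return self

# Hypothetical master list, e.g. at the top of test_utils; adding a new
# algorithm means adding exactly one entry here.
METRIC_LEARNERS = [FakeLMNN(), FakeNCA()]

@pytest.mark.parametrize("learner", METRIC_LEARNERS,
                         ids=[type(l).__name__ for l in METRIC_LEARNERS])
def test_fit_returns_self(learner):
    # Each test module imports METRIC_LEARNERS instead of keeping its
    # own copy of the list.
    assert learner.fit([[0.0]], [0]) is learner
```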

Another point (related to factorizing tests but somewhat separate from the above): we could also share test datasets. For instance, metric_learn_test, test_fit_transform and test_transformer_metric_conversion each build an iris dataset in a Setup class. They could instead use a shared dataset from a common place (for now, test_utils).
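The shared-dataset idea could be as simple as one helper in a common place. A sketch, with a hypothetical helper name and a deterministic random stand-in for the iris data:

```python
import numpy as np

# Hypothetical shared helper, e.g. in test_utils, replacing the Setup
# classes that each rebuild an iris dataset per test module.
def make_test_dataset(seed=42):
    """Return a small, deterministic (X, y) pair shaped like iris."""
    rng = np.random.RandomState(seed)
    X = rng.rand(20, 4)            # 4 features, like iris
    y = rng.randint(0, 3, 20)      # 3 classes, like iris
    return X, y

# Every test module calls this instead of rebuilding the data itself.
X, y = make_test_dataset()
```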
