As suggested by @bellet here #117 (review) (-3), we should avoid, as much as possible, having different lists of metric learners at different places in the code to parametrize tests. This problem was in the scope of PR #117, but more generally, any test that should be run on a list of estimators should take its list from a common place.
This is to make it easier when we add more algorithms to just add them to one master list.
We could also create this list by automatically discovering estimators, inspecting modules, etc.
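A minimal sketch of what such automatic discovery could look like, using Python's `inspect` module. The `candidates` module and the classes in it are hypothetical stand-ins; in metric-learn the discovery would run over the package's own modules, and the "looks like a transformer" check could instead use scikit-learn's base classes:

```python
import inspect
import types

# Stand-in module to discover estimators from; in practice this would be
# a real metric-learn module.
candidates = types.ModuleType("candidates")

class NotAnEstimator:
    pass

class FakeMetricLearner:  # hypothetical stand-in for e.g. LMNN, NCA, ...
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X

candidates.NotAnEstimator = NotAnEstimator
candidates.FakeMetricLearner = FakeMetricLearner

def discover_estimators(module):
    """Return every class in `module` that exposes fit() and transform()."""
    return [cls for _, cls in inspect.getmembers(module, inspect.isclass)
            if hasattr(cls, 'fit') and hasattr(cls, 'transform')]

print([cls.__name__ for cls in discover_estimators(candidates)])
# → ['FakeMetricLearner']
```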
Currently there is some code, not in the scope of PR #117, that is mutualizable and could benefit from using a common list:
everything that is inside test_fit_transform
the beginning of test_sklearn_compat where all estimators are listed in their deterministic form
all tests in test_transformer_metric_conversion
Currently the list of metric learners to use is at the beginning of test_utils
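As a sketch of the "one master list" idea, the list in test_utils could pair each estimator class with deterministic constructor arguments, so that every test module consumes the same configurations. The estimator classes below are hypothetical stand-ins for the real metric learners:

```python
# Hypothetical stand-ins for real metric learners (LMNN, NCA, ITML, ...).
class FakeLMNN:
    def __init__(self, k=3):
        self.k = k

class FakeNCA:
    def __init__(self, max_iter=100):
        self.max_iter = max_iter

# The one master list: (estimator class, deterministic kwargs) pairs.
# Adding a new algorithm means adding a single entry here, and every
# test module that parametrizes over this list picks it up automatically.
METRIC_LEARNERS = [
    (FakeLMNN, {'k': 3}),
    (FakeNCA, {'max_iter': 100}),
]

def build_all():
    """Instantiate every learner from the master list."""
    return [cls(**kwargs) for cls, kwargs in METRIC_LEARNERS]

for learner in build_all():
    print(type(learner).__name__)
```

With pytest, the same list could feed `@pytest.mark.parametrize` directly, so each test function gets one (class, kwargs) pair per run.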
Another point (related to factorizing tests but a bit separate from what is above): we could also mutualize test datasets. For instance, metric_learn_test, test_fit_transform and test_transformer_metric_conversion each build an iris dataset in a Setup class. They could instead use a mutualized dataset from some common dataset place (for now in test_utils).
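A minimal sketch of a mutualized dataset helper, assuming a synthetic stand-in for the iris data so the example stays self-contained; the real helper in test_utils would presumably return `sklearn.datasets.load_iris()` instead:

```python
import random

def build_shared_dataset(n_samples=30, n_features=4, seed=42):
    """Deterministic toy dataset shared by all test modules.

    Seeded so every Setup class that calls this gets identical data,
    keeping tests reproducible across modules.
    """
    rng = random.Random(seed)
    X = [[rng.gauss(0.0, 1.0) for _ in range(n_features)]
         for _ in range(n_samples)]
    y = [i % 3 for i in range(n_samples)]  # three balanced classes
    return X, y

# Every call yields the same data.
X1, y1 = build_shared_dataset()
X2, y2 = build_shared_dataset()
assert X1 == X2 and y1 == y2
```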