ENH: eval should have option to compute metrics with and without clean-ups #472
Comments
One issue with implementing this is how we convert network outputs to a string of labels. A fix would be to just do the segmenting the same way we do for predictions. That can work, but the function we use for predictions has other behavior we don't want here: for predictions, we need to be able to map from network outputs back to the original labels, which can be multi-character labels. But for evaluation, we don't want to do this, since we're computing an edit distance, and we don't want to get unfairly penalized because of the extra characters in multi-character labels. If we used multi-character labels, some edits would count as a greater difference than others, e.g., ("br" -> "aw") would require more edits than ("ac" -> "aw") (since only one character changes in the latter). So we should implement it as follows:
```python
labelmap = vak.labels.multi_char_labels_to_single_char(labelmap)
post_tfm = vak.transforms.labeled_timebins.ToSegments(labelmap=labelmap)
```
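To make the impact on the metric concrete, here's a minimal sketch (using a plain Levenshtein helper written just for illustration, not vak's metric implementation) of how multi-character labels inflate the edit distance:

```python
def levenshtein(a: str, b: str) -> int:
    """Compute the Levenshtein (edit) distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# with multi-character labels, substituting one segment can cost two edits...
assert levenshtein("br", "aw") == 2  # both characters differ
assert levenshtein("ac", "aw") == 1  # only one character differs
# ...but after mapping every label to a single character,
# one wrong segment is always exactly one edit
assert levenshtein("x", "y") == 1
```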
I got as far as converting the labelmap. To be able to just call the same function we use in predict, though, we still need to work around the extra behavior described above. There are a couple of ways to do that; the first is the quick and easy way to get things done with this version.
Update on this one: we need to add a post_tfm argument; it will always be an instance of the post-processing transform. In a way, the transform we used before was convenient because it only requires one argument, the labeled timebins themselves. Ok, I will proceed like that. ☝️
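One way to keep that single-argument convenience while still accepting clean-up options is to bind the keyword arguments up front. A sketch of the idea, assuming a hypothetical clean-up function (the name and its parameters below are stand-ins, not vak's actual API):

```python
import numpy as np
from functools import partial

def postprocess(lbl_tb, min_segment_dur=None, majority_vote=False):
    """Stub clean-up transform: takes a labeled-timebins vector,
    returns a (possibly) cleaned-up copy. Real clean-ups would go here."""
    return lbl_tb.copy()

# binding post_tfm_kwargs up front gives back a callable that, like the
# transform used before, needs only one argument: the labeled timebins
post_tfm_kwargs = {"min_segment_dur": 0.01, "majority_vote": True}
post_tfm = partial(postprocess, **post_tfm_kwargs)

lbl_tb = np.array([0, 0, 1, 1, 1, 0, 2, 2, 0])
lbl_tb_cleaned = post_tfm(lbl_tb)  # single-argument call, as before
```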
I got this to a point where it's working. There's an issue, though. We do the post-processing and then return the labels all in a single call to a single transform class; this means we don't have access to the transformed labeled timebins vector. That means we can't compute accuracy after applying the transformations. In other words our approach isn't very functional, and it conflates two things: applying the post-processing to the labeled timebins, and converting labeled timebins to either labels (alone) or segments (with labels, onsets, and offsets). We really only want the post-processing step here. So in other words, the inside of the eval step should look something like:

```python
lbl_tb = self.network(x)
if self.post_tfm:
    lbl_tb = self.post_tfm(lbl_tb)
y_pred_labels = transforms.labeled_timebins.to_labels(lbl_tb.cpu().numpy())
```

Will need to refactor to achieve this.
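As a self-contained sketch of what decoupling buys us (the helper names here are illustrative, not vak's exact API): once post-processing is its own step, the same cleaned-up labeled-timebins vector can feed both frame-level accuracy and the label string used for edit distance.

```python
import numpy as np

def post_tfm(lbl_tb):
    """Stand-in for the post-processing / clean-up transform (no-op here)."""
    return lbl_tb

def to_labels(lbl_tb, labelmap_inv):
    """Collapse runs of identical timebins into one label each,
    skipping the 'unlabeled' class (0), and return a label string."""
    starts = np.concatenate(([0], np.flatnonzero(np.diff(lbl_tb)) + 1))
    return "".join(labelmap_inv[int(lbl_tb[i])] for i in starts if lbl_tb[i] != 0)

labelmap_inv = {0: "", 1: "a", 2: "b"}
y_true = np.array([0, 1, 1, 0, 2, 2, 0])
y_pred = np.array([0, 1, 1, 0, 1, 2, 0])

# step 1: apply clean-ups, keeping the transformed vector around
y_pred_clean = post_tfm(y_pred)

# step 2a: frame-level accuracy, computed on the timebin vectors
acc = (y_pred_clean == y_true).mean()

# step 2b: label strings for the edit-distance metric
pred_labels = to_labels(y_pred_clean, labelmap_inv)  # "aab"
true_labels = to_labels(y_true, labelmap_inv)        # "ab"
```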
- Add post_tfm_kwargs to config/eval.py
- Add post_tfm_kwargs attribute to LearncurveConfig
- Add 'post_tfm_kwargs' option to config/valid.toml
- Add post_tfm_kwargs to LEARNCURVE section of vak/config/valid.toml
- Add use of post_tfm in eval in engine.Model
- Add post_tfm_kwargs to core.eval and use with model
- Add logic in core/eval.py to use post_tfm_kwargs to make post_tfm
- Use multi_char_labels_to_single_char in core.eval, not in transforms, to make sure edit distance is computed correctly
- Add post_tfm parameter to vak.models.from_model_config_map
  - Add parameter and put in docstring
  - Pass argument into Model.from_config
- Add post_tfm_kwargs to TeenyTweetyNet.from_config
- Add post_tfm_kwargs to unit test in test_core/test_eval.py
- Pass post_tfm_kwargs into core.eval in cli/eval.py
- Add parameter post_tfm_kwargs to vak.core.learncurve function, pass into calls to core.eval
- Pass post_tfm_kwargs into core.learncurve inside cli.learncurve
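For reference, a sketch of how the post_tfm_kwargs option could flow from the config into core.eval; the attribute layout and factory function below are assumptions for illustration, not necessarily the final implementation:

```python
import attr

@attr.s
class EvalConfig:
    """Other eval options elided; post_tfm_kwargs is the new, optional one."""
    post_tfm_kwargs = attr.ib(
        validator=attr.validators.optional(attr.validators.instance_of(dict)),
        default=None,
    )

def make_post_tfm(labelmap, **post_tfm_kwargs):
    """Hypothetical factory standing in for building the real post-processing transform."""
    def post_tfm(lbl_tb):
        return lbl_tb  # clean-ups parameterized by post_tfm_kwargs would go here
    return post_tfm

# inside core.eval: only build a post_tfm when the option is set,
# so metrics can be computed either with or without clean-ups
cfg = EvalConfig(post_tfm_kwargs={"majority_vote": True})
post_tfm = (make_post_tfm(labelmap={"a": 1}, **cfg.post_tfm_kwargs)
            if cfg.post_tfm_kwargs is not None
            else None)
```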
As in the paper. Should add the same options as in the predict config, and then adapt from this script:
https://github.com/yardencsGitHub/tweetynet/blob/master/article/src/scripts/run_eval_with_and_without_output_transforms.py
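The pattern in that script is just: run eval twice, once without and once with the clean-up transforms, and collect both sets of metrics. A minimal sketch of that loop, where run_eval is a hypothetical stand-in for the call into vak's eval (and the clean-up kwargs are assumed names):

```python
def run_eval(post_tfm_kwargs=None):
    """Stand-in that would call into vak's eval and return a dict of metrics."""
    return {"acc": 0.0, "segment_error_rate": 0.0}  # placeholder values

results = {}
for name, post_tfm_kwargs in [
    ("without_cleanup", None),
    ("with_cleanup", {"majority_vote": True, "min_segment_dur": 0.01}),
]:
    results[name] = run_eval(post_tfm_kwargs=post_tfm_kwargs)

print(results)
```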