
Warnings on bucket analysis in integration tests #410

Open
tetsuok opened this issue Aug 26, 2022 · 5 comments

@tetsuok
Contributor

tetsuok commented Aug 26, 2022

Currently, running the integration tests produces 95 warnings about "bucket analysis". Below is a sample of the output from cli_test.CLITest:

test_codegen_custom (integration_tests.cli_test.CLITest) ... WARNING:explainaboard:bucket analysis: feature src_num_oov not found, skipping
WARNING:explainaboard:bucket analysis: feature src_fre_rank not found, skipping
WARNING:explainaboard:bucket analysis: feature ref_num_oov not found, skipping
WARNING:explainaboard:bucket analysis: feature ref_fre_rank not found, skipping
WARNING:explainaboard:bucket analysis: feature tok_train_freq not found, skipping

If these warnings are neither actionable nor critical, they should be turned off completely. In general, a warning needs to be actionable, telling users what action to take; otherwise, users will simply ignore it. In addition, leaving non-critical, non-actionable warnings in place makes the logs of CI jobs longer and messier, which increases the cognitive load required to pick out the relevant information.

@neubig
Contributor

neubig commented Aug 26, 2022

Thanks! This is because these are training-set dependent features, but no training set is available.

I don't think this warning should be disabled entirely (because it can also be indicative of a bug or misconfiguration), but we can add a check specifically for training-set dependent features and disable the warning in that case.
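
A minimal sketch of such a check, assuming hypothetical names (TRAINING_SET_FEATURES, has_training_set); the actual feature metadata would come from explainaboard itself:

# Hypothetical names: the real feature bookkeeping lives inside explainaboard,
# so TRAINING_SET_FEATURES and has_training_set are illustrative only.
TRAINING_SET_FEATURES = {
    "src_num_oov", "src_fre_rank", "ref_num_oov", "ref_fre_rank", "tok_train_freq",
}

def should_warn_missing_feature(feature_name: str, has_training_set: bool) -> bool:
    # Skip silently when the feature is training-set dependent and no training
    # set was supplied; keep warning for every other missing feature.
    return not (feature_name in TRAINING_SET_FEATURES and not has_training_set)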

@tetsuok
Contributor Author

tetsuok commented Aug 26, 2022

Oh, I'm suggesting disabling warnings in tests, not at the module level. If these warnings can be fixed, I'd love to fix them.
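
For the test-level route, one option is to raise the explainaboard logger's threshold inside the integration tests; the logger name is taken from the prefix in the output above, and putting it in setUpClass is just one possible place:

import logging
import unittest

class CLITest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Hide the non-actionable "bucket analysis" warnings during tests;
        # ERROR and above from explainaboard are still printed.
        logging.getLogger("explainaboard").setLevel(logging.ERROR)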

@odashi
Contributor

odashi commented Aug 27, 2022

I think this behavior should be treated as an error, and it would be better to have an explicit flag to control whether to rely on training-only features or not. In most cases warnings are not useful, since they just postpone the exposure of potential bugs.

These features should have a require_training_set flag that can be used to check this behavior:

require_training_set: bool = False
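
A sketch of how the flag could be attached to a feature definition and enforced up front; the Feature dataclass and check_features helper here are illustrative, not the actual explainaboard API:

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    require_training_set: bool = False

def check_features(features: list[Feature], has_training_set: bool) -> None:
    # Fail fast instead of warning: analyses that depend on a training set are
    # rejected explicitly when no training set was provided.
    missing = [f.name for f in features if f.require_training_set and not has_training_set]
    if missing:
        raise ValueError(f"features {missing} require a training set, but none was provided")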

@odashi
Contributor

odashi commented Aug 29, 2022

Recently I ran into a runtime error around this: a test raised errors when I fixed a typing issue (the skipped analysis returns None, but None was not handled appropriately by the caller). I think raising an error instead of a warning also makes the code flow simpler, because we can then guarantee that the analysis always returns results.
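
To illustrate the simplification on the caller side (the run_analysis_* functions and result type below are stand-ins, not real explainaboard code): with warn-and-skip the result is optional and every caller needs a None check, while raising keeps the return type non-optional.

from typing import Optional

class BucketAnalysisResult:  # stand-in for the real result type
    pass

def run_analysis_optional(found: bool) -> Optional[BucketAnalysisResult]:
    # Warn-and-skip style: a missing feature yields None.
    return BucketAnalysisResult() if found else None

def run_analysis_strict(found: bool) -> BucketAnalysisResult:
    # Raise style: a missing feature is an error, so the return is never None.
    if not found:
        raise ValueError("feature not found")
    return BucketAnalysisResult()

# Caller with the Optional version: needs a None check at every call site.
result = run_analysis_optional(found=True)
if result is not None:
    print("got result")

# Caller with the strict version: no None handling needed.
print(run_analysis_strict(found=True))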

@odashi
Contributor

odashi commented Aug 30, 2022

Created #418 to provide better control, but it still intentionally produces warnings.
