I'd like to discuss potential strategies for marking specific test failures as known or expected. This arises from some dmesg test failures observed on grunt and zork Chromebooks, where the `emerg` test case is failing due to `No irq handler for vector` errors being reported in the logs (as expected) - see e.g.: https://lava.collabora.dev/scheduler/job/13063812#results_468058486. This issue is known and harmless on these boards. I'm wondering if it would make sense to have a way to encode this information so that users can easily know about it and act accordingly (e.g. focus on other types of errors or filter out specific test failures if needed).
In the specific case of the dmesg test mentioned, I don't see a sane way to report the exact error log lines from the LAVA test case itself. So I guess this may require users to manually tag failures or provide additional information after they are reported.
I suspect there could be other instances where tests could benefit from manually added debug information (e.g. flaky tests); this issue is to discuss whether any action should be taken to address these scenarios and to explore potential solutions.
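To make the idea concrete, one possible shape for such annotations is a small table of known-failure entries keyed by device type, test case, and a log pattern, which tooling could consult before flagging a failure as actionable. This is just a sketch under assumed names - `KNOWN_FAILURES`, `classify_failure`, and the entry fields are all hypothetical, not part of any existing tool:

```python
import re

# Hypothetical known-failure annotations: each entry marks a (device_type,
# test_case) pair whose failure is expected when the log matches the pattern.
KNOWN_FAILURES = [
    {
        "device_types": {"grunt", "zork"},
        "test_case": "emerg",
        "log_pattern": re.compile(r"No irq handler for vector"),
        "reason": "Known and harmless on these boards",
    },
]

def classify_failure(device_type, test_case, log_excerpt):
    """Return the matching known-failure entry, or None if the failure
    is unannotated and should be investigated."""
    for entry in KNOWN_FAILURES:
        if (device_type in entry["device_types"]
                and test_case == entry["test_case"]
                and entry["log_pattern"].search(log_excerpt)):
            return entry
    return None
```

A UI or report generator could then render the `reason` next to the failing test, or offer a filter to hide annotated failures. Whether the annotations live in a file like this, in a database, or as manual tags applied after the fact is exactly the open question.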