Massive long running tests on Linux #56477
Comments
Tagging subscribers to this area: @dotnet/ncl

Issue Details

Failures 7/8-7/27 (incl. PRs):
Perhaps a regression introduced around 7/16? Higher frequency: ~1/day
Prior to 7/8 we do not have console outputs, but the frequency seems to be significantly lower - <1 per week across all OS versions
Data from 3/29-7/8:
Triage: We need to look into this for 6.0. Ideally try to repro locally. If not, we can force a crash via timeouts on the tests that hang with the highest frequency ...
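The "force a crash via timeouts" idea above can be sketched as a watchdog that dumps all thread stacks and then aborts the process when a test overruns its budget, so the CI machine leaves a core dump behind instead of a silent hang. The sketch below is illustrative only (the actual tests are .NET, and `run_with_watchdog` is a hypothetical helper, not part of the dotnet/runtime test harness):

```python
import faulthandler
import os
import threading

def run_with_watchdog(test_fn, timeout_s):
    """Run test_fn; if it hangs past timeout_s, dump every thread's
    stack to stderr and abort so the OS writes a core dump.

    Hypothetical sketch of the technique discussed in this issue,
    not the real dotnet/runtime infrastructure.
    """
    done = threading.Event()

    def watchdog():
        # wait() returns False only if the timeout elapsed first.
        if not done.wait(timeout_s):
            # Print all thread stacks so the console log shows
            # where the test was stuck...
            faulthandler.dump_traceback(all_threads=True)
            # ...then crash hard (SIGABRT) to produce a core dump.
            os.abort()

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return test_fn()
    finally:
        done.set()  # test finished in time; disarm the watchdog
```

A fast test passes through unchanged, e.g. `run_with_watchdog(lambda: 42, 5.0)` returns 42; only a genuinely hung test triggers the abort.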
It seems it was not fully addressed yet :( ... we've got 2 hits after the fix went in - reopening
Are the recent hits on PRs or main? Maybe main with the fix wasn't merged into the PR branches yet.
@aik-jahoda the last 3 failures in the top post are already with the change present (I added the attempted fix into the timeline to make it easier to reason about it).
Weird, no occurrences in the last week (as of 8/26), closing again. We can reopen if it happens again with higher frequency.
Another hit in main (7.0), so it is still around, but with lower frequency. I will reopen once it has a few occurrences.
Links? If the same tests are hanging, we can perhaps add some instrumentation to get a core dump.
Failures 7/8-9/6 (incl. PRs):
(also in release/6.0 and 6.0-rc1 branches)
Addressed on 8/11 in PR #56966