Let's talk about the CI situation #1614
Comments
IMO, the correct response to that would be automation. Our previous attempt was nodejs/node#43954, which I don't think has helped much, unfortunately. Maybe we should reconsider nodejs/node#40817: we could have a bot that checks the Jenkins CI result when a
It should solve the "no one bothers to check the failures" and "no one reports the flaky tests" problems – it might not help with "no one is investigating the flakes" though.
Another thing that would help would be to mark as flaky, on the Current release line, all tests that have a corresponding open issue. That would greatly help with getting release proposals ready, and also help external teams/projects that run our tests. That way,
In the past I tried to automate marking tests as flaky (nodejs/node-core-utils#746) but closed it for lack of time, if someone wants to give it a shot.
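For context on what such automation would produce: tests in Node.js core are marked flaky by adding an entry to the relevant `.status` file under `test/`. A sketch of what an entry looks like (the section header and test name here are illustrative, not taken from the actual file):

```
# test/parallel/parallel.status (illustrative excerpt)
prefix parallel

[$system==win32]
# Tracking issue for the flake goes in a comment above the entry.
test-some-flaky-thing: PASS,FLAKY
```

A bot could open a PR adding such an entry, with the tracking issue linked in the comment, whenever a test fails repeatedly across unrelated PRs.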
I think we should also discuss what happens after a test is marked flaky – is it just going to rot in the status file? I think that has effectively been happening, and it could actually be contributing to the flakes. For example, some of the flakes might be caused by the same V8 task runner bug (e.g. nodejs/node#47297): some of those tests have been marked flaky, but if the bug never gets dealt with, it could just continue to flake other tests under certain circumstances.
Is it reasonable to say all the current problems stem from the fact that there are fewer folks looking at the CI? Or is there a new pattern? (IIRC, our CI was very healthy when there were many folks tending it.)
@juristr this is probably a much larger discussion with the Node team, but it seems like Node could benefit from the flaky target detection that Nx has, no? I'm not too familiar with Node's CI setup right now, and I would assume that Nx Cloud doesn't have all of the OS targets Node would need, but maybe there's a discussion to be had here?
If you look into the jenkins-alerts repository, we have a warning/incident almost every day. Most of the time it is related to host machine issues. https://github.com/nodejs/jenkins-alerts/issues
Every issue in that repository will be a host machine issue, as the whole point of the repository is to monitor for machine-related issues -- it only looks to see if a machine is either running out of disk space or offline in Jenkins.
I've had PRs that have required 10-16 CI runs to get even a flaky success. Now the macOS runner has been jammed up for days with no clear indication how to fix it or who can fix it.
I think @aduh95's concrete suggestion of the bot which:

Makes a lot of sense to me. I'm +1 on that and I think it would help get tests marked as flaky. In terms of actually investigating/fixing issues, I really wish we could find some way to get people to volunteer/focus on that, but I don't have any new ideas on that front. If we had a number of people who would commit to spending X amount of time each week/month on doing that, either separately or together, then I would join that effort; but in the absence of some critical mass of people committing to contribute, I think any one individual sees an insurmountable task.
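To make the bot idea concrete, the core triage rule could be quite simple. This is a hypothetical sketch, not an existing Node.js tool: the function name, inputs, and the "failed on an unrelated PR means flaky" heuristic are all assumptions for illustration.

```javascript
'use strict';

// Given the failing tests on one PR's CI run and the set of tests that
// recently failed on *other* PRs, split them into two buckets:
// - likelyFlaky: also failing elsewhere, so probably not this PR's fault
//   (candidates for a PASS,FLAKY entry + tracking issue)
// - needsInvestigation: only failing here, so possibly introduced by the PR
function triageFailures(prFailures, recentFailuresAcrossPRs) {
  const seenElsewhere = new Set(recentFailuresAcrossPRs);
  const likelyFlaky = [];
  const needsInvestigation = [];
  for (const test of prFailures) {
    (seenElsewhere.has(test) ? likelyFlaky : needsInvestigation).push(test);
  }
  return { likelyFlaky, needsInvestigation };
}
```

A bot wrapping this could comment on the PR with the `needsInvestigation` list and open (or link) tracking issues for the `likelyFlaky` ones, which is roughly what the nodejs/reliability data already makes possible.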
Hey, Tom from @codecov here. I know this may come off as me being a shill (I am), but as Node already uses Codecov, I thought I'd mention that we are building a flaky test product to help identify and highlight flakes. This is still a pretty new feature, but I'd love to see it actually be useful for a large codebase. Here is a screenshot from the blog post that shows what a flaky test looks like. Here is a link to the source of the screenshot on GitHub.
Finding/highlighting flakes is not the problem (which is not to say more could be done). We are already detecting test failures that happen across multiple PRs in https://github.com/nodejs/reliability. BuildPulse was added as an experiment for Windows builds in nodejs/build#3653 and I have no idea if anyone has been looking at the results.

This is fundamentally a people problem -- we need to somehow motivate people to look at the flakes and decide whether the tests can be fixed or should be removed. Keeping long-term flaky tests in Node.js is just building up problems for later. And/or we need to be more proactive in detecting when a PR introduces a flake into the system.

As a warning, we also need to not become too reliant on the Resume Build feature of the Multijob plugin in Jenkins, as that plugin is deprecated -- if a future Jenkins update ever breaks that plugin, we'd need to migrate (most likely to Jenkins Pipelines) and we'd lose the Resume Build feature, since it's part of that plugin (I don't know if Pipelines has an equivalent).
Got it, thanks for the info @richardlau! I suppose then that our flaky test product is not useful right now for this case.
There is a common issue (deadlock during process shutdown) behind many (all?) timeout failures in our CI. I've opened a specific issue for this: nodejs/node#54918.
Hi everyone, while working on some flaky tests, I implemented a very basic tool (https://github.com/pmarchini/giogo) that leverages cgroups to limit resources (memory, CPU, IO). Using this approach, I was able to reproduce the flakiness of some of the tests on my local machine. I hope this could be helpful to someone else.
Instead of technical solutions, there's a simple social one: pay someone to investigate and fix flaky tests. It's soul-crushing work; no one is going to do that for fun. The foundation is still swimming in money, right? Might as well put it to better use than marketing and lawyers.
FYI That is being planned. The statement of work is being drafted in #1629. |
I just saw that, nice! The fact that it's a paid position got kind of buried, but it's there.
Recently, and especially since nodejs/build#3887, dealing with the CI when preparing a release or when trying to land a PR has become very frustrating. I tried to list all the pain points; let me know if I forgot something:
…but it also makes the situation more frustrating for contributors, and also for other projects that build their own `node` and expect tests not marked as flaky to actually not be flaky. I think we need to discuss:
I'm opening this in the TSC repo, because it's kind of a meta discussion that I'm guessing is not going to be of much interest to folks following nodejs/node, but of course anyone is welcome to participate in the discussion.