Let's talk about the CI situation #1614

Open
aduh95 opened this issue Sep 8, 2024 · 18 comments
@aduh95
Contributor

aduh95 commented Sep 8, 2024

Recently, especially with nodejs/build#3887, dealing with the CI when preparing a release or when trying to land a PR has become very frustrating. I tried to list all the pain points; let me know if I forgot something:

  • The CI is so flaky that it systematically takes several attempts to get a passing run.
  • Most folks do not bother to check the failures.
  • Even fewer folks are reporting flaky tests (I plead guilty).
  • Even fewer folks are investigating those flakes (and I get it, the very nature of flaky tests makes them very hard to reproduce, let alone debug).
  • Opening a PR to mark test(s) as flaky often receives objections that the flakiness should be solved instead of ignored (which I totally get: those flakes are most of the time due to a bug in Node; but it also makes contributing more frustrating, and it also affects other projects that build their own Node and expect tests not marked as flaky to not be flaky).

I think we need to discuss:

  1. Is there a way to better detect flaky tests before landing them?
  2. How can we handle the immediate situation?

I'm opening this in the TSC repo, because it's kind of a meta discussion that I'm guessing is not going to be of much interest to folks following nodejs/node, but of course anyone is welcome to participate in the discussion.

@aduh95
Contributor Author

aduh95 commented Sep 8, 2024

  • Is there a way to better detect flaky tests before landing them?

IMO, the correct response to that would be automation. Our previous attempt was nodejs/node#43954, which I don't think has helped much, unfortunately. Maybe we should reconsider nodejs/node#40817: we could have a bot that checks the Jenkins CI result when a resume-ci label is added to a PR:

  1. First it checks whether any of the failing tests is one being changed in this PR. If so, it adds a comment on the PR explaining why it won't resume CI.
  2. Then it checks whether all the failing tests have a corresponding issue. If not, it adds a comment on the PR explaining why it won't resume CI.
  3. Otherwise, it resumes the CI.

It should solve the "no one bothers to check the failures" and "no one reports the flaky tests" problems – it might not help with "no one is investigating the flakes", though.
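
A minimal sketch of what that decision step could look like, with the Jenkins/GitHub interactions passed in as plain functions (none of these names are existing node-core-utils or ncu-ci APIs; the shapes are just illustrative):

```js
// Hypothetical sketch of the resume-ci decision described above; the inputs
// (failingTests, changedFiles, findOpenFlakeIssue) are assumed shapes, not
// real node-core-utils APIs.
async function decideResume({ failingTests, changedFiles, findOpenFlakeIssue }) {
  // 1. Never auto-resume if a failing test is one the PR itself modifies.
  const touched = failingTests.filter(({ file }) => changedFiles.includes(file));
  if (touched.length > 0) {
    return {
      resume: false,
      reason: `failing test(s) modified by this PR: ${touched.map(({ name }) => name).join(', ')}`,
    };
  }

  // 2. Require every failing test to have a corresponding open flake issue.
  const unreported = [];
  for (const { name } of failingTests) {
    if (!(await findOpenFlakeIssue(name))) unreported.push(name);
  }
  if (unreported.length > 0) {
    return { resume: false, reason: `no open flake issue for: ${unreported.join(', ')}` };
  }

  // 3. Otherwise, the bot can resume the Jenkins build.
  return { resume: true };
}
```

In the first two cases the bot would post `reason` as a PR comment instead of resuming.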

@aduh95
Contributor Author

aduh95 commented Sep 8, 2024

Another thing that would help would be to mark as flaky, on the Current release line, all tests that have a corresponding open issue. That would greatly help with getting release proposals ready, and also help external teams/projects that run our tests. That way, main can still have a high bar for when a test can be marked as flaky, while the release lines (on which we don't do development) can be more forgiving.
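
For context, marking a test as flaky is just an entry in the relevant `.status` file on that branch; a minimal illustration (the test name and section are made up, but the format follows the existing `test/parallel/parallel.status` conventions):

```text
# test/parallel/parallel.status on the release staging branch
prefix parallel

[$system==linux]
# Flaky on the release line; see the corresponding open issue on nodejs/node.
test-some-flaky-case: PASS,FLAKY
```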

@marco-ippolito
Member

In the past I tried to automate marking tests as flaky in nodejs/node-core-utils#746, but I closed it for lack of time; someone is welcome to give it a shot.
The idea is that if a test has failed on x different PRs, the tool creates a PR to mark it as flaky and opens a tracking issue.
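
A rough sketch of that threshold check (the data shape, test names, and threshold below are made up; in practice the failure counts would come from something like the nodejs/reliability reports):

```js
// Hypothetical sketch: pick tests to auto-mark as flaky based on how many
// distinct PRs they have failed on. `failuresByTest` maps a test name to the
// set of PR URLs where it failed; the threshold of 3 is arbitrary.
function testsToMarkFlaky(failuresByTest, threshold = 3) {
  const candidates = [];
  for (const [testName, prUrls] of failuresByTest) {
    if (prUrls.size >= threshold) candidates.push(testName);
  }
  // For each candidate, the tool would open a tracking issue and a PR that
  // adds a PASS,FLAKY entry to the relevant .status file.
  return candidates;
}

// Example input shape:
const failuresByTest = new Map([
  ['test-example-a', new Set(['pr/101', 'pr/202', 'pr/303'])],
  ['test-example-b', new Set(['pr/404'])],
]);
console.log(testsToMarkFlaky(failuresByTest)); // ['test-example-a']
```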

@joyeecheung
Member

I think we should also discuss what happens after a test is marked flaky - is it just going to rot in the status file? I think that has been effectively happening, and it could actually be contributing to the flakes. For example, some of the flakes might be caused by the same V8 task runner bug (e.g. nodejs/node#47297); some of those tests have been marked flaky, but if the bug never gets dealt with, it can just continue to make other tests flake under certain circumstances.

@gireeshpunathil
Member

Is it reasonable to say that all the current problems stem from the fact that there are fewer folks looking at the CI? Or is there a new pattern? (IIRC, our CI was very healthy when there were many folks tending it.)

@yharaskrik

@juristr this is probably a much larger discussion with the Node team, but it seems like Node could benefit from the flaky target detection that Nx has, no? I'm not too familiar with Node's CI setup right now, and I would assume that Nx Cloud doesn't have all of the OS targets Node would need, but maybe there's a discussion to be had here?

@anonrig
Member

anonrig commented Sep 9, 2024

If you look into the jenkins-alerts repository, we have a warning/incident almost every day. Most of the time, it is related to host machine issues: https://github.com/nodejs/jenkins-alerts/issues

@richardlau
Member

If you look into the jenkins-alerts repository, we have a warning/incident almost every day. Most of the time, it is related to host machine issues: https://github.com/nodejs/jenkins-alerts/issues

Every issue in that repository will be a host machine issue, as the whole point of the repository is to monitor for machine-related issues -- it only checks whether a machine is running out of disk space or is offline in Jenkins.

@jasnell
Member

jasnell commented Sep 9, 2024

The CI is so flaky that it systematically takes several attempts to get a passing run.

I've had PRs that have required 10-16 CI runs to get even a flaky success. Now the macOS runner has been jammed up for days with no clear indication of how to fix it or who can fix it.

@mhdawson
Member

mhdawson commented Sep 9, 2024

I think @aduh95's concrete suggestion of the bot which:

  1. First it checks whether any of the failing tests is one being changed in this PR. If so, it adds a comment on the PR explaining why it won't resume CI.
  2. Then it checks whether all the failing tests have a corresponding issue. If not, it adds a comment on the PR explaining why it won't resume CI.
  3. Otherwise, it resumes the CI.

Makes a lot of sense to me. I'm +1 on that and I think it would help get tests marked as flaky.

In terms of actually investigating/fixing issues, I really wish we could find some way to get people to volunteer/focus on that, but I don't have any new ideas on that front. If we had a number of people who would commit to spending X amount of time each week/month on doing that, either separately or together, then I would join that effort; but in the absence of some critical mass of people committing to contribute, I think any one individual sees an insurmountable task.

@thomasrockhu-codecov

Hey, Tom from @codecov here.

I know this may come off as me being a shill (I am), but as Node already uses Codecov, I thought I'd mention that we are building a flaky test product to help identify and highlight flakes. This is still a pretty new feature, but I'd love to see it actually be useful for a large codebase.

Here is a screenshot from the blog post that shows what a flaky test looks like:

[screenshot of the Codecov flaky test report]

Here is a link to the source of the screenshot on GitHub.

@richardlau
Member

richardlau commented Sep 10, 2024

Finding/highlighting flakes is not the problem (which is not to say more couldn't be done). We are already detecting test failures that happen across multiple PRs' CIs in https://github.com/nodejs/reliability. BuildPulse was added as an experiment for Windows builds in nodejs/build#3653, and I have no idea if anyone has been looking at the results.

This is fundamentally a people problem -- we need to somehow motivate people to look at the flakes and decide whether the tests can be fixed or should be removed. Keeping long-term flaky tests in Node.js is just building up problems for later.

And/or be more proactive in detecting when a PR introduces a flake into the system.

As a warning, we also need to not become too reliant on the Resume Build feature of the Multijob plugin in Jenkins, as that plugin is deprecated -- if a future Jenkins update ever breaks that plugin, we'd need to migrate (most likely to Jenkins Pipelines) and we'd lose the Resume Build feature, since it's part of that plugin (I don't know if Pipelines has an equivalent).

@thomasrockhu-codecov

Got it, thanks for the info @richardlau! I suppose then that our flaky test product is not useful right now for this case.

@lpinca
Member

lpinca commented Sep 13, 2024

There is a common issue (deadlock during process shutdown) behind many (all?) timeout failures in our CI. I've opened a specific issue for this: nodejs/node#54918.

@pmarchini
Member

Hi everyone, while working on some flaky tests, I implemented this very basic tool (https://github.com/pmarchini/giogo) that leverages cgroups to limit resources (memory, CPU, IO).

Using this approach, I was able to reproduce the flakiness of some of the tests on my local machine.
Note: in my case, I was working on test runner tests, and limiting the CPU allowed me to reproduce the issue most of the time.

I hope this could be helpful to someone else.
If you have any ideas or want to contribute, feel free to open a PR.
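
For Linux users who want to try the same idea without an extra tool, roughly the same effect can be had by running a test inside a CPU-limited transient cgroup via systemd-run; a minimal sketch (the quota and test path are placeholders, and this is not giogo's actual CLI):

```js
// Hypothetical sketch (Linux + systemd only): run one test repeatedly under a
// CPU-throttled transient cgroup to try to surface CPU-starvation flakiness.
'use strict';
const { spawnSync } = require('node:child_process');

function runTestWithCpuQuota(testFile, quota = '25%') {
  const { status } = spawnSync('systemd-run', [
    '--user', '--scope', '--quiet',
    `--property=CPUQuota=${quota}`, // throttle the whole process tree
    process.execPath, testFile,
  ], { stdio: 'inherit' });
  return status === 0;
}

for (let i = 1; i <= 50; i++) {
  if (!runTestWithCpuQuota('test/parallel/test-example.js')) {
    console.log(`Failed on iteration ${i}`);
    break;
  }
}
```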

@bnoordhuis
Member

Instead of technical solutions, there's a simple social one: pay someone to investigate and fix flaky tests. It's soul-crushing work; no one is going to do that for fun.

The foundation is still swimming in money, right? Might as well put it to better use than marketing and lawyers.

@richardlau
Member

Instead of technical solutions, there's a simple social one: pay someone to investigate and fix flaky tests. It's soul-crushing work; no one is going to do that for fun.

FYI, that is being planned. The statement of work is being drafted in #1629.

@bnoordhuis
Member

I just saw that, nice! The fact that it's a paid position got kind of buried, but it's there.

gireeshpunathil added a commit that referenced this issue Oct 30, 2024
RafaelGSS added a commit that referenced this issue Oct 30, 2024
doc: add 2024-10-30 meeting notes