Vitest hangs tests, close timed out after 1000ms #2008

Closed
6 tasks done
InfraK opened this issue Sep 8, 2022 · 104 comments · Fixed by #3418 or #5047
Labels
p4-important Violate documented behavior or significantly improves performance (priority)

Comments

@InfraK

InfraK commented Sep 8, 2022

Describe the bug

We started encountering hanging tests with the message close timed out after 1000ms. A Google search led to this issue; we tried everything in there with no success, and for us it is still hit or miss whether we get the error or not.

We get the error on different machines and also in CI.

Reproduction

https://stackblitz.com/edit/vitejs-vite-brwl54?file=package.json

System Info

System:
    OS: macOS 12.4
    CPU: (10) arm64 Apple M1 Pro
    Memory: 133.73 MB / 16.00 GB
    Shell: 5.8.1 - /bin/zsh
  Binaries:
    Node: 16.14.0 - ~/.nvm/versions/node/v16.14.0/bin/node
    Yarn: 3.2.0 - ~/.nvm/versions/node/v16.14.0/bin/yarn
    npm: 8.9.0 - ~/.nvm/versions/node/v16.14.0/bin/npm
  Browsers:
    Chrome: 105.0.5195.102
    Firefox: 101.0.1
    Safari: 15.5

Used Package Manager

npm

Validations

@dillondotzip

dillondotzip commented Sep 9, 2022

I can confirm the same issue, but with pnpm and within a Windows GitHub Actions matrix.

@sheremet-va
Member

Not sure why it happens in this case, but you can increase the timeout with the test.teardownTimeout option for now.
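For reference, a minimal sketch of that config change (the 30-second value is only an example):

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // give the environment more time to tear down before the
    // "close timed out after ...ms" warning is emitted
    teardownTimeout: 30_000,
  },
})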

@InfraK
Author

InfraK commented Sep 13, 2022

@sheremet-va, the timeout makes no difference; we have increased it to very high values and still hit the same issue.

This is really slowing down our pipeline and our local workflows. Is there any way we can help troubleshoot or debug? We're not even sure where to start.

@sheremet-va
Member

This is really slowing down our pipeline and our local workflows. Is there any way we can help troubleshoot or debug? We're not even sure where to start.

It means you start some process and don't end it. Check your websocket/DB/Playwright connections.

@InfraK
Author

InfraK commented Sep 13, 2022

@sheremet-va

I'm not sure what process we could be opening inside that very minimal repo; we're just using standard libraries for React development. In our real scenario, we are only testing frontend code, without any websocket or DB connections.

I'm attaching the log from process._getActiveHandles(), which I included right before the timeout message; you can review it here. I can't make heads or tails of the log itself, but maybe you can.
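For context, a minimal sketch of how such a log can be produced (where to call it is an assumption; _getActiveHandles is an undocumented Node internal):

// dump-handles.ts (illustrative only): call it e.g. from an afterAll hook or right before shutdown
export function dumpActiveHandles(label = 'active handles') {
  // Undocumented Node internal: returns the sockets, timers, child processes, etc.
  // that are currently keeping the event loop alive.
  const handles: any[] = (process as any)._getActiveHandles()
  console.log(`[${label}]`, handles.map((h) => h?.constructor?.name))
}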

Any further guidance on where to look for issues will be greatly appreciated! Thank you for your time and the hard work you are doing here; I really, really like vitest.

@sheremet-va
Member

I'm not sure what process we could be opening inside that very minimal repo; we're just using standard libraries for React development

The repro is fixed if I increase the timeout there. You are saying it didn't work for your project.

I can't make heads or tails of the log itself, but maybe you can.

I'm also not sure what it means, but thanks for the way to check it -- I didn't know about this method 😄

@InfraK
Author

InfraK commented Sep 14, 2022

@sheremet-va Still seeing the issue; looking for further guidance if you have any.

What I've tried so far:

  • Saw this issue with Node itself and upgraded Node to v16.17; the issue still persists.
  • Tried to detect open handles; the wtfnode library gives this small report (it's the previous log I shared, but in a more human-readable form):
close timed out after 1000ms
^C[WTF Node?] open handles:
- File descriptors: (note: stdio always exists)
  - fd 1 (tty) (stdio)
  - fd 0 (tty)
  - fd 2 (tty) (stdio)
- Child processes
  - PID 6841
    - Entry point: node:internal/child_process:253
    - STDIO file descriptors: 22

It seems like there is in fact an open child process that didn't exit; however, I'm not sure what could be spawning it, since we are using a very small set of libraries. I'm trying to reproduce the same on StackBlitz, but no luck so far.

Process list (not sure if this helps or not)

25552 ttys003    0:00.01 sh ./test-forever.sh
25933 ttys003    0:00.26 npm run test
25934 ttys003    0:00.00 sh /var/folders/7v/bfghyq3x19108d32wxqzgc9h0000gn/T/test-479d11ca.sh
25935 ttys003    0:00.05 node /Users/santiagokent/.../node_modules/.bin/vitest run
25936 ttys003    2:22.86 node /Users/santiagokent/.../node_modules/vitest/dist/cli.mjs run
25938 ttys003    0:00.21 /Users/santiagokent/.../node_modules/esbuild-darwin-arm64/bin/esbuild --service=0.15.7 --ping
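For anyone who wants to generate a similar open-handle report, a minimal sketch using the wtfnode package (where to call it is an assumption):

// report-open-handles.ts (illustrative only): call it right before the process is expected to exit
import wtf from 'wtfnode'

export function reportOpenHandles() {
  // Prints open file descriptors, timers, sockets and child processes to the console.
  wtf.dump()
}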

@InfraK
Author

InfraK commented Sep 15, 2022

After experimenting a little more, we found that disabling threads significantly decreases the chances of getting this error. Not sure if this provides guidance as to what could be failing, but it is at least some mitigation for our local workflows and pipeline.
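For anyone who wants to try the same mitigation, a minimal sketch of the config change (this is the Vitest 0.x option; newer versions use pool instead, see later comments):

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // run tests in the main thread instead of worker threads (equivalent to --no-threads)
    threads: false,
  },
})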

We need to manually clean up after testing-library.
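A sketch of that cleanup, assuming @testing-library/react and a setup file registered via setupFiles:

// test-setup.ts
import { afterEach } from 'vitest'
import { cleanup } from '@testing-library/react'

afterEach(() => {
  // unmount everything rendered by testing-library between tests
  cleanup()
})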

@fengmk2

fengmk2 commented Sep 25, 2022

Same problem on GitHub Actions with Node.js >= 16: https://github.com/node-modules/urllib/actions/runs/3122054130/jobs/5063741291

@fengmk2

fengmk2 commented Sep 25, 2022

Adding --no-threads can temporarily fix it:
https://github.com/node-modules/urllib/pull/398/files#diff-7ae45ad102eab3b6d7e7896acd08c427a9b25b346470d7bc6507b6481575d519R53

LeSuisse pushed a commit to Enalean/tuleap that referenced this issue Oct 3, 2022
Part of request #27556: Migrate Vite-based packages to Vitest

project-sidebar-internal's JS unit tests get stuck from time to time.
It's been reproduced locally, but it does not always trigger. An issue
comment [0] on vitest's GitHub advises disabling threads to mitigate
the issue. We are going to try that.

[0]:
vitest-dev/vitest#2008 (comment)

Change-Id: I4ae7344745b4f4d8ac30b5a21113e461690d0d5e
@matsko

matsko commented Oct 12, 2022

Any movement on this? Our CI is blocked at the moment.

@sheremet-va
Member

Any movement on this? Our CI is blocked at the moment.

Zero movement. This is not a global problem; it's not reproducible for everyone. If you get this error even with a high teardownTimeout, it means you have a hanging process that needs to be resolved.

@JoshuaToenyes

We're experiencing the same issue. Setting threads: false in vitest.config.ts solved the issue, as @fengmk2 indicated.

All of our tests are simple unit tests (99% synchronous). There are no open DB/Playwright/websocket processes that could be causing the hang.

@jessekrubin

@JoshuaToenyes same here! No hanging process for me, and I get the same error.

@silverwind
Contributor

silverwind commented Nov 8, 2022

Seeing this "close timed out after 1000ms" message on Drone CI as well. The test run still succeeds, and all tests are very simple unit tests, nothing fancy, nothing that would keep the event loop alive, so I assume any open handles must be inside vitest itself.

Unfortunately it does not reproduce locally, even with the CI variable set.

@vidarc

vidarc commented Nov 15, 2022

Curious if others are also using c8 for coverage reporting? I just recently started getting this while upgrading to Node 18. I tried everything I could think of to track it down in my code, but just switching to istanbul for coverage seems to have fixed/hidden whatever the issue was. It didn't seem to matter which tests I ran or didn't run; I always got the timeout error and then had to manually stop the process.
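For reference, a sketch of that coverage-provider switch:

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      // use the istanbul provider (from @vitest/coverage-istanbul) instead of c8
      provider: 'istanbul',
    },
  },
})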

@JoshuaToenyes

Curious if others are also using c8 for coverage reporting? I just recently started getting this while upgrading to Node 18. I tried everything I could think of to track it down in my code, but just switching to istanbul for coverage seems to have fixed/hidden whatever the issue was. It didn't seem to matter which tests I ran or didn't run; I always got the timeout error and then had to manually stop the process.

@vidarc I don't think it's related to coverage. We use istanbul and still have the issue. Also, we see the timeout even if not running with coverage.

It could be related to the Node.js version, though. I haven't tried with Node.js < 18 or 19.

@silverwind
Contributor

For me, it was on Node 18 with no coverage enabled.

@isaac-scarrott

isaac-scarrott commented Nov 15, 2022

This is also not working correctly in our GitHub Actions workflow. Some of our tests fail, but the vitest command exits with code 0, so the step is reported as successful. Additionally, it doesn't output all logs. I was looking at the vitest code and noticed that if the teardownTimeout is exceeded, the process exits with code 0. I believe this is what is causing the false positives in our pipeline. See the code I am referring to here.
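Until that is fixed, one possible workaround is to fail the CI step yourself when the warning appears. A rough sketch (matching on the 'close timed out' string is an assumption based on the message quoted in this thread):

// run-tests.ts: wraps `vitest run` and fails if the teardown warning shows up
import { spawnSync } from 'node:child_process'

const result = spawnSync('npx', ['vitest', 'run'], { encoding: 'utf8' })
const output = `${result.stdout ?? ''}${result.stderr ?? ''}`
process.stdout.write(output)

// fail the step on a non-zero exit code or when the teardown warning was printed
if (result.status !== 0 || output.includes('close timed out')) {
  process.exit(1)
}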

@iliketomatoes

Curious if others are also using c8 for coverage reporting? I just recently started getting this while upgrading to Node 18. I tried everything I could think of to track it down in my code, but just switching to istanbul for coverage seems to have fixed/hidden whatever the issue was. It didn't seem to matter which tests I ran or didn't run; I always got the timeout error and then had to manually stop the process.

@vidarc I don't think it's related to coverage. We use istanbul and still have the issue. Also, we see the timeout even if not running with coverage.

It could be related to the Node.js version, though. I haven't tried with Node.js < 18 or 19.

@vidarc @JoshuaToenyes I'm on Node 16, and I started to have this problem when I added c8 for code coverage. When I switched to istanbul, the problem went away.

@silverwind
Contributor

silverwind commented Dec 5, 2022

I happen to have one case where it reproduces locally, in around 1 in 10 runs. When it happens, the node process goes to 100% CPU and hangs forever. I managed to hook up the inspector:

(inspector screenshot showing the hung process stuck in processReallyExit)

processReallyExit is from the signal-exit module, which vitest depends on multiple times:

vitest@0.25.3
├─┬ @antfu/install-pkg@0.1.1
│ └─┬ execa@5.1.1
│   └── signal-exit@3.0.7
├─┬ execa@6.1.0
│ └── signal-exit@3.0.7
└─┬ log-update@5.0.1
  └─┬ cli-cursor@4.0.0
    └─┬ restore-cursor@4.0.0
      └── signal-exit@3.0.7

reallyExit is an undocumented Node internal which is apparently being monkey-patched by signal-exit.
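Roughly, that patching amounts to something like the following (an illustrative sketch, not signal-exit's actual code):

// what "monkey-patching reallyExit" means, conceptually
const originalReallyExit = (process as any).reallyExit

;(process as any).reallyExit = function (code?: number) {
  // run the library's registered exit handlers first, then defer to the real internal
  originalReallyExit.call(process, code)
}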

@juriadams

Very good find, @silverwind!

Could you perhaps share how you managed to attach the inspector? I would love to help get to the bottom of this, as it's also blocking my CI.

@silverwind
Contributor

silverwind commented Dec 6, 2022

Just node --inspect node_modules/.bin/vitest with Chrome's chrome://inspect/#devices waiting in the background, see here. The problem with this method is that one needs to be quick opening the inspector window, because once the process stalls, it seems impossible to hook up the inspector.

BTW, the tests I reproduced this with do a bit of fancy stuff like starting/stopping an HTTP server, so I think it's definitely possible that in this case it's not a bug in vitest itself, but it still smells like one because the exact same tests run fine in jest.

@AlexandrHoroshih

AlexandrHoroshih commented Dec 28, 2022

We had a similar problem on our CI: Vitest tests completed successfully, but sometimes the process would not stop afterwards, and the whole job would hang until it was killed by the CI timeout.

I made a script like this:

// vitest_ci.mjs
import 'zx/globals';

import psList from 'ps-list';

// Watchdog: if the vitest run has not finished after 2 minutes,
// dump the process list (to see what is still alive) and bail out.
const watchdog = setTimeout(async () => {
  console.log(await psList());
  process.exit(1);
}, 2 * 60 * 1000);

try {
  await $`yarn vitest run`;
} catch (p) {
  // forward vitest's exit code if the run failed
  process.exit(p.exitCode);
}

clearTimeout(watchdog);

and used it instead of yarn vitest run.

I then ran the CI several times until I caught the hang. In the list of processes I saw esbuild 🙃

This looks like the problem from this esbuild issue, and it was fixed in esbuild 0.16.3 two weeks ago.

The latest version of Vitest uses esbuild@0.16.10; we've also updated our Vitest version and haven't caught a hang like this on CI in the last few days 👍

Update:
We still see this problem, but much less often than before the update.

@paynecodes

For me, this happens for any project that is not the first one in the vitest.workspace.ts file. However I move the projects around in the defineConfig array parameter, the first one works fine and all subsequent projects hang.

@sheremet-va sheremet-va moved this to P2 - 4 in Team Board Feb 12, 2024
@sheremet-va sheremet-va added the p4-important Violate documented behavior or significantly improves performance (priority) label Feb 12, 2024
@sheremet-va sheremet-va removed the status in Team Board Feb 15, 2024
@sheremet-va sheremet-va removed the bug label Feb 16, 2024
@Irev-Dev

Irev-Dev commented Feb 23, 2024

In our case, we had one component that was causing the hang. Since there isn't a very actionable message about what causes the hang, we instead tracked down which specific test suite caused it with the following bash script:

#!/bin/bash

find ./src -type f \( -name "*.test.ts" -o -name "*.test.tsx" \) | while read current_file; do
  echo "Running tests for $current_file"
  yarn vitest run --mode development "$current_file"
done

It will still hang, but you'll know which suite it is. Posting in case this is useful to anyone else; it's pretty easy to debug from there (or work around with mocks).

Obviously, that will only find the first test suite to hang if there are multiple.

@Ch4m4

Ch4m4 commented Mar 1, 2024

To remove the timeout after tests, add teardownTimeout: null to your config.

@KonradLinkowski

Does anybody know if it's possible to just forcefully kill the hanging workers when all tests pass?

@AriPerkkio
Member

if it's possible to just forcefully kill the hanging workers

There's node:worker_threads.Worker.terminate; this is what Vitest makes Tinypool call.

But as explained in the description of nodejs/undici#2026 and shown in its Logs & Screenshots section, Node.js doesn't terminate the worker in these cases. The only way to terminate the worker is to kill the whole main process from outside Node, and even that requires forcing:

ari ~/repro  $ pgrep node
24498

ari ~/repro  $ kill  24498
ari ~/repro  $ pgrep node
24498
# ^^ Ok this one is really stuck

ari ~/repro  $ kill  -9 24498
[1]+  Killed: 9               node undici.mjs

ari ~/repro  $ pgrep node
# no output, process killed

We are planning to make pool: 'forks' the default in Vitest v2, in #5047. That will close this issue.
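For anyone who wants to opt in before v2, a sketch of the equivalent config (the pool option exists since Vitest 1.0):

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // run tests in forked child processes instead of worker threads
    pool: 'forks',
  },
})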

@silverwind
Contributor

Should retest with Node v21.7.1 as per nodejs/undici#2026 (comment).

@Mnigos

Mnigos commented Mar 20, 2024

I have a pretty similar issue. I noticed that the run freezes on one specific test, but I see no difference between this test and the others, or any reason why this one in particular freezes.

history-tracks.repository.ts

export const relations: FindOptionsRelations<HistoryTrack> = {
  user: true,
}
export const order: FindOptionsOrder<HistoryTrack> = {
  playedAt: 'DESC',
}

@Injectable()
export class HistoryTracksRepository extends Repository<HistoryTrack> {
  constructor(
    private readonly dataSource: DataSource,
    private readonly tracksRepository: TracksRepository
  ) {
    super(HistoryTrack, dataSource.createEntityManager())
  }

  findHistoryTracksByUser(userId: string) {
    return this.find({
      where: {
        user: {
          id: userId,
        },
      },
      relations,
      order,
    })
  }

  findLastHistoryTrackByUser(userId: string) {
    return this.findOne({
      where: {
        user: {
          id: userId,
        },
      },
      relations,
      order,
    })
  }

  createHistoryTrack(newHistoryTrack: CreateHistoryTrack) {
    const historyTrack = this.create(newHistoryTrack)

    return this.save(historyTrack)
  }
}

history-tracks.repository.spec.ts

describe.skip('HistoryTracksRepository', () => {
  const userId = 'userId'

  let historyTracksRepository: HistoryTracksRepository

  let historyTrackMock: DeepMockProxy<HistoryTrack>
  let historyTracksMock: DeepMockProxy<HistoryTrack[]>

  beforeEach(async () => {
    const module = await Test.createTestingModule({
      providers: [
        {
          provide: DataSource,
          useValue: {
            createEntityManager: vi.fn(),
          },
        },
        {
          provide: TracksRepository,
          useValue: {
            findOrCreateTrackFromExternalId: vi.fn(),
          },
        },
        HistoryTracksRepository,
      ],
    }).compile()

    historyTracksRepository = module.get(HistoryTracksRepository)

    historyTrackMock = mockDeep<HistoryTrack>({
      track: trackEntityMock,
    })
    historyTracksMock = [historyTrackMock]
  })

  test('should be defined', () => {
    expect(historyTracksRepository).toBeDefined()
  })

  test('should find history tracks by user', async () => {
    const findSpy = vi
      .spyOn(historyTracksRepository, 'find')
      .mockResolvedValue(historyTracksMock)

    expect(
      await historyTracksRepository.findHistoryTracksByUser(userId)
    ).toEqual(historyTracksMock)

    expect(findSpy).toHaveBeenCalledWith({
      where: {
        user: {
          id: userId,
        },
      },
      relations,
      order,
    })
  })

  test('should find last history track by user', async () => {
    const findOneSpy = vi
      .spyOn(historyTracksRepository, 'findOne')
      .mockResolvedValue(historyTrackMock)

    expect(
      await historyTracksRepository.findLastHistoryTrackByUser(userId)
    ).toEqual(historyTrackMock)

    expect(findOneSpy).toHaveBeenCalledWith({
      where: {
        user: {
          id: userId,
        },
      },
      relations,
      order,
    })
  })
})

Every test except the 'should be defined' one takes about 26 seconds to complete...

@Mnigos

Mnigos commented Mar 20, 2024

So in my case the issue was caused by using mockDeep from https://github.com/eratio08/vitest-mock-extended

@sheremet-va sheremet-va moved this to Changes Requested in Team Board Mar 28, 2024
@sheremet-va sheremet-va moved this from Changes Requested to Has plan in Team Board Mar 28, 2024
thisconnect added a commit to thisconnect/bitbox-wallet-app that referenced this issue Apr 24, 2024
CI sometimes still hangs, trying to run without threads

- vitest-dev/vitest#2008
thisconnect added a commit to thisconnect/bitbox-wallet-app that referenced this issue Apr 24, 2024
CI sometimes still hangs, trying to run without threads

- vitest-dev/vitest#2008
thisconnect added a commit to thisconnect/bitbox-wallet-app that referenced this issue Apr 24, 2024
CI sometimes still hangs, trying to run without threads

- vitest-dev/vitest#2008
- vitest-dev/vitest#3077

In earlier vitest versions the option was --no-threads, but this
changed to --pool=forks in:
- https://github.com/vitest-dev/vitest/releases/tag/v1.0.0-beta.0
- https://github.com/vitest-dev/vitest/blob/main/docs/guide/migration.md#pools-are-standardized-4172

thisconnect added a commit to thisconnect/bitbox-wallet-app that referenced this issue Apr 29, 2024
CI sometimes still hangs, trying to run without threads

- vitest-dev/vitest#2008
- vitest-dev/vitest#3077

In earlier vitest versions the option was --no-threads, but this
changed to --pool=forks in:
- https://github.com/vitest-dev/vitest/releases/tag/v1.0.0-beta.0
- https://github.com/vitest-dev/vitest/blob/main/docs/guide/migration.md#pools-are-standardized-4172

Changed vitest config, so that CI and make webtest are both using
a single thread.
@capaj

capaj commented May 9, 2024

wow nicely done @sheremet-va

AlexErrant added a commit to AlexErrant/Pentive that referenced this issue May 13, 2024
@github-actions github-actions bot locked and limited conversation to collaborators May 24, 2024