
coverage:test:frontend failing with syntax error #6753

Closed
calebcartwright opened this issue Jul 10, 2021 · 13 comments · Fixed by #6778
Labels
developer-experience (Dev tooling, test framework, and CI) · keep-service-tests-green (Related to fixing failing tests of the services)

Comments

@calebcartwright
Member

I've been unable to identify a root cause, much less a resolution, but for some reason running nyc with our entrypoint tests is barfing on the await keyword and failing. This is problematic with the current structure of the daily service tests, which utilize npm scripts that include running this one:

$ npm run coverage:test:entrypoint

> shields.io@0.0.0 coverage:test:entrypoint
> nyc npm run test:entrypoint

/home/caleb/dev/shields/server.js: Unexpected reserved word 'await'. (61:0)

  59 | export const server = new Server(config)
  60 |
> 61 | await server.start()
     | ^
  62 |

compared to just running the test script:

$ npm run test:entrypoint

> shields.io@0.0.0 test:entrypoint
> cross-env NODE_CONFIG_ENV=test mocha entrypoint.spec.js



Configuration:
{
  bind: { address: 'localhost', port: 1111 },
  metrics: {
    prometheus: { enabled: false, endpointEnabled: false },
    influx: { enabled: false, timeoutMilliseconds: 1000, intervalSeconds: 15 }
  },
  ssl: { isSecure: false },
  cors: { allowedOrigin: [] },
  services: {
    github: {
      baseUri: 'https://api.github.com/',
      debug: { enabled: false, intervalSeconds: 200 }
    },
    trace: false
  },
  cacheHeaders: { defaultCacheLengthSeconds: 120 },
  handleInternalErrors: false,
  fetchLimit: '10MB',
  requestTimeoutSeconds: 120,
  requestTimeoutMaxAgeSeconds: 30,
  requireCloudflare: false,
  redirectUrl: 'http://frontend.example.test',
  rasterUrl: 'http://raster.example.test'
}
0710162725 Server is starting up: http://localhost:1111/
(node:304443) [DEP0111] DeprecationWarning: Access to process.binding('http_parser') is deprecated.
(Use `node --trace-deprecation ...` to show where the warning was created)
  ✔ should render a badge (46ms)

  1 passing (3s)

calebcartwright added the developer-experience and keep-service-tests-green labels on Jul 10, 2021
@calebcartwright
Member Author

Likely related to #6717 in that nyc is having issues with make-badge-urls.js

@paulmelnikow
Member

paulmelnikow commented Jul 10, 2021

I had a branch going where I tried to wrap the await imports in try blocks (before I realized the problem was my Node version). Let me see if it yields a better error message.

Added: Actually I think this is the same issue: the top-level await.
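
For illustration, the sort of wrapping described above looks roughly like this (a sketch only; the module path is borrowed from the reference to make-badge-urls.js in #6717 and may not match what was in the branch):

// Sketch: wrap a top-level dynamic import in a try block so a load failure
// surfaces a clearer error. Note this still relies on top-level await, so it
// presumably trips the same parsing issue in nyc.
let badgeUrls
try {
  badgeUrls = await import('./make-badge-urls.js')
} catch (err) {
  console.error('Failed to import make-badge-urls.js:', err)
  throw err
}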

I tried to reproduce this but couldn't. npm run coverage:test:entrypoint failed the first time I ran it with an error about reading a sourcemap in .cache. I deleted .cache and then the second time it worked. (It is running very slowly, however.)

@calebcartwright
Member Author

before I realized the problem was my Node version

Should've noted this in the description, but the described behavior is reproducible in the latest versions of Node 14 and 16.

The only difference between the two scripts is ultimately mocha entrypoint.spec.js vs nyc mocha entrypoint.spec.js, where the former works and the latter crashes miserably. I have a high degree of confidence that means it's not version related, especially since there are still some known gaps between ESM and nyc AIUI.

I tried to reproduce this but couldn't.

On your Mac I presume? I'd be surprised if the behavior diverges by OS, but you can see this reproduced consistently on Circle as well (the daily-tests job is the only one running any coverage scripts AFAIK)

@paulmelnikow
Member

Yea, on my Mac. I do see it happening in the daily-tests CI (where it's clear that the Node version is not the problem).

One idea (since there's at least one environment where this is running): does it fail the same way in Circle on the main Shields repo or only in daily-tests?

@calebcartwright
Member Author

does it fail the same way in Circle on the main Shields repo or only in daily-tests?

I suppose we could update the circle config temporarily and test with the PR jobs from a topic branch, but I'm not expecting to see anything different there since the daily-tests job just clones the repo, changes into the resultant directory, and then runs the script.

I'm wondering whether some parser plugins that were previously in the root of the project were moved or removed as part of the ESM work, but it's almost enough to tempt one to take a closer look at c8.

@calebcartwright
Member Author

failed the first time I ran it with an error about reading a sourcemap in .cache. I deleted .cache

I went and did an npm i in the frontend directory, then started (and stopped) the server, and running the coverage script after all that got me the same error about the sourcemap file. Deleting the .cache directory hasn't made any impact for me, but my guess at this point is that there's some other stuff that has to be run first to get to a point where it's possible to run coverage; a clone + npm install + running the coverage script is going to fail.

@paulmelnikow
Member

I suppose we could update the circle config temporarily and test with the PR jobs from a topic branch

As a quick diagnostic, a faster option is to re-run a job with SSH, connect to the build machine, and run anything you like (once the steps complete, anyway). I think you're probably right that it won't make a difference, but it would clarify that the problem is not anything odd happening in daily-tests. Not the only possible next step, but it would help close the gap on why this sometimes works and sometimes doesn't. I might try that next time I dig into this.

On my Mac, with a fresh clone of Shields, npm ci spits out a bunch of errors, and npm run coverage:test:entrypoint times out on the first try and then passes on the second.

Hmmm, any chance this could be related to the version of the npm CLI?

@calebcartwright
Member Author

On my Mac, with a fresh clone of Shields, npm ci spits out a bunch of errors, and npm run coverage:test:entrypoint times out on the first try and then passes on the second

Odd. I assume this is the case, but to be overly explicit: by passing, that does mean the script is also exiting with a 0 exit code, correct?

Hmmm, any chance this could be related to the version of the npm CLI?

I'm only using npm v7+ locally (v7.19.1 to be precise) since we require > v7, but in CI I have tested with both v6 and v7 and the behavior is consistently reproducible. Are you thinking this could be a minor/patch version-related issue with npm?

More generally, to clarify for my own sake: is there an implication that you don't think there could be an issue with our scripts, nyc, or the ESM pieces of the frontend, or are you just trying to think of other possible options?

As a quick diagnostic, a faster option is to re-run a job with SSH, connect to the build machine, and run anything you like (once the steps complete, anyway)

If this were solely an issue in CI then I'd definitely agree. I actually started to do this originally when I wasn't convinced Circle was using the right version of Node, but I validated that it was and abandoned the SSH approach (I also haven't done SSH key setup with Circle yet) after observing that I could reproduce the behavior identically locally (also Linux), where I find it easier to work.

@paulmelnikow
Member

by passing, that does mean the script is also exiting with a 0 exit code, correct?

Yup, correct.

More generally, to clarify for my own sake: is there an implication that you don't think there could be an issue with our scripts, nyc, or the ESM pieces of the frontend, or are you just trying to think of other possible options?

Indeed, I'm not sure, just trying to think of other possible options.

after observing that I could reproduce the behavior identically locally (also Linux), where I find it easier to work.

👍🏼

@calebcartwright
Member Author

Indeed, I'm not sure, just trying to think of other possible options.

Gotcha. If nothing else I think it's fair to say that something's at least partially busted, to use the technical term 😆

I'll try to check on Windows later to see whether it's in a similar "working after some coercion" state like on Mac, or failing outright like on Linux.

@PyvesB
Member

PyvesB commented Jul 11, 2021

It seems that nyc does not yet support loading native ESM modules: istanbuljs/nyc#1287. Top-level await is something specific to ESM modules. I've tried giving https://github.com/istanbuljs/esm-loader-hook a go, but wasn't successful.

but it's almost enough to tempt one to take a closer look at c8

That indeed seems to be one of the recommended stable alternatives.

@paulmelnikow
Member

Probably moving to c8 makes sense. As an alternative, we could probably revert the top-level await, though that seems like a step backwards.
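
For reference, reverting the top-level await would roughly mean replacing the bare `await server.start()` shown in the error output above with something like this (a sketch, not a concrete proposal):

// Sketch: start the server via a promise chain instead of a module-scope await,
// so the module no longer needs top-level await at all.
export const server = new Server(config)

server.start().catch(err => {
  console.error('Server failed to start:', err)
  process.exit(1)
})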

@calebcartwright
Member Author

FWIW I had some success dropping c8 in without having to change anything other than the nyc references to c8 in our scripts. However, I observed some issues with some of the other scripts (not coverage related; the test commands were failing) that derailed me.
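
For context, that swap amounts to roughly the following in the scripts section of package.json (a sketch pieced together from the command output earlier in this thread; the real file may differ):

{
  "scripts": {
    "test:entrypoint": "cross-env NODE_CONFIG_ENV=test mocha entrypoint.spec.js",
    "coverage:test:entrypoint": "c8 npm run test:entrypoint"
  }
}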
