Caching issues breaking fetching binaries #3224
Comments
index.* and the release files are "promoted" at the same time, and a cache flush is triggered afterward. I think what you're seeing is a flapping load balancer, which continues to happen and really needs to be diagnosed and fixed (I've been suspecting our main server has I/O issues, triggered by rsync and perhaps other things, that leave it overloaded and unable to reply to health checks in time). So what happens is that Cloudflare (CF) switches from our primary server to the backup server, but the backup server doesn't strictly stay in sync with the main server; that sync only happens periodically (I don't recall the frequency, something like every 15 minutes probably). So what I think you're experiencing is this: the release is promoted on the primary, the load balancer flaps over to the backup before it has synced, and requests then land on a server whose view of /dist doesn't match what the other one just advertised.
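One way to observe that drift directly (a diagnostic sketch only; the origin IPs below are placeholders, not the real server addresses) would be to query each origin behind Cloudflare and compare what they advertise:

```bash
#!/usr/bin/env bash
# Sketch: fetch /dist/index.json from the primary and backup origins directly,
# bypassing the Cloudflare edge, to spot the out-of-sync window.
PRIMARY_IP="203.0.113.1"   # placeholder for the primary origin address
BACKUP_IP="203.0.113.2"    # placeholder for the backup origin address

for ip in "$PRIMARY_IP" "$BACKUP_IP"; do
  # --resolve pins nodejs.org to the given origin, skipping normal DNS
  echo "== ${ip} =="
  curl -s --resolve "nodejs.org:443:${ip}" https://nodejs.org/dist/index.json \
    | jq -r '.[0].version'   # newest version this origin currently advertises
done
```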
Even if we fix the LB flapping, it's probably not good to have them out of sync like this. I've thought for a long time that asset promotion by releasers should also include a step that forces a sync to the secondary server. Maybe the easiest and most sensible thing to do here (regardless of the LB issues) would be to trigger a sync to the secondary server before we request that CF purge their cache. Here would be the place to do that: https://github.com/nodejs/build/blob/main/ansible/www-standalone/resources/scripts/cdn-purge.sh.j2 - this script runs privileged, so prior to the …
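Concretely, the shape of that change might be something like the following (a minimal sketch, not the script's actual contents; the secondary hostname, paths, and credential variables are all placeholders):

```bash
# Hypothetical addition to cdn-purge.sh.j2: push freshly promoted assets to
# the secondary origin *before* asking Cloudflare to purge, so a flap to the
# backup never exposes a stale /dist tree.
SECONDARY_HOST="backup.example.org"   # placeholder, not the real hostname

# Sync first; skip the purge entirely if the copy fails.
rsync -a --delete /home/dist/ "dist@${SECONDARY_HOST}:/home/dist/" || exit 1

# Only then request the purge (zone ID and token assumed to be in the env).
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/purge_cache" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```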
#3226 added …
I found an issue where one of the index.html files no longer exists after the switch to Next.js, which meant that our check-build-site.sh script was triggering a website rebuild every five minutes, which cannot be helping (#3230).
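For context, a staleness check of that kind works roughly like this (a sketch, not the actual check-build-site.sh; the sentinel URL and rebuild command are assumptions):

```bash
# Probe a page the build is expected to produce; rebuild if it is missing.
# If that page was removed on purpose (as with the Next.js switch), the probe
# 404s forever and a rebuild fires on every run.
SENTINEL_URL="https://nodejs.org/en/index.html"   # hypothetical sentinel page

if ! curl -sf -o /dev/null "$SENTINEL_URL"; then
  echo "sentinel page missing; triggering site rebuild"
  # ./build-site.sh   # placeholder for the real rebuild step
fi
```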
This issue is stale because it has been open many days with no activity. It will be closed soon unless the stale label is removed or a comment is made.
Commenting to avoid stale closure. I think we've mitigated this somewhat, but this is still an issue.
When a new Node.js version is released, the list of versions at https://nodejs.org/dist/index.json is updated and the binaries are uploaded to https://nodejs.org/dist/vVERSION/ (e.g. https://nodejs.org/dist/v19.8.1/). However, the website sits behind a Cloudflare cache, and sometimes the data seems out of sync: the API responds saying there is a version, but the binaries are not available.
This then causes tools that try to download the latest version using this information to fail, e.g. actions/setup-node#717
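For illustration, the fetch sequence such tools perform looks roughly like this (simplified; real tools like actions/setup-node add platform detection, retries, and mirrors):

```bash
# Step 1: resolve the newest version from the index (requires jq).
latest=$(curl -s https://nodejs.org/dist/index.json | jq -r '.[0].version')

# Step 2: fetch the matching binary. When the index and the binaries are out
# of sync, this 404s even though step 1 just advertised the version.
curl -fsSLO "https://nodejs.org/dist/${latest}/node-${latest}-linux-x64.tar.gz" \
  || echo "index.json lists ${latest}, but its binary is not available yet"
```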
I suspect there might be a few things contributing to this:

- The server should send Cache-Control: no-cache headers on 404 pages. Otherwise, caching proxies in the middle may fall back to default values, resulting in stale cache data. I think this might be what's happening inside the GitHub Actions environment, but I'm not sure.
- Some binary URLs (e.g. https://nodejs.org/dist/v19.8.1/win-x64/node.exe) had taken at least a few hours to update after the binaries should have been there.
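One way to test the 404-caching hypothesis (a diagnostic sketch, using a deliberately nonexistent version path) is to inspect the cache headers returned for a missing binary:

```bash
# Request a path that should 404 and look at what it tells caches. Without an
# explicit Cache-Control: no-cache/no-store on the 404, intermediate proxies
# may cache the miss and keep serving it after the file finally appears.
curl -sI https://nodejs.org/dist/v99.0.0/win-x64/node.exe \
  | grep -iE '^(HTTP|cache-control|cf-cache-status|age)'
```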