flaky installation on ci server #882
Comments
+1
I have a similar problem when jobs run in parallel on the same agent:
It seems to be some sort of race condition. Is there an option to change the place where the .tar.gz is saved before it gets extracted? If not, we definitely need one.
I have the same issue in our GitLab CI environment. It only happens from time to time:
Any fix for this?
Relates to #807: it looks like when this scenario happens, the plugin will at least auto-delete the corrupted download; you can see this in the original post above.
That said, it doesn't actually retry the download automatically afterwards (maybe it should?) - there's a comment about this on the PR here.
I think the download logic here should possibly be changed so that it downloads to a temp directory and then uses an atomic move once the download has completed. Currently, it just writes the response to the destination directly. If you write to a temp location first, you don't really have to worry about the corrupt download as much.
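(Not the plugin's actual code, just a minimal sketch of the idea under the assumption of a plain HTTP download: the temp file is created next to the target so the final move can be atomic on the same filesystem.)

```java
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicDownload {

    // Sketch only: download to a temp file next to the target, then atomically
    // move it into place so a killed or parallel build never sees a partial archive.
    static void download(String url, Path target) throws Exception {
        Files.createDirectories(target.getParent());
        Path tmp = Files.createTempFile(target.getParent(), "node-download-", ".tmp");
        try (InputStream in = URI.create(url).toURL().openStream()) {
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            // Readers see either no file or a complete one; behavior when the target
            // already exists is platform-specific for ATOMIC_MOVE.
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } finally {
            Files.deleteIfExists(tmp); // no-op if the move already succeeded
        }
    }
}
```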
@ryanrupp you are right, such an enhancement makes sense. And yes, initially, so as not to complicate the change too much, the simple strategy of deleting the corrupted archive was used; a simple retry should eventually solve the problem. This solution is not perfect, and it may not be an option in some CI/CD environments, but for my team it was enough at the moment. I guess the best thing you can do here is to submit a PR with the proposed change.
The same thing is happening here on GitHub Actions; it started happening constantly after the latest 1.14.0 release. Edit: I had to delete all my GitHub caches to get it to start working again.
@eirslett any update on this issue?
+1 |
+1 |
+1 |
This issue also happens when you try to manually download packages from https://nodejs.org/dist/. It seems that this repository is quite unstable (sometimes it takes less than 1 second to download a package, sometimes over 5 minutes, and sometimes it times out). A retry option might mitigate this behaviour. Related issue: nodejs/nodejs.org#4495
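(Again not what the plugin currently does; a hedged sketch of what a bounded retry around the download could look like. `AtomicDownload.download` is the hypothetical helper from the earlier sketch.)

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class RetryingDownload {

    // Sketch of a bounded retry: delete whatever partial file is left and try again
    // with a simple linear back-off, instead of failing the whole build on one flaky request.
    static void downloadWithRetry(String url, Path target, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                AtomicDownload.download(url, target); // hypothetical helper from the sketch above
                return;
            } catch (Exception e) {
                Files.deleteIfExists(target); // never leave a corrupted archive behind
                if (attempt >= maxAttempts) {
                    throw e;
                }
                Thread.sleep(1000L * attempt); // back off a bit longer on each failure
            }
        }
    }
}
```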
There is an open PR: #1098. It just needs to be merged and released.
@eirslett did a release yesterday but does not seem very active in this project. I wonder if it is time to fork it if he is no longer merging PRs or active in this plugin?
A fix with several retries can be a simple short-term solution, but it does not look like a good idea in the long term. Private projects mostly should not rely on public repositories like Maven Central or nodejs.org. For example, hub.docker.com has for two years already been rejecting requests to fetch images when too many requests come from one IP. I believe sooner or later nodejs.org could start doing the same - and reasonably so - they probably have huge costs for their distribution infrastructure.
@seregamorph I agree, but what about GitHub Actions? We have automated builds running on our open source project and they probably all look like they are coming from GitHub?
That's a good question 😅
Perhaps #1118 has resolved this issue?
Unfortunately it does not; it just pushes the problem around. I get this now on GitHub Actions:
Is it an option for you to host a Nexus instance for Primefaces? For example, something like https://www.primefaces.org/downloads/node/v18.8.0/node-v18.8.0-linux-x64.tar.gz, simply a mirror of all the files in https://nodejs.org/dist/v18.8.0/?
I wonder what the cause of this issue is. Is it just GitHub Actions rate limiting kicking in?
Workaround for github actions:
https://gist.github.com/uebelack/3b61f59a7a792e917c4fd4c37e4bea5d
@uebelack does this make sure Node is already on the system, so that when frontend-maven-plugin goes to check, it has already been installed by the GitHub Action?
sure |
@eirslett Followed your approach, which works very well. But instead of Nexus/Primefaces we use JFrog Artifactory as our company-wide Maven repository. In our case we set up a remote repository in Artifactory that mirrors the Node.js downloads, and then configured frontend-maven-plugin to use that mirror by setting the corresponding download root.
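(For reference, a hedged example of what pointing the plugin at an internal mirror typically looks like. The mirror URLs are made up, and `nodeDownloadRoot`/`npmDownloadRoot` are assumed to be the relevant plugin parameters here.)

```xml
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.14.0</version>
  <configuration>
    <nodeVersion>v18.8.0</nodeVersion>
    <!-- Hypothetical internal mirror of https://nodejs.org/dist/ -->
    <nodeDownloadRoot>https://artifactory.example.com/nodejs-dist/</nodeDownloadRoot>
    <npmDownloadRoot>https://artifactory.example.com/npm-dist/</npmDownloadRoot>
  </configuration>
</plugin>
```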
Do you want to request a feature or report a bug?
bug
What is the current behavior?
Because of a "failure" in the CI setup, every push is built under the same directory.
Flaky behavior: the build succeeds most of the time, but occasionally fails.
Suggested fix:
The archive may have been corrupted by a previous run; shouldn't reinstallation clear the old, outdated/corrupted files?
Question: would running
rm -rf .maven
before installation help?
If the current behavior is a bug, please provide the steps to reproduce.
What is the expected behavior?
The build runs smoothly every time.
Please mention your frontend-maven-plugin and operating system version.
Red Hat Linux 7
frontend-maven-plugin: 1.8.0