v6.20.0 container images errors when pulling #2348
Comments
Oops I pressed enter when writing the title and it sent. |
Running the commands:
Here, in containers/podman#2542, at containers/podman#2542 (comment) and containers/podman#2542 (comment), it seems we can do something in the image |
I was able to reproduce this issue on the current beta javascript flavor when using rootless Docker on Linux. Noting for myself that this issue doesn't reproduce on Windows according to #2318 (comment). |
I have an idea for a temporary workaround so we can release without having finished finding and fixing the problem. I was still searching this afternoon, but somewhere I read something, tried it, and it worked. If we build the image in an already restricted environment, the UID:GID problem can't arise (since it won't be mapped to an unavailable range? I don't know). I also reconfirmed the bug in another Gitpod, and there I'm able to build CI-light and it works. |
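The "restricted environment" observation lines up with how rootless ID mapping works; a quick way to see the subordinate ID range available on a given host (a diagnostic sketch, not part of any fix) is:

```shell
# Show the subordinate UID/GID ranges available to the current user for
# rootless containers. A default rootless install gets 65536 IDs; a file
# in an image layer owned by an ID that cannot be mapped into this range
# makes lchown fail with "invalid argument" during docker pull.
for f in /etc/subuid /etc/subgid; do
  grep "^$(id -un):" "$f" 2>/dev/null || echo "$f: no range for $(id -un)"
done
```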
I have to go for now, but there's a ghcr package on my profile (it doesn't appear on my megalinter fork). |
I am still hitting this when using rootless Docker in the javascript flavor of MegaLinter v6.20.0. |
@nvuillam could we add a section to the release notes with known issues, and maybe direct users to add info or resolution steps/ideas here? |
We can pin an issue at the top of the repo. I'm currently solving #2427
The CircleCI link in the error message of #2429 is quite useful, but I didn't find a way yet. I know there's something in the node_modules. Adding the chown root:root, as the CircleCI help page suggests, could be a bazooka way to get a quick fix. I saw it when searching, but I wasn't sure of the consequences. It might be another thing to fix when we want to be less dependent on the root user (the issues where we can't remove a folder/file that was linted) |
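As a sketch of what that "bazooka" chown could look like in a flavor Dockerfile (the /node-deps path comes from the error message below; the exact placement of the instruction is an assumption):

```dockerfile
# Force ownership of the npm tree back to root:root so every file in the
# final layer carries IDs that any default rootless mapping can represent.
# Sketch only: where this RUN belongs in the MegaLinter Dockerfile is an
# assumption.
RUN chown -R root:root /node-deps
```

The trade-off is an extra copy of the layer data and a stronger dependency on root ownership, which is the concern raised above.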
Well, at least I know that the problem started by the date I filed the issue (2023-02-11), and I'm pretty confident that the betas from the previous weekend were OK. That's about the same time the build-push-action was added. But even though it's a big change, I don't think it's the only culprit, since we were already using buildx before that. So for the bisect, you could try before #2342 was merged. But since we don't have the
That was more or less the range I intended to bisect, although more specific, which is helpful. I don't know what commands to run at each bisection point though. I figure if the bisection claims the v6.19.0 release was bad, we will at least have proven by contradiction that the issue cropped up through an unpinned dependency (or possibly something else but likely unrelated to any specific recent PR in our repo). |
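For the bisect mechanics, something like the sketch below could work. It demonstrates `git bisect run` on a throwaway repo; on the real repo, the predicate would be a hypothetical `test.sh` that builds the flavor image and attempts a rootless pull, exiting non-zero when the lchown failure appears.

```shell
# Bisect mechanics demonstrated on a throwaway repo. In the real case the
# predicate script would `docker build` the flavor and attempt a rootless
# `docker pull`, exiting non-zero on the lchown failure.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "bisect@example.com"
git config user.name "bisect"
echo good > state; git add state; git commit -qm "last good commit"
git commit -qm "unrelated change" --allow-empty
echo bad > state; git commit -aqm "breaking change"
git commit -qm "another change" --allow-empty
git bisect start HEAD HEAD~3                # HEAD is bad, HEAD~3 is good
git bisect run sh -c 'grep -q good state'   # exit 0 = good, non-zero = bad
```

`git bisect run` repeats the predicate at each midpoint and finishes by printing which commit is the first bad one.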
Well, I have proven in the issue description (the screenshot) that v6.19.0 was correct. And in multiple independent tries afterwards, on multiple platforms and with multiple methods, the betas were failing but v6.19.0 was correct. And as of 2023-03-04, v6.19.0 and v6.18.0 were correct.
Ok, what about this as a bazooka temporary fix: Qusic/SmartBoy@1b7c1f8 |
Impressive find, @echoix. What led you to the solution? |
Describe the bug
When pulling the latest beta images (as of 2023-02-11) on Gitpod, docker doesn't complete and shows an error when extracting a layer.
To Reproduce
Steps to reproduce the behavior:
docker pull oxsecurity/megalinter:beta
or docker pull oxsecurity/megalinter-go:beta
(was https://hub.docker.com/layers/oxsecurity/megalinter-go/beta/images/sha256-8fab0a400aa67089841912c597863e78a871c14bb6f1b26d57d947ca2b1c2807?context=explore and https://hub.docker.com/layers/oxsecurity/megalinter/beta/images/sha256-43fb5d5d36b4d623be36ea8c46958ed517e230fe17707093454e317d2edae384?context=explore) at time of bug
Expected behavior
Docker images can be pulled and run without any errors.
Screenshots
failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /node-deps/node_modules/ast-types-flow/lib/types.js: invalid argument
Additional context
Discovered in #2318
There seems to be a relation to UIDs, lchown, and maybe rootless containers. moby/moby#43576
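If the moby issue is the right trail, one way to confirm it is to scan a layer for files whose numeric owner falls outside the default 65536-ID rootless range. A diagnostic sketch, assuming GNU tar; the real layer tarball would come from running `docker save` on the failing image (the demo below builds a synthetic layer instead):

```shell
# Sketch: list entries in a layer tarball whose numeric owner UID exceeds
# the default rootless range (65535). Assumes GNU tar; a real layer.tar
# would come from `docker save` on the failing image.
scan_layer() {
  tar -tvf "$1" --numeric-owner |
    awk '{ split($2, o, "/"); if (o[1] + 0 > 65535) print o[1], $NF }'
}
# Demo with a synthetic layer containing one out-of-range file:
dir=$(mktemp -d)
echo x > "$dir/types.js"
tar --owner=100999 --group=100999 -C "$dir" -cf "$dir/layer.tar" types.js
scan_layer "$dir/layer.tar"
```

Any file the scan reports is one that a default rootless mapping cannot represent, which matches the lchown failure at pull time.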
In the go flavor, the error message was
failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /node-deps/node_modules/character-parser/LICENSE: invalid argument