build static jq binaries #25073
Conversation
This commit builds jq as per upstream releases and links it statically, without relying on any system dependencies. Signed-off-by: kranurag7 <81210977+kranurag7@users.noreply.github.com>
That is a CI build, which also uses an embedded/vendored copy of its dependencies.

Linking dynamically against packaged dependencies, by contrast, results in better CVE tracking by all scanners, and allows resolving CVEs without a rebuild when dependencies are upgraded with CVE fixes (for example, a glibc upgrade alone resolves CVEs in code that jq uses). Note that the jq package builds a shared library which other packages depend on, and it is generally not advised to mix static and shared linking: if libjq is statically linked, then used as a shared-library dependency in falco, and falco is otherwise dynamically linked against the same libraries jq needs at runtime, there could be symbol-resolution clashes.

The upstream CI workflow job is not a suitable justification here. Why do you want the jq binary or libjq statically linked, and what does doing that gain? It comes at the cost of more work tracking static linking, and rebuilds whenever statically linked dependencies move.
Thanks for the additional info @xnox, I missed the point that I'm vendoring glibc this way. In hindsight, I realize I should have been more careful after this patch: #12914. I don't have a strong use case; I was just hacking, trying to make the following work:

```sh
docker run --name wolfi-install -v $HOME/.arkade/bin:/usr/bin --rm -it cgr.dev/chainguard/wolfi-base:latest sh -c 'apk add jq'
```

Having said that, I'll close this PR in agreement with the points you wrote above.
Why would you not create a container that has the jq package installed, and call said container as an entry point? You don't need the jq binary back on the host, as you can always call it from the container. Extracting binaries from one container onto a random host OS is not guaranteed to work, even when statically linked, especially whenever dlopen is involved. Statically linked binaries are not portable.
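One way to act on this suggestion is a small shell wrapper that runs jq from a container instead of copying the binary out. A minimal sketch, assuming an apko-built jq image is available locally (the image name here is a hypothetical placeholder):

```shell
#!/bin/sh
# Hypothetical wrapper: runs jq from a container rather than extracting
# the binary onto the host. "apko.local/jq:latest" is a placeholder;
# substitute the apko-built image you actually have.
jq() {
    docker run --rm -i apko.local/jq:latest "$@"
}

# Usage: pipe JSON through the wrapper like a host-installed jq, e.g.
#   echo '{"name":"wolfi"}' | jq -r '.name'
```

Because stdin/stdout pass straight through `docker run -i`, the wrapper composes in pipelines exactly like a host binary would.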
I created an image using apko with the following config:

```yaml
contents:
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  repositories:
    - https://packages.wolfi.dev/os
  packages:
    - jq
entrypoint:
  command: /usr/bin/jq
accounts:
  groups:
    - groupname: nonroot
      gid: 65532
  users:
    - username: nonroot
      uid: 65532
      gid: 65532
  run-as: 65532
archs:
  - x86_64
  - arm64
```

So far this works great, and I'm happy to put this behind an alias, something like:

```
alias jq 'docker run --rm -i apko.local/cache:aa09cca0ad3a67a015713c490ca4c74bbe067e8a97fc90417a385351abdc77fa'
alias crane 'docker run --rm cgr.dev/chainguard/crane:latest'
alias aws 'docker run -v ~/.aws:/home/nonroot/.aws cgr.dev/chainguard/aws-cli-v2:latest-dev'
```

Some shortcomings were:
I'm doing exactly this (please correct me if you think this is really wrong): so far I extract static Go binaries from the container by mounting a location on my path, usually some path under $HOME, like ~/.local/bin or ~/.arkade/bin. I wrote about it here. Earlier I only extracted static Go binaries, but in the past few months I've been doing the same for Rust binaries as well, especially after https://www.chainguard.dev/unchained/changes-to-static-git-and-busybox-developer-images-2 (thank you).

So, when I set up a VM or baremetal server for self-hosting or working on projects, I rely on this to download a lot of the tooling I use on a daily basis (zellij, ripgrep, fd, bat, lazygit, helix, kind, helm, kubectl, kustomize, atuin and a lot more). I have two global recipes defined in my global justfile and I use them almost daily. I have this defined in my dotfiles as well.

The reason for doing this is that the apk tooling is quite nice and fast. I've tried scripting things out in the past, and nix as well, but so far I like the approach of extracting binaries. I've really spent a good chunk of time looking for the best package manager, but I'm still navigating that journey for my host OS :).

Sometimes extracting binaries from containers does break with some dynamically linked binaries, but with usage over time I have identified those and I don't download them. I'm aware that the binaries currently land on the host with root permissions, but I'm happy to take that tradeoff given how fast it is. I tried to pull with a non-root user, but that fails inside the container as expected:

```sh
$ docker run --user 1000 --rm -v ~/.arkade/bin/:/usr/bin/ cgr.dev/chainguard/wolfi-base:latest sh -c 'apk add lazygit'
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
```
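The dynamically-linked breakage described above can be caught before extraction rather than discovered at run time. A rough sketch, assuming `ldd` is available on the host (`is_static` is a hypothetical helper name):

```shell
#!/bin/sh
# Hypothetical check: before keeping a binary extracted from a
# container, verify it carries no dynamic dependencies. For a fully
# static binary, ldd reports "not a dynamic executable" (glibc) or
# "statically linked"; anything else lists required shared libraries.
is_static() {
    if ldd "$1" 2>&1 | grep -qi 'not a dynamic executable\|statically linked'; then
        echo "$1 looks statically linked; safe to extract"
        return 0
    fi
    echo "$1 is dynamically linked; skip extracting it" >&2
    return 1
}

# Usage (path is an example):
#   is_static "$HOME/.arkade/bin/lazygit"
```

Note this is a heuristic: as mentioned earlier in the thread, even a static binary can still fail at runtime if it uses dlopen.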
Sorry, I don't know about this, but I'll search and study more about it. Any references would mean a lot.
Not sure about this, but Go binaries are meant to be portable when statically linked without CGO_ENABLED. Please let me know if you meant something else by portable.
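For context, this is how such a Go build usually looks: with cgo disabled, the toolchain uses pure-Go implementations and produces a statically linked binary, whereas a cgo-enabled build typically links against the host libc. A sketch, where `mytool` is a hypothetical module in the current directory:

```shell
#!/bin/sh
# Sketch: build a Go module without cgo so the result has no dynamic
# library dependencies. "mytool" is a hypothetical output name.
CGO_ENABLED=0 go build -trimpath -ldflags='-s -w' -o mytool .

# Inspect the result; a static binary reports "not a dynamic
# executable" (glibc ldd) or "statically linked".
ldd ./mytool || true
```

This is also why extracting pure-Go binaries from containers tends to work while C- and libc-linked ones break.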
Yes, that's true. @xnox, please let me know if you have any comments on the current workflow. I'll study the knowledge gaps from this conversation further; I'm excited about it.
@kranurag7 what is the benefit of mixing host OS & container OS? Why would you not create a container with all the apk packages you like, enter said container, and use that as your daily shell? For example, in wolfi-dev/os we have a Makefile target which bind-mounts the current working directory into the container read-write, then enters the container, which has the wolfi SDK built with all the developer tools installed. One can then execute anything one wants, interactively, inside it, with clean separation of host and container: no binaries leaking from the container to the host, only data is shared.

Can you make an apko image with all the tools you like, and then locally make your gnome terminal execute it? That is a more natural dev workflow (bind-mount your home dir into a wolfi container) than the inverse: copying wolfi binaries out of a container onto the host and hoping they all still work and are compatible with the rest of the host OS tooling. Lots of IDEs are also compatible with containers published like that (VS Code, Google workstations, GitHub code workspaces), such that you can have a Wolfi container as your working environment both locally and on remote systems.

Even static binaries can contain runtime dependencies which are not otherwise observable, resulting in runtime failures.
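The workflow described above can be sketched as a single `docker run` invocation, using `wolfi-base` as a stand-in for a purpose-built dev image (the package list is just an example):

```shell
#!/bin/sh
# Sketch of the suggested workflow: bind-mount the working directory
# (read-write) into a Wolfi-based container and work interactively
# inside it, instead of copying binaries out to the host. A real setup
# would bake the packages into an apko image rather than apk-adding
# them on every start.
docker run --rm -it \
    -v "$PWD":/work -w /work \
    cgr.dev/chainguard/wolfi-base:latest \
    sh -c 'apk add jq kubectl helm && exec sh'
```

Only the data under `$PWD` crosses the host/container boundary; all the tools stay inside the image.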
@xnox thanks a lot for the suggestion, the overall workflow sounds good to me. I can start a shell inside the container and work from there. I've used the following config to create a dev image for myself:

```yaml
contents:
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  repositories:
    - https://packages.wolfi.dev/os
  packages:
    - wolfi-base
    - chezmoi
    - go
    - crane
    - oras
    - helix
    - fish
    - zellij
    - kubectl
    - helm
    - jq
entrypoint:
  command: /usr/bin/fish
work-dir: /home/nonroot
paths:
  - path: /home
    type: directory
    permissions: 0o755
    # uid: 65532
    # gid: 65532
environment:
  PATH: /usr/sbin:/sbin:/usr/bin:/bin:/app/.arkade/bin
  TERM: xterm-256color
  COLORTERM: truecolor
# accounts:
#   groups:
#     - groupname: nonroot
#       gid: 65532
#   users:
#     - username: nonroot
#       uid: 65532
#       gid: 65532
#   run-as: 65532
archs:
  - x86_64
  - arm64
```

Some minor problems were:

For now, I think I'll try out this workflow; it'll take me a few days to get used to it. If I don't, I'll have it project-specific, something like

Thanks again for your inputs. :)
You can bind-mount socket files from the host into the container. Example for docker: https://stackoverflow.com/a/61406232
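A minimal sketch of that socket bind-mount, following the linked answer (the official `docker:cli` image is used here for illustration; any image with a docker client works):

```shell
#!/bin/sh
# Sketch: bind-mount the host's Docker socket so the docker CLI inside
# the container talks to the host daemon.
# Caveat: access to this socket is root-equivalent on the host.
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:cli \
    docker ps
```

The `docker ps` inside lists the host's containers, which is the point: the client runs in the container, the daemon stays on the host.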
They tagged a release 2 weeks ago, and we have it in wolfi. If that is not good enough, ask upstream to tag new releases. For example, vim tags nearly every push, and we build/release it sometimes multiple times a day: https://github.com/vim/vim/tags
Yes, people have asked for nightlies, something like vim and neovim, but as of now it's not going in that direction. Ref: helix-editor/helix#6362. I wanted to try out new features, so I built a pipeline using wolfi-base here: https://github.com/kranurag7/nightly-helix/blob/main/.github/workflows/nightly.yaml
upstream ref: https://github.com/jqlang/jq/blob/c1d885b0249f3978de3a21ea2bfb7ced06ed2aff/.github/workflows/ci.yml#L85-L94

This commit builds jq as per upstream releases and links it statically without relying on any system deps. The current jq executable dynamically links to some system dependencies.