
Docker image outdated #2646

Open
phmarek opened this issue Oct 28, 2024 · 15 comments · Fixed by #2647

phmarek commented Oct 28, 2024

Please use Debian Testing or Unstable when building the Docker image, and/or refresh them more often.
Thanks!

[screenshot attachment: dockerimage]


bakaq commented Oct 28, 2024

I didn't even realize there was a Dockerfile here. I think the problem is that no contributor seems to use it regularly, so it got neglected. Thanks for pointing it out.


bakaq commented Oct 28, 2024

Actually, it seems to have been built 2 weeks ago. This matches this Docker image, but I'm not sure whether it's the actual image that we generate in GitHub Actions here. Was this the image you were testing?


bakaq commented Oct 28, 2024

Ok, on closer inspection the Action seems to only build the "latest" tag (which builds off of the master branch). The pinned releases (like "0.9.4", which is probably the image you tested) are only created once when the tag is pushed to the repo. This is a bit problematic, because the latest pinned version ("0.9.4") was generated 8 months ago.

We could just regenerate all Docker image tags on every push to master, but I think that would get out of hand pretty fast. Probably better to use the "schedule" GitHub Actions trigger to rebuild all release tags once a week or so.
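
For reference, the trigger change would look roughly like this (a sketch only; the tag pattern and the cron expression are arbitrary examples, not the repo's actual workflow config):

on:
  push:
    tags: ["v*"]          # current behaviour: a release image is built once, when the tag is pushed
  schedule:
    - cron: "0 3 * * 1"   # proposed: also rebuild the release tags once a week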

@gruhn @panasenco Pinging because you two seem to be the ones that have worked the most on the Docker images.

@panasenco

Haven't used Scryer in a while, but will take a stab at this.

There are a couple of things going on here. First, the Dockerfile needs to be updated and tested locally to make sure Scryer still starts. @phmarek, is there a reason to use unstable or testing? Even stable-slim doesn't have any critical vulnerabilities.

Then, to @bakaq's point, the base image of at least the latest N tags could be updated on a regular basis using a scheduled job.

@panasenco

I just tried and I remember now that there's a libssl error I documented in the Dockerfile, though it's kind of cryptic, even to me.
[screenshot attachment: the libssl error]


bakaq commented Oct 28, 2024

Probably relevant: #2013

@panasenco

Well there we go, thanks @bakaq! Looks like we pinned it to be old on purpose 😅

@panasenco

Nah, we're good. I just wrote a new version of the Dockerfile that keeps the Rust builder and the runtime image on consistent versions, and it works now: https://github.com/panasenco/scryer-prolog/blob/master/Dockerfile
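
The general shape of the fix, as a sketch (the linked Dockerfile is the authoritative version; the base image tags, build dependencies and paths below are illustrative), is to pin the Rust builder and the runtime image to the same Debian release, so the libssl the binary links against at build time is the one present at run time:

# Sketch only, not the linked Dockerfile
FROM rust:slim-bookworm AS builder
WORKDIR /usr/src/scryer-prolog
COPY . .
# Build dependencies are an assumption; adjust to what the crate actually needs
RUN apt-get update && apt-get install -y --no-install-recommends pkg-config libssl-dev \
    && rm -rf /var/lib/apt/lists/*
RUN cargo build --release

FROM debian:bookworm-slim
# libssl3 matches the OpenSSL available in the bookworm-based builder above
RUN apt-get update \
    && apt-get install -y --no-install-recommends libssl3 ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/src/scryer-prolog/target/release/scryer-prolog /usr/local/bin/scryer-prolog
ENTRYPOINT ["/usr/local/bin/scryer-prolog"]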

Next, the CI update...

@panasenco

@bakaq, I can't find any easy way to do something like "rebuild the latest N tags". Can you think of any approaches? If not, would it be acceptable to keep the existing tags frozen and ask security-conscious folks to use latest?

I just don't want to write something crazy that no one but me can maintain. 😅


panasenco commented Oct 28, 2024

It also occurs to me that even if we found a way to rebuild the previous tags, they're all using bullseye-slim as the base, which has reached end-of-life and has critical CVEs, so the security-conscious won't want to use them anyway.


bakaq commented Oct 28, 2024

[...] I can't find any easy way to do something like "rebuild the latest N tags". Can you think of any approaches?

I'm not very familiar with Docker, so I probably won't be able to help much here. One way would be to find which tags are published in the Docker repository and only update those; then we could just delete tags to deprecate them. But I have no idea how to do that in a way that isn't incredibly obscure.

[...] would it be acceptable to keep the existing tags frozen and ask security-conscious folks to use latest?

Probably not, because latest builds against master, which isn't stable and so isn't really suitable for a lot of use cases, including the ones where security matters the most.

It also occurs to me that even if we found a way to rebuild the previous tags, they're all using bullseye-slim as the base, which has reached end-of-life and has critical CVEs, so the security-conscious won't want to use them anyway.

I can't think of a way to solve this that doesn't involve making branches for the old releases. Also, I don't think bullseye-slim is deprecated: it was last pushed 12 days ago and is still listed in the image overview. They probably just haven't updated it yet.


gruhn commented Oct 28, 2024

I can't find any easy way to do something like "rebuild the latest N tags". Can you think of any approaches?

I'm also looking into this now and it is annoying. I guess we could have a scheduled job that

  • checks out the individual tags one by one
  • overwrites the Dockerfile with the one from master
  • builds and pushes the image
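
As a rough sketch, with the schedule trigger itself omitted and the tag list, image name and secret names all being placeholders, the job could look something like this:

jobs:
  rebuild-release-images:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        tag: ["0.9.4"]    # placeholder list of release tags to refresh
    steps:
      - name: Check out the release tag
        uses: actions/checkout@v4
        with:
          ref: ${{ matrix.tag }}
      - name: Overwrite the Dockerfile with the one from master
        run: |
          git fetch origin master
          git checkout FETCH_HEAD -- Dockerfile
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # placeholder secret names
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push the refreshed tag
        run: |
          docker build -t OWNER/scryer-prolog:${{ matrix.tag }} .
          docker push OWNER/scryer-prolog:${{ matrix.tag }}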

I can imagine that this will break often, though. We would have to maintain a Dockerfile that's compatible with the repo's state at each version tag.

I just don't want to write something crazy that no one but me can maintain. 😅

Don't worry, I'm also committed to maintaining at least the Docker/CI side of things 😉


gruhn commented Oct 28, 2024

[...] would it be acceptable to keep the existing tags frozen and ask security-conscious folks to use latest?

Probably not, because latest builds against master, which isn't stable and so isn't really suitable for a lot of use cases, including the ones where security matters the most.

I'm starting to lean towards the lazy way out as well :S Maybe the maintenance overhead is not worth it for a Docker image that's rarely used. I would even suggest removing all those version-tag based images and only keeping the latest one. At least then nobody uses these outdated images with a false sense of security. That's definitely not enterprise-level service, but for people who seriously want to use Scryer in production, it's not hard to create their own Docker image. For me at least, the Docker image was just a quick and easy way to play with Scryer without having to install the entire Rust toolchain (which isn't even necessary anymore, given the Nix package, the WASM build, etc.).


phmarek commented Oct 29, 2024

How about producing a minimal image, i.e. one that only contains the scryer-prolog binary, libc6, libnss, and as few other libraries as possible?
Most of the stuff listed in the vulnerability report (perl-base, libsystemd0, e2fsprogs, ...) is not needed in a Scryer image; if the image contains only the bare minimum, updates will be required less often, too.

Basically something like this:

# Installation as in the current Dockerfile (rsync has to be available in this build stage)

RUN mkdir /image
# rsync -R preserves the full source paths under /image/, so the tree can be copied over as-is
RUN rsync -vaR \
      /usr/local/bin/scryer-prolog \
      /usr/lib64/... and other files \
      /image/

# Last stage, return the actual, minimal image
FROM scratch

COPY --from=0 /image/ /
CMD ["/usr/local/bin/scryer-prolog", "--no-add-history"]

That's what we're doing, see https://gitlab.opencode.de/brz/containerplattform/minimal-image.

Another idea would be to remove all unnecessary packages, but that means fighting dpkg's notion of essential packages and is (in my experience) not faster - it just leaves more stuff behind, as the granularity is coarser (packages instead of files).

If there's a list of required files, and from that a list of required packages, a periodic job can check whether one of these changed - and only then run the GitHub build process.
Doing that for latest and the last release (which could be hardcoded in the script) should be good enough, IMO.
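
A very rough sketch of what that check could look like as a step in a scheduled workflow (the tracked package list, the file names and the output wiring are all hypothetical):

      - name: Check whether the tracked base packages changed
        id: base_check
        run: |
          # required-packages.txt would be a hand-maintained list of package names;
          # known-versions.txt is the snapshot recorded at the last rebuild.
          docker run --rm debian:stable-slim \
            dpkg-query -W -f '${Package} ${Version}\n' \
            | grep -w -F -f required-packages.txt > current-versions.txt || true
          if ! diff -q known-versions.txt current-versions.txt > /dev/null; then
            echo "rebuild=true" >> "$GITHUB_OUTPUT"
          fi

A later step could then be gated on steps.base_check.outputs.rebuild, so the actual build only runs when one of the tracked packages changed.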


gruhn commented Oct 29, 2024

Oh, sounds like this got auto-closed when #2647 was merged. @mthom can you re-open?

How about producing a minimal image [...] if the image contains only the bare minimum, updates will be required less often, too.

I agree that's a good idea in general. But we would still need some process to update the base image of old versions. It wouldn't be that much work to do it manually once in a while, but @mthom owns the DockerHub credentials. So either we have to ask him to do it for us, or he needs to give one of us permissions. I don't think either is a fair ask.

So it should be some automatic process. But I can't think of a way to make this run stably long-term without manual intervention. For example, say we go with the approach I suggested:

  • a scheduled GitHub Action that
  • checks out the Git tag for each Scryer version
  • but uses the Dockerfile from master
  • builds + pushes to DockerHub

Then at some point a dependency is updated in the Dockerfile that is incompatible with version 0.9.2 or whatever. One month later, when the scheduled pipeline runs, the image build fails. This stuff is just annoying to fix. It's not just: find problem -> open PR -> merge -> done. Maybe we have to create a branch off of the version tag and adjust the Dockerfile there. Maybe we have to re-push the Git version tag to align it with that change (and ask @mthom to do that for us). Then we probably also have to ask @mthom to manually re-push the Docker image. None of that is a big deal on its own, but the size of the deal should be proportional to the value that the Docker image provides. And I suspect not many people care.

If there's a list of required files, and from that a list of required packages, a periodic job can check whether one of these changed - and only then run the GitHub build process.
Doing that for latest and the last release (which could be hardcoded in the script) should be good enough, IMO.

But where do we maintain this list of files? Usually you would just spell it out in the Dockerfile; when something changes, we change the Dockerfile. But where do we maintain this information for past versions of Scryer? We could have a dedicated branch for each version, as @bakaq suggested, but then we're starting to bend the repo just to accommodate the Docker image. If there is a simple process to keep the base image of the last release up to date, it's probably not much more work to keep the base image of all prior releases up to date. I'm thinking of some tool like Dependabot, but I haven't found anything that really fits the use case.
