Dev Meeting 2020.10.09 (security focus)
This is an expanded meeting of the opam dev team to specifically discuss the security roadmap.
Goals of discussion:
- Help us form an end-to-end security roadmap for opam in 2021
- @samoht started Opam-supply-chain
- Determine the immediate features to add to opam 2.2 to unblock them
- Need to understand current pain points in opam 2.1 for:
- conex
- orb
- opam-bin
- Rough timelines for those projects so we can coordinate release schedules
Other items:
- urgent depext release needed
Present: @altgr @avsm @dra27 @emillon @rjbou @samoht @lefessan @hannesm @talex5
@avsm introduced the meeting. Generally, opam has done better than most package managers where security is concerned - for example, the addition of sandboxing and so forth - but there's still more to do. Now that it's quite common to have packages with many hundreds of dependencies, and with the move towards trying to distribute binary artefacts, we need to head towards a security roadmap for opam for 2021. @samoht has started a wiki document on the "supply chain" for opam. @samoht - the initial conclusion is that the problem is not strictly the use of MD5, but the end-to-end security of the metadata as a whole.
Three projects of interest at the moment where this is concerned - conex (@hannesm), orb (@rjbou), and opam-bin (@lefessan).
@hannesm - the meeting sprang from a Discuss post which, perhaps with more hyperbole than necessary, had claimed that opam-repository was very insecure and that opam developers had no interest in security! @hannesm is in slight agreement where the opam-repository maintenance process is concerned (e.g. the altering of hashes of tarballs and so forth). @avsm notes that this is only done in exceptional circumstances, with a maintainer vetting the change.
@hannesm introduced Conex. It sprang from the concern that, at the time, opam was simply using an insecure curl download. The idea behind Conex was not to have centralised administration, but to use distributed trust: the authority for altering particular packages can be delegated, with a threshold of maintainer signatures signing off on changes.
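A minimal OCaml sketch of the threshold idea described above (illustrative only, not Conex's actual API or data model): a change to a package is accepted only when enough of the maintainers delegated for that package have produced a valid signature over it.

```ocaml
type signature = { signer : string; valid : bool }

type delegation = {
  package : string;
  maintainers : string list;  (* keys authorised to sign for this package *)
  threshold : int;            (* how many distinct signatures are required *)
}

(* A change is accepted when enough distinct, authorised maintainers have
   produced a valid signature over it. *)
let quorum_reached (d : delegation) (sigs : signature list) : bool =
  let valid_signers =
    sigs
    |> List.filter (fun s -> s.valid && List.mem s.signer d.maintainers)
    |> List.map (fun s -> s.signer)
    |> List.sort_uniq compare
  in
  List.length valid_signers >= d.threshold

let () =
  let d = { package = "mirage"; maintainers = [ "alice"; "bob"; "carol" ]; threshold = 2 } in
  let sigs = [ { signer = "alice"; valid = true }; { signer = "bob"; valid = true } ] in
  assert (quorum_reached d sigs)
```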
Current state: it was worked on up until 2 years ago and is used in the distribution of an IM client. There wasn't a huge amount of interest at the time from the opam community, so with a lack of funding development had stagnated; however, it has just received funding, so the next 12 months are looking good!
At present, Conex is very similar to TUF - the aim at the moment is to go further and, for example, be able to use CI to sign the building of the artefacts (so signing "I was able to build this, here are the signed checksums of what was produced").
@avsm - at present, not much else is using this. @hannesm - there is some interest in the Rust and Python communities, though not terribly convinced by how they're doing it; especially in Python there's a single point of failure with a single (offline) signing key. @avsm - the problem with Notary was that it was off by default and hard to deploy. The aim here would be to have a way of doing this which would be universally used.
@hannesm - integration with opam is via the hooks. The system gets bootstrapped with the signing keys of known maintainers. If you use a custom repository, you have to install the signing keys manually. @AltGr - support already in place for adding the keys (none for verifying). The present version works on any of the backends.
@avsm - a possible first step would be an opam-repository fork containing just the packages required for the platform tools, then encouraging just that set of developers initially to sign the packages. @hannesm - there is already a mode which can cope with some packages being unsigned; Conex allows this up until a package is first released with a signature, at which point the package must always be signed in future. What's missing at the moment is some UX work. @avsm - and backend CI support.
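A minimal OCaml sketch of the transition policy just described (illustrative, not Conex's implementation): a package may stay unsigned until its first signed release, after which an unsigned release is a policy violation.

```ocaml
type release = { version : string; signed : bool }

(* Releases are given oldest-first.  The policy is violated if an unsigned
   release appears after any signed one. *)
let policy_ok (releases : release list) : bool =
  let rec check seen_signed = function
    | [] -> true
    | r :: rest ->
        if seen_signed && not r.signed then false
        else check (seen_signed || r.signed) rest
  in
  check false releases

let () =
  assert (policy_ok
            [ { version = "1.0"; signed = false };
              { version = "1.1"; signed = true };
              { version = "1.2"; signed = true } ]);
  assert (not (policy_ok
                 [ { version = "1.0"; signed = true };
                   { version = "1.1"; signed = false } ]))
```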
@avsm - is anything blocking Conex? @hannesm - no. @avsm - so at the moment the opam client is sufficient; the issue would be sorting out the opam-repository side. CI is being worked on extensively (@talex5) - this could be worked on as soon as the initial transfer from Datakit-CI is done. What's unclear at the moment is the management of the keys - GitHub signing, for example? @hannesm - not entirely clear yet; what's crucial at the moment is to limit the number of failure modes, which here means that the cryptographic data and the metadata are in the same repository (to limit the failure case where one is available and the other is not).
@AltGr - only concern at the moment is to ensure that this can be tested so that we're completely sure it works in order to prevent the nightmare of the whole thing exploding in people's faces. @avsm - hooks at the moment are only really being used for sandboxing, but we've already had issues with updating those. Are we sure that the hooks mechanism is working well enough?
@hannesm - enabling by default would be fine in the future; it would simply require shipping conex-verify with opam.
@avsm - more general issue at the moment is management of the hooks themselves, e.g. if the sandbox script has been updated. @AltGr - there are some checks in opam 2.1 for this which advise re-running `opam init`.
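A rough OCaml sketch of the kind of staleness check mentioned above, assuming a bundled copy of the hook script to compare against; the paths and the use of MD5 via Stdlib's `Digest` are purely illustrative, not how opam 2.1 actually implements the check.

```ocaml
(* Compare the hook script installed in the opam root with the copy the
   current release ships, and advise re-running `opam init` if they differ. *)
let same_contents a b =
  Sys.file_exists a && Sys.file_exists b && Digest.file a = Digest.file b

let () =
  let installed =
    Filename.concat (Sys.getenv "HOME") ".opam/opam-init/hooks/sandbox.sh" in
  let shipped = "/usr/share/opam/sandbox.sh" (* hypothetical bundled copy *) in
  if not (same_contents installed shipped) then
    prerr_endline
      "The sandbox hook script looks out of date; consider re-running `opam init`."
```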
@avsm - slight concern that the hooks are experimental; there should be something more structured for features which are supported. @hannesm - indeed, if Conex becomes universally available, then the support for calling Conex would simply be baked in (potentially even directly linked).
@lefessan - opam-bin has two goals. Firstly, to speed up the recompilation of packages by re-using packages compiled in the same dependency cone. Secondly, to allow the binary packages to be shared and published to a separate opam-repository which you can then opt in to. It is pretty stable at the moment - there were discussions this week with a user who's deployed it in CI. CI is an area where it's expected it could be particularly useful, for example for nightly builds within a company or other places where CI is fully controlled.
It uses the hooks mechanism in opam and therefore can suffer from problems with the permutations of hooks (the order of calling between Conex, sandboxing and opam-bin, etc.).
Expecting to move to version 1 within 1-2 weeks as the project is done.
@avsm - are there any changes required in opam at this stage? @lefessan - a problem is that hooks can't return any information to the user. In particular, one wants to be able to indicate to the user whether binary packages were used or whether they had to be built, but there's a very recent PR to assist with this! There's still some improvement which can be done: a new hook could assist by short-circuiting the download of the source archive before discovering that there's a binary package. @avsm - doesn't the repository priority mechanism already do this? @lefessan - not quite, unfortunately, because the versions are different. For example, the version mechanism could select a higher version than the one you selected. Considered the possibility of always generating the binary package equivalent, but this doesn't work because of depopts (a source package can have depopts, but of course a binary package cannot). The conclusion at the moment is that you cannot use both the source and binary repositories at the same time.
@AltGr - clarifying that the problem motivating the new hook is: if you install a source package for which you already have a binary package cached, opam will download the source first and only notices later, in the build stage, that the binary version is available. @lefessan - there's even the possibility of downloading the binary package from opam-repository unnecessarily too.
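A rough OCaml sketch of the hook being discussed (hypothetical names and cache layout, not opam's or opam-bin's actual interface): before fetching a source archive, check whether an equivalent binary package is already cached and skip the download if so.

```ocaml
type action =
  | Use_cached_binary of string  (* path to an already-cached binary archive *)
  | Download_source of string    (* source URL to fetch instead *)

let binary_cache_path ~cache_dir ~name ~version =
  Filename.concat cache_dir (Printf.sprintf "%s.%s.bin.tar.gz" name version)

(* Hypothetical pre-download decision: reuse a cached binary package when one
   exists for this name/version, otherwise fall back to the source archive. *)
let pre_download ~cache_dir ~name ~version ~src_url : action =
  let bin = binary_cache_path ~cache_dir ~name ~version in
  if Sys.file_exists bin then Use_cached_binary bin else Download_source src_url

let () =
  match
    pre_download ~cache_dir:"/tmp/opam-bin-cache" ~name:"lwt" ~version:"5.3.0"
      ~src_url:"https://example.org/lwt-5.3.0.tbz"
  with
  | Use_cached_binary p -> print_endline ("reusing cached binary: " ^ p)
  | Download_source u -> print_endline ("downloading source: " ^ u)
```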
@avsm - it sounds like there are two phases, as for Conex. At present, the experiment is whether it's worthwhile - and it clearly is, given the CI use-case already and others (e.g. for Alpine). For it to become more universally useful, would the conclusion be the same, that it would need to be baked directly into opam? @AltGr - not clear yet, only because it's not entirely obvious what the interface looks like.
@samoht - similarly, it would need to be clear how this interacts with Conex. For example, Conex may have verified the sources and then opam-bin effectively "undermines" it by changing what happens! @lefessan - one possibility is to switch to an opam-repository where all the packages are binary packages and there are no source packages. @avsm - similar to Homebrew's bottles, therefore (binaries available if all things are equal).
@avsm - it sounds like integration of binary packaging could be a candidate for opam 2.3 - it feels like it should be gated on signing.
@lefessan - long-term it would certainly be nice to have opam-bin integrated into opam. At present, it's definitely too early.
@hannesm - distributing binary packages should definitely be gated on reproducibility. @lefessan - the problem is that there seem to be more people who care about getting packages quickly than about getting them securely, or who are willing to delegate trust (e.g. trusting binaries published by particular organisations). It doesn't feel as though the distribution of binary repositories should be gated on their reproducibility. @rjbou - this is true for the testing, but not for the integration. @AltGr - agreed; it's fine to have the ability to opt in to trusted shared binary artefacts, but it can't be done transparently.
@avsm - several threat models. The easiest is with no binaries shared outside your host (e.g. Dune cache). Concern then arises with sharing - beginning with "locally" (e.g. for CI, as being used now). Similarly, this can be done for specific cases (e.g. current fdopen fork of opam-repository for Windows, fast repo model for Alpine in CI) without compromising the threat model. The final leap towards being able to distribute universally shared binaries shouldn't be underestimated (just based on experience with Docker!). Definitely sounds as though some CI integration of opam-bin would be a big win.
@lefessan - opam-bin of course requires relocatable packages and compilers which is done on-the-fly with various source patches. Clearly much work to be done to upstream this (and of course limit the security implications of patching on-the-fly). At present, there is a repository containing the patches.
@avsm - the situation is improving. e.g. @bobot has been working on relocation within Dune. @lefessan - this work has been more related to plugins and other parts. @samoht - the implementation of the feature is general, though, as it generalises the concept of a path which is different between compilation and runtime.
@avsm - generally keen to avoid having patches in opam-repository for the compiler other than to deal with build failures. However, the plan is to have an exception for this in that once upstream has a working version of relocation, this may be back-ported in opam-repository to previous versions of OCaml. @dra27 - the upstream support is planned for 4.13.
@rjbou - the orb tool captures the information necessary to assess the reproducibility of binary artefacts - the dependent packages, patches, and so forth, in addition to the environment in which the packages were built (the C compiler of the switch and so forth).
@hannesm - some fields were not included in `opam switch export`, but both orb and opam 2.1 have been extended with additional fields in the export.
@avsm - is there any reason why orb can't be enabled in CI (e.g. to provide a badge for reproducibility)? @rjbou - this is a long-term aim, yes. @hannesm - if CI can do various checks on the checksums and so forth, this would be a good start towards this. orb requires opam 2.1 (for the extra fields). @avsm - is this sensitive to the host used? @hannesm - this is handled by orb, yes.
@hannesm - all the support required for orb is in opam 2.1, so once it's released more work can be done on orb. It's missing a lot of UX things. Would like for Mirage to develop a repository where the packages are keyed by hash, so you can then see if you produced the same binaries as another user.
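A minimal OCaml sketch of the hash-keyed repository idea just mentioned (illustrative names and data model, not orb's actual one): each builder records the checksum of the artefact it produced, and a package counts as reproduced when all recorded checksums agree.

```ocaml
type build_record = {
  package : string;
  builder : string;           (* which machine or CI job produced the build *)
  artefact_sha256 : string;   (* checksum of the resulting binary artefact *)
}

(* A package counts as reproduced when every recorded build of it produced an
   artefact with the same checksum. *)
let reproduced (records : build_record list) : bool =
  match records with
  | [] -> true
  | first :: rest ->
      List.for_all (fun r -> r.artefact_sha256 = first.artefact_sha256) rest

let () =
  let builds =
    [ { package = "orb"; builder = "ci-debian"; artefact_sha256 = "9f2c" };
      { package = "orb"; builder = "ci-fedora"; artefact_sha256 = "9f2c" } ]
  in
  Printf.printf "reproduced: %b\n" (reproduced builds)
```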
There is a concern, when distributing binaries, about licence violations, and whether CI can support this.
@avsm - a lot of this seems to be gated on CI! @talex5 - one thing which would assist the CI would be an `opam download` command which definitely downloads without running any user code, to allow sources to be downloaded and then cached securely. @avsm - any reason why `opam download` can't be a plugin? @rjbou - there is an open PR at the moment for an option to `opam install` to just download the sources but not build anything.
@hannesm - orb does some tracking of timestamps in order to track reproducibility (C compilers and assemblers embed these, and so forth). It initially tried to assess reproducibility across multiple build directories, but this wasn't particularly great - it went more down the Nix approach of using a common path for builds, and so forth.
@avsm - these projects seem closely connected, and it seems sensible to try to integrate them in a single opam release. There is an OCSF meeting coming up - organisations involved in OCSF cannot receive funding from it, but this could be used, e.g. for funding parts of Conex development?
@hannesm - wouldn't necessarily tie all these into a single release, certainly where orb is concerned - there are still unanswered questions on exactly what reproducibility means (e.g. is it required that a Fedora system running GCC X definitely produces the same binary as a Debian system running GCC X? - the answer isn't yet clear).
@avsm - opam 2.3 might not necessarily achieve the complete security objectives, but the aim would be to bring the three projects together at once. So, towards the end of 2021, would it make sense to be pulling these together into a release?
@lefessan - at the moment, big need is for feedback. @avsm - how about recommending an OCSF internship looking into an opam-bin deployment?
@talex5 - slight concern about delaying some of the security work, just because of the perception issue. At the moment, it's a problem that people can arrive, see MD5 and no signatures, and form an immediate judgement. @dra27 - integrating Conex into 2.3 definitely doesn't mean that we can't move the repository to improved checksums and signing even before the client essentially mandates it.
@samoht - there should also be a move to having the client warn about, or even reject, MD5 checksums at some point in the future.
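A minimal OCaml sketch of such a client-side check (illustrative types, not opam's internal representation): warn when a package's url section carries only an MD5 checksum, or none at all.

```ocaml
type checksum = Md5 of string | Sha256 of string | Sha512 of string

(* Classify the checksums attached to a package's url section. *)
let strength sums =
  if List.exists (function Sha256 _ | Sha512 _ -> true | Md5 _ -> false) sums
  then `Strong
  else if sums <> [] then `Md5_only
  else `None

let check_package name sums =
  match strength sums with
  | `Strong -> ()
  | `Md5_only ->
      Printf.eprintf "Warning: %s only provides an MD5 checksum\n" name
  | `None ->
      Printf.eprintf "Warning: %s has no checksum at all\n" name

let () =
  check_package "foo.1.0" [ Md5 "d41d8cd98f00b204e9800998ecf8427e" ]
```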
@avsm - do we want to have an external security audit of the plan? @hannesm - it would be better to bring in an external review once there's a complete system which is a candidate for being widely released.