
Allow configurable client signing algorithms #1724

Open · tetsuo-cpp opened this issue Oct 2, 2023 · 7 comments

Labels: enhancement (New feature or request)

Comments

tetsuo-cpp commented Oct 2, 2023

Description

I've filed similar issues under Cosign and Fulcio. I realise there's a lot of overlap in maintainers, but wanted to make sure that we discuss each project that we plan to touch. Apologies if this feels a bit spammy.

Hi there! At Trail of Bits, we're looking at potentially implementing part of the Configurable Crypto Algorithms proposal (specifically Phase 1). I wanted to float this idea to each of the relevant Sigstore sub-projects so we can hash out the details in a more concrete way.

Across the Sigstore stack, we default to using ECDSA for signatures and SHA256 for hashing. There's more detail in the linked proposal, but there are a number of motivations for wanting to customise the signatures that are generated, including paving the way for post-quantum signatures. The proposed design includes a "supported algorithms" registry (perhaps this can go in the Protobuf specs) that enumerates the approved signature/hash algorithm combinations. We specifically don't want to allow arbitrary mixing and matching of signature and hash algorithms, to avoid some of the security pitfalls listed in the proposal.
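
To make that concrete, the registry check could boil down to something like the following Go sketch. The exact combinations and where the registry lives are still to be decided, so treat this purely as an illustration of the shape of the check, not the proposal's actual schema:

package algorithms

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/ed25519"
	"crypto/elliptic"
	"crypto/rsa"
)

// permitted is an illustrative allow-list of signature/hash combinations.
// The real registry would be defined centrally (e.g. in the protobuf specs)
// and shared across Sigstore clients and services.
func permitted(pub crypto.PublicKey, hash crypto.Hash) bool {
	switch k := pub.(type) {
	case *ecdsa.PublicKey:
		// Pin each curve to a single digest rather than allowing arbitrary mixes.
		return (k.Curve == elliptic.P256() && hash == crypto.SHA256) ||
			(k.Curve == elliptic.P384() && hash == crypto.SHA384)
	case *rsa.PublicKey:
		// Example policy: RSA keys of at least 3072 bits with SHA-256.
		return k.N.BitLen() >= 3072 && hash == crypto.SHA256
	case ed25519.PublicKey:
		// Ed25519ph is defined over SHA-512.
		return hash == crypto.SHA512
	default:
		return false
	}
}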

I'm not super familiar with the Rekor side of things, but I expect there are a few places where the assumption that the client signing algorithm is ECDSA-SHA256 is hardcoded, so we'll need to make those more flexible. We can also add a flag like --client-signing-algorithms=alg-1,alg-2 that can be used in case a given Rekor instance wants to constrain the list of supported algorithms to something more restrictive than the aforementioned registry.
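
The flag itself would mostly be plumbing: parse the operator's list, reject anything the shared registry doesn't know about, and consult the resulting set in the API handlers. A rough sketch (the flag name and algorithm identifiers are placeholders, not a settled naming scheme):

package config

import (
	"fmt"
	"strings"
)

// parseClientSigningAlgorithms turns a comma-separated flag value, e.g.
// --client-signing-algorithms=ecdsa-sha2-256-nistp256,ed25519, into a set,
// rejecting names that the shared registry doesn't recognise.
func parseClientSigningAlgorithms(flagValue string, registry map[string]bool) (map[string]bool, error) {
	allowed := map[string]bool{}
	for _, name := range strings.Split(flagValue, ",") {
		name = strings.TrimSpace(name)
		if name == "" {
			continue
		}
		if !registry[name] {
			return nil, fmt.Errorf("unsupported client signing algorithm %q", name)
		}
		allowed[name] = true
	}
	return allowed, nil
}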

tetsuo-cpp added the enhancement (New feature or request) label on Oct 2, 2023

tetsuo-cpp commented Oct 4, 2023

Taking some notes on where we'll have to make changes:

  • pkg/types/hashedrekord/v0.0.1/entry.go:validate: This seems to support most key types; however, there's an explicit error check that rejects Ed25519. There's some discussion of why that check exists at Improve error message when using ED25519 with hashedrekord #851. Otherwise this path should work for all key types supported by the Go crypto library, so we'll need to a) special-case Ed25519 to support it, and b) add a check that rejects any keys that don't belong to the "supported algorithms" registry (see the sketch after this list).
  • pkg/signer/memory.go:NewMemory: This creates an in-memory signer that defaults to ECDSA. It's unrelated to supporting configurable signing algorithms on the client side, but we could make it configurable at some point in the future.
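
Roughly, the extra check in validate could take a shape like this (the registry type and helper names are made up for illustration):

package hashedrekord

import (
	"crypto"
	"crypto/ed25519"
	"errors"
)

// AlgorithmRegistry stands in for the shared "supported algorithms" policy;
// only the method used below is sketched here.
type AlgorithmRegistry interface {
	IsPermitted(pub crypto.PublicKey, hash crypto.Hash) bool
}

// validateKeyAgainstRegistry is a hypothetical helper that validate() could
// call once a public key and the entry's data hash algorithm are known.
func validateKeyAgainstRegistry(pub crypto.PublicKey, dataHash crypto.Hash, registry AlgorithmRegistry) error {
	if _, ok := pub.(ed25519.PublicKey); ok && dataHash != crypto.SHA512 {
		// Plain Ed25519 signs the whole message, which hashedrekord never sees;
		// the pre-hashed variant (Ed25519ph over SHA-512) is the one that fits.
		return errors.New("ed25519 keys are only supported with sha512 (ed25519ph)")
	}
	if !registry.IsPermitted(pub, dataHash) {
		return errors.New("key/hash combination is not in the supported algorithms registry")
	}
	return nil
}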

@haydentherapper (Contributor)

Rekor signs its checkpoints and SignedEntryTimestamps, and verifies signed artifacts. Also note that the merkle tree uses sha256 for hashing its leaves, but this does not need to be updated for PQ.

Like Fulcio, signing keys are either stored in memory (for testing instances), on disk (again, for testing), or KMS. The comments made in sigstore/fulcio#1388 (comment) are relevant here. We can update the in-memory signer to be configurable. Most changes will need to occur in sigstore/sigstore. Making the changes in sigstore/sigstore will also let Rekor accept artifacts signed with PQ algorithms.

The hashedrekord + ed25519 edge case can be fixed by using ed25519ph, which should be a part of the registry.
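
For reference, recent Go (1.20+) already handles Ed25519ph in the standard library by passing crypto.SHA512 via ed25519.Options, so the client-side change is mostly about choosing it. A minimal sketch:

package main

import (
	"crypto"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha512"
	"fmt"
)

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	// Ed25519ph signs a SHA-512 pre-hash of the message rather than the message
	// itself, which is what hashedrekord needs since it only ever sees a digest.
	digest := sha512.Sum512([]byte("artifact contents"))
	sig, err := priv.Sign(rand.Reader, digest[:], &ed25519.Options{Hash: crypto.SHA512})
	if err != nil {
		panic(err)
	}

	err = ed25519.VerifyWithOptions(pub, digest[:], sig, &ed25519.Options{Hash: crypto.SHA512})
	fmt.Println("ed25519ph signature verifies:", err == nil)
}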

@ret2libc (Contributor)

I've started the work to add support for ed25519ph in sigstore/sigstore#1595 and the related changes in Rekor with #1945. However, I'm not sure about the changes to hashedrekord. Do we need a new hashedrekord version to allow sha512 as a possible value in spec.data.hash.algorithm?

@ret2libc (Contributor)

@haydentherapper do you think --client-signing-algorithm in Rekor should restrict the algorithms only for hashedrekord, or for all kinds of records? Shall we start with hashedrekord only or support all types from the start?

@haydentherapper (Contributor)

No, we shouldn't need a new hashedrekord version to support an additional hash algorithm. sha256 is already encoded in the request, as in https://github.com/sigstore/cosign/blob/1ebb6d95ec93a5873614e756c2c62ce46af7167b/pkg/cosign/tlog.go#L241C55-L241C55. We might ignore it in hashedrekord now, but the client must specify the hash algorithm, even if the only choice is sha256.
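
Concretely, the data.hash portion of a hashedrekord proposed entry is populated by the client along these lines (field names follow Rekor's generated models as used in the cosign code linked above; treat this as a sketch, with sha512 slotting in the same way once allowed):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	"github.com/go-openapi/swag"
	"github.com/sigstore/rekor/pkg/generated/models"
)

func main() {
	digest := sha256.Sum256([]byte("artifact contents"))

	// The hash algorithm is always spelled out in the request, even while
	// sha256 is the only value the server accepts today.
	hash := &models.HashedrekordV001SchemaDataHash{
		Algorithm: swag.String(models.HashedrekordV001SchemaDataHashAlgorithmSha256),
		Value:     swag.String(hex.EncodeToString(digest[:])),
	}
	fmt.Println(*hash.Algorithm, *hash.Value)
}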

For --client-signing-algorithm, this is a good question and a bit tricky. Each Rekor type supports a "verifier". The most common verifier is "x509" (not the most precise naming), which includes certificates or keys. That's what is used for hashedrekord, rekord, dsse, and intoto, the most used Rekor types. There are also other verifiers - pgp key, ssh cert, minisign, pkcs7 (for JARs) and TUF metadata - which are used throughout the types. For example, rekord supports 4 of these.

Given that the purpose of this work is to support PQ, I would have this flag dictate the algorithms only for the x509 (cert/key) verifier initially. It would be ideal to also be able to dictate which algorithms can be used for ssh, pgp, etc., but a) this might be tricky depending on the underlying library used for verification, and b) handling the cert/key case will cover the most common types. (For what it's worth, we've actually run into this already: our PGP library version doesn't support ed25519. It'd be nice to be able to note that, but it's not a top priority.)
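
In other words, the gate could key off the verifier format and only consult the policy for cert/key-based entries to start with. A hypothetical sketch (the function and registry type are made up; the format strings mirror the verifier names mentioned above):

package api

import "crypto"

// AlgorithmRegistry stands in for the shared signature/hash policy.
type AlgorithmRegistry interface {
	IsPermitted(pub crypto.PublicKey, hash crypto.Hash) bool
}

// algorithmsPermittedForEntry only constrains entries whose verifier format is
// "x509" (raw public keys or certificates); pgp, ssh, minisign, pkcs7 and tuf
// verifiers pass through unchanged for now.
func algorithmsPermittedForEntry(format string, pub crypto.PublicKey, artifactHash crypto.Hash, registry AlgorithmRegistry) bool {
	if format != "x509" {
		return true
	}
	return registry.IsPermitted(pub, artifactHash)
}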

@ret2libc (Contributor)

diff --git a/pkg/api/entries.go b/pkg/api/entries.go
index 4efba7b..2ecce57 100644
--- a/pkg/api/entries.go
+++ b/pkg/api/entries.go
@@ -18,11 +18,17 @@ package api
 import (
        "bytes"
        "context"
+       "crypto"
+       "crypto/ecdsa"
+       "crypto/ed25519"
+       "crypto/rsa"
+       "crypto/x509"
        "encoding/hex"
        "errors"
        "fmt"
        "net/http"
        "net/url"
+       "strings"

        "github.com/cyberphone/json-canonicalization/go/src/webpki.org/jsoncanonicalizer"
        "github.com/go-openapi/runtime"
@@ -177,12 +183,72 @@ func GetLogEntryByIndexHandler(params entries.GetLogEntryByIndexParams) middlewa
        return entries.NewGetLogEntryByIndexOK().WithPayload(logEntry)
 }

+func checkEntryAlgorithms(entry types.EntryImpl) (bool, error) {
+       verifiers, err := entry.Verifiers()
+       if err != nil {
+               return false, fmt.Errorf("getting verifiers: %w", err)
+       }
+       // Get artifact hash from entry
+       artifactHash, err := entry.ArtifactHash()
+       if err != nil {
+               return false, fmt.Errorf("getting artifact hash: %w", err)
+       }
+       artifactHashAlgorithm := artifactHash[:strings.Index(artifactHash, ":")]
+       var artifactHashValue crypto.Hash
+       switch artifactHashAlgorithm {
+       case "sha256":
+               artifactHashValue = crypto.SHA256
+       case "sha512":
+               artifactHashValue = crypto.SHA512
+       default:
+               return false, fmt.Errorf("unsupported artifact hash algorithm %s", artifactHashAlgorithm)
+       }
+
+       // Check if all the verifiers public keys (together with the ArtifactHash)
+       // are allowed according to the policy
+       for _, v := range verifiers {
+               identities, err := v.Identities()
+               if err != nil {
+                       return false, fmt.Errorf("getting identities: %w", err)
+               }
+
+               for _, identity := range identities {
+                       var publicKey crypto.PublicKey
+                       switch identityCrypto := identity.Crypto.(type) {
+                       case *x509.Certificate:
+                               publicKey = identityCrypto.PublicKey
+                       case *rsa.PublicKey:
+                               publicKey = identityCrypto
+                       case *ecdsa.PublicKey:
+                               publicKey = identityCrypto
+                       case ed25519.PublicKey:
+                               publicKey = identityCrypto
+                       default:
+                               continue
+                       }
+                       if !api.algorithmRegistry.IsAlgorithmPermitted(publicKey, artifactHashValue) {
+                               return false, nil
+                       }
+               }
+       }
+       return true, nil
+}
+
 func createLogEntry(params entries.CreateLogEntryParams) (models.LogEntry, middleware.Responder) {
        ctx := params.HTTPRequest.Context()
        entry, err := types.CreateVersionedEntry(params.ProposedEntry)
        if err != nil {
                return nil, handleRekorAPIError(params, http.StatusBadRequest, err, fmt.Sprintf(validationError, err))
        }
+
+       areEntryAlgorithmsAllowed, err := checkEntryAlgorithms(entry)
+       if err != nil {
+               return nil, handleRekorAPIError(params, http.StatusBadRequest, err, fmt.Sprintf(validationError, err))
+       }
+       if !areEntryAlgorithmsAllowed {
+               return nil, handleRekorAPIError(params, http.StatusBadRequest, errors.New("entry algorithms are not allowed"), fmt.Sprintf(validationError, "entry algorithms are not allowed"))
+       }
+
        leaf, err := types.CanonicalizeEntry(ctx, entry)
        if err != nil {
                if _, ok := (err).(types.ValidationError); ok {

Right now I've got something working with the above patch.

However, I wonder whether this is the right approach, or whether the verifiers should have a way to accept the AlgorithmRegistry and do the check themselves. That might be a bit hard to do, as we'd need to pass the algorithmRegistry across a bunch of layers, I think.
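
One way that alternative could look is an optional interface that verifier implementations opt into, so the policy check lives next to the key-parsing logic and the handler only type-asserts for it. Naming below is entirely hypothetical:

package api

import "crypto"

// AlgorithmRegistry stands in for the shared signature/hash policy object
// (e.g. whatever sigstore/sigstore ends up exposing).
type AlgorithmRegistry interface {
	IsPermitted(pub crypto.PublicKey, hash crypto.Hash) bool
}

// AlgorithmPolicyChecker is an optional interface a verifier could implement
// so it can apply the policy itself instead of the entries handler inspecting
// every identity's public key.
type AlgorithmPolicyChecker interface {
	CheckAlgorithms(registry AlgorithmRegistry, artifactHash crypto.Hash) error
}

// The handler in createLogEntry would then only need something like:
//
//	if c, ok := verifier.(AlgorithmPolicyChecker); ok {
//		if err := c.CheckAlgorithms(registry, artifactHashValue); err != nil {
//			// reject the entry
//		}
//	}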

@haydentherapper (Contributor)

It's most straightforward to add this where you suggested. If you pass this down to the verifier, you'll also need context on whether this is an upload vs. a fetch (for example, I could imagine we might restrict rsa-2048 in the distant future, but we don't want to break verification of existing entries).

One upside is that this'll also gate keys in other verifier structures, like ssh and pgp. If this is unexpected, we might see feedback that deployers want to control this per-verifier rather than across the entire API, but I'm good with this.
