
Use SIGSTORE_REKOR_PUBLIC_KEY, remove SIGSTORE_TRUST_REKOR_API_PUBLIC_KEY #211

Closed
asraa opened this issue Jun 13, 2022 · 10 comments
Labels
enhancement New feature or request

Comments

@asraa
Contributor

asraa commented Jun 13, 2022

Description

Users should obtain verification material out of band, and we should deprecate SIGSTORE_TRUST_REKOR_API_PUBLIC_KEY.

Instead, the scaffolding setup should export SIGSTORE_REKOR_PUBLIC_KEY pointing to the location of the public key file, similar to the CT log public key.
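For illustration, a minimal sketch (not the actual cosign/policy-controller code) of how a verifier might consume such a variable, assuming SIGSTORE_REKOR_PUBLIC_KEY points at a PEM-encoded public key file on disk:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
)

// loadRekorPublicKey reads the PEM-encoded Rekor public key from the path in
// SIGSTORE_REKOR_PUBLIC_KEY. This is an illustrative sketch, not the real
// implementation in the sigstore libraries.
func loadRekorPublicKey() (*ecdsa.PublicKey, error) {
	path := os.Getenv("SIGSTORE_REKOR_PUBLIC_KEY")
	if path == "" {
		return nil, errors.New("SIGSTORE_REKOR_PUBLIC_KEY is not set")
	}
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %w", path, err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return nil, errors.New("no PEM block found in Rekor public key file")
	}
	pub, err := x509.ParsePKIXPublicKey(block.Bytes)
	if err != nil {
		return nil, fmt.Errorf("parsing public key: %w", err)
	}
	ecPub, ok := pub.(*ecdsa.PublicKey)
	if !ok {
		return nil, errors.New("Rekor public key is not ECDSA")
	}
	return ecPub, nil
}

func main() {
	if _, err := loadRekorPublicKey(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("loaded Rekor public key from SIGSTORE_REKOR_PUBLIC_KEY")
}
```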

@asraa asraa added the enhancement New feature or request label Jun 13, 2022
@asraa
Contributor Author

asraa commented Jun 29, 2022

@k4leung4 @vaikas Would one of you be able to help me here? I think another alternative is to create a TUF root inside scaffolding and point to it as a TUF mirror. Happy to add some code to sigstore/sigstore to create a simple TUF root with a single root keyholder. That way scaffolding wouldn't need to set so many env variables.

@vaikas
Contributor

vaikas commented Jul 1, 2022

Hey there, sorry for the tardy reply. I was OOO and traveling. One other thing I'd like to chat about, which may be related, is how the policy_controller should handle folks who want to verify against multiple sigstores. For example, if a customer wanted to have a private sigstore as well as trust the public one, how might that best be dealt with?

@asraa
Contributor Author

asraa commented Jul 1, 2022

For example, if a customer wanted to have a private sigstore as well as trust the public one, how might that best be dealt with?

I could support this! Could you create an issue for this in sigstore/sigstore? We can have the sigstore TUF client pull from all TUF repositories initialized in the TUF_ROOT directory, rather than just one.
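A rough sketch of that idea, assuming each initialized repository lives in its own subdirectory under TUF_ROOT (the one-subdirectory-per-repository layout and the helper below are hypothetical, not the current sigstore TUF client):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listTUFRepositories returns the per-repository directories found under
// TUF_ROOT. The directory layout is an assumption for illustration; today
// the sigstore client initializes a single repository.
func listTUFRepositories() ([]string, error) {
	root := os.Getenv("TUF_ROOT")
	if root == "" {
		return nil, fmt.Errorf("TUF_ROOT is not set")
	}
	entries, err := os.ReadDir(root)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %w", root, err)
	}
	var repos []string
	for _, e := range entries {
		if e.IsDir() {
			repos = append(repos, filepath.Join(root, e.Name()))
		}
	}
	return repos, nil
}

func main() {
	repos, err := listTUFRepositories()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A real client would initialize a TUF client per repository and merge
	// the trusted targets (e.g. Rekor and CT log keys) from each of them.
	for _, r := range repos {
		fmt.Println("would initialize TUF client for", r)
	}
}
```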

@vaikas
Contributor

vaikas commented Jul 1, 2022

Perfetto! Yes, I think that would be great. If we can support that, have scaffolding create it as part of standing things up, and then add an e2e test that runs against a private instance with a custom image (like today) and against a known good one in the public sigstore, that would cover it. Sound good?
I'll add an issue.

@asraa
Contributor Author

asraa commented Jul 21, 2022

Hey! I've been trying to work on this issue, and each week I get a little closer to debugging why I couldn't run the setup scripts locally. It turns out I probably have a firewall enabled on my work machine, so I can't run the local setup.

Do you know anyone else who's run into websocket problems running the knative activators? Or how I can work around it?
I'm seeing stuff like this:

$ kubectl logs activator-64cdb747d9-br9fh --namespace knative-serving
2022/07/15 21:16:01 Registering 3 clients
2022/07/15 21:16:01 Registering 3 informer factories
2022/07/15 21:16:01 Registering 3 informers
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.768515093Z","caller":"logging/config.go:116","message":"Successfully created the logger."}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.768541838Z","caller":"logging/config.go:117","message":"Logging level set to: info"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.868734076Z","logger":"activator","caller":"activator/main.go:134","message":"Starting the knative activator","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.870637206Z","logger":"activator","caller":"activator/main.go:179","message":"Connecting to Autoscaler at ws://autoscaler.knative-serving.svc.cluster.local:8080","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.870673347Z","logger":"activator","caller":"profiling/server.go:64","message":"Profiling enabled: false","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.870688241Z","logger":"activator","caller":"websocket/connection.go:162","message":"Connecting to ws://autoscaler.knative-serving.svc.cluster.local:8080","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.877134239Z","logger":"activator","caller":"metrics/metrics_worker.go:76","message":"Flushing the existing exporter before setting up the new exporter.","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.877380111Z","logger":"activator","caller":"metrics/prometheus_exporter.go:51","message":"Created Prometheus exporter with config: &{knative.dev/internal/serving activator prometheus 5000000000 <nil>  false 9090 0.0.0.0}. Start the server for Prometheus exporter.","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.877420256Z","logger":"activator","caller":"metrics/metrics_worker.go:91","message":"Successfully updated the metrics exporter; old config: <nil>; new config &{knative.dev/internal/serving activator prometheus 5000000000 <nil>  false 9090 0.0.0.0}","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"INFO","timestamp":"2022-07-15T21:16:01.877719746Z","logger":"activator","caller":"activator/request_log.go:45","message":"Updated the request log template.","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh","template":""}
{"severity":"WARNING","timestamp":"2022-07-15T21:16:02.276120095Z","logger":"activator","caller":"handler/healthz_handler.go:36","message":"Healthcheck failed: connection has not yet been established","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"WARNING","timestamp":"2022-07-15T21:16:02.276557127Z","logger":"activator","caller":"handler/healthz_handler.go:36","message":"Healthcheck failed: connection has not yet been established","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh"}
{"severity":"ERROR","timestamp":"2022-07-15T21:16:04.871865992Z","logger":"activator","caller":"websocket/connection.go:145","message":"Websocket connection could not be established","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh","error":"dial tcp: i/o timeout","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func1\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:145\nknative.dev/pkg/websocket.(*ManagedConnection).connect.func1\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:226\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\tk8s.io/apimachinery@v0.21.4/pkg/util/wait/wait.go:211\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\tk8s.io/apimachinery@v0.21.4/pkg/util/wait/wait.go:399\nknative.dev/pkg/websocket.(*ManagedConnection).connect\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:223\nknative.dev/pkg/websocket.NewDurableConnection.func2\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:163"}
{"severity":"ERROR","timestamp":"2022-07-15T21:16:05.205004838Z","logger":"activator","caller":"websocket/connection.go:192","message":"Failed to send ping message to ws://autoscaler.knative-serving.svc.cluster.local:8080","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh","error":"connection has not yet been established","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func3\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:192"}
{"severity":"ERROR","timestamp":"2022-07-15T21:16:07.994527656Z","logger":"activator","caller":"websocket/connection.go:145","message":"Websocket connection could not be established","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh","error":"dial tcp: i/o timeout","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func1\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:145\nknative.dev/pkg/websocket.(*ManagedConnection).connect.func1\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:226\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\tk8s.io/apimachinery@v0.21.4/pkg/util/wait/wait.go:211\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\tk8s.io/apimachinery@v0.21.4/pkg/util/wait/wait.go:399\nknative.dev/pkg/websocket.(*ManagedConnection).connect\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:223\nknative.dev/pkg/websocket.NewDurableConnection.func2\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:163"}
{"severity":"ERROR","timestamp":"2022-07-15T21:16:08.537824962Z","logger":"activator","caller":"websocket/connection.go:192","message":"Failed to send ping message to ws://autoscaler.knative-serving.svc.cluster.local:8080","commit":"7499ae2","knative.dev/controller":"activator","knative.dev/pod":"activator-64cdb747d9-br9fh","error":"connection has not yet been established","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func3\n\tknative.dev/pkg@v0.0.0-20211206113427-18589ac7627e/websocket/connection.go:192"}

@vaikas
Contributor

vaikas commented Jul 22, 2022

Have you tried just running it on GKE? I seem to recall running things on kind while at Google, but it's been a while :)

I'd be happy to try your new bits on my local machine running kind tomorrow if that would help, though :)

@vaikas
Contributor

vaikas commented Aug 9, 2022

@asraa now that scaffolding creates the TUF root, I think that's a better solution than SIGSTORE_REKOR_PUBLIC_KEY?

@asraa
Contributor Author

asraa commented Aug 9, 2022

@asraa now that scaffolding creates the TUF root, I think that's a better solution than SIGSTORE_REKOR_PUBLIC_KEY?

I agree. In #276, I think we can remove it altogether now, right?
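For context, a hypothetical sketch of the kind of fallback a client could implement while the env variable is phased out (the getKeyFromTUF helper is made up for illustration; the real client resolves Rekor's key from TUF targets):

```go
package main

import (
	"fmt"
	"os"
)

// rekorPublicKeyPEM returns the PEM bytes of the Rekor public key, preferring
// the SIGSTORE_REKOR_PUBLIC_KEY override and otherwise falling back to the
// key distributed via the TUF root. getKeyFromTUF is a hypothetical helper.
func rekorPublicKeyPEM(getKeyFromTUF func() ([]byte, error)) ([]byte, error) {
	if path := os.Getenv("SIGSTORE_REKOR_PUBLIC_KEY"); path != "" {
		return os.ReadFile(path)
	}
	return getKeyFromTUF()
}

func main() {
	pem, err := rekorPublicKeyPEM(func() ([]byte, error) {
		// Stand-in for resolving the rekor.pub target from the TUF client.
		return nil, fmt.Errorf("TUF lookup not wired up in this sketch")
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("got %d bytes of Rekor public key\n", len(pem))
}
```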

@hectorj2f
Contributor

I believe we can close this issue; I don't have permission to do that.

@vaikas
Contributor

vaikas commented Aug 10, 2022

Thanks @hectorj2f !

@vaikas vaikas closed this as completed Aug 10, 2022