
Performance metrics #406

Closed

ncskier opened this issue Feb 3, 2020 · 5 comments
Labels
area/testing: Issues or PRs related to testing
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@ncskier
Member

ncskier commented Feb 3, 2020

@dibyom commented from #356 (comment):

I think having some performance numbers on the number of Triggers we can support would be nice!

Given that we relaxed the validation to not check for the presence of bindings/templates, I don't think creating the EL is where we'll run into issues.

We do use the Kubernetes/Triggers clientset to fetch resources in the EL sink at runtime (e.g. fetch the EL, then iterate over all the triggers; fetch the secret for GH/GL interceptors, etc.). That might be problematic if we have a large number of incoming requests.
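
For context, a rough sketch of that per-request pattern (the helper names below are hypothetical stand-ins, not the actual Triggers clientset API): every incoming webhook pays one API round trip for the EventListener plus one per trigger for its interceptor secret, so latency grows linearly with the trigger count.

```go
package sink

import (
	"context"
	"fmt"
	"time"
)

// Trigger and triggersClient are hypothetical, simplified stand-ins for the
// real Triggers clientset types, used only to illustrate the request path.
type Trigger struct {
	Name       string
	SecretName string
}

type triggersClient interface {
	// Each of these calls is a round trip to the Kubernetes API server.
	GetEventListenerTriggers(ctx context.Context, elName string) ([]Trigger, error)
	GetSecretToken(ctx context.Context, secretName string) ([]byte, error)
}

// handleWebhook fetches the EventListener's triggers, then the interceptor
// secret for each trigger, all on the request path. With N triggers and
// ~250ms per fetch, total time is roughly N*250ms, which has to stay under
// the sender's webhook timeout (10s for GitHub).
func handleWebhook(ctx context.Context, c triggersClient, elName string) error {
	start := time.Now()

	triggers, err := c.GetEventListenerTriggers(ctx, elName)
	if err != nil {
		return err
	}
	for _, t := range triggers {
		if _, err := c.GetSecretToken(ctx, t.SecretName); err != nil {
			return err
		}
		// ... run interceptors, resolve bindings/templates, create resources ...
	}

	fmt.Printf("processed %d triggers in %s\n", len(triggers), time.Since(start))
	return nil
}
```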

Expected Behavior

I expect to know the limitations of the Triggers project when using it on a large scale.

Actual Behavior

I do not know the limitations of the Triggers project when using it on a large scale.

Additional Info

This is a relatively open-ended issue, so please leave comments about performance metrics that you would find helpful.

@lawrencejones
Contributor

We have run into an issue where fetching the Kubernetes secret for the GitHub interceptor's validation is causing our GitHub webhooks to time out. We're at about 40 triggers, and each trigger takes about 250ms to process, causing the webhook request to exceed the 10s GitHub timeout.

I think it's clear we need to cache these secrets. Do you have a preference on how? We could change the interceptors to use a caching Kubernetes client like a lot of controllers use, or we could build a GetCachedSecretToken that caches the value for a default TTL.
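
A minimal sketch of the GetCachedSecretToken option, assuming a recent client-go (where Secrets().Get takes a context) and a fixed TTL; the type and method names here are illustrative, not an existing Triggers API:

```go
package interceptors

import (
	"context"
	"sync"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cachedToken is a secret value plus the time it was fetched.
type cachedToken struct {
	value     []byte
	fetchedAt time.Time
}

// SecretCache wraps a Kubernetes clientset and serves secret tokens from an
// in-memory cache so every webhook request doesn't hit the API server.
type SecretCache struct {
	client kubernetes.Interface
	ttl    time.Duration

	mu    sync.Mutex
	cache map[string]cachedToken
}

func NewSecretCache(client kubernetes.Interface, ttl time.Duration) *SecretCache {
	return &SecretCache{client: client, ttl: ttl, cache: map[string]cachedToken{}}
}

// GetCachedSecretToken returns the given key from the secret, fetching from
// the API server only when the cached copy is missing or older than the TTL.
func (s *SecretCache) GetCachedSecretToken(ctx context.Context, ns, name, key string) ([]byte, error) {
	cacheKey := ns + "/" + name + "/" + key

	s.mu.Lock()
	if tok, ok := s.cache[cacheKey]; ok && time.Since(tok.fetchedAt) < s.ttl {
		s.mu.Unlock()
		return tok.value, nil
	}
	s.mu.Unlock()

	secret, err := s.client.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	value := secret.Data[key]

	s.mu.Lock()
	s.cache[cacheKey] = cachedToken{value: value, fetchedAt: time.Now()}
	s.mu.Unlock()

	return value, nil
}
```

The caching-client route (an informer-backed lister) would avoid picking a TTL since it watches secrets for changes, at the cost of the EventListener needing list/watch permissions on secrets.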

@tekton-robot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) on Aug 14, 2020.
@tekton-robot

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dibyom
Member

dibyom commented Aug 14, 2020

/lifecycle frozen

I think @savitaashture did some initial work on perf metrics in one of her proposals!

@tekton-robot added the lifecycle/frozen label and removed the lifecycle/rotten label on Aug 14, 2020.