Configure number of pipelines running in parallel #868

Closed

gorkem opened this issue May 16, 2019 · 6 comments
Labels

design: This task is about creating and discussing a design
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.

Comments

@gorkem (Contributor) commented May 16, 2019

Tekton should be able to limit the number of PipelineRuns running concurrently on a namespace. PipelineRuns on a namespace should be queued if they exceed the configured limit. The limit should cover all PipelineRuns on the namespace regardless of the Pipeline that they reference.
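Purely as a sketch of the request (this ConfigMap and key do not exist in Tekton; all names below are invented for illustration), the requested knob might look something like a per-installation configuration entry:

```yaml
# Hypothetical configuration -- no such ConfigMap or key exists in Tekton.
# Sketch of what the requested per-namespace concurrency cap might look like.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-concurrency        # invented name
  namespace: tekton-pipelines
data:
  # Maximum PipelineRuns allowed to run at once in any single namespace;
  # runs beyond the cap would be queued until a slot frees up.
  max-concurrent-pipelineruns-per-namespace: "5"
```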

@vdemeester added the design, kind/feature, and help wanted labels May 16, 2019
@shashwathi (Contributor) commented

@gorkem:
If the goal is to limit underlying resource utilization in each namespace, isn't it better to use resource quotas instead of a limit on PipelineRuns?
Another idea would be to apply quota limits on Pods. Since the TaskRun CRD creates a Kubernetes Pod as part of reconciliation, you could also limit the number of pods (and therefore TaskRuns) in each namespace for a similar effect.
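For concreteness, a namespace-scoped ResourceQuota that caps the pod count might look like this (the namespace name and limit are placeholders, not a recommendation):

```yaml
# Example only: caps the number of non-terminal Pods (and therefore
# concurrently running TaskRuns) in the "ci" namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: taskrun-pod-cap
  namespace: ci
spec:
  hard:
    pods: "10"
```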

Are there any use cases that are not covered by either of those approaches? If so, could you please explain the reasoning behind this request?

@gorkem (Contributor, Author) commented May 23, 2019

The main idea is to avoid starving pipelines when resource quotas are applied. I do not think Pipelines checks whether there are enough resources to run a PipelineRun through to completion, so multiple pipelines started in the same namespace can starve each other and fail.

@ghost commented May 24, 2019

Hey there @gorkem, you're right that Pipelines doesn't check whether there are enough available resources for a Task to execute in a namespace. We're tracking that problem in #734, and I'm about 80% of the way through development. You can see my latest commit to retry pod creation in the face of ResourceQuota errors.
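For readers following along, here is a rough sketch of the retry idea using client-go (this is not the actual commit; the helper names are invented). A ResourceQuota rejection surfaces as a 403 Forbidden whose message contains "exceeded quota", so pod creation can be retried until the quota frees up or a deadline passes:

```go
// Sketch only, not the actual Tekton change: retry pod creation while the
// namespace's ResourceQuota is exhausted. Helper names are invented.
package quotaretry

import (
	"context"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isExceededQuota reports whether pod creation failed only because the
// namespace's ResourceQuota is currently exhausted.
func isExceededQuota(err error) bool {
	return err != nil && apierrors.IsForbidden(err) &&
		strings.Contains(err.Error(), "exceeded quota")
}

// CreatePodWithRetry retries creation every few seconds until the quota
// frees up, a non-quota error occurs, or ctx is cancelled.
func CreatePodWithRetry(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod) (*corev1.Pod, error) {
	for {
		created, err := c.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
		if !isExceededQuota(err) {
			return created, err // success, or an error unrelated to quota
		}
		select {
		case <-ctx.Done():
			return nil, err // gave up: quota never freed in time
		case <-time.After(5 * time.Second):
			// quota still exhausted: wait and try again
		}
	}
}
```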

I'm going to close this as a duplicate of #734. Feel free to reopen if you think this issue describes a different problem, drop comments on the design doc, or ping me on Slack (user Scott). Cheers!

@ghost closed this as completed May 24, 2019
@ghost commented May 24, 2019

One other interesting observation around this:

When a pipeline is running and a task is unable to fit on the node, the Pod is held in a Pending state until space is freed up on the node or the task times out. However, when a task is unable to fit due to a resource quota, the Pod is rejected immediately and the task fails. I find this Kubernetes behaviour slightly confusing: why does one kind of resource limit (node capacity) cause pending-and-retry behaviour while another (resource quota) causes immediate rejection? Anyway, working on this now.
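For later readers, the asymmetry comes from where each limit is enforced: a ResourceQuota is checked by an admission controller when the Pod is created, so the API server rejects the request synchronously, while node fit is only evaluated afterwards by the scheduler, which leaves an unschedulable Pod in Pending. A contrived illustration (all names and sizes invented):

```yaml
# This Pod requests more CPU than any node offers, so it is admitted but
# sits in Pending while the scheduler waits for room. If an exhausted
# ResourceQuota applied instead, the create call would fail outright.
apiVersion: v1
kind: Pod
metadata:
  name: big-task
  namespace: ci
spec:
  containers:
    - name: step
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "128"   # far more CPU than a typical node has
```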

@holly-cummins commented
I have another use case for this, which has to do with cross-talk between concurrent runs. In an integration test scenario, for example, the tasks depend on an external resource. If that resource is stateful (like a database), some tasks may be rebuilding the database while others are executing tests that use it. I'd love to be able to single-thread pipeline runs through the integration test phase.

@holly-cummins commented
I guess #2828 is a newer version of this request.

pradeepitm12 pushed a commit to openshift/tektoncd-pipeline that referenced this issue Jan 27, 2021
SYSTEM_NAMESPACE is set to the namespace of the EventListener by the reconciler, which means that with the current logic the interceptor URL is wrong. Use the default namespace so that `tekton-pipelines` is always used. This is hardcoded, and the logic is temporary until we implement tektoncd#868.

Also, actually wire up the HTTP handler in the interceptors server.
pradeepitm12 pushed a commit to openshift/tektoncd-pipeline that referenced this issue Jan 27, 2021
The EventListener did not have knowledge of which namespace Triggers was installed in. Instead, it always assumed `tekton-pipelines`, leading to the bug described in tektoncd#923. This commit fixes this by having the Reconciler send the installation namespace as an environment variable set on the EventListener's pod.

NOTE: This fix is temporary and should not be necessary once tektoncd#868 is implemented, since then we will resolve the Interceptor's address using information from its CRD.

Fixes tektoncd#923

Signed-off-by: Dibyo Mukherjee <dibyo@google.com>