Extend Azure Pipelines Scaler to honor demands to implement GitLab's K8 Executor Feature #2308
SebastianSchuetze started this conversation in Ideas
Hi,
I have an idea I need to share, to see whether I should give the implementation a go.
Background / Motivation
I recently was able to run Azure Pipelines agents on the new Azure Container Apps with the KEDA scaler. That alone is a huge improvement, because it gives you scalable, container-based agent pools, something that has been on the Azure DevOps roadmap for a long time but has never been implemented.
In my company we use two DevOps tool platforms: Azure DevOps Services and GitLab. Our IT company (DB Systel), which provides IT for the whole Deutsche Bahn, is pretty much container based. It has teams managing Kubernetes clusters that can be used by all DevOps teams or partners who want container-based solutions.
Teams (often inner-sourced) also provide build containers for specific use cases (building Java, running WhiteSource Bolt code checks, building C#, etc.), which is actually a good thing: you separate concerns instead of making one big 10 GB image and hoping it works for everybody.
Azure Pipelines Agents vs. GitLab Runners
Because of this, our company makes heavy use of the GitLab Kubernetes executor, which can be used within GitLab pipelines. It works as follows:
Compared to that, Azure Pipelines looks pretty old school: you always have to run the container on the same host where the agent is running. That works fine with VMs, but with Docker-in-Docker you run into problems (especially on PaaS offerings that prohibit opening up the Docker socket). It means you can't use container-based pipelines and still use Docker images on the host.
So, long story short: I think this could be achieved by extending the Azure Pipelines scaler to also take the demands of jobs into account!
How does this help?
The current scaler calls an API that is not publicly documented (which is fine, since the AzDO API documentation is really incomplete anyway!) but stable. It calls
and returns, besides the job itself, the matching agents and a "demands" property. Demands describe what software must be installed on an agent for it to be selected. If no agent has the demanded capability, the pipeline fails, saying no agent is available. Capabilities can be customized and controlled (via environment variables) when connecting an agent.
The current scaler scales the container regardless of the demands. So if the container running the agent does not offer the demanded capability, it would scale up but never take a job.
So the idea would be to also give the scaler a list of capabilities to look for. If a waiting job demands a capability that is not in that list, the job is not counted.
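A minimal sketch of what that matching could look like inside the scaler. All type and field names here are illustrative, not KEDA's actual code or the exact wire format of the Azure DevOps response:

```go
package main

import "fmt"

// JobRequest loosely mirrors one entry from the job-requests response;
// the field names are invented for illustration.
type JobRequest struct {
	RequestID int
	Demands   []string
}

// demandsSatisfied reports whether every demand of a job is covered
// by the capabilities the scaled agent container advertises.
func demandsSatisfied(demands, capabilities []string) bool {
	caps := make(map[string]bool, len(capabilities))
	for _, c := range capabilities {
		caps[c] = true
	}
	for _, d := range demands {
		if !caps[d] {
			return false
		}
	}
	return true
}

// countMatchingJobs counts only the waiting jobs this agent image could
// actually pick up, instead of counting every queued job.
func countMatchingJobs(queue []JobRequest, capabilities []string) int {
	n := 0
	for _, j := range queue {
		if demandsSatisfied(j.Demands, capabilities) {
			n++
		}
	}
	return n
}

func main() {
	queue := []JobRequest{
		{RequestID: 1, Demands: []string{"maven"}},
		{RequestID: 2, Demands: []string{"docker"}},
	}
	// A maven-only agent image should only trigger scaling for job 1.
	fmt.Println(countMatchingJobs(queue, []string{"maven"})) // 1
}
```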
If I could do this, I could have multiple different container agents within one pool, each scaling up only when a job with the matching demands is queued. That way one pool could indirectly control which type of Docker image gets used.
For me this would be the closest thing to implementing GitLab's Kubernetes executor approach.
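To make the proposal concrete, a ScaledObject per agent image could then look roughly like this. Note that demands is the parameter being proposed here, not something the azure-pipelines scaler already supports, and the pool ID, names, and env-var names are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: maven-agents
spec:
  scaleTargetRef:
    name: maven-agent          # Deployment running the maven-capable agent image
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "1"            # placeholder pool ID
        organizationURLFromEnv: AZP_URL
        personalAccessTokenFromEnv: AZP_TOKEN
        # Proposed addition: only count waiting jobs whose
        # demands are covered by these capabilities
        demands: "maven"
```

A second ScaledObject with, say, demands: "docker" could then scale a different agent image from the same pool.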
Example
Think of this example queue in an agent pool:
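As a stand-in for the queue, here is a hypothetical one that matches the counts below: three waiting jobs, two of which demand cap3 (job names and demands are invented for illustration):

```go
package main

import "fmt"

// Hypothetical queue: job name -> demanded capabilities.
var queue = map[string][]string{
	"job-1": {"cap1"},
	"job-2": {"cap3"},
	"job-3": {"cap3"},
}

// countFor counts the queued jobs whose demands are all covered by caps.
func countFor(caps []string) int {
	have := map[string]bool{}
	for _, c := range caps {
		have[c] = true
	}
	n := 0
	for _, demands := range queue {
		ok := true
		for _, d := range demands {
			if !have[d] {
				ok = false
				break
			}
		}
		if ok {
			n++
		}
	}
	return n
}

func main() {
	// Counting every waiting job spins up three containers...
	fmt.Println(countFor([]string{"cap1", "cap2", "cap3"})) // 3
	// ...counting only jobs an image with cap3 can serve spins up two.
	fmt.Println(countFor([]string{"cap3"})) // 2
}
```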
Now imagine that a new container spins up for each new waiting job; then we would have three containers running. But if I additionally told the scaler to only count jobs demanding the capability cap3, then we would have only two containers running.

Final question
Do you see any problems with this approach? And since I know @tomkerkhove knows Azure DevOps, he might be able to judge whether this could be beneficial.