[feature] Allow jobs to be scheduled on AWS Fargate #5555
Comments
@yuhuishi-convect we might want to switch to argo v3 emissary executor: https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary
This can be resolved by #1654, when we switch to that executor.
I got a workaround under version 1.2 to allow scheduling jobs onto Fargate nodes. Here are the things I did:
and change
Then apply the transformation to every
So in the pipeline
will hint the task can be scheduled on Fargate.
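The inline configuration from this comment did not survive copying, so the following is only a rough sketch of the same idea, assuming a KFP v1 SDK pipeline: an op transformer applied to every task that adds a node selector and toleration for Fargate. The label and toleration values (`eks.amazonaws.com/compute-type: fargate`) and the function name are assumptions and must be adjusted to match your EKS Fargate profile.

```python
# Sketch only: the selector/toleration values below are assumptions and must
# match your EKS Fargate profile.
import kfp
from kfp import dsl
from kubernetes.client import V1Toleration


def schedule_on_fargate(op):
    """Hint that a task may be scheduled on Fargate nodes (hypothetical values)."""
    op.add_node_selector_constraint("eks.amazonaws.com/compute-type", "fargate")
    op.add_toleration(V1Toleration(
        key="eks.amazonaws.com/compute-type",
        operator="Equal",
        value="fargate",
        effect="NoSchedule",
    ))
    return op


@dsl.pipeline(name="fargate-example", description="Run every task on Fargate")
def fargate_pipeline():
    # Apply the transformation to every op declared in the pipeline.
    dsl.get_pipeline_conf().add_op_transformer(schedule_on_fargate)
    dsl.ContainerOp(name="echo", image="alpine:3.12", command=["echo", "hello fargate"])


if __name__ == "__main__":
    kfp.compiler.Compiler().compile(fargate_pipeline, "fargate_pipeline.yaml")
```

With `add_op_transformer`, every op in the pipeline picks up the constraint without editing each component individually.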
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
With KFP 1.7.0-rc.2, we support using the Argo emissary executor and it should be able to run on Fargate. (I verified it works on GKE Autopilot.)
Thanks for the update @Bobgy. Will this executor mode be enabled by default under 1.7, or will editing the configuration be required?
1.7.0-rc.2 defaults to emissary, but I am reverting that.
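For reference, a minimal sketch of switching the executor by hand, assuming the Argo workflow-controller-configmap is installed in the `kubeflow` namespace and that the `containerRuntimeExecutor` key is still honored by the bundled Argo version:

```python
# Sketch: patch the Argo workflow-controller-configmap to use the emissary
# executor. The namespace ("kubeflow") and configmap name depend on how KFP
# was installed, so treat them as assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
core.patch_namespaced_config_map(
    name="workflow-controller-configmap",
    namespace="kubeflow",
    body={"data": {"containerRuntimeExecutor": "emissary"}},
)
```

Depending on the Argo version, the workflow-controller pod may need a restart to pick up the change.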
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Closing this issue. This seems resolved, but if it's not please open another issue. /close
Feature Area
/area backend
What feature would you like to see?
I am trying to run a large number of kubeflow pipeline jobs on AWS Fargate.
The Kubeflow Pipelines components are deployed on AWS EKS. While the EKS cluster has a Fargate profile that allows scheduling pods onto virtual nodes, Kubeflow Pipelines jobs contain privileged containers that prevent them from running on Fargate resources (https://docs.aws.amazon.com/eks/latest/userguide/fargate.html).
What is the use case or pain point?
This feature enables more cost-efficient job scheduling: many jobs (e.g., hyperparameter tuning, scenario analysis) are ephemeral, so scheduling them on a serverless machine pool such as the one provided by Fargate makes more sense. It avoids the need to reserve a pool of nodes upfront while still supporting bursty workloads.
However, Kubeflow Pipelines jobs use privileged containers that are not supported by Fargate. For example, the `wait` container needs further configuration under `securityContext`. I am wondering if there are any workarounds or better solutions to make the jobs schedulable on serverless resource pools such as Fargate.
Is there a workaround currently?
I do not see any solutions so far.
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.