support elastic training #1515

Closed · qiankunli opened this issue Jan 6, 2022 · 4 comments
qiankunli (Contributor) commented Jan 6, 2022

elastic training design doc: https://mp.weixin.qq.com/s/93Fb1cyAJ3tuzTwwwC4CUQ

I'm pushing Volcano to support an elastic scheduler in PR volcano-sh/volcano#1884.

In order to support elastic training, the operator should collaborate with Volcano:

  1. The operator sets podgroup.minMember = elasticPolicy.min.
  2. The operator ensures that there are always elasticPolicy.max pods in total; in detail, if Volcano deletes a pod while scaling down, the operator should create a new pod at the same time (see the sketch after this list).
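
A minimal sketch of these two rules, assuming simplified local types in place of the real PyTorchJob and Volcano PodGroup APIs; the `Min`/`Max` field names are assumptions derived from elasticPolicy.min/max above, not the operator's actual code:

```go
package main

import "fmt"

// Simplified stand-ins for the real API types (assumption for illustration).
type ElasticPolicy struct {
	Min int32 // elasticPolicy.min
	Max int32 // elasticPolicy.max
}

type PodGroupSpec struct {
	MinMember int32 // podgroup.minMember consumed by the Volcano scheduler
}

// syncPodGroup applies rule 1: gang scheduling only needs elasticPolicy.min pods.
func syncPodGroup(spec *PodGroupSpec, policy ElasticPolicy) {
	spec.MinMember = policy.Min
}

// desiredReplacements applies rule 2: the operator always keeps
// elasticPolicy.max pods in existence, so when Volcano deletes pods while
// scaling down, the operator re-creates the difference.
func desiredReplacements(runningPods int32, policy ElasticPolicy) int32 {
	if runningPods >= policy.Max {
		return 0
	}
	return policy.Max - runningPods
}

func main() {
	policy := ElasticPolicy{Min: 5, Max: 10}
	spec := &PodGroupSpec{}
	syncPodGroup(spec, policy)
	fmt.Println("podgroup.minMember =", spec.MinMember)               // 5
	fmt.Println("pods to re-create  =", desiredReplacements(8, policy)) // 2
}
```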

In order to support elastic training, the operator needs to make the following changes:

  1. support the elasticPolicy field (done)
  2. in elastic mode, there is no master; all pods are workers, and the rank = 0 worker acts as the master (done)
  3. set podgroup.minMember = elasticPolicy.min (not supported)
  4. in elastic mode, for an elastic job with elasticPolicy.min = 5 and elasticPolicy.max = 10, it is fine for the job to keep running even if one pod fails, so we should change the job's successPolicy and failPolicy (not supported; see the sketch after this list)
  5. whether in elastic mode or not, ensure that the master (the rank = 0 worker) starts before the workers (not supported in elastic mode)
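
A rough sketch of the phase decision behind item 4, assuming the job status exposes a count of live workers; this is not the operator's real successPolicy/failPolicy implementation:

```go
package main

import "fmt"

type JobPhase string

const (
	JobRunning JobPhase = "Running"
	JobFailed  JobPhase = "Failed"
)

// decidePhase keeps an elastic job Running as long as at least
// elasticPolicy.min workers are still alive, even if some pods have failed;
// it only marks the job Failed when live workers drop below the minimum.
func decidePhase(runningWorkers, min int32) JobPhase {
	if runningWorkers >= min {
		return JobRunning
	}
	return JobFailed
}

func main() {
	// elasticPolicy.min = 5, elasticPolicy.max = 10, one pod has failed:
	fmt.Println(decidePhase(9, 5)) // Running: the single failure is tolerated
	fmt.Println(decidePhase(4, 5)) // Failed: fewer than min workers remain
}
```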

We can discuss the design, and I can submit a PR after we reach an agreement.

gaocegege (Member) commented

/cc @kubeflow/wg-training-leads

zw0610 (Member) commented Jan 6, 2022

  1. Are we referring to elastic training with PyTorchJob, or does this proposal apply to all training jobs?
  2. When re-creating a pod deleted/evicted by Volcano or any other scheduler, is there any requirement for the new pod, such as keeping the pod name or pod spec the same as the deleted/evicted one?
  3. For non-elastic mode, why should we ensure the worker with rank 0 starts before the other workers?

qiankunli (Contributor, Author) commented Jan 7, 2022

@zw0610

  1. It works only for the all-reduce model. It may also work for the PS model (like TensorFlow), but I haven't thought about that.
  2. Yes, the pod spec should remain the same as the deleted/evicted one.
  3. In the PyTorch scenario, the worker with rank 0 (we call it the master) is used as the service registry; workers connect to the master to find the other workers, and every worker has an env MASTER_ADDR=xxx-worker-0.
    1. In my company, Pod.restartPolicy=Never, because AI developers constantly change their training scripts and it is normal for a training script to fail; when an error happens, it is better to exit directly.
    2. But if a worker starts before the master, it will fail to connect to MASTER_ADDR because the master service has not been created yet and MASTER_ADDR is not accessible; the container will exit and cannot be restarted because Pod.restartPolicy=Never (see the sketch below).
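
A minimal sketch of points 3.1/3.2, not real operator code: it shows (a) how every worker can be pointed at the rank-0 worker via MASTER_ADDR, and (b) a start-up gate that delays workers with rank > 0 until the rank-0 pod is running, so they never crash on an unreachable MASTER_ADDR while Pod.restartPolicy=Never. The `<job>-worker-<rank>` naming scheme is an assumption for illustration.

```go
package main

import "fmt"

// masterAddr returns the address all workers use to reach the rank-0 worker,
// e.g. MASTER_ADDR=xxx-worker-0 for a job named "xxx".
func masterAddr(jobName string) string {
	return fmt.Sprintf("%s-worker-0", jobName)
}

// shouldCreateWorker gates pod creation: the rank-0 worker is always created
// first, and the remaining ranks wait until the rank-0 pod is running.
func shouldCreateWorker(rank int, masterRunning bool) bool {
	if rank == 0 {
		return true
	}
	return masterRunning
}

func main() {
	fmt.Println("MASTER_ADDR =", masterAddr("xxx"))

	masterRunning := false
	fmt.Println(shouldCreateWorker(0, masterRunning)) // true: create master first
	fmt.Println(shouldCreateWorker(1, masterRunning)) // false: wait for master

	masterRunning = true
	fmt.Println(shouldCreateWorker(1, masterRunning)) // true: safe to start now
}
```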

stale bot commented Apr 16, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot closed this as completed Apr 30, 2022