This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

When will higher allow the use of DDP (distributed data parallel)? #116

Open
brando90 opened this issue Sep 28, 2021 · 4 comments

Comments

@brando90

brando90 commented Sep 28, 2021

I remember from post #99 that DDP support seemed to be a feature under development for higher (I remember @albanD was tagged in that post).

I was curious about the progress on that?

Thanks!

related: #98

@kamalojasv181

Any updates on when this will happen?

@kamalojasv181

@brando90 Is it possible for me to just use your stateless model, skip your optimizers and fast-net API, and do distributed training? That is, I would compute the parameters of the fnet manually by subtracting the gradient from the parameters, and then use those updated params to meta-learn. Will this work for now?
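
For illustration, here is a minimal sketch of the manual-update idea described above. It assumes `higher.monkeypatch` returns a functional model that can be called with an explicit `params=` list, and uses hypothetical names (`ddp_model`, `meta_opt`, `manual_inner_adapt`, `inner_lr`, `inner_steps`) that are not from this thread. Whether DDP's gradient-synchronization hooks behave correctly under this double-backward pattern is exactly what this issue is asking about, so treat this as a sketch, not a confirmed recipe.

```python
import torch
import torch.nn.functional as F
import higher

# Hypothetical setup: `ddp_model` is the meta-model wrapped in
# torch.nn.parallel.DistributedDataParallel, and `meta_opt` is an ordinary
# optimizer over ddp_model.parameters().

def manual_inner_adapt(ddp_model, support_x, support_y, inner_lr=0.1, inner_steps=1):
    # Functional (stateless) view of the underlying module; the monkeypatched
    # model can be called with an explicit params= list.
    fmodel = higher.monkeypatch(
        ddp_model.module, copy_initial_weights=False, track_higher_grads=True
    )
    params = list(fmodel.parameters())
    for _ in range(inner_steps):
        inner_loss = F.cross_entropy(fmodel(support_x, params=params), support_y)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        # Manual SGD-style update kept in the autograd graph: p' = p - lr * grad.
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return fmodel, params

# Outer (meta) step, sketched:
#   fmodel, fast_params = manual_inner_adapt(ddp_model, sx, sy)
#   meta_loss = F.cross_entropy(fmodel(qx, params=fast_params), qy)
#   meta_loss.backward()            # DDP would need to all-reduce grads here
#   meta_opt.step(); meta_opt.zero_grad()
```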

@brando90
Author

brando90 commented Feb 1, 2022 via email

@kamalojasv181

kamalojasv181 commented Feb 2, 2022

@brando90 I'm sorry, which code? I used the pronoun "your" to refer to the higher code.
