This repository has been archived by the owner on Sep 2, 2024. It is now read-only.

DDP usage example #47

Open
ytwanghaoyu opened this issue Jul 17, 2024 · 0 comments


@ytwanghaoyu

Hi

First of all, thanks for the great work.

Recently I've been diving into this research and noticed that it requires multiple backward passes through a shared model. This is not allowed under DDP and raises a runtime error:

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index *** with name ****.weight has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this particular parameter during this iteration.
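
To make the failure mode concrete, here is a minimal sketch of the kind of pattern that hits this error. The `TwoHead` module, layer sizes, and losses are placeholders (not the actual model from this repo), and it assumes the process group is already initialized (e.g. launched with torchrun) with one GPU per process:

```python
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class TwoHead(nn.Module):
    """Placeholder: two heads sharing one trunk."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Linear(16, 16)
        self.head_a = nn.Linear(16, 1)
        self.head_b = nn.Linear(16, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.head_a(h), self.head_b(h)

model = DDP(TwoHead().cuda())
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(8, 16, device="cuda")
out_a, out_b = model(x)

# First backward only reaches trunk + head_a, so the reducer never finalizes
# this iteration (head_b's bucket stays incomplete).
out_a.sum().backward(retain_graph=True)

# Second backward fires the autograd hooks for the shared trunk parameters a
# second time within the same iteration -> "marked as ready twice".
out_b.sum().backward()
opt.step()
```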

I've already tried the no_sync() context manager and _set_static_graph(), but neither works. I'm wondering whether there is a DDP-compatible version of this algorithm; that would be very helpful.
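
For reference, this is roughly how I applied the two workarounds, continuing the placeholder sketch above (so `TwoHead` and `x` are still hypothetical names, not the repo's code):

```python
# Attempt 1: declare the graph static right after wrapping the model
# (on recent PyTorch this is equivalent to passing static_graph=True
# to the DDP constructor).
model = DDP(TwoHead().cuda())
model._set_static_graph()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

# Attempt 2: run the first backward without gradient synchronization and let
# only the final backward all-reduce. no_sync() has to wrap the forward pass
# as well, since DDP decides at forward time whether to prepare for syncing.
with model.no_sync():
    out_a, _ = model(x)
    out_a.sum().backward()   # gradients accumulate locally, no all-reduce
_, out_b = model(x)          # forward outside no_sync() re-enables syncing
out_b.sum().backward()       # gradients are all-reduced here
opt.step()
```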
