
train_MambaDFuse.py: error: unrecognized arguments: --local-rank=1 #3

Open
Jamie-Cheung opened this issue May 5, 2024 · 2 comments

@Jamie-Cheung

If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
[2024-05-05 12:13:12,367] torch.distributed.run: [WARNING]
[2024-05-05 12:13:12,367] torch.distributed.run: [WARNING] *****************************************
[2024-05-05 12:13:12,367] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-05-05 12:13:12,367] torch.distributed.run: [WARNING] *****************************************
usage: train_MambaDFuse.py [-h] [--opt OPT] [--launcher LAUNCHER] [--local_rank LOCAL_RANK] [--dist DIST]
train_MambaDFuse.py: error: unrecognized arguments: --local-rank=0
usage: train_MambaDFuse.py [-h] [--opt OPT] [--launcher LAUNCHER] [--local_rank LOCAL_RANK] [--dist DIST]
train_MambaDFuse.py: error: unrecognized arguments: --local-rank=1
[2024-05-05 12:13:17,382] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 2) local_rank: 0 (pid: 25314) of binary: /home/zzj/anaconda3/envs/mamba/bin/python
Traceback (most recent call last):
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in <module>
main()
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/zzj/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train_MambaDFuse.py FAILED

Failures:
[1]:
time : 2024-05-05_12:13:17
host : zzj
rank : 1 (local_rank: 1)
exitcode : 2 (pid: 25315)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2024-05-05_12:13:17
host : zzj
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 25314)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Could you tell me how to solve this problem? Thank you for your time.
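For context on the error itself: newer versions of torch.distributed.launch (and torchrun) pass the worker index to the script as --local-rank (hyphen), while train_MambaDFuse.py only registers --local_rank (underscore), so argparse rejects the flag. One fix, which is what the warning at the top of the log suggests, is to read the rank from the environment instead of from a command-line flag. A minimal sketch (the variable name local_rank here is illustrative, not taken from the MambaDFuse code):

```python
import os

# torchrun and torch.distributed.launch export LOCAL_RANK for each
# worker process; fall back to 0 for plain single-process runs.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
```

With this in place, the --local_rank argparse argument (and the launcher's --local-rank flag) can be dropped entirely.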

@Laulen

Laulen commented Sep 6, 2024

I also encountered the same problem. Have you solved it?

@cwl520wwh

It seems the author's code is set up for multi-GPU training. If you are running it with only a single GPU, you need to modify it for single-GPU training.
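Alternatively, if you want to keep the multi-GPU launcher working, another common workaround for the flag mismatch is to register both spellings of the option, so either an old or a new launcher version is accepted. A sketch, assuming the script parses its arguments with argparse (the parser shown is a stand-in, not the actual MambaDFuse parser):

```python
import argparse

parser = argparse.ArgumentParser()
# Accept both the legacy underscore spelling and the hyphenated one
# passed by newer launchers; argparse takes the destination name from
# the first long option, so either form lands in args.local_rank.
parser.add_argument("--local_rank", "--local-rank", type=int, default=0)

args = parser.parse_args(["--local-rank=1"])
print(args.local_rank)  # 1
```

Both option strings map to the same args.local_rank attribute, so the rest of the training script needs no changes.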
