Dear developers!
I have an installed and properly working ALIGNN v2024.2.4. I cloned the ALIGNNTL repository and am trying to reproduce the FineTuning example. The first problem I ran into is that the train_folder.py script (which the example suggests running) does not contain the all_models = {...} registry required for transfer learning. I then found that the train.py script does contain this code.
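For context, my understanding is that all_models is just a name-to-class registry of the available networks. A minimal sketch of its shape, purely for illustration (the exact keys and classes in ALIGNNTL's train.py may differ), would be:

    # Illustrative sketch only: the exact contents of all_models in
    # ALIGNNTL's train.py may differ from this.
    from alignn.models.alignn import ALIGNN

    all_models = {
        "alignn": ALIGNN,
        # ... further entries map other "model.name" strings to model classes
    }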
So I tried to run

python alignn/train.py --root_dir "../examples" --config "../examples/config_example.json" --id_prop_file "id_prop.csv" --output_dir=model

but got the following errors:

    from .named_optimizer import _NamedOptimizer
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/optim/named_optimizer.py", line 11, in <module>
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/__init__.py", line 1, in <module>
    from ._flat_param import FlatParameter as FlatParameter
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 30, in <module>
    from torch.distributed.fsdp._common_utils import (
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/_common_utils.py", line 35, in <module>
    from torch.distributed.fsdp._fsdp_extensions import FSDPExtensions
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/_fsdp_extensions.py", line 8, in <module>
    from torch.distributed._tensor import DeviceMesh, DTensor
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_tensor/__init__.py", line 6, in <module>
    import torch.distributed._tensor.ops
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_tensor/ops/__init__.py", line 2, in <module>
    from .embedding_ops import *  # noqa: F403
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_tensor/ops/embedding_ops.py", line 8, in <module>
    import torch.distributed._functional_collectives as funcol
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_functional_collectives.py", line 12, in <module>
    from . import _functional_collectives_impl as fun_col_impl
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_functional_collectives_impl.py", line 36, in <module>
    from torch._dynamo import assume_constant_result
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/__init__.py", line 2, in <module>
    from . import convert_frame, eval_frame, resume_execution
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 40, in <module>
    from . import config, exc, trace_rules
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 11, in <module>
    from .utils import counters
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 5, in <module>
    import cProfile
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/cProfile.py", line 23, in <module>
    run.__doc__ = _pyprofile.run.__doc__
AttributeError: module 'profile' has no attribute 'run'
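For what it's worth, the final AttributeError looks to me like a module-shadowing issue rather than an ALIGNN bug: the standard-library cProfile imports the profile module internally, so the error suggests Python resolved some other module named profile that has no run attribute. A quick, generic way to check which profile module is being picked up (just a sanity-check sketch, nothing ALIGNNTL-specific) is:

    # Sanity check: which "profile" module does Python resolve?
    # cProfile imports it as _pyprofile, and the AttributeError above says
    # that module has no "run", i.e. it is probably not the stdlib one.
    import importlib.util

    spec = importlib.util.find_spec("profile")
    print(spec.origin if spec else "no module named 'profile' found")
    # Expected: /home/anton/miniconda3/envs/alignn/lib/python3.10/profile.py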
From setup.py I see that the expected ALIGNN version is 2021.11.16. Is that pin mandatory, and could it be the cause of these problems? If so, are you planning to update the ALIGNNTL code to support the current version of ALIGNN?
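(For completeness, this is how I confirm which ALIGNN version is actually installed in this environment, just to rule out a mixed-up install:)

    # Report the installed ALIGNN version, to compare against the
    # 2021.11.16 version expected by ALIGNNTL's setup.py.
    from importlib.metadata import version
    print(version("alignn"))  # prints 2024.2.4 in my environment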
Best regards,
Anton.