
Training with CPU or ROCm? #22

Open
sirus20x6 opened this issue Sep 7, 2023 · 0 comments

Comments

@sirus20x6

Can you train with CPU or ROCm? I don't have a CUDA card, and I get the following error:

/usr/lib/python3.11/site-packages/h5py/__init__.py:36: UserWarning: h5py is running against HDF5 1.14.2 when it was built against 1.14.1, this may cause problems
  _warn(("h5py is running against HDF5 {0} when it was built against {1}, "
/usr/lib/python3.11/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
Traceback (most recent call last):
  File "/code/git/Platypus/finetune.py", line 13, in <module>
    from peft import (
  File "/usr/lib/python3.11/site-packages/peft/__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "/usr/lib/python3.11/site-packages/peft/mapping.py", line 16, in <module>
    from .peft_model import (
  File "/usr/lib/python3.11/site-packages/peft/peft_model.py", line 31, in <module>
    from .tuners import (
  File "/usr/lib/python3.11/site-packages/peft/tuners/__init__.py", line 21, in <module>
    from .lora import LoraConfig, LoraModel
  File "/usr/lib/python3.11/site-packages/peft/tuners/lora.py", line 40, in <module>
    import bitsandbytes as bnb
  File "/usr/lib/python3.11/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "/usr/lib/python3.11/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
    from . import nn
  File "/usr/lib/python3.11/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "/usr/lib/python3.11/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "/usr/lib/python3.11/site-packages/bitsandbytes/optim/__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "/usr/lib/python3.11/site-packages/bitsandbytes/cextension.py", line 13, in <module>
    setup.run_cuda_setup()
  File "/usr/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py", line 120, in run_cuda_setup
    binary_name, cudart_path, cc, cuda_version_string = evaluate_cuda_setup()
                                                        ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py", line 341, in evaluate_cuda_setup
    cuda_version_string = get_cuda_version()
                          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py", line 311, in get_cuda_version
    major, minor = map(int, torch.version.cuda.split("."))
                            ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
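For context on the final frame: `torch.version.cuda` is `None` on CPU-only and ROCm builds of PyTorch (ROCm builds set `torch.version.hip` instead), so `torch.version.cuda.split(".")` inside bitsandbytes' CUDA setup raises this `AttributeError` before training even starts. The sketch below is only an illustration of the failure and a defensive parse, not a patch to bitsandbytes itself; `parse_cuda_version` is a hypothetical helper, and the string `"11.8"` is just a sample value mimicking what a CUDA build would report:

```python
def parse_cuda_version(cuda_version_string):
    """Parse a 'major.minor' CUDA version string as bitsandbytes does.

    A value of None means a CPU-only or ROCm PyTorch build, which the
    stock (CUDA-only) bitsandbytes cannot use. Returning None here is
    the guard the crashing code lacks: it calls .split(".") on None.
    """
    if cuda_version_string is None:
        return None  # avoid: AttributeError: 'NoneType' object has no attribute 'split'
    major, minor = map(int, cuda_version_string.split(".")[:2])
    return major, minor

# Mimics torch.version.cuda on a CUDA build vs. a CPU/ROCm build:
assert parse_cuda_version("11.8") == (11, 8)
assert parse_cuda_version(None) is None
```

In other words, the crash is not specific to Platypus: it happens as soon as `peft` imports `bitsandbytes` on a non-CUDA PyTorch build. Workarounds at the time were to run on a CUDA build, use a ROCm fork of bitsandbytes, or strip the 8-bit/`bitsandbytes` code paths from the training script.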
