
Cannot run demo #74

Open
WencongY opened this issue May 11, 2023 · 2 comments

Comments

@WencongY

WencongY commented May 11, 2023

Hello,

I'm trying to run the simple demo below, but I hit an ImportError for Accelerator.

Demo:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()

class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [50278, 50279, 50277, 1, 0]
        for stop_id in stop_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False

system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""

prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
  **inputs,
  max_new_tokens=64,
  temperature=0.7,
  do_sample=True,
  stopping_criteria=StoppingCriteriaList([StopOnTokens()])
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))

===========================================================================

Error:

Traceback (most recent call last):
  File "/packages/miniconda/envs/user/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1146, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/packages/miniconda/envs/user/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/packages/miniconda/envs/user/lib/python3.10/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 32, in <module>
    from ...modeling_utils import PreTrainedModel
  File "/packages/miniconda/envs/user/lib/python3.10/site-packages/transformers/modeling_utils.py", line 83, in <module>
    from accelerate import __version__ as accelerate_version
  File "/home/user/.local/lib/python3.10/site-packages/accelerate/__init__.py", line 3, in <module>
    from .accelerator import Accelerator
ImportError: cannot import name 'Accelerator' from 'accelerate.accelerator'

===========================================================================

These are the packages in this environment:
Accelerate 0.19.0
Python 3.10.10
Pytorch 2.0.0
Transformers 4.28.1

I've double-checked that the accelerate library is correctly installed. Could anyone share which versions of these libraries you're using, and let me know what the problem might be?

Thank you!

@mcmonkey4eva

That import path hasn't moved in a while: https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L132

It looks like your conda env is a bit mixed up: you're running Python from the miniconda env `user`, but accelerate is being imported from your user site-packages (`/home/user/.local/...`) instead of the env.
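One quick way to confirm this (a sketch, not from the original thread; `locate` is just an illustrative helper) is to print where Python actually resolves each package from and compare it against the interpreter's location:

```python
# Sketch: check which installation a package would be imported from.
import importlib.util
import sys
from typing import Optional

def locate(package: str) -> Optional[str]:
    """Return the file path a package resolves to, or None if not found."""
    spec = importlib.util.find_spec(package)
    return spec.origin if spec else None

print("interpreter:", sys.executable)
for pkg in ("accelerate", "transformers"):
    print(f"{pkg}:", locate(pkg) or "not found")
    # A path under ~/.local rather than under the conda env's prefix
    # means user site-packages is shadowing the env's install.
```

If `accelerate` resolves to a path under `~/.local` rather than under the miniconda env, reinstalling it inside the activated env (or setting `PYTHONNOUSERSITE=1` to disable user site-packages) should fix the mismatch.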

@WencongY
Author

Thank you so much for the good catch. The path is fixed and the demo now runs.

However, I've noticed that CUDA memory often fills up very quickly, even right after the model checkpoint is loaded. I'm running it on an A100 GPU, so that shouldn't happen. Any insight into this would be much appreciated!
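For what it's worth, one common cause (a back-of-envelope sketch; the ~7B parameter count is my assumption about this checkpoint): `from_pretrained` materializes fp32 weights first, and calling `.half()` afterwards allocates an fp16 copy on top, so peak usage is roughly the sum of both:

```python
# Rough memory arithmetic for loading a ~7B-parameter model.
def weights_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate weight memory in GiB (ignores activations and KV cache)."""
    return n_params * bytes_per_param / 2**30

n = 7e9  # assumed parameter count for stablelm-tuned-alpha-7b
fp32 = weights_gib(n, 4)
fp16 = weights_gib(n, 2)
print(f"fp32 load: {fp32:.1f} GiB")   # ~26.1 GiB
print(f"fp16 copy: {fp16:.1f} GiB")   # ~13.0 GiB
print(f"peak with .half() after fp32 load: ~{fp32 + fp16:.1f} GiB")  # ~39.1 GiB
```

On a 40 GB A100 that peak sits right at the limit. Passing `torch_dtype=torch.float16` to `from_pretrained` loads the weights in fp16 directly and skips the fp32 stage, keeping peak usage near the fp16 number.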

Thank you
