transformers json logits processor crashes with Llama3 8B #1115

Closed
Dan-wanna-M opened this issue Aug 25, 2024 · 1 comment

Dan-wanna-M commented Aug 25, 2024

Describe the issue as clearly as possible:

The transformers JSON logits processor crashes when calling generate() with a Llama 3 8B model.

Steps/code to reproduce the bug:

import os

# Restrict visibility to a single GPU before torch initializes CUDA
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
from outlines.models.transformers import TransformerTokenizer
from outlines.processors import JSONLogitsProcessor
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList


# Minimal schema to constrain generation to JSON
class A(BaseModel):
    a: int


model = AutoModelForCausalLM.from_pretrained(
    "NurtureAI/Meta-Llama-3-8B-Instruct-32k",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained("NurtureAI/Meta-Llama-3-8B-Instruct-32k")
outlines_tokenizer = TransformerTokenizer(tokenizer)
logits_processor = LogitsProcessorList([JSONLogitsProcessor(A, outlines_tokenizer)])
model.generation_config.pad_token_id = tokenizer.eos_token_id
inputs = tokenizer(["Something"], return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, logits_processor=logits_processor, max_new_tokens=100)

Expected result:

A complete or incomplete JSON string.

Error message:

../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [6,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
[... the same assertion repeats for threads [1,0,0] through [31,0,0] ...]
Traceback (most recent call last):
  File "/home/xs28/.config/JetBrains/PyCharm2024.2/scratches/scratch.py", line 22, in
    outputs = model.generate(**inputs, logits_processor=logits_processor,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xs28/formatron/venv/lib64/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/xs28/formatron/venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 1989, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/home/xs28/formatron/venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 2971, in _sample
    next_tokens = torch.argmax(next_token_scores, dim=-1)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
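For context, this device-side assert is the CUDA form of an out-of-range index. The same operation on CPU fails eagerly with an IndexError, which is why moving a repro to CPU is a common way to localize it. A minimal sketch of the failure mode, using a toy vocabulary size rather than the actual outlines internals:

```python
import torch

# Toy analogue of the failure mode (not the actual outlines code): masking
# logits with a token id that lies outside the vocabulary. On CPU this
# raises an eager IndexError; on CUDA the same out-of-range index surfaces
# as the asynchronous "index out of bounds" device-side assertion above.
vocab_size = 10
logits = torch.zeros(1, vocab_size)
allowed = torch.tensor([3, 12])  # 12 is out of range for a vocab of 10

try:
    logits[0, allowed] = -float("inf")
    failed = False
except IndexError:
    failed = True

print(failed)  # True: the out-of-range index is rejected eagerly on CPU
```

As the traceback note suggests, rerunning the original script with CUDA_LAUNCH_BLOCKING=1 makes kernel launches synchronous, so the Python stack trace points at the op that actually triggered the assert.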

Outlines/Python version information:

Version information:
outlines==0.0.46
Python 3.11.9 (main, Jun 19 2024, 10:02:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]

accelerate==0.33.0
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
attrs==24.2.0
certifi==2024.7.4
charset-normalizer==3.3.2
cloudpickle==3.0.0
cmake==3.30.1
cramjam==2.8.3
datasets==2.21.0
dill==0.3.8
diskcache==5.6.3
einops==0.8.0
exllamav2 @ file:///home/xs28/formatron/exllamav2-0.1.9%2Bcu121.torch2.4.0-cp311-cp311-linux_x86_64.whl#sha256=c2f5ae8e34674df9d10941195a6a00aeef52dda65d306195d1f033c00de1b4bc
fastdiff==0.3.0
fastparquet==2024.5.0
filelock==3.15.4
flash-attn==2.6.3
formatron @ file:///home/xs28/formatron
frozenlist==1.4.1
fsspec==2024.5.0
huggingface-hub==0.24.5
idna==3.7
iniconfig==2.0.0
interegular==0.3.3
Jinja2==3.1.4
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
kbnf==0.3.3
lark==1.2.2
llvmlite==0.43.0
lm-format-enforcer==0.10.6
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.16
nest-asyncio==1.6.0
networkx==3.3
ninja==1.11.1.1
numba==0.60.0
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.5.82
nvidia-nvtx-cu12==12.1.105
outlines==0.0.46
packaging==24.1
pandas==2.2.2
pip3-autoremove==1.2.2
pluggy==1.5.0
protobuf==5.27.3
psutil==6.0.0
pyairports==2.1.1
pyarrow==17.0.0
pycountry==24.6.1
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
pytest==8.3.2
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
referencing==0.35.1
regex==2024.7.24
requests==2.32.3
rich==13.7.1
rpds-py==0.20.0
rwkv==0.8.26
safetensors==0.4.3
sentencepiece==0.2.0
six==1.16.0
snapshottest==0.6.0
sniffio==1.3.1
sympy==1.13.1
termcolor==2.4.0
tokenizers==0.19.1
torch==2.4.0
tqdm==4.66.4
transformers==4.43.3
triton==3.0.0
typing_extensions==4.12.2
tzdata==2024.1
urllib3==2.2.2
wasmer==1.1.0
wasmer-compiler-cranelift==1.1.0
websockets==12.0
xxhash==3.5.0
yarl==1.9.4

Context for the issue:

No response

lapp0 (Contributor) commented Aug 30, 2024

I managed to reproduce your error on outlines==0.0.46. Fortunately, the error doesn't occur on main.

You can install the pre-release and resolve your issue via

pip install --upgrade git+https://github.com/outlines-dev/outlines
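After upgrading, you can confirm which build is active; any version newer than the 0.0.46 PyPI release should carry the fix. A small standard-library check (only the package name "outlines" is assumed):

```python
from importlib.metadata import PackageNotFoundError, version

# Report the installed outlines version; a git install from main will show
# a version string newer than the 0.0.46 release.
try:
    print(version("outlines"))
except PackageNotFoundError:
    print("outlines is not installed")
```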

If you have any further questions / issues or I missed something, please feel free to re-open and ask!

lapp0 closed this as completed Aug 30, 2024