Hi,
I'm trying to get llama.cpp up and running on Ubuntu 24.04 (kernel 6.8.0-44-generic) with IPEX-LLM, and it seems I can't select OpenCL as the API to run the model on. I'm not sure which option to choose, since I thought the A770 was an OpenCL device.
Here's the readout:
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 15.93it/s]
2024-09-15 00:56:22,587 - INFO - Converting the current model to sym_int4 format......
Traceback (most recent call last):
File "/home/cbytes/demo.py", line 11, in <module>
model = model.to('opencl')
^^^^^^^^^^^^^^^^^^
File "/home/cbytes/miniforge3/envs/llm/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2905, in to
return super().to(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cbytes/miniforge3/envs/llm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1174, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/home/cbytes/miniforge3/envs/llm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 780, in _apply
module._apply(fn)
File "/home/cbytes/miniforge3/envs/llm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 780, in _apply
module._apply(fn)
File "/home/cbytes/miniforge3/envs/llm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 805, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/home/cbytes/miniforge3/envs/llm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in convert
return t.to(
^^^^^
RuntimeError: PyTorch is not linked with support for opencl devices
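The RuntimeError above is PyTorch itself refusing the device string: stock PyTorch wheels are not built with an OpenCL backend, so `model.to('opencl')` can never work regardless of the GPU. With IPEX-LLM, Intel Arc GPUs such as the A770 are instead exposed through the `'xpu'` device (provided by intel_extension_for_pytorch). A minimal sketch of the likely fix, assuming the `ipex-llm` package and its `load_in_4bit` option (the checkpoint path is hypothetical):

```python
# Sketch assuming IPEX-LLM's XPU backend: Intel Arc GPUs show up as 'xpu',
# not 'opencl', which is not linked into stock PyTorch builds.
target_device = "xpu"  # use this instead of 'opencl'

try:
    from ipex_llm.transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "path/to/model",      # hypothetical local checkpoint path
        load_in_4bit=True,    # matches the sym_int4 conversion in the log
        trust_remote_code=True,
    )
    model = model.to(target_device)  # move to the Arc GPU
except ImportError:
    # ipex-llm (and its XPU dependencies) must be installed first
    print("ipex-llm not installed; try: pip install --pre ipex-llm[xpu]")
```

This is a sketch under the assumption that the environment was set up following IPEX-LLM's GPU install instructions (oneAPI runtime sourced, level-zero GPU drivers present); without those, `.to("xpu")` will also fail.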