Issue with converting 7B weights on Ubuntu 20.04 LTS #1064

Closed
kaleavess opened this issue Apr 19, 2023 · 2 comments

Comments


kaleavess commented Apr 19, 2023

Hi all,

I am very new to AI models. I have tried a few models like alpaca.cpp and GPT4All, but those are based on 7B weights, and I want to run a 30B/65B model on my server. I followed the installation guide and started with 7B as a first trial; everything worked without problems until converting the .pth file to .ggml.

The error message is shown below:
llama@llama:~/llama.cpp$ python3 convert.py models/7B/
Loading model file models/7B/consolidated.00.pth
Loading vocab file models/tokenizer.model
Writing vocab...
[ 1/291] Writing tensor tok_embeddings.weight | size 32000 x 4096 | type UnquantizedDataType(name='F16')
[ 2/291] Writing tensor norm.weight | size 4096 | type UnquantizedDataType(name='F32')
[ 3/291] Writing tensor output.weight | size 32000 x 4096 | type UnquantizedDataType(name='F16')
Traceback (most recent call last):
  File "/home/llama/llama.cpp/convert.py", line 1149, in <module>
    main()
  File "/home/llama/llama.cpp/convert.py", line 1144, in main
    OutputFile.write_all(outfile, params, model, vocab)
  File "/home/llama/llama.cpp/convert.py", line 953, in write_all
    for i, ((name, lazy_tensor), ndarray) in enumerate(zip(model.items(), ndarrays)):
  File "/home/llama/llama.cpp/convert.py", line 875, in bounded_parallel_map
    result = futures.pop(0).result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/llama/llama.cpp/convert.py", line 950, in do_item
    return lazy_tensor.load().to_ggml().ndarray
  File "/home/llama/llama.cpp/convert.py", line 489, in load
    ret = self._load()
  File "/home/llama/llama.cpp/convert.py", line 497, in load
    return self.load().astype(data_type)
  File "/home/llama/llama.cpp/convert.py", line 489, in load
    ret = self._load()
  File "/home/llama/llama.cpp/convert.py", line 695, in load
    return UnquantizedTensor(storage.load(storage_offset, elm_count).reshape(size))
  File "/home/llama/llama.cpp/convert.py", line 680, in load
    fp = self.zip_file.open(info)
  File "/usr/lib/python3.10/zipfile.py", line 1535, in open
    raise BadZipFile("Bad magic number for file header")
zipfile.BadZipFile: Bad magic number for file header

Machine spec:
CPU: Ryzen 5700G
GPU: RTX 2060 12GB
RAM: 32GB

Thank you so much

Contributor

Azeirah commented Apr 19, 2023

Are you certain the model you downloaded is not corrupt? The error says that the checkpoint's zip archive is missing the magic header that every zip file should have. Perhaps your download of the 7B model is incomplete? Not sure.
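
Since PyTorch .pth checkpoints are zip archives internally (which is why the traceback ends in zipfile), you can sanity-check the file with Python's zipfile module before rerunning convert.py. A minimal sketch; the path is just an example, adjust it to your layout:

import zipfile

# Example path, adjust to match your directory layout.
path = "models/7B/consolidated.00.pth"

try:
    with zipfile.ZipFile(path) as zf:
        bad = zf.testzip()  # reads and CRC-checks every member
    if bad is None:
        print("archive looks intact")
    else:
        print("first corrupt member:", bad)
except zipfile.BadZipFile as exc:
    print("corrupt archive:", exc)

If this reports a corrupt archive, re-downloading the checkpoint is the likely fix.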

Collaborator

prusnak commented Apr 20, 2023

Check the downloaded files with sha256sum --ignore-missing -c SHA256SUMS

Please reopen if the downloaded files give the correct hashes.
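
If sha256sum is not available on your system, the same check can be scripted with Python's hashlib. A minimal sketch, assuming the usual "<hex digest>  <path>" line format of the SHA256SUMS file:

import hashlib
import os

# Minimal stand-in for: sha256sum --ignore-missing -c SHA256SUMS
with open("SHA256SUMS") as listing:
    for line in listing:
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        name = name.strip().lstrip("*")  # "*" marks binary mode in some listings
        if not os.path.exists(name):
            continue  # --ignore-missing behaviour
        digest = hashlib.sha256()
        with open(name, "rb") as blob:
            for chunk in iter(lambda: blob.read(1 << 20), b""):
                digest.update(chunk)
        print(name, "OK" if digest.hexdigest() == expected else "FAILED")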
