```
llama.cpp: loading model from models/ausboss-llama-30b-supercot-q8_0.bin
error loading model: llama.cpp: tensor '�+� ��s��93:�a-�%��Y��8Ɓ0�&�M,�9�4������"/�@�չ�"*+c�5�������9�>+n��!������O...' should not be 2563577093-dimensional
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/ausboss-llama-30b-supercot-q8_0.bin'
main: error: unable to load model
```
I re-converted the model with 7e4ea5b; apparently the old file had been

```
llama_model_load_internal: format = ggjt v2 (latest)
```

and the new one is

```
llama_model_load_internal: format = ggjt v3 (latest)
```

(and 6% smaller!)
It would be nice if there were an error saying that ggjt v2 is not supported, instead of dumping out garbage tensor names and mind-bendingly large tensor dimensionalities 😁 but I suppose this doesn't necessarily need any action right now.
I freshly pulled 7e4ea5b and `make clean && make`'d, and it fails to load a model converted from PyTorch using the tools from revision 63d2046 (using https://github.com/akx/ggify); the log above is the result.
This seems to be related to