I want to fine-tune gpt4all-lora-quantized.bin, which is supported by llama.cpp, but I can't find a tokenizer config.json file in the repo LLukas22/gpt4all-lora-quantized-ggjt — I only found the .bin model file. Could anyone help with that?
Alternatively, could someone tell me how to fine-tune a gpt4all model that I could then run with llama.cpp?
Right now, llama.cpp does not support finetuning.
The issue tracking finetuning for llama.cpp/ggml: ggerganov/ggml#8
It is also listed in the low-priority section of #1220.