Feature Description

Please add support for the "LlavaMistralForCausalLM" architecture in convert-hf-to-gguf.py. The model at https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b/tree/main is split into four shards, and conversion fails while loading it with the error below. How can this be solved?

Motivation

Running the conversion script fails:

python .\convert-hf-to-gguf.py .\llava-med-v1.5-mistral-7b\ --outfile .\llava-med\
Loading model: llava-med-v1.5-mistral-7b
Traceback (most recent call last):
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 1876, in <module>
    main()
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 1857, in main
    model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 50, in __init__
    self.model_arch = self._get_model_architecture()
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 281, in _get_model_architecture
    raise NotImplementedError(f'Architecture "{arch}" not supported!')
NotImplementedError: Architecture "LlavaMistralForCausalLM" not supported!

Possible Implementation

No response
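For what it's worth, a minimal sketch of a common community workaround, under several assumptions: strip the vision-tower and multimodal-projector tensors and relabel the architecture so the remaining language model converts as a plain Mistral model. The tensor-name prefixes (`model.vision_tower.`, `model.mm_projector.`), the shard naming/format (sharded safetensors; if this checkpoint ships `.bin` shards the same filtering applies via `torch.load`), and the target architecture name `MistralForCausalLM` are assumptions, not verified against this checkpoint. llama.cpp's `examples/llava/llava-surgery.py` performs a similar extraction for standard LLaVA checkpoints and may be adaptable.

```python
# Sketch only: strip LLaVA vision/projector tensors from a sharded
# safetensors checkpoint and relabel the architecture so that
# convert-hf-to-gguf.py sees a plain MistralForCausalLM model.
# Tensor-name prefixes and shard naming below are assumptions based on
# typical LLaVA checkpoints; verify them against your shards first.
import json
from pathlib import Path

from safetensors.torch import load_file, save_file

model_dir = Path("llava-med-v1.5-mistral-7b")  # hypothetical local path
skip_prefixes = ("model.vision_tower.", "model.mm_projector.")  # assumed names

# 1. Relabel the architecture in config.json.
config_path = model_dir / "config.json"
config = json.loads(config_path.read_text())
config["architectures"] = ["MistralForCausalLM"]  # assumed target arch
config_path.write_text(json.dumps(config, indent=2))

# 2. Drop vision/projector tensors from every shard.
for shard in sorted(model_dir.glob("model-*.safetensors")):
    tensors = load_file(shard)
    kept = {k: v for k, v in tensors.items() if not k.startswith(skip_prefixes)}
    if len(kept) != len(tensors):
        save_file(kept, shard)

# 3. Prune the dropped tensors from the shard index so loading doesn't
#    look for weights that no longer exist. (The "metadata" total_size
#    field becomes stale but is informational only.)
index_path = model_dir / "model.safetensors.index.json"
if index_path.exists():
    index = json.loads(index_path.read_text())
    index["weight_map"] = {
        k: v
        for k, v in index["weight_map"].items()
        if not k.startswith(skip_prefixes)
    }
    index_path.write_text(json.dumps(index, indent=2))
```

After this, convert-hf-to-gguf.py should only see the language-model weights; the vision encoder and projector would still need separate handling (for example via the llava-surgery route) to get a working multimodal setup.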
I'm running into the same issue.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Sounds like a valid issue