
Commit d8d4e86

Authored Apr 2, 2023
Add a missing step to the gpt4all instructions (#690)
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
1 parent e986f94 commit d8d4e86

1 file changed (+4, -2)
 

README.md (+4, -2)
````diff
@@ -232,13 +232,15 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 
 - Obtain the `gpt4all-lora-quantized.bin` model
 - It is distributed in the old `ggml` format which is now obsoleted
-- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py):
+- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py). You may also need to
+convert the model from the old format to the new format with [./migrate-ggml-2023-03-30-pr613.py](./migrate-ggml-2023-03-30-pr613.py):
 
 ```bash
 python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
+python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-7B/gpt4all-lora-quantized.bin models/gpt4all-7B/gpt4all-lora-quantized-new.bin
 ```
 
-- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models
+- You can now use the newly generated `gpt4all-lora-quantized-new.bin` model in exactly the same way as all other models
 - The original model is saved in the same folder with a suffix `.orig`
 
 ### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
````
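For context, once the migration step has produced `gpt4all-lora-quantized-new.bin`, the model is loaded like any other. A minimal usage sketch, assuming the standard llama.cpp `main` example and its `-m`, `-p`, and `-n` flags; the prompt text and token count below are illustrative placeholders, not part of this commit:

```bash
# Run the migrated gpt4all model with the main example
# (prompt and -n value are placeholders chosen for illustration)
./main -m ./models/gpt4all-7B/gpt4all-lora-quantized-new.bin -p "Tell me about alpacas." -n 128
```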
