llama.cpp - You must either implement templateMessages or _streamChat #8647
Closed · standas started this conversation in Models + Providers · Replies: 1 comment
Found this link, which recommended switching the provider to openai. The updated YAML that works is now (note the added /v1 on apiBase):
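The linked YAML itself did not survive page extraction. Below is a minimal sketch of what such a working config.yaml might look like, assuming Continue's standard YAML schema; the assistant name and model name are placeholders, and only the `provider: openai` setting and the `/v1` suffix on `apiBase` come from the reply above.

```yaml
# Sketch only -- "Local Assistant" and the model name are placeholders.
# The key points from the reply: provider is openai (not llama.cpp),
# and apiBase ends in /v1 so Continue hits the OpenAI-compatible route.
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: llama-swap model
    provider: openai
    model: my-model
    apiBase: http://10.0.2.22:8080/v1
```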
Starting with a fresh install of the Continue extension in VS Code. I am coming back to Continue and migrating from my previously working config.json to config.yaml.
I erased ~/.continue before performing the fresh plugin install.
Continue Plugin Version: 1.2.10
I have llama-swap confirmed working at 10.0.2.22:8080 (running and tested with OpenWebUI). I wanted to test an initial model before making further changes, and modified config.yaml as follows:
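The original YAML block was lost when the page was scraped. The sketch below is a hypothetical reconstruction of the shape described, assuming the llama.cpp provider was used and a placeholder model name; only the host and port are taken from the post.

```yaml
# Hypothetical reconstruction -- the assistant and model names are
# placeholders; only the apiBase host/port reflect the post above.
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: llama-swap model
    provider: llama.cpp
    model: my-model
    apiBase: http://10.0.2.22:8080
```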
Receiving this error:
You must either implement templateMessages or _streamChat
I tried different models and appending '/v1/' endpoints to the apiBase, as well as the old workaround of using Ollama as the provider, but I am still getting the error.
Here is the log:
Am I missing a config.yaml parameter or is there some other item?
Thanks in advance for your help!