[Feature Request] Ollama integration #156
Recently, Ollama implemented OpenAI API compatibility. One feasible way for czg to support Ollama is to use the `ollama cp` command to copy an existing model to a temporary name. Concretely, run commands like the following:

```shell
# ollama cp
ollama pull gemma
ollama ls
ollama cp <target-model> gpt-3.5-turbo
ollama ls # check cp success
```

Next, modify the contents of the .czrc file as shown below:

---

Thanks! I will start development on this soon 🫠

It is recommended to use commands for the configuration here. The loading path of the AI configuration is different from that of the other configurations.

```shell
npx czg --api-key=" " --api-endpoint="http://localhost:11434/v1"
```
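Since Ollama's compatibility layer exposes the standard OpenAI chat-completions API under `/v1`, the alias created above can be exercised with a plain HTTP request. A minimal sketch using only the Python standard library (the endpoint and model alias follow the thread; this is not czg's actual code):

```python
import json
import urllib.request

# Default Ollama port, OpenAI-compatible path (as used in the thread)
OLLAMA_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completion request for a local Ollama server."""
    payload = {
        "model": model,  # e.g. the "gpt-3.5-turbo" alias created via `ollama cp`
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but OpenAI-style clients usually send one
            "Authorization": "Bearer ollama",
        },
        method="POST",
    )

# Build (but do not send) a request against the copied model alias
req = build_chat_request("gpt-3.5-turbo", "feat: add Ollama support")
```

Sending `req` with `urllib.request.urlopen` should return the familiar OpenAI-shaped JSON response when an Ollama server is running locally.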
---

But every first request fails.
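A plausible explanation for the first request failing (an assumption on my part, not confirmed in this thread) is that Ollama loads the model into memory on the first call, which can time out while later calls succeed. A small retry-with-backoff wrapper is one possible client-side workaround:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on failure.

    Useful when the first request fails while a local Ollama server
    is still loading the model into memory.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:  # in practice, narrow this to network errors
            last_err = err
            time.sleep(base_delay * (2 ** attempt))
    raise last_err
```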
---

I have released a next pre-release version 🌟(v1.10.0-beta.1)🌟 for testing. Welcome to try it out! 👀 Let me know if there are still any issues. 🎉

```shell
npm install -g czg@next

czg -v # ensure version is v1.10.0-beta.1
czg -h # show help
```

# Usage

### 1. Use the default `gpt-4o-mini` model

```shell
czg ai
```

### 2. Specify a model for the current session

```shell
czg ai -M=gpt-3.5-turbo
```

### 3. Set the default model

```shell
czg --api-model="gpt-4"
```

See more setup details at https://deploy-preview-185--cz-git.netlify.app/recipes/openai#setup-openai-token
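The three options above form a precedence chain: a per-session `-M` flag overrides the stored `--api-model` default, which overrides the built-in `gpt-4o-mini`. A tiny illustrative sketch of that resolution order (not czg's actual code; the function name is hypothetical):

```python
DEFAULT_MODEL = "gpt-4o-mini"  # czg's built-in default

def resolve_model(session_flag=None, configured_default=None):
    """Resolve which model to use: -M flag > --api-model config > built-in default."""
    return session_flag or configured_default or DEFAULT_MODEL
```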
---

Thanks for your work. I just don't see how this relates to using local LLMs.
💭 Describe the feature
In addition to OpenAI, with its associated cost, it would be great if we could use Ollama.
💡 Proposed Solution
Extend the AI integration to support Ollama, which allows running various LLMs locally free of charge.