I tested this app a few weeks ago and it's an elegant proof of concept - thanks so much for sharing!
I have a specific system prompt that I enable via a Modelfile, which I then use to create my own "custom model".
Using the current codebase, I was not able to override the system prompt of whatever model is called in the api.js file. The model would switch properly as defined at line 19, but entering a custom system prompt at line 87 onward never seemed to make any difference.
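For context, a per-request system prompt override along these lines would usually be sent as the first message in the request body. This is a minimal sketch assuming chatd talks to Ollama's /api/chat endpoint; the function name and prompt text below are illustrative, not chatd's actual code:

```javascript
// Sketch: build an Ollama /api/chat request body where a custom system
// prompt, if provided, is prepended as the first message. Whether this
// takes precedence over a Modelfile SYSTEM directive depends on the
// Ollama version; hypothetical helper, not from the chatd codebase.
function buildChatRequest(model, systemPrompt, userMessage) {
  const messages = [];
  if (systemPrompt) {
    // A "system" role message carries the custom instructions.
    messages.push({ role: "system", content: systemPrompt });
  }
  messages.push({ role: "user", content: userMessage });
  return { model, messages, stream: false };
}

const req = buildChatRequest(
  "my-custom-model",
  "You are a patient assistant who explains things in plain language.",
  "Hello!"
);
console.log(JSON.stringify(req, null, 2));
```

If the system message never reaches the request body (or the model's own Modelfile SYSTEM wins), edits at line 87 would appear to have no effect, which matches the behavior described above.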
I'm guessing I would need to upload/publish my customization (really just a system prompt) as its own model to Hugging Face and then see if Ollama could download that model instead. Would that work?
I'm wondering if the new npm package has you rethinking any of this project, and whether there's a possibility of making customizations like those in the Modelfile available when building a local app. This would be super-powerful. (Note, I'm not a developer... I'm just following instructions from the readme, so again, well done! This is already very accessible.)
Yes, creating a custom Modelfile with a different system prompt would work. If you're just using it locally, you can use ollama create and then specify the name in chatd. You don't need to publish it with ollama push. I'll take a look at adding some options for a custom system prompt in the coming weeks if I get a chance.
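The local workflow could look something like this. The base model name and prompt text are placeholders; the FROM/SYSTEM directives are Ollama's Modelfile syntax:

```
# Modelfile (illustrative)
FROM llama2
SYSTEM "You are a patient assistant who explains things in plain language."
```

Then, with no publishing step needed:

```shell
# Build the local model from the Modelfile in the current directory,
# then select "my-custom-model" from chatd's model picker.
ollama create my-custom-model -f Modelfile
```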
As for the new ollama-js project: I'm currently working on transitioning some parts of this project over to using it, but I don't plan on supporting all of its functionality. My goal with this project is to keep things simple for a wide range of users. If you're looking for a project that allows active modification of the models, there are quite a few options.
What excited me about the chatd project, specifically, was the ability to wrap it all up in an app. I work with an older demographic that would be more amenable to something they can click and install.
I've used Ollama's Modelfile to customize the model and incorporate prompts (and eventually a simple RAG) that should help with accessibility for this group as well.
If I do need to go through the steps of uploading to Hugging Face and publishing, I could then call the model through Ollama via chatd, is that correct?
Thanks again - and if there's a better spot for a question like this, feel free to redirect me.