This is Darth Gius's suggestion on the Reddit thread.
Sure. What I'd like to add is a completion endpoint for my Node.js app. Most chatbots do this by running a local server and printing a link (so the chatbot doesn't have to be restarted every time); the external app then sends the prompt and receives the output through that link, like here (I think that's the Tauri page for the API), or this (which uses the OpenAI API in Python to generate answers). I'm no expert on this, though, so for now I can/prefer to send context+prompt directly to your app and start it every time I need an answer. I already have Node.js code that starts a Python chatbot, sends prompts to it, and receives the outputs back in Node.js; now I would have to understand how your code works and replace the Python chatbot with your app.
On the rustformers GitHub page I see that one of the commands to generate an answer is `llm llama infer -m ggml-gpt4all-j-v1.3-groovy.bin -p "Rust is a cool programming language because"`. My basic idea for now is to change the Tauri app so it runs with `-p prompt`, where the prompt is received from my code through the link, or through a shared variable (if I don't use the link and instead start your app each time).