Ability to take in a config file as initial prompt #141
On Windows I just use CMD files. I can just drop a prompt file on the CMD script, or use the terminal. Here is an example:

```bat
@REM explicitly state all possible parameters:
if NOT [%1]==[] llama.exe ^
  --model "llama-30B\ggml-model-q4_0.bin" ^
  --seed 1973 ^
  --threads 15 ^
  --n_predict 1024 ^
  --repeat_last_n 64 ^
  --batch_size 8 ^
  --repeat_penalty 1.4 ^
  --top_k 40 ^
  --top_p 0.9 ^
  --temp 0.8 ^
  --file %1 2>>llama.stderr.txt
```

Then I have another one set up for interactive mode: "User:"-style prompt files, etc.
In bash, you can just load a prompt directly into the string:

```sh
./main -m ./models/65B/ggml-model-q4_0.bin -t 8 -n 256 -p "$(<FILE_NAME_HERE)"
```
Thanks @nschulzke, I have been doing this, but I would ideally want to do something like:
You could also just use `--file`. But for @MLTQ, I think using a driver script is really the way to go, especially if you want to use a configuration language like YAML. I wrote a Python script to do exactly that. You're welcome to modify it to suit your purposes.
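As a rough illustration of the driver-script approach (this is not the script mentioned above; the config keys, file names, and defaults below are invented for this sketch, and it assumes PyYAML is installed):

```python
#!/usr/bin/env python3
# Sketch of a YAML-driven wrapper around a llama.cpp-style binary.
# Assumptions (not from this thread): PyYAML is available, the binary
# defaults to ./main, and the keys "binary", "model", "prompt_file",
# and "flags" are hypothetical names chosen for this example.
import subprocess
import sys

import yaml  # pip install pyyaml


def main(config_path: str) -> None:
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    # Start with the binary and the (required) model path.
    cmd = [cfg.get("binary", "./main"), "--model", cfg["model"]]

    # Pass the prompt in via --file so long prompts avoid shell quoting.
    if "prompt_file" in cfg:
        cmd += ["--file", cfg["prompt_file"]]

    # Forward any remaining flags verbatim: {threads: 8} -> --threads 8
    for key, value in cfg.get("flags", {}).items():
        cmd += [f"--{key}", str(value)]

    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "config.yaml")
```

Run it as `python driver.py config.yaml`; every key under `flags` is forwarded as a `--key value` pair, which matches the underscore-style flags used in the examples above.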
Following on from the "Store preprocessed prompts" discussion, it would be good to be able to take in a text file with a generic prompt and flags to start a chatbot or similar.
Such a config file could be YAML or TOML and include flags for running, model locations, prompt locations, etc.
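For illustration, such a config might look like this (key names are hypothetical and match the driver sketch earlier in the thread; the flag values are taken from the examples above):

```yaml
# Hypothetical config file; key names invented for illustration.
binary: ./main
model: models/65B/ggml-model-q4_0.bin
prompt_file: prompts/chatbot.txt
flags:
  threads: 8
  n_predict: 256
  top_k: 40
  top_p: 0.9
  temp: 0.8
  repeat_penalty: 1.4
```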