
Commit `9b0a4d4` (authored Apr 24, 2023, 1 parent: `2ec8342`)

examples/main README improvements and some light refactoring (#1131)

File tree: 5 files changed, +18 −10 lines

README.md (+1 −1)

````diff
@@ -241,7 +241,7 @@ Here is an example of a few-shot interaction, invoked with the command
 ./main -m ./models/13B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
 ```
 
-Note the use of `--color` to distinguish between user input and generated text.
+Note the use of `--color` to distinguish between user input and generated text. Other parameters are explained in more detail in the [README](examples/main/README.md) for the `main` example program.
 
 ![image](https://user-images.githubusercontent.com/1991296/224575029-2af3c7dc-5a65-4f64-a6bb-517a532aea38.png)
````
examples/common.cpp (+1 −3)

```diff
@@ -156,10 +156,8 @@ bool gpt_params_parse(int argc, char ** argv, gpt_params & params) {
             params.interactive = true;
         } else if (arg == "--embedding") {
             params.embedding = true;
-        } else if (arg == "--interactive-start") {
-            params.interactive = true;
         } else if (arg == "--interactive-first") {
-            params.interactive_start = true;
+            params.interactive_first = true;
         } else if (arg == "-ins" || arg == "--instruct") {
             params.instruct = true;
         } else if (arg == "--color") {
```
examples/common.h (+1 −1)

```diff
@@ -43,7 +43,7 @@ struct gpt_params {
     bool interactive = false; // interactive mode
 
     bool embedding = false; // get only sentence embedding
-    bool interactive_start = false; // wait for user input immediately
+    bool interactive_first = false; // wait for user input immediately
 
     bool instruct = false; // instruction mode (used for Alpaca models)
     bool ignore_eos = false; // do not stop generating after eos
```

examples/main/README.md (+12 −2)

````diff
@@ -21,12 +21,20 @@
 ./main -m models/7B/ggml-model.bin --prompt "Once upon a time"
 ```
 
+The following command generates "infinite" text from a starting prompt (you can use `Ctrl-C` to stop it):
+
+```bash
+./main -m models/7B/ggml-model.bin --ignore-eos --n_predict -1 --keep -1 --prompt "Once upon a time"
+```
+
 For an interactive experience, try this command:
 
 ```bash
 ./main -m models/7B/ggml-model.bin -n -1 --color -r "User:" --in-prefix " " --prompt $'User: Hi\nAI: Hello. I am an AI chatbot. Would you like to talk?\nUser: Sure!\nAI: What would you like to talk about?\nUser:'
 ```
 
+Note that the newline characters in the prompt string above only work on Linux. On Windows, you will have to use the ``--file`` option (see below) to load a multi-line prompt from file instead.
+
 ## Common Options
 
 In this section, we cover the most commonly used options for running the `main` program with the LLaMA models:
@@ -84,6 +92,8 @@
 
 - `-ins, --instruct`: Enable instruction mode to leverage the capabilities of Alpaca models in completing tasks based on user-provided instructions.
 
+Technical detail: the user's input is internally prefixed with the reverse prompt (or ``### Instruction:`` as the default), and followed by ``### Response:`` (except if you just press Return without any input, to keep generating a longer response).
+
 By understanding and utilizing these interaction options, you can create engaging and dynamic experiences with the LLaMA models, tailoring the text generation process to your specific needs.
 
 ## Context Management
@@ -114,7 +124,7 @@
 
 The `--n_predict` option controls the number of tokens the model generates in response to the input prompt. By adjusting this value, you can influence the length of the generated text. A higher value will result in longer text, while a lower value will produce shorter text. A value of -1 will cause text to be generated without limit.
 
-It is important to note that the generated text may be shorter than the specified number of tokens if an End-of-Sequence (EOS) token or a reverse prompt is encountered. In interactive mode text generation will pause and control will be returned to the user. In non-interactive mode, the program will end. In both cases, the text generation may stop before reaching the specified `n_predict` value.
+It is important to note that the generated text may be shorter than the specified number of tokens if an End-of-Sequence (EOS) token or a reverse prompt is encountered. In interactive mode text generation will pause and control will be returned to the user. In non-interactive mode, the program will end. In both cases, the text generation may stop before reaching the specified `n_predict` value. If you want the model to keep going without ever producing End-of-Sequence on its own, you can use the ``--ignore-eos`` parameter.
 
 ### RNG Seed
 
@@ -126,7 +136,7 @@
 
 - `--temp N`: Adjust the randomness of the generated text (default: 0.8).
 
-Temperature is a hyperparameter that controls the randomness of the generated text. It affects the probability distribution of the model's output tokens. A higher temperature (e.g., 1.5) makes the output more random and creative, while a lower temperature (e.g., 0.5) makes the output more focused, deterministic, and conservative. The default value is 0.8, which provides a balance between randomness and determinism.
+Temperature is a hyperparameter that controls the randomness of the generated text. It affects the probability distribution of the model's output tokens. A higher temperature (e.g., 1.5) makes the output more random and creative, while a lower temperature (e.g., 0.5) makes the output more focused, deterministic, and conservative. The default value is 0.8, which provides a balance between randomness and determinism. At the extreme, a temperature of 0 will always pick the most likely next token, leading to identical outputs in each run.
 
 Example usage: `--temp 0.8`
 
````
examples/main/main.cpp (+3 −3)

```diff
@@ -178,12 +178,12 @@ int main(int argc, char ** argv) {
 
     // in instruct mode, we inject a prefix and a suffix to each input by the user
     if (params.instruct) {
-        params.interactive_start = true;
+        params.interactive_first = true;
         params.antiprompt.push_back("### Instruction:\n\n");
     }
 
     // enable interactive mode if reverse prompt or interactive start is specified
-    if (params.antiprompt.size() != 0 || params.interactive_start) {
+    if (params.antiprompt.size() != 0 || params.interactive_first) {
         params.interactive = true;
     }
 
@@ -246,7 +246,7 @@ int main(int argc, char ** argv) {
 #endif
         " - Press Return to return control to LLaMa.\n"
         " - If you want to submit another line, end your input in '\\'.\n\n");
-        is_interacting = params.interactive_start;
+        is_interacting = params.interactive_first;
     }
 
     bool is_antiprompt = false;
```
