# llama3.yml
# Collection of all the prompts
prompts:
- task: general
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}{% if relevant_chunks != None and relevant_chunks != '' %}
This is some relevant context:
```markdown
{{ relevant_chunks }}
```{% endif %}
- "{{ history | to_chat_messages }}"
# Prompt for detecting the user message canonical form.
- task: generate_user_intent
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}
Your task is to generate the user intent in a conversation given the last user message, similar to the examples below.
Do not provide any explanations, just output the user intent.
# Examples:
{{ examples | verbose_v1 }}
- "{{ sample_conversation | first_turns(2) | to_messages }}"
- "{{ history | colang | to_messages }}"
- type: assistant
content: |
Bot thinking: potential user intents are: {{ potential_user_intents }}
output_parser: "verbose_v1"
# Prompt for generating the next steps.
- task: generate_next_steps
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}
Your task is to generate the next steps in a conversation given the last user message, similar to the examples below.
Do not provide any explanations, just output the user intent and the next steps.
# Examples:
{{ examples | remove_text_messages | verbose_v1 }}
- "{{ sample_conversation | first_turns(2) | to_intent_messages }}"
- "{{ history | colang | to_intent_messages }}"
output_parser: "verbose_v1"
# Prompt for generating the bot message from a canonical form.
- task: generate_bot_message
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}{% if relevant_chunks != None and relevant_chunks != '' %}
This is some relevant context:
```markdown
{{ relevant_chunks }}
```{% endif %}
Your task is to generate the bot message in a conversation given the last user message, user intent and bot intent, similar to the examples below.
Do not provide any explanations, just output the bot message.
# Examples:
{{ examples | verbose_v1 }}
- "{{ sample_conversation | first_turns(2) | to_intent_messages_2 }}"
- "{{ history | colang | to_intent_messages_2 }}"
output_parser: "verbose_v1"
# Prompt for generating the user intent, next steps and bot message in a single call.
- task: generate_intent_steps_message
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}{% if relevant_chunks != None and relevant_chunks != '' %}
This is some relevant context:
```markdown
{{ relevant_chunks }}
```{% endif %}
Your task is to generate the user intent and the next steps in a conversation given the last user message, similar to the examples below.
Do not provide any explanations, just output the user intent and the next steps.
# Examples:
{{ examples | verbose_v1 }}
- "{{ sample_conversation | first_turns(2) | to_messages }}"
- "{{ history | colang | to_messages }}"
- type: assistant
content: |
Bot thinking: potential user intents are: {{ potential_user_intents }}
output_parser: "verbose_v1"
# Prompt for generating the value of a context variable.
- task: generate_value
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}
Your task is to generate the value for the ${{ var_name }} variable.
Do not provide any explanations, just output the value.
# Examples:
{{ examples | verbose_v1 }}
- "{{ sample_conversation | first_turns(2) | to_messages }}"
- "{{ history | colang | to_messages }}"
- type: assistant
content: |
Bot thinking: follow the following instructions: {{ instructions }}
${{ var_name }} =
output_parser: "verbose_v1"
# Colang 2 prompts below.
# Prompt for detecting the user message canonical form.
- task: generate_user_intent_from_user_action
models:
- llama3
- llama-3
messages:
- type: system
content: "{{ general_instructions }}"
- type: system
content: "This is how a conversation between a user and the bot can go:"
- "{{ sample_conversation | to_messages_v2 }}"
- type: system
content: |-
These are the most likely user intents:
{{ examples }}
- type: system
content: "This is the current conversation between the user and the bot:"
- "{{ history | colang | to_messages_v2 }}"
- type: user
content: "user action: {{ user_action }}"
- type: system
content: "Derive `user intent:` from user action considering the intents from section 'These are the most likely user intents':"
- task: generate_user_intent_and_bot_action_from_user_action
models:
- llama3
- llama-3
messages:
- type: system
content: "{{ general_instructions }}"
- type: system
content: "This is how a conversation between a user and the bot can go:"
- "{{ sample_conversation | to_messages_v2 }}"
- type: system
content: |
{% if context.relevant_chunks %}
# This is some additional context:
```markdown
{{ context.relevant_chunks }}
```
{% endif %}
- type: system
content: |-
These are the most likely user intents:
{{ examples }}
- type: system
content: "This is the current conversation between the user and the bot:"
- "{{ history | colang | to_messages_v2 }}"
- type: user
content: "user action: {{ user_action }}"
- type: system
content: "Continuation of the interaction starting with a `user intent:` from the section 'These are the most likely user intents':"
# Prompt for generating the value of a context variable.
- task: generate_value_from_instruction
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}
Your task is to generate the value for the ${{ var_name }} variable.
Do not provide any explanations, just output the value.
- type: system
content: "This is how a conversation between a user and the bot can go:"
- "{{ sample_conversation | to_messages_v2 }}"
- type: system
content: "This is the current conversation between the user and the bot:"
- "{{ history | colang | to_messages_v2 }}"
- type: assistant
content: |
Follow these instructions `{{ instructions }}` to generate a value that is assigned to:
${{ var_name }} =
# Prompt for generating a flow from instructions.
- task: generate_flow_from_instructions
models:
- llama3
- llama-3
content: |-
# Example flows:
{{ examples }}
# Complete the following flow based on its instruction:
flow {{ flow_name }}
"""{{ instructions }}"""
# Prompt for generating a flow from name.
- task: generate_flow_from_name
models:
- llama3
- llama-3
messages:
- type: system
content: |
{{ general_instructions }}
Your task is to generate a flow from the provided flow name ${{ flow_name }}.
Do not provide any explanations, just output the flow.
- type: system
content: "This is the current conversation between the user and the bot:"
- "{{ history | colang | to_messages_v2 }}"
- type: system
content: |-
These are some example flows:
{{ examples }}
- type: system
content: |-
Complete the following flow based on its name:
flow {{ flow_name }}
Do not provide any explanations, just output the flow.
stop:
- "\nflow"
# Prompt for generating the continuation for the current conversation.
- task: generate_flow_continuation
models:
- llama3
- llama-3
messages:
- type: system
content: "{{ general_instructions }}"
- type: system
content: "This is how a conversation between a user and the bot can go:"
- "{{ sample_conversation | to_messages_v2 }}"
- type: system
content: "This is the current conversation between the user and the bot:"
- "{{ history | colang | to_messages_v2 }}"
- type: system
content: |
{% if context.relevant_chunks %}
# This is some additional context:
```markdown
{{ context.relevant_chunks }}
```
{% endif %}
- type: system
content: "Continuation of the interaction:"
- task: generate_flow_continuation_from_flow_nld
models:
- llama3
- llama-3
messages:
- type: system
content: "Respond directly with the expected answer. Don't provide any pre- or post-explanations."
- type: system
content: |-
{{ flow_nld }}
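# The `content` strings above are Jinja2 templates; placeholders such as
# `{{ general_instructions }}` and conditionals such as
# `{% if relevant_chunks != None ... %}` are filled in at runtime.
# A minimal sketch of how one of these templates renders (illustrative only;
# the variable values below are made-up, and the real toolkit also applies
# custom filters like `colang` and `to_messages_v2` that are not shown here):
#
#   from jinja2 import Template
#
#   # Simplified version of the system prompt used by the `general` task.
#   template = Template(
#       "{{ general_instructions }}"
#       "{% if relevant_chunks != None and relevant_chunks != '' %}\n"
#       "This is some relevant context:\n"
#       "{{ relevant_chunks }}"
#       "{% endif %}"
#   )
#
#   rendered = template.render(
#       general_instructions="You are a helpful assistant.",
#       relevant_chunks="Llama 3 notes.",
#   )
#   # `rendered` now contains the instructions followed by the context block;
#   # when `relevant_chunks` is empty, the context section is omitted entirely.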