All of the LLM generation frameworks (Hugging Face Transformers, llama.cpp and LiteLLM) support passing chat messages for completion.

The LLM pipeline should be modified to handle this format in addition to the existing text prompt support. After this change, the following LLM pipeline calls will be equivalent.
messages = [
    {"role": "system", "content": "You are a friendly assistant. You answer questions from users."},
    {"role": "user", "content": """Answer the following question using only the context below. Only include information specifically discussed.

question: What is the speed of light?
context: The speed of light is 300K km/s"""}
]
llm(messages)
prompt="""<|im_start|>systemYou are a friendly assistant. You answer questions from users.<|im_end|><|im_start|>userAnswer the following question using only the context below. Only include information specifically discussed.question: What is the speed of light?context: The speed of light is 300K km/s<|im_end|><|im_start|>assistant"""llm(prompt)