I've tested this approach on a single-language (English) LLaMA, and it worked well, except:
it didn't answer the LinkedIn layoff question correctly
it didn't output any spaces between words
What I wonder about, though, is real-life use: when you ask an LLM a question, you don't normally provide the context along with it.
Is there a way to provide it anyway?
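One common way to supply context automatically is retrieval augmentation: index a document collection, retrieve the passages most similar to the user's question, and prepend them to the prompt before it reaches the model. Here's a minimal sketch of that idea; the toy bag-of-words scorer, the `documents` list, and the prompt template are all my own assumptions, not anything from this repo:

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding": word -> count.
    # A real system would use a sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question, documents, top_k=2):
    # Rank documents by similarity to the question and prepend
    # the top hits as context ahead of the question itself.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical passages standing in for a real document store.
documents = [
    "LinkedIn laid off 668 employees in October 2023.",
    "LLaMA is a family of large language models released by Meta.",
    "Tokenizers split text into subword units.",
]
prompt = build_prompt("How many employees did LinkedIn lay off?", documents, top_k=1)
print(prompt)
```

The resulting `prompt` string is what you would feed to the model, so the user only ever types the question; the retrieval step fills in the context behind the scenes.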
Also, is there any specific finetuning procedure that would make the model better at using this approach?