Has anyone run gpt-researcher with fine-tuned models? #921
Replies: 3 comments
-
Sup @adrianhensler. Agreed. Happy to hear how it plays out.
-
To be clear, I haven't trained any models myself, let alone tested this tool with one. I've done some initial review, and I was hoping someone with more insight could point the effort in a sensible direction. I'll post what I've got later today or tomorrow, once I get it into better shape.
-
I really want to try it, but I've been trying to install it for two days and just keep getting errors. Nothing has worked for me.
-
I originally started thinking about fine-tuning a local model to assist directly (somehow?) with research or report creation, but fine-tuning an OpenAI GPT model seems like a better starting point.
Has anyone tried running gpt-researcher with a fine-tuned model, and what was the use case? I'd like to think it would give markedly better results, depending on the domain and the training data.
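For concreteness, here's a rough, untested sketch of what I have in mind, assuming the fine-tuned model lives on OpenAI and gpt-researcher picks up its model names from environment variables. The variable names (`SMART_LLM` / `FAST_LLM` with an `openai:` prefix) match recent releases; older ones used `SMART_LLM_MODEL` / `FAST_LLM_MODEL`, so check your installed version's config. The API keys and the fine-tuned model id are placeholders.

```python
import asyncio
import os

from gpt_researcher import GPTResearcher

# Placeholder credentials -- use your real keys.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["TAVILY_API_KEY"] = "tvly-..."   # default web-search provider

# Route the "smart" (report-writing) model to a fine-tuned OpenAI model.
# The id below is a made-up example of the ft:<base>:<org>::<job> format.
os.environ["SMART_LLM"] = "openai:ft:gpt-4o-mini-2024-07-18:my-org::abc123"
# Keep a cheap stock model for the high-volume summarization calls.
os.environ["FAST_LLM"] = "openai:gpt-4o-mini"

async def main() -> None:
    researcher = GPTResearcher(
        query="State of solid-state battery manufacturing in 2024",
        report_type="research_report",
    )
    await researcher.conduct_research()       # scrape and curate sources
    report = await researcher.write_report()  # drafted by the fine-tuned model
    print(report)

if __name__ == "__main__":
    asyncio.run(main())
```

As I understand it, the fast model handles most of the token volume during scraping and summarization, so routing only the smart model to the fine-tune should keep fine-tuned inference costs down while still shaping the final report.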