I realized that we have been using GPT-4o rather than GPT-4o mini (the more economical version). When I switch to the cheaper model (by enforcing it on the server), I notice a subtle deterioration in the quality of the tutor; in particular, it seems to give away the answers more quickly. I think we need some additional prompt engineering to get this working properly.
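A minimal sketch of what this could look like, assuming a Python backend that calls the official `openai` client (the function name, prompt wording, and server structure here are assumptions, not this repo's actual code). It enforces `gpt-4o-mini` server-side and uses a stricter system prompt as one possible starting point for the prompt engineering:

```python
# Sketch only: assumes a Python server using the official `openai` client.
# The system prompt and function name are hypothetical, not from this repo.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stricter tutor prompt, aimed at the tendency of the
# cheaper model to reveal answers too quickly.
TUTOR_SYSTEM_PROMPT = (
    "You are a tutor. Never state the final answer directly. "
    "Guide the student with questions and hints, one step at a time, "
    "and only confirm an answer after the student has proposed it."
)

def tutor_reply(student_message: str) -> str:
    # Enforce the economical model on the server regardless of what
    # the client requests.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content
```

Whether tightening the system prompt like this is enough, or whether few-shot examples of good hint-giving are also needed, would have to be tested against the behavior observed with GPT-4o mini.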
Maybe hold off on this for now, since we might explore another approach that avoids paying for the LLM. My understanding is that nothing is prohibitively expensive right now?