
Code for evaluation with GPT-3.5? #69

Open
RuskinManku opened this issue Jul 22, 2024 · 3 comments

@RuskinManku

The results mention the scores of GPT-3.5, but I don't see how to evaluate GPT using the code, as it doesn't include that model.

@bys0318
Member

bys0318 commented Jul 23, 2024

The GPT-3.5-Turbo-16k model evaluated in our paper has already been deprecated. You can try gpt-3.5-turbo-0125 (16k) or the more recent gpt-4o-mini (128k); see OpenAI's model documentation (https://platform.openai.com/docs/models).

@RuskinManku
Author

Thanks for responding. Yes, I can evaluate those models, but I didn't find code where I can simply change the OpenAI model and evaluate different ones.

@bys0318
Member

bys0318 commented Jul 24, 2024

Right. We didn't provide code for evaluating API models. You can modify the get_pred() function to do so.
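
A minimal sketch of such a modification, assuming the openai Python SDK (v1+); the function and variable names below (get_pred_api, prompt, max_gen) are illustrative, not part of the repository:

```python
# Hypothetical replacement for the local-model generation step inside
# get_pred(): send the already-formatted prompt to an OpenAI chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_pred_api(prompt: str, max_gen: int,
                 model: str = "gpt-3.5-turbo-0125") -> str:
    """Query an OpenAI chat model instead of a local HF model."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,     # near-deterministic decoding for scoring
        max_tokens=max_gen,  # the dataset's generation budget
    )
    return response.choices[0].message.content
```

You would still need to truncate prompts that exceed the chosen model's context window before the API call, as the existing get_pred() does for local models.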
