
Create walkthrough example with data #102

Merged: 7 commits into trunk on Oct 11, 2024

Conversation

@marko-polo-cheno (Contributor)

This PR provides a quick guide with all needed data to get a useful leaderboard on AutoArena. The notebook explains the core principles of why AutoArena uses an Elo leaderboard, and shows how it works.
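For reference, the core Elo update that such a leaderboard is built on can be sketched as follows (a minimal sketch of the standard Elo rule; the K-factor and other constants AutoArena actually uses may differ):

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32) -> tuple[float, float]:
    """Return updated (winner, loser) ratings after one head-to-head vote."""
    # Expected score of the winner given the current rating gap.
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    # The winner gains exactly what the loser drops (zero-sum).
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

a, b = elo_update(1000, 1000)  # evenly rated: winner gains k/2 = 16 points
```

An upset (a low-rated model beating a high-rated one) moves more points than an expected win, which is what lets the ratings converge toward relative model strength over many votes.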

@codecov-commenter commented Oct 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.92%. Comparing base (a3b7745) to head (13f43fe).
Report is 1 commit behind head on trunk.

Additional details and impacted files
@@           Coverage Diff           @@
##            trunk     #102   +/-   ##
=======================================
  Coverage   96.92%   96.92%           
=======================================
  Files          34       34           
  Lines        1465     1465           
=======================================
  Hits         1420     1420           
  Misses         45       45           


Member

Very minor, but considering that these are checked into the source tree, smaller is better — would you mind transcoding to JPEG?

Member

Based on the number of votes versus the number of responses, it looks like there are many unmatched prompts between these models. Can we pare down the CSVs to only include matched prompts?

Contributor Author

I can pare down the CSVs, but unfortunately there are very few prompts shared amongst all models.
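For illustration, paring the CSVs down to prompts shared by all models could look like this (a hypothetical sketch; the `prompt` column name and schema are assumptions, not AutoArena's documented CSV format):

```python
import pandas as pd

def pare_to_shared_prompts(frames: list[pd.DataFrame]) -> list[pd.DataFrame]:
    """Keep only rows whose prompt appears in every model's responses."""
    shared = set.intersection(*(set(df["prompt"]) for df in frames))
    return [df[df["prompt"].isin(shared)] for df in frames]

model_a = pd.DataFrame({"prompt": ["p1", "p2"], "response": ["a1", "a2"]})
model_b = pd.DataFrame({"prompt": ["p2", "p3"], "response": ["b2", "b3"]})
pared = pare_to_shared_prompts([model_a, model_b])  # only "p2" survives
```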

@gordonhart (Member) commented Oct 4, 2024

When I run this myself, I get llama-2-70b ranked above gpt-4-0613 — have you seen this? It's an unexpected result that might be due to the particular prompts+responses included in the CSVs.

[Screenshot: 2024-10-04 at 11:22 AM]

Edit: looks like the culprit is a heavily skewed H2H distribution. If gpt-4-0613 has the lion's share of its H2Hs against the champion, it'll look worse than it really is on the main leaderboard:

[Screenshot: 2024-10-04 at 11:23 AM]
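The skewed matchup distribution can be reproduced with a toy simulation (illustrative only, not AutoArena's code; model names and strengths are made up): two equally strong models, one of which plays most of its head-to-heads against a much stronger champion.

```python
import random

def elo_update(r_w: float, r_l: float, k: float = 32) -> tuple[float, float]:
    """Standard zero-sum Elo update: winner gains what the loser drops."""
    expected = 1 / (1 + 10 ** ((r_l - r_w) / 400))
    delta = k * (1 - expected)
    return r_w + delta, r_l - delta

random.seed(0)
ratings = {"A": 1000.0, "B": 1000.0, "C": 1000.0}
strength = {"A": 1.0, "B": 1.0, "C": 4.0}  # C is the "champion"; A and B are equal
wins = {m: 0 for m in ratings}
games = {m: 0 for m in ratings}

def play(x: str, y: str) -> None:
    # x wins with probability proportional to its relative true strength.
    p = strength[x] / (strength[x] + strength[y])
    w, l = (x, y) if random.random() < p else (y, x)
    ratings[w], ratings[l] = elo_update(ratings[w], ratings[l])
    wins[w] += 1
    games[x] += 1
    games[y] += 1

# A's head-to-heads are heavily skewed toward the champion; B's are not.
for _ in range(200):
    play("A", "C")
for _ in range(100):
    play("A", "B")
    play("B", "C")

# Raw win rate penalizes A for its opponent mix, since most of A's games are
# near-certain losses; Elo adjusts each update by the opponents' rating gap.
win_rate = {m: wins[m] / games[m] for m in ratings}
```

Comparing `win_rate` against `ratings` shows why a per-opponent H2H view matters: two equally strong models can post very different raw numbers purely because of who they were matched against.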

@marko-polo-cheno (Contributor Author)

> When I run this myself, I get llama-2-70b ranked above gpt-4-0613 — have you seen this? It's an unexpected result that might be due to the particular prompts+responses included in the CSVs.

This is unexpected. Opponents shouldn't be allowed to "farm" Elo off of each other (gpt-4-1106-preview is untouchable).

@marko-polo-cheno marko-polo-cheno merged commit 7585f34 into trunk Oct 11, 2024
11 checks passed
@marko-polo-cheno marko-polo-cheno deleted the mc/create_walkthrough_example branch October 11, 2024 16:22