BFCL April 19th Release (Dataset & Pipeline) #377
Merged: ShishirPatil merged 25 commits into ShishirPatil:main from HuanzhiMao:executable-overhaul on Apr 25, 2024.
Conversation
WIP: Undergoing thorough testing
ShishirPatil pushed a commit that referenced this pull request on Apr 25, 2024:

In this PR:
1. Update the evaluation metric for BFCL, in sync with #377.
2. Change the button layout on the landing page.

This PR **does not** change the leaderboard value.

Co-authored-by: Charlie Cheng-Jie Ji <CharlieJCJ@users.noreply.github.com>
Fully tested on model-generated results.
Action Taken:
CharlieJCJ approved these changes on Apr 25, 2024:
LGTM
ShishirPatil approved these changes on Apr 25, 2024:
LGTM
ShishirPatil pushed a commit that referenced this pull request on Apr 26, 2024:

…se (#387)

- As mentioned in #377, this PR updates the leaderboard to reflect the score changes resulting from the updates in the executable test category evaluation pipeline.
- As mentioned in #386, this PR also adds five new models to the leaderboard.
- It also adds a `last_updated` field to the leaderboard (a hypothetical shape is sketched after this list).

This PR **DOES** change the leaderboard score.

Co-authored-by: Charlie Cheng-Jie Ji <charliechengjieji@berkeley.edu>
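The `last_updated` field could look like the following. This is only a hypothetical sketch of a leaderboard entry; the actual schema and field names in the repository may differ.

```python
from datetime import date

# Hypothetical leaderboard entry: only `last_updated` is the field described above;
# the other keys and values are illustrative placeholders, not the repo's real schema.
leaderboard_entry = {
    "model": "example-model-v1",
    "overall_accuracy": 0.0,
    "last_updated": date.today().isoformat(),  # e.g. "2024-04-26"
}
```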
devanshamin pushed a commit to devanshamin/gorilla that referenced this pull request on Jul 9, 2024:
This PR is for the BFCL April 19th Release. In this release:

- Bug fix for the evaluation dataset in the executable test categories. This includes updates to both prompts and function docs.
- The `evaluation_result` field has been removed to accommodate the variability in API execution results across different evaluation runs. Instead, a human-verified `ground_truth` is now included for the executable test categories. During each evaluation run, `evaluation_result` is generated anew using the `ground_truth`, and then compared against the model output.
- A stricter metric has been adopted for the `structural_match` (aka type match) evaluation criterion: for `list` results, the lengths are compared; for `dict` results, the keys are matched. This is to account for the fast-changing nature of some of the real-time API results while ensuring the evaluation remains meaningful.
- Added another evaluation criterion, `real_time_match`, for the executable category, which is a looser form of `exact_match` specifically for numerical execution results. The execution result must be within a certain percentage threshold (20%) of the expected result to accommodate the live updates of API responses. Users can change this threshold value in `eval_checker_constant.py`. (Both matching criteria are sketched after this list.)
- Added support to distinguish Cohere's optimized score vs. original score.
- Resolved ShishirPatil#363

This PR **DOES** change the leaderboard score. We will update the leaderboard shortly, in a different PR. We will also update our HuggingFace dataset accordingly.

Co-authored-by: Charlie Cheng-Jie Ji <charliechengjieji@berkeley.edu>
Co-authored-by: Fanjia Yan <fanjiayan@berkeley.edu>
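A minimal sketch of the two executable-category criteria described in the release notes above. This is not the repository's actual checker code: the function names, signatures, and the `REAL_TIME_TOLERANCE` constant are assumptions for illustration, with only the 20% default mirroring the threshold said to be configurable in `eval_checker_constant.py`.

```python
# Minimal sketch of the two criteria described above (not the actual BFCL checker code).
# Assumed names and signatures; only the 20% default mirrors the configurable threshold.

REAL_TIME_TOLERANCE = 0.20  # 20% threshold for real_time_match


def structural_match(execution_result, ground_truth_result) -> bool:
    """Type match: compare the shape of the result, not its exact values."""
    if type(execution_result) is not type(ground_truth_result):
        return False
    if isinstance(execution_result, list):
        # For list results, only the lengths are compared.
        return len(execution_result) == len(ground_truth_result)
    if isinstance(execution_result, dict):
        # For dict results, only the keys are matched.
        return set(execution_result) == set(ground_truth_result)
    # Scalars: matching types count as a structural match.
    return True


def real_time_match(execution_result, expected_result,
                    tolerance: float = REAL_TIME_TOLERANCE) -> bool:
    """Looser exact_match for numerical results: pass if within a percentage threshold."""
    if not isinstance(execution_result, (int, float)) or not isinstance(expected_result, (int, float)):
        return False
    if expected_result == 0:
        return execution_result == 0
    return abs(execution_result - expected_result) <= tolerance * abs(expected_result)
```

Under these assumptions, `real_time_match(108.0, 100.0)` passes because the live result is within 20% of the expected value, and `structural_match({"temp_c": 21}, {"temp_c": 25})` passes because the keys agree even though the live reading differs.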
aw632 pushed a commit to vinaybagade/gorilla that referenced this pull request on Aug 22, 2024.