
adds basic ragas eval #193

Open · wants to merge 3 commits into base: main
Conversation

@RobotSail (Member) commented Dec 6, 2024

This PR introduces Rubric-based evaluation through Ragas using the default with-reference rubric that they provide.

The current evaluation supports the following two modes:

  1. Being given a dataset containing records which hold user_input (the question), reference (the golden answer), and response (the model's answer)
  2. Being given a dataset with only the user_input and reference, plus a ModelConfiguration of a model which will be used to generate the response for each question. This can be any model running on an OpenAI-compatible endpoint

Signed-off-by: Oleg S 97077423+RobotSail@users.noreply.github.com
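A minimal sketch of the two modes described above. The `Sample`, `ModelConfig`, and `evaluate` names are illustrative, not the PR's actual API:

```python
# Hypothetical sketch of the two evaluation modes; names are illustrative.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Sample:
    user_input: str                  # the question
    reference: str                   # the golden answer
    response: Optional[str] = None   # the model answer (present in mode 1)


@dataclass
class ModelConfig:
    base_url: str     # any OpenAI-compatible endpoint
    model_name: str


def evaluate(samples: List[Sample], student: Optional[ModelConfig] = None) -> None:
    """Mode 1: responses are supplied. Mode 2: generate them via `student`."""
    missing = [s for s in samples if s.response is None]
    if missing and student is None:
        raise ValueError("responses are missing and no student model is configured")
    # ...generate responses for `missing` via the OpenAI-compatible endpoint,
    # then score each sample against the with-reference rubric.
```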

@mergify mergify bot added the dependencies label (Pull requests that update a dependency file) Dec 6, 2024
requirements.txt (Outdated)

```diff
@@ -10,3 +10,5 @@ pandas
 pandas-stubs
 lm-eval>=0.4.4
 httpx
+
+ragas
```


missing newline at EOF

```python
def __init__(self):
    pass

def run(
```


So for this, a user is expected to bring a list of Sample objects, which hold the input, prediction, and ground truth? Are we going to provide a way to build this list of Samples from given files or lists of each category, or is this more so just for use with self-built scripts that import the Sample object and build the list themselves?

@RobotSail (Member, Author) replied:

Updated it so that dataset is now either a pathlib.Path object or a list of samples, and we read what we need from it accordingly.
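That dispatch could look roughly like the sketch below. This is illustrative, not the PR's exact code, and it assumes the on-disk format is a JSON list of record dicts:

```python
# Illustrative sketch: accept either a pathlib.Path or an in-memory
# list of sample records as the dataset argument.
import json
from pathlib import Path
from typing import Dict, List, Union


def load_dataset(dataset: Union[Path, List[Dict]]) -> List[Dict]:
    if isinstance(dataset, Path):
        # assumed on-disk format: a JSON list of record dicts
        return json.loads(dataset.read_text())
    if isinstance(dataset, list):
        return dataset
    raise TypeError(f"unsupported dataset type: {type(dataset)!r}")
```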

Signed-off-by: Oleg S <97077423+RobotSail@users.noreply.github.com>
We want ragas to read from both a file path as well as a list of samples

Signed-off-by: Oleg S <97077423+RobotSail@users.noreply.github.com>
When a dataset is provided and is missing the `response` field, we will need to generate these responses. This commit ensures that when this happens, we error out if a student model is not configured. Otherwise, we always generate these responses when the student model exists, regardless of whether `response` is already in the dataframe.

Signed-off-by: Oleg S <97077423+RobotSail@users.noreply.github.com>
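The guard described in that commit message can be sketched as follows, assuming the dataset lives in a pandas DataFrame (pandas is in requirements.txt); the function and parameter names are illustrative:

```python
# Sketch of the response-generation guard; names are illustrative.
import pandas as pd


def ensure_responses(df: pd.DataFrame, student_model=None) -> pd.DataFrame:
    needs_generation = "response" not in df.columns
    if needs_generation and student_model is None:
        raise ValueError(
            "dataset has no `response` column and no student model is configured"
        )
    if student_model is not None:
        # always regenerate responses when a student model exists,
        # regardless of whether `response` is already present
        df = df.assign(response=[student_model(q) for q in df["user_input"]])
    return df
```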
@abhi1092 (Member) commented Dec 9, 2024

@RobotSail let me know once you are done with testing the code. Other than that LGTM.


```python
max_tokens: int = 768

# Random seed for reproducibility. This is not supported everywhere and therefore is unreliable.
```
@alimaredia (Contributor) commented:

We had discussed this earlier, I think you're going to want to remove this comment.

@RobotSail (Member, Author) replied:

@alimaredia I'll update this comment because I believe you are confusing its meaning with our earlier conversation. This comment relates to the seed not being supported by every model-serving framework.
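For context, the seed would typically be forwarded as an optional field on an OpenAI-compatible chat-completion request; whether it has any effect depends on the serving backend, which is why it is flagged as unreliable. A hedged sketch (the `build_request` helper is hypothetical):

```python
# Hypothetical helper: forward an optional seed to an OpenAI-compatible
# chat-completion request. Many serving frameworks ignore the field, so
# results may still be non-deterministic.
from typing import Dict, Optional


def build_request(model: str, prompt: str, seed: Optional[int] = None) -> Dict:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if seed is not None:
        payload["seed"] = seed  # best-effort; unreliable across backends
    return payload
```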

Labels
dependencies Pull requests that update a dependency file
4 participants