
Missing the requirements file #2

Open
wytbwytb opened this issue Oct 21, 2024 · 6 comments

Comments

@wytbwytb

Congratulations on your nice work! However, I can't find a requirements.txt for the environment. Could you provide this file? Thank you!

@Carol-gutianle
Collaborator

I've added a requirements file generated by pipreqs, located in MLLMGuard/requirements.txt. If you have any further questions, please feel free to reach out.

@wytbwytb
Author

Thank you for your response! I have two further requests:

  1. Could you release a download link for your well-trained GuardRank?
  2. I would like to evaluate my model on your complete benchmark, so could you provide the request form for the unsanitized subset?

Thank you!

@Carol-gutianle
Collaborator

Thank you for your attention!

  1. You can download the well-trained GuardRank model using the following link: GuardRank Download. It includes two evaluators: the first assesses Privacy, Bias, Toxicity, and Legality using a LoRA-style approach, while the second is fine-tuned directly on RoBERTa to evaluate Hallucination (see the loading sketch after this list).
  2. The request form for the unsanitized subset can be found here: Unsanitized Subset Form. The review results will be sent to your email within 1-2 business days.
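
For reference, here is a minimal loading sketch. It assumes the first evaluator ships as a standard PEFT LoRA adapter applied to a causal-LM base and the second as a standard RoBERTa sequence-classification checkpoint; all paths and the label handling below are placeholders for illustration, not the actual layout of the download.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
from peft import PeftModel

# Evaluator 1 (assumed): a LoRA adapter on top of a base LLM, used for
# scoring Privacy, Bias, Toxicity, and Legality.
BASE_MODEL = "path/to/base-model"            # placeholder, see the download's README
LORA_ADAPTER = "path/to/guardrank-lora"      # placeholder adapter directory
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
lora_evaluator = PeftModel.from_pretrained(base, LORA_ADAPTER)
lora_tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Evaluator 2 (assumed): a RoBERTa classifier fine-tuned end-to-end for Hallucination.
ROBERTA_CKPT = "path/to/guardrank-roberta"   # placeholder checkpoint directory
halluc_evaluator = AutoModelForSequenceClassification.from_pretrained(ROBERTA_CKPT)
halluc_tokenizer = AutoTokenizer.from_pretrained(ROBERTA_CKPT)

# Score a single model response for hallucination; the label mapping is an assumption.
inputs = halluc_tokenizer("model response to be judged", return_tensors="pt")
logits = halluc_evaluator(**inputs).logits
print(logits.argmax(dim=-1).item())
```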

@wytbwytb
Author

Thank you for your timely response! I appreciate your valuable work and will continue to follow your future work on multimodal security!

@wytbwytb
Author

Hello! Would it be possible for you to provide access to the training dataset of GuardRank?

Thank you!

@Carol-gutianle
Collaborator

I'm sorry. I understand your need for the GuardRank training set, but because of concerns that users might game our evaluator using it, we currently have no plans to make the GuardRank annotated dataset public. If you have any questions regarding the evaluation, feel free to ask.
