From 1619889c712e347be1cb4f78ec66e7cf414ac1a6 Mon Sep 17 00:00:00 2001
From: Haotian Liu
Date: Wed, 11 Oct 2023 17:15:55 -0700
Subject: [PATCH] Release evaluation scripts.

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 805376773..d5dada153 100644
--- a/README.md
+++ b/README.md
@@ -243,7 +243,7 @@ New options to note:
 
 In LLaVA-1.5, we evaluate models on a diverse set of 12 benchmarks. To ensure the reproducibility, we evaluate the models with greedy decoding. We do not evaluate using beam search to make the inference process consistent with the chat demo of real-time outputs.
 
-Detailed evaluation scripts coming soon.
+See [Evaluation.md](https://github.com/haotian-liu/LLaVA/blob/main/docs/Evaluation.md).
 
 ### GPT-assisted Evaluation
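The context paragraph in the hunk above notes that evaluation uses greedy decoding rather than beam search so that results are reproducible and match the real-time chat demo. A minimal sketch of why greedy decoding is deterministic, using a hypothetical `toy_logits` scorer in place of a real model (neither function is part of LLaVA's codebase):

```python
def toy_logits(prefix):
    # Hypothetical stand-in for a model's next-token scores over a
    # 5-token vocabulary: strongly favors token (last + 1) mod 5.
    last = prefix[-1]
    return [1.0 if tok == (last + 1) % 5 else 0.0 for tok in range(5)]

def greedy_decode(start_token, steps, logits_fn):
    """At each step, append the argmax token (no sampling, no beams).

    Because argmax over fixed scores is deterministic, repeated runs
    on the same input always produce the same output sequence.
    """
    seq = [start_token]
    for _ in range(steps):
        scores = logits_fn(seq)
        seq.append(max(range(len(scores)), key=scores.__getitem__))
    return seq

print(greedy_decode(0, 4, toy_logits))  # -> [0, 1, 2, 3, 4]
```

Sampling-based decoding would draw from the score distribution instead of taking the argmax, so outputs could differ across runs; greedy decoding avoids that source of variance in benchmark numbers.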