From 33ecfab53f9f45d18d89a601bb5a5bb7e4e8873f Mon Sep 17 00:00:00 2001
From: MatthewGorelik <73647741+MatthewGorelik@users.noreply.github.com>
Date: Wed, 3 Jul 2024 12:49:51 -0400
Subject: [PATCH] Update dial-rag-eval.md

---
 docs/video demos/demos-for-developers/dial-rag-eval.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/video demos/demos-for-developers/dial-rag-eval.md b/docs/video demos/demos-for-developers/dial-rag-eval.md
index 58278ca3..6117c371 100644
--- a/docs/video demos/demos-for-developers/dial-rag-eval.md
+++ b/docs/video demos/demos-for-developers/dial-rag-eval.md
@@ -2,4 +2,4 @@
 
 [Watch the video](https://youtu.be/HYg_L2dxu6U)
 
-The RAG evaluation toolkit consists of three distinct tools: our DIAL Log Parser (https://gitlab.deltixhub.com/Deltix/openai-apps/dial-log-parser), our RAG evaluation library (https://gitlab.deltixhub.com/Deltix/openai-apps/dial-rag-eval), and our RAG evaluation UI. It is used to validate the quality of the retrieval of any RAG-based application.
+The RAG evaluation toolkit consists of three distinct tools: our DIAL Log Parser, our RAG evaluation library, and our RAG evaluation UI. The parser converts DIAL logs into usable .parquet files containing all relevant information about prompts, responses, and user feedback. The evaluation library allows you to compare responses and retrieval algorithms with the expected ground truth facts. The UI allows you to visualize and analyze this data. It is used to validate the quality of the retrieval of any RAG-based application.
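The updated paragraph describes comparing responses against expected ground-truth facts. The core idea can be sketched as follows; this is a minimal illustration of fact-recall scoring only, not the actual API of the dial-rag-eval library, and the function name and sample data are assumptions:

```python
# Minimal sketch of fact-based answer evaluation: measure what fraction of
# expected ground-truth facts appear in a generated response. Illustrative
# only -- not the dial-rag-eval library's real interface.

def fact_recall(response: str, ground_truth_facts: list[str]) -> float:
    """Fraction of expected facts found (case-insensitively) in the response."""
    if not ground_truth_facts:
        return 1.0  # nothing expected, so trivially complete
    text = response.lower()
    hits = sum(1 for fact in ground_truth_facts if fact.lower() in text)
    return hits / len(ground_truth_facts)

# Hypothetical example: 2 of the 3 expected facts occur in the response.
response = "DIAL parses logs into parquet files for analysis."
facts = ["parquet", "logs", "SQL"]
print(fact_recall(response, facts))
```

A real evaluation library would typically use fuzzier matching (embeddings, entailment) rather than substring checks, but the recall-over-expected-facts framing is the same.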