diff --git a/README.md b/README.md
index bfd99c303..6106bb57a 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,6 @@
![LOGO](assets/LOGO.svg)
A Toolkit for Evaluating Large Vision-Language Models.
-
-
📊Datasets, Models, Evaluation Results •
🏗️ QuickStart •
🛠️ Custom Benchmark & Model •
🎯 Goal •
🖊️ Citation
+
**VLMEvalKit** (the Python package name is **vlmeval**) is an **open-source evaluation toolkit** for **large vision-language models (LVLMs)**. It enables **one-command evaluation** of LVLMs on various benchmarks, without the heavy workload of preparing data across multiple repositories. In VLMEvalKit, we adopt **generation-based evaluation** for all LVLMs (answers are obtained via the `generate` / `chat` interface), and provide evaluation results obtained with both **exact matching** and **LLM-based (ChatGPT) answer extraction**.
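For context, the `generate` interface mentioned above can also be called directly from Python. Below is a minimal sketch, assuming the `supported_VLM` registry in `vlmeval.config` and an interleaved `[image_path, prompt]` input; the exact call signature may differ across versions:

```python
# Minimal sketch of the generation-based interface (API details may vary by version).
from vlmeval.config import supported_VLM

# 'qwen_chat' is one of the registered model names.
model = supported_VLM['qwen_chat']()

# Ask a question about a single image; the interleaved
# [image_path, prompt] input form is an assumption here.
response = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(response)
```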
@@ -102,9 +101,9 @@ torchrun --nproc-per-node=2 run.py --data MME --model qwen_chat --verbose
The evaluation results will be printed as logs. In addition, **result files** will be generated in the directory `$YOUR_WORKING_DIRECTORY/{model_name}`. Files ending with `.csv` contain the evaluated metrics.
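As a quick sanity check, the metrics CSV can be inspected with pandas. The path below is purely illustrative; the actual file name depends on the model and benchmark evaluated:

```python
# Illustrative only: the exact result file name depends on the model/benchmark pair.
import pandas as pd

# Hypothetical path following the $YOUR_WORKING_DIRECTORY/{model_name} layout.
scores = pd.read_csv('outputs/qwen_chat/qwen_chat_MME_score.csv')
print(scores.head())
```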
-## 🛠️ How to implement a new Benchmark / VLM in VLMEvalKit?
+## 🛠️ Custom Benchmark & Model
-Please refer to [Custom_Benchmark_and_Model](/Custom_Benchmark_and_Model.md).
+To implement a custom benchmark or VLM in VLMEvalKit, please refer to [Custom_Benchmark_and_Model](/Custom_Benchmark_and_Model.md).
## 🎯 The Goal of VLMEvalKit