From f8a35fee8583020b251e26d3581373043903956a Mon Sep 17 00:00:00 2001
From: sennnnn
Date: Mon, 8 Jul 2024 14:54:37 +0800
Subject: [PATCH 1/2] Fix mistral tokenizer bug.

---
 requirements.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/requirements.txt b/requirements.txt
index 4551744..44ba476 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,10 +2,10 @@
 # basic dependencies
 torch==2.0.1
 torchvision==0.15.2
-transformers==4.37.2
-tokenizers==0.15.1
+transformers==4.40.0
+tokenizers==0.19.1
 deepspeed==0.13.1
-accelerate==0.21.0
+accelerate==0.26.1
 peft==0.4.0
 timm==1.0.3
 numpy==1.24.4

From d84a8810205fa78948985d9ea7b1ffe338d4ce42 Mon Sep 17 00:00:00 2001
From: sennnnn
Date: Mon, 8 Jul 2024 15:18:16 +0800
Subject: [PATCH 2/2] Update readme

---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index b39dc22..489353b 100644
--- a/README.md
+++ b/README.md
@@ -50,14 +50,15 @@ Basic Dependencies:
 * Python >= 3.8
 * Pytorch >= 2.0.1
 * CUDA Version >= 11.7
-* transformers >= 4.37.2
+* transformers >= 4.40.0 (for mistral tokenizer)
+* tokenizers >= 0.19.1 (for mistral tokenizer)
 
 **[Online Mode]** Install required packages (better for development):
 ```bash
 git clone https://github.com/DAMO-NLP-SG/VideoLLaMA2
 cd VideoLLaMA2
 pip install -r requirements.txt
-pip install flash-attn --no-build-isolation
+pip install flash-attn==2.5.8 --no-build-isolation
 ```
 
 **[Offline Mode]** Install VideoLLaMA2 as a Python package (better for direct use):
@@ -66,7 +67,7 @@ git clone https://github.com/DAMO-NLP-SG/VideoLLaMA2
 cd VideoLLaMA2
 pip install --upgrade pip # enable PEP 660 support
 pip install -e .
-pip install flash-attn --no-build-isolation
+pip install flash-attn==2.5.8 --no-build-isolation
 ```
 
 ## 🚀 Main Results
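For anyone reviewing this patch who wants to confirm the version bump actually resolves the Mistral tokenizer issue, the snippet below is a minimal sanity check, not part of the patch itself. It assumes the pinned `transformers==4.40.0` and `tokenizers==0.19.1` from the updated requirements.txt are installed and that the Hugging Face Hub is reachable; the model ID `mistralai/Mistral-7B-Instruct-v0.2` is a hypothetical stand-in, since the exact Mistral checkpoint VideoLLaMA2 loads may differ.

```python
# Minimal sanity check for the dependency bump (not part of the patch).
# Assumes transformers>=4.40.0 and tokenizers>=0.19.1 are installed per
# the updated requirements.txt, and that tokenizer files can be fetched
# from the Hugging Face Hub.
import tokenizers
import transformers
from transformers import AutoTokenizer

print("transformers:", transformers.__version__)  # expect 4.40.0
print("tokenizers:", tokenizers.__version__)      # expect 0.19.1

# Hypothetical stand-in checkpoint; the Mistral weights VideoLLaMA2
# actually references may differ.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Round-trip a string: on the older transformers/tokenizers pair, this
# is where the mistral tokenizer bug would surface.
ids = tok("Hello, VideoLLaMA2!").input_ids
print(tok.decode(ids, skip_special_tokens=True))
```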