- Clone this repository:

  ```bash
  git clone https://github.com/OpenGVLab/InternVL.git
  ```
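  The dependency commands below reference `requirements.txt` and `requirements/*.txt` relative to the repository root, so change into the cloned directory first (a minimal sketch):

  ```bash
  # Move into the cloned repository so the relative requirement paths below resolve.
  cd InternVL
  ```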
- Create a conda virtual environment and activate it:

  ```bash
  conda create -n internvl python=3.9 -y
  conda activate internvl
  ```
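  Optionally, you can confirm the environment is active and provides the expected interpreter before installing anything:

  ```bash
  # Both commands should point at the internvl environment.
  python --version   # expected: Python 3.9.x
  which python       # expected: .../envs/internvl/bin/python
  ```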
- Install dependencies using `requirements.txt`:

  ```bash
  pip install -r requirements.txt
  ```
  By default, our `requirements.txt` file includes the following dependencies:

  ```
  -r requirements/internvl_chat.txt
  -r requirements/streamlit_demo.txt
  -r requirements/classification.txt
  -r requirements/segmentation.txt
  ```
  The `clip_benchmark.txt` is not included in the default installation. If you require the `clip_benchmark` functionality, please install it manually by running the following command:

  ```bash
  pip install -r requirements/clip_benchmark.txt
  ```
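  As an optional sanity check after this step, you can verify that the core libraries import cleanly (this assumes `torch` and `transformers` are pulled in via `requirements/internvl_chat.txt`):

  ```bash
  # Print versions to confirm the packages resolved inside the internvl environment.
  python -c "import torch, transformers; print(torch.__version__, transformers.__version__)"
  ```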
- Install `flash-attn==2.3.6`:

  ```bash
  pip install flash-attn==2.3.6 --no-build-isolation
  ```

  Alternatively you can compile from source:

  ```bash
  git clone https://github.com/Dao-AILab/flash-attention.git
  cd flash-attention
  git checkout v2.3.6
  python setup.py install
  ```
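  A quick, optional way to verify that the installed wheel matches your PyTorch/CUDA build is to import the package:

  ```bash
  # An ImportError here usually means the wheel was built against a different torch/CUDA version.
  python -c "import flash_attn; print(flash_attn.__version__)"
  ```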
- Install `mmcv-full==1.6.2` (optional, for `segmentation`):

  ```bash
  pip install -U openmim
  mim install mmcv-full==1.6.2
  ```
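  To check that `mmcv-full` built its compiled ops for this environment, you can try importing one of them (optional; `mmcv.ops.nms` is used here only as a representative op):

  ```bash
  # Fails with a missing-extension or undefined-symbol error if the compiled ops were not built.
  python -c "import mmcv; from mmcv.ops import nms; print(mmcv.__version__)"
  ```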
- Install `apex` (optional, for `segmentation`):

  ```bash
  git clone https://github.com/NVIDIA/apex.git
  cd apex
  git checkout 2386a912164b0c5cfcd8be7a2b890fbac5607c82  # https://github.com/NVIDIA/apex/issues/1735
  pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
  ```
  If you encounter `ModuleNotFoundError: No module named 'fused_layer_norm_cuda'`, it means that apex's CUDA extensions were not installed successfully. You can try uninstalling apex, and the code will default to the PyTorch version of RMSNorm. Alternatively, if you prefer using apex, try adding a few lines to `setup.py` and then recompiling.
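  A quick, optional way to tell which path you are on is to try importing the extension module named in the error directly:

  ```bash
  # If this import fails, apex's CUDA extensions were not built and the
  # PyTorch RMSNorm fallback described above will be used instead.
  python -c "import fused_layer_norm_cuda; print('apex CUDA extensions available')"
  ```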