Requirements:
- Python 3.10+ (we suggest using conda to manage the Python environment)
- System:
  - Linux: glibc 2.28+ and CUDA 12.0+ (if using a GPU)
  - Windows: WSL with Ubuntu 20.04+ and NVIDIA driver 535.104+ (e.g. installed via GeForce Experience, if using a GPU)
  - macOS: M1/M2/M3 Mac with Xcode 15.0+
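If you are unsure whether your machine meets these requirements, a quick check along these lines can help (the nvidia-smi line only applies to GPU setups):

```bash
python3 --version            # should report Python 3.10 or newer
ldd --version | head -n 1    # glibc version on Linux / WSL (needs 2.28+)
nvidia-smi                   # NVIDIA driver and CUDA version (GPU setups only)
```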
Please create a clean Python virtual environment to avoid potential conflicts (Anaconda3 is recommended).
To install the package, run:
conda create -n qanything-python python=3.10
conda activate qanything-python
git clone -b qanything-python https://github.com/netease-youdao/QAnything.git
cd QAnything
pip install -e .
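Optionally, sanity-check the installation. The qanything_kernel import below is an assumption based on the repository layout; skip it if the import path differs:

```bash
pip check                             # reports conflicting or missing dependencies, if any
python -c "import qanything_kernel"   # assumed import path for the editable install; adjust if needed
```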
If you want to use the more powerful PDF parsing feature, please download the corresponding checkpoints from ModelScope and place them under qanything_kernel/utils/loader/pdf_to_markdown/checkpoints/.
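A minimal download sketch is shown below; the ModelScope repository id is a placeholder, so substitute the checkpoint repository actually referenced by the project:

```bash
# <namespace>/<model_name> is a placeholder for the checkpoint repo on ModelScope
git lfs install
git clone https://www.modelscope.cn/<namespace>/<model_name>.git /tmp/pdf_to_markdown_ckpt
mkdir -p qanything_kernel/utils/loader/pdf_to_markdown/checkpoints/
cp -r /tmp/pdf_to_markdown_ckpt/* qanything_kernel/utils/loader/pdf_to_markdown/checkpoints/
```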
Then start the service in one of the following ways.

Run with a local 3B LLM (Linux or WSL):
bash scripts/run_for_3B_in_Linux_or_WSL.sh

Run with a local 7B LLM (Linux or WSL):
bash scripts/run_for_7B_in_Linux_or_WSL.sh

Run with an OpenAI-compatible API on CPU (Linux or WSL). Fill in the API key in scripts/run_for_openai_api_with_cpu_in_Linux_or_WSL.sh, then run:
bash scripts/run_for_openai_api_with_cpu_in_Linux_or_WSL.sh

Run with an OpenAI-compatible API on GPU (Linux or WSL). Fill in the API key in scripts/run_for_openai_api_with_gpu_in_Linux_or_WSL.sh, then run:
bash scripts/run_for_openai_api_with_gpu_in_Linux_or_WSL.sh

Run with an OpenAI-compatible API (M1 Mac). Fill in the API key in scripts/run_for_openai_api_in_M1_mac.sh, then run:
bash scripts/run_for_openai_api_in_M1_mac.sh

Run with an Ollama API (M1 Mac):
bash scripts/run_for_ollama_api_in_M1_mac.sh

Run with a local 3B LLM (M1 Mac):
bash scripts/run_for_3B_in_M1_mac.sh
On OpenCloudOS, QAnything needs to run in a Docker container. Please install Docker first (Docker version >= 20.10.5 and docker-compose version >= 2.23.3), then start and attach to the container:
docker-compose up -d
docker attach qanything-container
# Choose one of the 4 commands below to run:
bash scripts/run_for_3B_in_Linux_or_WSL.sh
bash scripts/run_for_7B_in_Linux_or_WSL.sh
bash scripts/run_for_openai_api_with_cpu_in_Linux_or_WSL.sh
bash scripts/run_for_openai_api_with_gpu_in_Linux_or_WSL.sh
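If the service does not come up after running one of the commands above, you can inspect the container from the host with standard Docker commands:

```bash
docker ps --filter name=qanything-container   # the container should show an "Up" status
docker logs -f qanything-container            # follow the startup logs
```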
Open http://0.0.0.0:8777/qanything/ in your browser to use the web UI, or open http://{your host ip}:8777/qanything/ to access it from another machine.
Note that the trailing slash cannot be omitted; otherwise a 404 error will occur.
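You can also check reachability from the command line on the machine running the service; a 200 status code indicates the UI is being served:

```bash
# Prints the HTTP status code; expect 200 (the trailing slash is required)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8777/qanything/
```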
API Documentation is available at API.md.
You can also exercise the API with the bundled sample scripts:
python scripts/new_knowledge_base.py # prints a kb_id
python scripts/upload_files.py <kb_id> scripts/weixiaobao.jpg # prints a file_id
python scripts/list_files.py <kb_id> # prints the status of uploaded files
python scripts/stream_file.py <kb_id> # prints the LLM response
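Assuming each script prints only the id mentioned in its comment (verify the actual output before scripting around it), the calls can be chained in one shell session like this:

```bash
# Hypothetical chaining of the sample scripts; adjust if the scripts print extra text
KB_ID=$(python scripts/new_knowledge_base.py)             # capture the printed kb_id
python scripts/upload_files.py "$KB_ID" scripts/weixiaobao.jpg
python scripts/list_files.py "$KB_ID"
python scripts/stream_file.py "$KB_ID"
```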