RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining large language models (LLMs) to provide truthful question-answering capabilities, backed by well-founded citations from complex data in various formats.
- 2024-05-21 Supports streaming output and text chunk retrieval API.
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates LLM DeepSeek-V2.
- 2024-04-26 Adds file management.
- 2024-04-19 Supports conversation API (detail).
- 2024-04-16 Integrates an embedding model 'bce-embedding-base_v1' from BCEmbedding, and FastEmbed, which is designed specifically for light and speedy embedding.
- 2024-04-11 Supports Xinference for local LLM deployment.
- 2024-04-10 Adds a new layout recognition model for analyzing legal documents.
- 2024-04-08 Supports Ollama for local LLM deployment.
- 2024-04-07 Supports Chinese UI.
- Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
- Finds "needle in a data haystack" of literally unlimited tokens.
- Intelligent and explainable.
- Plenty of template options to choose from.
- Visualization of text chunking to allow human intervention.
- Quick view of the key references and traceable citations to support grounded answers.
- Supports Word, slides, Excel, txt, images, scanned copies, structured data, web pages, and more.
- Streamlined RAG orchestration catering to both personal use and large businesses.
- Configurable LLMs as well as embedding models.
- Multiple recall paired with fused re-ranking.
- Intuitive APIs for seamless integration with business.
- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
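If you are unsure whether your Docker installation meets these requirements, a quick check is:

```bash
$ docker --version          # should report 24.0.0 or later
$ docker compose version    # should report v2.26.1 or later
```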
- Ensure `vm.max_map_count` >= 262144 (more):

  To check the value of `vm.max_map_count`:

  ```bash
  $ sysctl vm.max_map_count
  ```

  Reset `vm.max_map_count` to a value of at least 262144 if it is not:

  ```bash
  # In this case, we set it to 262144:
  $ sudo sysctl -w vm.max_map_count=262144
  ```

  This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in /etc/sysctl.conf accordingly:

  ```bash
  vm.max_map_count=262144
  ```
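  On most Linux distributions, you can also apply the value from /etc/sysctl.conf immediately, without rebooting:

  ```bash
  $ sudo sysctl -p
  ```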
- Clone the repo:

  ```bash
  $ git clone https://github.com/infiniflow/ragflow.git
  ```
- Start up the server using the pre-built Docker images:

  Running the following commands automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific version, update `RAGFLOW_VERSION` in docker/.env to the intended version, for example `RAGFLOW_VERSION=v0.6.0`, before running the following commands.

  ```bash
  $ cd ragflow/docker
  $ chmod +x ./entrypoint.sh
  $ docker compose up -d
  ```

  The core image is about 9 GB in size and may take a while to load.
- Check the server status after having the server up and running:

  ```bash
  $ docker logs -f ragflow-server
  ```

  The following output confirms a successful launch of the system:

  ```bash
          ____                 ______ __
         / __ \ ____ _ ____ _ / ____// /____  _      __
        / /_/ // __ `// __ `// /_   / // __ \| | /| / /
       / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
      /_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
                    /____/

   * Running on all addresses (0.0.0.0)
   * Running on http://127.0.0.1:9380
   * Running on http://x.x.x.x:9380
   INFO:werkzeug:Press CTRL+C to quit
  ```

  If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
- In your web browser, enter the IP address of your server and log in to RAGFlow. With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number): the default HTTP serving port `80` can be omitted when using the default configurations.
- In service_conf.yaml, select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

  See ./docs/llm_api_key_setup.md for more information.
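  As a minimal sketch (the factory name, key, and exact field names below are placeholders; check your own service_conf.yaml for the fields it actually uses), the entry might look like:

  ```yaml
  user_default_llm:
    factory: 'OpenAI'        # illustrative: the LLM factory you want to use
    api_key: 'sk-xxxxxxxx'   # illustrative: replace with your own API key
  ```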
The show is now on!
When it comes to system configurations, you will need to manage the following files:
- .env: Keeps the fundamental setups for the system, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- service_conf.yaml: Configures the back-end services.
- docker-compose.yml: The system relies on docker-compose.yml to start up.
You must ensure that changes to the .env file are consistent with the settings in the service_conf.yaml file.
The ./docker/README file provides a detailed description of the environment settings and service configurations, and you are REQUIRED to ensure that all environment settings listed in the ./docker/README file are aligned with the corresponding configurations in the service_conf.yaml file.
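For example (a sketch only; variable and field names vary by version, so check your own docker/.env and service_conf.yaml), changing the MySQL password means updating both files consistently:

```
# docker/.env (illustrative)
MYSQL_PASSWORD=my_new_password

# conf/service_conf.yaml (illustrative) — must stay consistent with the value above
mysql:
  password: 'my_new_password'
```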
To update the default HTTP serving port (80), go to docker-compose.yml and change `80:80` to `<YOUR_SERVING_PORT>:80`.
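For instance, to serve RAGFlow on port 8080 instead, the port mapping would look roughly like this (the service name and surrounding keys are abbreviated and may differ in your docker-compose.yml):

```yaml
services:
  ragflow:
    ports:
      - 8080:80   # host port 8080 -> container port 80
```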
Updates to the above system configurations require a restart of all containers to take effect:

```bash
$ docker-compose up -d
```
To build the Docker images from source:

```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```
To launch the service from source, please follow these steps:
- Clone the repository:

  ```bash
  $ git clone https://github.com/infiniflow/ragflow.git
  $ cd ragflow/
  ```

- Create a virtual environment (ensure Anaconda or Miniconda is installed):

  ```bash
  $ conda create -n ragflow python=3.11.0
  $ conda activate ragflow
  $ pip install -r requirements.txt
  ```

  If your CUDA version is greater than 12.0, execute the following additional commands:

  ```bash
  $ pip uninstall -y onnxruntime-gpu
  $ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
  ```
- Copy the entry script and configure environment variables:

  ```bash
  $ cp docker/entrypoint.sh .
  $ vi entrypoint.sh
  ```

  Use the following commands to obtain the Python path and the ragflow project path:

  ```bash
  $ which python
  $ pwd
  ```

  Set the output of `which python` as the value for `PY` and the output of `pwd` as the value for `PYTHONPATH`.

  If `LD_LIBRARY_PATH` is already configured, it can be commented out.

  ```bash
  # Adjust configurations according to your actual situation; the two export commands are newly added.
  PY=${PY}
  export PYTHONPATH=${PYTHONPATH}
  # Optional: Add Hugging Face mirror
  export HF_ENDPOINT=https://hf-mirror.com
  ```
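  As a concrete sketch (the paths below are purely illustrative; substitute the actual output of `which python` and `pwd` on your machine), the edited lines might read:

  ```bash
  # Illustrative values only — use your own paths.
  PY=/root/miniconda3/envs/ragflow/bin/python
  export PYTHONPATH=/root/ragflow
  # Optional: Add Hugging Face mirror
  export HF_ENDPOINT=https://hf-mirror.com
  ```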
- Start the base services:

  ```bash
  $ cd docker
  $ docker compose -f docker-compose-base.yml up -d
  ```
- Check the configuration files: ensure that the settings in docker/.env match those in conf/service_conf.yaml, and that the IP addresses and ports of the related services in service_conf.yaml point to the local machine and the ports exposed by the containers.
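  As a hedged sketch of what this might look like (field and service names are illustrative and vary by version; adjust to match your own conf/service_conf.yaml and the ports published by docker-compose-base.yml):

  ```yaml
  # Illustrative only — point the back-end services at the locally exposed containers.
  mysql:
    host: '127.0.0.1'                # instead of the Docker-network container name
    port: 3306                       # the port published by docker-compose-base.yml
  minio:
    host: '127.0.0.1:9000'
  es:
    hosts: 'http://127.0.0.1:9200'
  ```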
- Launch the service:

  ```bash
  $ chmod +x ./entrypoint.sh
  $ bash ./entrypoint.sh
  ```
- Start the WebUI service:

  ```bash
  $ cd web
  $ npm install --registry=https://registry.npmmirror.com --force
  $ vim .umirc.ts
  # Modify proxy.target to 127.0.0.1:9380
  $ npm run dev
  ```
- Deploy the WebUI service:

  ```bash
  $ cd web
  $ npm install --registry=https://registry.npmmirror.com --force
  $ umi build
  $ mkdir -p /ragflow/web
  $ cp -r dist /ragflow/web
  $ apt install nginx -y
  $ cp ../docker/nginx/proxy.conf /etc/nginx
  $ cp ../docker/nginx/nginx.conf /etc/nginx
  $ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
  $ systemctl start nginx
  ```
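  To verify that nginx is serving the built WebUI (assuming it listens on the default port 80, as the shipped ragflow.conf is expected to configure), a quick check is:

  ```bash
  $ curl -I http://127.0.0.1   # an HTTP 200 response indicates the WebUI is being served
  ```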
See the RAGFlow Roadmap 2024
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.