The DQA (Difficult Questions Attempted) project uses large language models (LLMs) to perform multi-hop question answering (MHQA). It was inspired by a tutorial and an article, both by Dean Sacoransky. Unlike the tutorial and the article, which build agents with the LangGraph framework from LangChain, this project uses DSPy.
The table below records some updates on the project status. Note that these do not correspond to specific commits or milestones.
Date | Status | Notes or observations |
---|---|---|
February 15, 2025 | active | Custom adapter added for Deepseek models. |
January 26, 2025 | active | LlamaIndex Workflows replaced by DSPy. |
September 21, 2024 | active | Workflows made selectable. |
September 13, 2024 | active | Low-parameter LLMs perform badly, engaging in unnecessary self-discovery, query refinements and ReAct tool selections. |
September 10, 2024 | active | Query decomposition may generate unnecessary sub-workflows. |
August 31, 2024 | active | Using built-in ReAct agent. |
August 29, 2024 | active | Project started. |
Create and activate a Python virtual environment in the directory where you have cloned this repository. Let us refer to this directory as the working directory or WD (interchangeably) hereinafter. Install Poetry, and make sure you use Python 3.12.0 or later. To install the project with its dependencies in the virtual environment, run the following in the WD.
poetry install
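The environment setup described above can be sketched as follows. This assumes a POSIX shell and a `python3` executable of version 3.12 or later on the PATH; the directory name `.venv` is illustrative.

```shell
# Create and activate a virtual environment in the WD (illustrative names).
python3 -m venv .venv
. .venv/bin/activate
python -m pip --version   # confirm the environment's own pip is active
```

With the environment activated, install Poetry (e.g., via `pip install poetry`) and then run `poetry install` as above.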
In addition to the Python dependencies, see the installation instructions for Ollama. You can install it on a separate machine. Download the tool-calling Ollama model that you want to use, e.g., `llama3.1` or `mistral-nemo`. Reasoning models trained with reinforcement learning, such as `deepseek-r1:7b`, will also work.
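As a sanity check, you can query the Ollama host's `/api/tags` endpoint, which lists the locally available models. The helper below is a sketch using only the Python standard library; the default host URL matches the `OLLAMA_URL` default, and the model names are illustrative.

```python
import json
from urllib.error import URLError
from urllib.request import urlopen


def model_available(model: str, base_url: str = "http://localhost:11434") -> bool:
    """Return True if `model` is listed by the Ollama host at `base_url`."""
    try:
        with urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            tags = json.load(resp)
    except (URLError, OSError):
        return False  # host unreachable, or not an Ollama server
    names = [m.get("name", "") for m in tags.get("models", [])]
    # Ollama appends ":latest" when a model is pulled without an explicit tag.
    return any(n == model or n.split(":", 1)[0] == model for n in names)
```

For example, `model_available("mistral-nemo")` should return `True` once that model has been pulled on the default host.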
The following environment variables can be used to configure the DQA application. All environment variables should be supplied as quoted strings; they will be interpreted as the correct type as necessary.
For environment variables starting with `GRADIO_`, see the Gradio documentation on environment variables.
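The quoted-string convention can be handled with a small coercion helper along these lines. This is an illustration, not DQA's actual implementation; the variable name is taken from the table below.

```python
import os


def env_str(name: str, default: str) -> str:
    # Environment variables arrive as strings; strip any surrounding quotes.
    return os.getenv(name, default).strip('"')


def env_int(name: str, default: int) -> int:
    # Coerce a quoted numeric string to an integer.
    return int(env_str(name, str(default)))


# Hypothetical read of one of DQA's documented variables.
OLLAMA_URL = env_str("OLLAMA_URL", "http://localhost:11434")
```

A caller would then use `env_int` for numeric settings such as ports, and `env_str` for everything else.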
Variable | [Default value] and description |
---|---|
`OLLAMA_URL` | [http://localhost:11434] The URL of your intended Ollama host. |
`LLM__OLLAMA_MODEL` | [mistral-nemo] See the available models. The model must be available on the selected Ollama server, and it must support tool calling. |
Create a `.env` file in the working directory to set the environment variables as above. Then, run the following in the WD to start the web server.
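A minimal `.env` might look like the following; the values shown are the documented defaults and are illustrative.

```shell
# Example .env for DQA (values are illustrative defaults).
OLLAMA_URL="http://localhost:11434"
LLM__OLLAMA_MODEL="mistral-nemo"
```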
poetry run dqa-webapp
The web UI will be available at http://localhost:7860. To exit the server, use the Ctrl+C key combination.
Install `pre-commit` for Git and `ruff`. Then enable `pre-commit` by running the following in the WD.
pre-commit install
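A minimal `.pre-commit-config.yaml` wiring `ruff` into `pre-commit` might look like the following; the repository's actual configuration and the pinned revision may differ.

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # illustrative pin; use the latest release
    hooks:
      - id: ruff          # lint
      - id: ruff-format   # format
```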
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
MIT.