From a0d33d3136bfe9de07f4e98b8f61157d75f5a4d3 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 18:48:24 +0000 Subject: [PATCH 01/41] docker docs --- docs/docs/gpt-researcher/getting-started.md | 19 ++++++++++--------- multi_agents/frontend/next-env.d.ts | 5 +++++ 2 files changed, 15 insertions(+), 9 deletions(-) create mode 100644 multi_agents/frontend/next-env.d.ts diff --git a/docs/docs/gpt-researcher/getting-started.md b/docs/docs/gpt-researcher/getting-started.md index f1803084c..16ee28b8b 100644 --- a/docs/docs/gpt-researcher/getting-started.md +++ b/docs/docs/gpt-researcher/getting-started.md @@ -103,20 +103,21 @@ python -m uvicorn main:app --reload ## Try it with Docker -> **Step 1** - Install Docker +> **Step 1** - Install & Open Docker Desktop -Follow instructions at https://docs.docker.com/engine/install/ +Follow instructions at https://www.docker.com/products/docker-desktop/ -> **Step 2** - Create .env file with your OpenAI Key or simply export it +> **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env' + +> **Step 3** - Within the docker-compose file comment out services that you don't want to run with Docker. ```bash -$ export OPENAI_API_KEY={Your API Key here} +$ docker-compose up --build ``` -> **Step 3** - Run the application +> **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: + - the Python server running on localhost:8000
+ - the React app running on localhost:3000
-```bash -$ docker-compose up -``` +Visit localhost:3000 on any browser and enjoy researching! -> **Step 4** - Go to http://localhost:8000 on any browser and enjoy researching! diff --git a/multi_agents/frontend/next-env.d.ts b/multi_agents/frontend/next-env.d.ts new file mode 100644 index 000000000..4f11a03dc --- /dev/null +++ b/multi_agents/frontend/next-env.d.ts @@ -0,0 +1,5 @@ +/// +/// + +// NOTE: This file should not be edited +// see https://nextjs.org/docs/basic-features/typescript for more information. From 68e49177cb4d15eaf62920b4ecdecfa60b30530f Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 18:51:26 +0000 Subject: [PATCH 02/41] better docker docs --- docs/docs/gpt-researcher/getting-started.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/docs/gpt-researcher/getting-started.md b/docs/docs/gpt-researcher/getting-started.md index 16ee28b8b..cc97d9a07 100644 --- a/docs/docs/gpt-researcher/getting-started.md +++ b/docs/docs/gpt-researcher/getting-started.md @@ -107,9 +107,14 @@ python -m uvicorn main:app --reload Follow instructions at https://www.docker.com/products/docker-desktop/ -> **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env' -> **Step 3** - Within the docker-compose file comment out services that you don't want to run with Docker. +> **Step 2** - Follow this flow: + +https://www.youtube.com/watch?v=x1gKFt_6Us4 + +This mainly includes cloning the '.env.example' file, adding your API Keys to the cloned file and saving the file as '.env' + +> **Step 3** - Within root, run with Docker. ```bash $ docker-compose up --build From f511b8f9fce186b9fd4229966a3448b785e67809 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 18:58:32 +0000 Subject: [PATCH 03/41] deploy on linux docs --- docs/docs/gpt-researcher/linux-deployment.md | 71 ++++++++++++++++++++ docs/sidebars.js | 3 +- 2 files changed, 73 insertions(+), 1 deletion(-) create mode 100644 docs/docs/gpt-researcher/linux-deployment.md diff --git a/docs/docs/gpt-researcher/linux-deployment.md b/docs/docs/gpt-researcher/linux-deployment.md new file mode 100644 index 000000000..b6f745f79 --- /dev/null +++ b/docs/docs/gpt-researcher/linux-deployment.md @@ -0,0 +1,71 @@ +# How to Deploy on Linux + +This guide will walk you through the process of deploying GPT Researcher on a Linux server. + +## Server Requirements + +The default Ubuntu droplet option on DigitalOcean works well, but this setup should work on any hosting service with similar specifications: + +- 2 GB RAM +- 1 vCPU +- 50 GB SSD Storage + +Here's a screenshot of the recommended Ubuntu machine specifications: + +![Ubuntu Server Specifications](https://cdn.discordapp.com/attachments/1129340110916288553/1262372662299070504/Screen_Shot_2024-07-15_at_14.32.01.png?ex=66cf0c28&is=66cdbaa8&hm=c1798d9c37de585dc7df8558e92545144e31a2407d8a181cac7e8c16059fdcd6&) + +## Deployment Steps + +After setting up your server, follow these steps to install Docker, Docker Compose, and Nginx. + + +Some more commands to achieve that: + +### Step 1: Update the System +### First, ensure your package index is up-to-date: + +sudo apt update +### Step 2: Install Git +### Git is a version control system. Install it using: + +sudo apt install git -y + +### Verify the installation by checking the Git version: +git --version +### Step 3: Install Docker +### Docker is a platform for developing, shipping, and running applications inside containers. 
+ +### Install prerequisites: + +sudo apt install apt-transport-https ca-certificates curl software-properties-common -y +### Add Dockerโ€™s official GPG key: + +curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg +### Set up the stable repository: + +echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null +### Update the package index again and install Docker: + +sudo apt update +sudo apt install docker-ce -y +### Verify Docker installation: + +sudo systemctl status docker +### Optionally, add your user to the docker group to run Docker without sudo: + +sudo usermod -aG docker ${USER} +### Log out and back in for the group change to take effect. + +Step 4: Install Nginx +### Nginx is a high-performance web server. + +### Install Nginx: + +sudo apt install nginx -y +### Start and enable Nginx: + +sudo systemctl start nginx +sudo systemctl enable nginx +### Verify Nginx installation: + +sudo systemctl status nginx \ No newline at end of file diff --git a/docs/sidebars.js b/docs/sidebars.js index f638e0132..341bd0892 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -22,7 +22,8 @@ 'gpt-researcher/getting-started', 'gpt-researcher/pip-package', 'gpt-researcher/frontend', - 'gpt-researcher/example', + 'gpt-researcher/linux-deployment', + 'gpt-researcher/example', 'gpt-researcher/troubleshooting', ], }, From 711a03b7c84085ecb64d8f7a85cec938a893a6c3 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:00:24 +0000 Subject: [PATCH 04/41] digital ocean link --- docs/docs/gpt-researcher/linux-deployment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docs/gpt-researcher/linux-deployment.md b/docs/docs/gpt-researcher/linux-deployment.md index b6f745f79..a815bf523 100644 --- a/docs/docs/gpt-researcher/linux-deployment.md +++ b/docs/docs/gpt-researcher/linux-deployment.md @@ -4,7 +4,7 @@ This guide will walk you through the process of deploying GPT Researcher on a Li ## Server Requirements -The default Ubuntu droplet option on DigitalOcean works well, but this setup should work on any hosting service with similar specifications: +The default Ubuntu droplet option on [DigitalOcean](https://m.do.co/c/1a2af257efba) works well, but this setup should work on any hosting service with similar specifications: - 2 GB RAM - 1 vCPU From 8b1c93f9a709c465de4b518be6cf54e707c4d506 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:03:48 +0000 Subject: [PATCH 05/41] deployment docs --- docs/docs/gpt-researcher/linux-deployment.md | 53 +++++++++++++++++++- 1 file changed, 52 insertions(+), 1 deletion(-) diff --git a/docs/docs/gpt-researcher/linux-deployment.md b/docs/docs/gpt-researcher/linux-deployment.md index a815bf523..7f37fb49f 100644 --- a/docs/docs/gpt-researcher/linux-deployment.md +++ b/docs/docs/gpt-researcher/linux-deployment.md @@ -24,6 +24,7 @@ Some more commands to achieve that: ### Step 1: Update the System ### First, ensure your package index is up-to-date: +```bash sudo apt update ### Step 2: Install Git ### Git is a version control system. 
Install it using: @@ -68,4 +69,54 @@ sudo systemctl start nginx sudo systemctl enable nginx ### Verify Nginx installation: -sudo systemctl status nginx \ No newline at end of file +sudo systemctl status nginx +``` + +Here's your nginx config file: + +```bash +events {} + +http { + server { + listen 80; + server_name name.example; + + location / { + proxy_pass http://localhost:3000; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection 'upgrade'; + proxy_set_header Host $host; + proxy_cache_bypass $http_upgrade; + } + + location ~ ^/(ws|upload|files|outputs) { + proxy_pass http://localhost:8000; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "Upgrade"; + proxy_set_header Host $host; + } + } +} +``` + +And the relevant commands: + + +```bash +vim /etc/nginx/nginx.conf +### Edit it to reflect above. Then verify all is good with: + +sudo nginx -t +# If there are no errors: + +sudo systemctl restart nginx + +# Clone .env.example as .env +# Run from root: + +docker-compose up --build + +``` \ No newline at end of file From a7b681adce1294cbffcd102c263213a600d1e886 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:12:24 +0000 Subject: [PATCH 06/41] frontend docs --- docs/docs/gpt-researcher/frontend.md | 35 +++++++++++++++- .../getting-started-with-docker.md | 25 ++++++++++++ docs/docs/gpt-researcher/getting-started.md | 25 ------------ docs/sidebars.js | 2 + multi_agents/README.md | 40 ------------------- 5 files changed, 60 insertions(+), 67 deletions(-) create mode 100644 docs/docs/gpt-researcher/getting-started-with-docker.md diff --git a/docs/docs/gpt-researcher/frontend.md b/docs/docs/gpt-researcher/frontend.md index b7df49cdd..6e3949d7e 100644 --- a/docs/docs/gpt-researcher/frontend.md +++ b/docs/docs/gpt-researcher/frontend.md @@ -2,7 +2,38 @@ This frontend project aims to enhance the user experience of GPT-Researcher, providing an intuitive and efficient interface for automated research. It offers two deployment options to suit different needs and environments. -## Option 1: Static Frontend (FastAPI) + +## NextJS Frontend App + +The React app (located in `frontend` directory) is our Frontend 2.0 which we hope will enable us to display the robustness of the backend on the frontend, as well. + +It comes with loads of added features, such as: + - a drag-n-drop user interface for uploading and deleting files to be used as local documents by GPTResearcher. + - a GUI for setting your GPTR environment variables. + - the ability to trigger the multi_agents flow via the Backend Module or Langgraph Cloud Host (currently in closed beta). + - stability fixes + - and more coming soon! + +### Run the NextJS React App with Docker + +> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started#try-it-with-docker) + +> **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env' + +> **Step 3** - Within the docker-compose file comment out services that you don't want to run with Docker. + +```bash +$ docker compose up --build +``` + +> **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: + - the Python server running on localhost:8000
+ - the React app running on localhost:3000
+ +Visit localhost:3000 on any browser and enjoy researching! + + +## Other Options: 1: Static Frontend (FastAPI) A lightweight solution using FastAPI to serve static files. @@ -28,7 +59,7 @@ A lightweight solution using FastAPI to serve static files. -## Option 2: NextJS Frontend +## Yet Another Option: Running NextJS Frontend via CLI A more robust solution with enhanced features and performance. diff --git a/docs/docs/gpt-researcher/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started-with-docker.md new file mode 100644 index 000000000..ff22c8dab --- /dev/null +++ b/docs/docs/gpt-researcher/getting-started-with-docker.md @@ -0,0 +1,25 @@ +## Try it with Docker + +> **Step 1** - Install & Open Docker Desktop + +Follow instructions at https://www.docker.com/products/docker-desktop/ + + +> **Step 2** - Follow this flow: + +https://www.youtube.com/watch?v=x1gKFt_6Us4 + +This mainly includes cloning the '.env.example' file, adding your API Keys to the cloned file and saving the file as '.env' + +> **Step 3** - Within root, run with Docker. + +```bash +$ docker-compose up --build +``` + +> **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: + - the Python server running on localhost:8000
+ - the React app running on localhost:3000
+ +Visit localhost:3000 on any browser and enjoy researching! + diff --git a/docs/docs/gpt-researcher/getting-started.md b/docs/docs/gpt-researcher/getting-started.md index cc97d9a07..c8ab2defe 100644 --- a/docs/docs/gpt-researcher/getting-started.md +++ b/docs/docs/gpt-researcher/getting-started.md @@ -101,28 +101,3 @@ python -m uvicorn main:app --reload
-## Try it with Docker - -> **Step 1** - Install & Open Docker Desktop - -Follow instructions at https://www.docker.com/products/docker-desktop/ - - -> **Step 2** - Follow this flow: - -https://www.youtube.com/watch?v=x1gKFt_6Us4 - -This mainly includes cloning the '.env.example' file, adding your API Keys to the cloned file and saving the file as '.env' - -> **Step 3** - Within root, run with Docker. - -```bash -$ docker-compose up --build -``` - -> **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: - - the Python server running on localhost:8000
- - the React app running on localhost:3000
- -Visit localhost:3000 on any browser and enjoy researching! - diff --git a/docs/sidebars.js b/docs/sidebars.js index 341bd0892..bc3ee3ac4 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -19,6 +19,8 @@ collapsed: false, items: [ 'gpt-researcher/introduction', + 'gpt-researcher/getting-started-with-docker', + 'gpt-researcher/getting-started', 'gpt-researcher/pip-package', 'gpt-researcher/frontend', diff --git a/multi_agents/README.md b/multi_agents/README.md index 7a942e30e..3004c6beb 100644 --- a/multi_agents/README.md +++ b/multi_agents/README.md @@ -105,43 +105,3 @@ langgraph up ``` From there, see documentation [here](https://github.com/langchain-ai/langgraph-example) on how to use the streaming and async endpoints, as well as the playground. - -## NextJS Frontend App - -The React app (located in `frontend` directory) is our Frontend 2.0 which we hope will enable us to display the robustness of the backend on the frontend, as well. - -It comes with loads of added features, such as: - - a drag-n-drop user interface for uploading and deleting files to be used as local documents by GPTResearcher. - - a GUI for setting your GPTR environment variables. - - the ability to trigger the multi_agents flow via the Backend Module or Langgraph Cloud Host (currently in closed beta). - - stability fixes - - and more coming soon! - -### Run the NextJS React App with Docker - -> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started#try-it-with-docker) - -> **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env' - -> **Step 3** - Within the docker-compose file comment out services that you don't want to run with Docker. - -```bash -$ docker compose up --build -``` - -> **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: - - the Python server running on localhost:8000
- - the React app running on localhost:3000
- -Visit localhost:3000 on any browser and enjoy researching! - - -### Run the NextJS React App with NPM - -```bash -cd frontend -nvm install 18.17.0 -nvm use v18.17.0 -npm install --legacy-peer-deps -npm run dev -``` From 79dfef882b23ab5f515882496301962d8ec9eef7 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:18:09 +0000 Subject: [PATCH 07/41] automated tests docs --- docs/docs/gpt-researcher/automated-tests.md | 20 ++++++++++++++++++++ docs/sidebars.js | 5 ++++- 2 files changed, 24 insertions(+), 1 deletion(-) create mode 100644 docs/docs/gpt-researcher/automated-tests.md diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md new file mode 100644 index 000000000..f2bd892b4 --- /dev/null +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -0,0 +1,20 @@ + +## Automated Testing with Github Actions + +This repository contains the code for the automated testing of the GPT-Researcher Repo using Github Actions. + +The tests are triggered in a docker container which runs the tests via the `pytest` module. + + +Attaching here the required settings & screenshots on the github repo level: + +Step 1: Within the repo, press the "Settings" tab +Step 2: Create a new environment named "tests" (all lowercase) +Step 3: Click into the "tests" environment & add environment secrets: TAVILY_API_KEY & OPENAI_API_KEY + +![Screen Shot 2024-07-28 at 9 00 19](https://github.com/user-attachments/assets/7cd341c6-d8d4-461f-ab5e-325abc9fe509) +![Screen Shot 2024-07-28 at 9 02 55](https://github.com/user-attachments/assets/a3744f01-06a6-4c9d-8aa0-1fc742d3e866) + +If configured correctly, here's what the Github action should look like when opening a new PR or committing to an open PR: + +![Screen Shot 2024-07-28 at 8 57 02](https://github.com/user-attachments/assets/30dbc668-4e6a-4b3b-a02e-dc859fc9bd3d) \ No newline at end of file diff --git a/docs/sidebars.js b/docs/sidebars.js index bc3ee3ac4..1d73c0281 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -20,10 +20,13 @@ items: [ 'gpt-researcher/introduction', 'gpt-researcher/getting-started-with-docker', - 'gpt-researcher/getting-started', 'gpt-researcher/pip-package', 'gpt-researcher/frontend', + 'gpt-researcher/automated-tests', + + + 'gpt-researcher/linux-deployment', 'gpt-researcher/example', 'gpt-researcher/troubleshooting', From c33ea0a6aa25db392deb90100a85e78e5b932149 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:24:34 +0000 Subject: [PATCH 08/41] github actions docs for automated tests --- docs/docs/gpt-researcher/automated-tests.md | 13 +++++++++++++ docs/sidebars.js | 3 --- 2 files changed, 13 insertions(+), 3 deletions(-) diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md index f2bd892b4..df349185f 100644 --- a/docs/docs/gpt-researcher/automated-tests.md +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -5,6 +5,19 @@ This repository contains the code for the automated testing of the GPT-Researche The tests are triggered in a docker container which runs the tests via the `pytest` module. 
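For orientation, the kind of check that runs inside that container looks roughly like the sketch below. This is an illustrative test only: it assumes the `gpt-researcher` package is importable, that `pytest-asyncio` is installed, and that the two API keys described in the steps below are present as environment variables. The canonical tests are the ones that ship with the repository.

```python
# Illustrative only, not the repository's actual test suite.
import os

import pytest
from gpt_researcher import GPTResearcher


@pytest.mark.asyncio
async def test_generates_a_report():
    # The workflow injects these as environment secrets (see Step 3 below).
    assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is required"
    assert os.getenv("TAVILY_API_KEY"), "TAVILY_API_KEY is required"

    researcher = GPTResearcher(query="What is GPT Researcher?", report_type="research_report")
    await researcher.conduct_research()
    report = await researcher.write_report()

    assert isinstance(report, str) and len(report) > 0
```
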
+## Running the Tests + +You can run the tests: + +### Via a docker command + +```bash +docker-compose --profile test run --rm gpt-researcher-tests +``` + +### Via a Github Action + +![image](https://github.com/user-attachments/assets/721fca20-01bb-4c10-9cf9-19d823bebbb0) Attaching here the required settings & screenshots on the github repo level: diff --git a/docs/sidebars.js b/docs/sidebars.js index 1d73c0281..228645017 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -24,9 +24,6 @@ 'gpt-researcher/pip-package', 'gpt-researcher/frontend', 'gpt-researcher/automated-tests', - - - 'gpt-researcher/linux-deployment', 'gpt-researcher/example', 'gpt-researcher/troubleshooting', From a284c1383d035572b21a8eeb1265bcca40820a6a Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:27:52 +0000 Subject: [PATCH 09/41] improving getting started docs --- docs/docs/gpt-researcher/automated-tests.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md index df349185f..17b924ed4 100644 --- a/docs/docs/gpt-researcher/automated-tests.md +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -22,8 +22,21 @@ docker-compose --profile test run --rm gpt-researcher-tests Attaching here the required settings & screenshots on the github repo level: Step 1: Within the repo, press the "Settings" tab + Step 2: Create a new environment named "tests" (all lowercase) -Step 3: Click into the "tests" environment & add environment secrets: TAVILY_API_KEY & OPENAI_API_KEY + +Step 3: Click into the "tests" environment & add environment secrets: + + +``` +OPENAI_API_KEY= +TAVILY_API_KEY= +``` + +Get the keys from here: +https://app.tavily.com/sign-in +https://platform.openai.com/api-keys + ![Screen Shot 2024-07-28 at 9 00 19](https://github.com/user-attachments/assets/7cd341c6-d8d4-461f-ab5e-325abc9fe509) ![Screen Shot 2024-07-28 at 9 02 55](https://github.com/user-attachments/assets/a3744f01-06a6-4c9d-8aa0-1fc742d3e866) From 032f1932d841cb67c163e44f7d9f073eb32efbde Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:29:24 +0000 Subject: [PATCH 10/41] improving getting started docs --- docs/docs/gpt-researcher/automated-tests.md | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md index 17b924ed4..1f405f9da 100644 --- a/docs/docs/gpt-researcher/automated-tests.md +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -25,13 +25,7 @@ Step 1: Within the repo, press the "Settings" tab Step 2: Create a new environment named "tests" (all lowercase) -Step 3: Click into the "tests" environment & add environment secrets: - - -``` -OPENAI_API_KEY= -TAVILY_API_KEY= -``` +Step 3: Click into the "tests" environment & add environment secrets of ```OPENAI_API_KEY``` & ```TAVILY_API_KEY``` Get the keys from here: https://app.tavily.com/sign-in From 99b77fb74e49aaed194adaa92d8aa43f5aa295ab Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:29:59 +0000 Subject: [PATCH 11/41] improving getting started docs --- docs/docs/gpt-researcher/automated-tests.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md index 1f405f9da..857ec1e49 100644 --- a/docs/docs/gpt-researcher/automated-tests.md +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -28,7 +28,9 @@ Step 2: Create a new environment 
named "tests" (all lowercase) Step 3: Click into the "tests" environment & add environment secrets of ```OPENAI_API_KEY``` & ```TAVILY_API_KEY``` Get the keys from here: + https://app.tavily.com/sign-in + https://platform.openai.com/api-keys From 9e1f4fa8ad78c0ca96b58fd356fdfe56f04d85ec Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:40:46 +0000 Subject: [PATCH 12/41] custom python script to query your elestio server --- .../gpt-researcher/deploy-llm-on-elestio.md | 69 +++++++++++++++++++ 1 file changed, 69 insertions(+) create mode 100644 docs/docs/gpt-researcher/deploy-llm-on-elestio.md diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md new file mode 100644 index 000000000..d6f70e73d --- /dev/null +++ b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md @@ -0,0 +1,69 @@ + +# Deploy Custom LLM on Elestio + +Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. + +You can deploy an [Open WebUI](https://github.com/open-webui/open-webui/tree/main) server with [Elestio](https://elest.io/open-source/ollama) + +After deploying the Elestio server, you'll want to enter the [Open WebUI Admin App](https://github.com/open-webui/open-webui/tree/main) & download a custom LLM. + +For our example, let's choose to download the `gemma2:2b` model. + +This model now automatically becomes available via your Server's out-of-the-box API. + + +### Querying your Custom LLM with GPT-Researcher + +Here's the .env file you'll need to query your custom LLM with GPT-Researcher: + +```bash +OPENAI_API_KEY="123" +OPENAI_API_BASE="https://.vm.elestio.app:57987/v1" +OLLAMA_BASE_URL="https://.vm.elestio.app:57987/" +FAST_LLM_MODEL=gemma2:2b +SMART_LLM_MODEL=gemma2:2b +OLLAMA_EMBEDDING_MODEL=all-minilm +LLM_PROVIDER=openai +EMBEDDING_PROVIDER=ollama +``` + +And here's a custom python script you can use to query your custom LLM: + +```python + +import os +import asyncio +import logging +logging.basicConfig(level=logging.DEBUG) +from gpt_researcher.llm_provider.generic import GenericLLMProvider +from gpt_researcher.utils.llm import get_llm + +# Set up environment variables +os.environ["LLM_PROVIDER"] = "ollama" +os.environ["OLLAMA_BASE_URL"] = "https://ollama-ug3qr-u21899.vm.elestio.app:57987" +os.environ["FAST_LLM_MODEL"] = "llama3.1" + +# Create the GenericLLMProvider instance +llm_provider = get_llm( + "ollama", + base_url=os.environ["OLLAMA_BASE_URL"], + model=os.environ["FAST_LLM_MODEL"], + temperature=0.7, + max_tokens=2000, + verify_ssl=False # Add this line +) + +# Test the connection with a simple query +messages = [{"role": "user", "content": "sup?"}] + +async def test_ollama(): + try: + response = await llm_provider.get_chat_response(messages, stream=False) + print("Ollama response:", response) + except Exception as e: + print(f"Error: {e}") + +# Run the async function +asyncio.run(test_ollama()) + +``` \ No newline at end of file From 4d1b3762bad67d46495660920eb8463079741db9 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:48:50 +0000 Subject: [PATCH 13/41] video iframe --- docs/docs/gpt-researcher/deploy-llm-on-elestio.md | 3 ++- docs/docs/gpt-researcher/getting-started-with-docker.md | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md index d6f70e73d..e72c2dcf4 100644 --- 
a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md +++ b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md @@ -66,4 +66,5 @@ async def test_ollama(): # Run the async function asyncio.run(test_ollama()) -``` \ No newline at end of file +``` + diff --git a/docs/docs/gpt-researcher/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started-with-docker.md index ff22c8dab..88ec621e5 100644 --- a/docs/docs/gpt-researcher/getting-started-with-docker.md +++ b/docs/docs/gpt-researcher/getting-started-with-docker.md @@ -7,7 +7,8 @@ Follow instructions at https://www.docker.com/products/docker-desktop/ > **Step 2** - Follow this flow: -https://www.youtube.com/watch?v=x1gKFt_6Us4 + + This mainly includes cloning the '.env.example' file, adding your API Keys to the cloned file and saving the file as '.env' From c17d715843e882ff8b311ecb16008083c339a327 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:54:52 +0000 Subject: [PATCH 14/41] table of contents --- docs/docs/gpt-researcher/automated-tests.md | 2 ++ .../gpt-researcher/deploy-llm-on-elestio.md | 2 +- docs/sidebars.js | 32 ++++++++++++++++--- 3 files changed, 31 insertions(+), 5 deletions(-) diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md index 857ec1e49..728596c2e 100644 --- a/docs/docs/gpt-researcher/automated-tests.md +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -1,4 +1,6 @@ +# Automated Tests + ## Automated Testing with Github Actions This repository contains the code for the automated testing of the GPT-Researcher Repo using Github Actions. diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md index e72c2dcf4..0fb81e893 100644 --- a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md +++ b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md @@ -1,5 +1,5 @@ -# Deploy Custom LLM on Elestio +# Deploy LLM on Elestio Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. 
diff --git a/docs/sidebars.js b/docs/sidebars.js index 228645017..9c2e96747 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -12,23 +12,48 @@ module.exports = { docsSidebar: [ 'welcome', + { - type: 'category', - label: 'GPT Researcher', + type: 'getting-started', + label: 'Getting Started', collapsible: true, collapsed: false, items: [ 'gpt-researcher/introduction', 'gpt-researcher/getting-started-with-docker', 'gpt-researcher/getting-started', - 'gpt-researcher/pip-package', 'gpt-researcher/frontend', + ] + }, + + { + type: 'category', + label: 'GPT Researcher', + collapsible: true, + collapsed: false, + items: [ + + 'gpt-researcher/pip-package', 'gpt-researcher/automated-tests', 'gpt-researcher/linux-deployment', 'gpt-researcher/example', 'gpt-researcher/troubleshooting', ], }, + + + + { + type: 'llms', + label: 'Large Langeuage Models', + collapsible: true, + collapsed: false, + items: [ + 'gpt-researcher/llms', + 'gpt-researcher/deploy-llm-on-elestio.md' + ] + }, + { type: 'category', label: 'Customization', @@ -38,7 +63,6 @@ 'gpt-researcher/config', 'gpt-researcher/tailored-research', 'gpt-researcher/retrievers', - 'gpt-researcher/llms', 'gpt-researcher/vector-stores', ] }, From 5a902b08ef41de5aa6f33e295fe9ae4cf474d45f Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:57:27 +0000 Subject: [PATCH 15/41] elestio path --- .../docs/gpt-researcher/deploy-llm-on-elestio.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md index 0fb81e893..409d71b37 100644 --- a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md +++ b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md @@ -27,6 +27,22 @@ LLM_PROVIDER=openai EMBEDDING_PROVIDER=ollama ``` +#### Disable Elestio Authentication or Added Auth Headers + +To remove the basic auth you have to follow the below steps: +Go to your service -> Security -> at last Nginx -> in that find the below code: + +```bash +auth_basic "Authentication"; + +auth_basic_user_file /etc/nginx/conf.d/.htpasswd; +``` + +Comment these both these lines out and click the button "Update & Restart" to reflect the changes. + + +#### Run LLM Test Script for GPTR + And here's a custom python script you can use to query your custom LLM: ```python From 8100e077e3b286544dd3c665fabf7ae00b79bfcd Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 19:58:50 +0000 Subject: [PATCH 16/41] default doc path in .env --- .env.example | 3 ++- README.md | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/.env.example b/.env.example index 7c1bb4ec4..edb96a977 100644 --- a/.env.example +++ b/.env.example @@ -1,2 +1,3 @@ OPENAI_API_KEY= -TAVILY_API_KEY= \ No newline at end of file +TAVILY_API_KEY= +DOC_PATH=./docs/my-docs \ No newline at end of file diff --git a/README.md b/README.md index 31d02810a..524dae95c 100644 --- a/README.md +++ b/README.md @@ -178,7 +178,7 @@ You can instruct the GPT Researcher to run research tasks based on your local do Step 1: Add the env variable `DOC_PATH` pointing to the folder where your documents are located. 
```bash -export DOC_PATH="./my-docs" +export DOC_PATH="./docs/my-docs" ``` Step 2: From b106903827fb91d0042bc230cfb2e80d8f5a5e53 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:02:41 +0000 Subject: [PATCH 17/41] restructure tables of contents --- README.md | 1 + docs/sidebars.js | 27 ++++++++++++++++++--------- 2 files changed, 19 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 524dae95c..f75e4fd00 100644 --- a/README.md +++ b/README.md @@ -185,6 +185,7 @@ Step 2: - If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options. - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "documents" when you instantiate the `GPTResearcher` class [code sample here](https://docs.tavily.com/docs/gpt-researcher/tailored-research). + ## ๐Ÿ‘ช Multi-Agent Assistant As AI evolves from prompt engineering and RAG to multi-agent systems, we're excited to introduce our new multi-agent assistant built with [LangGraph](https://python.langchain.com/v0.1/docs/langgraph/). diff --git a/docs/sidebars.js b/docs/sidebars.js index 9c2e96747..bbe02a853 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -14,7 +14,7 @@ 'welcome', { - type: 'getting-started', + type: 'category', label: 'Getting Started', collapsible: true, collapsed: false, @@ -23,6 +23,7 @@ 'gpt-researcher/getting-started-with-docker', 'gpt-researcher/getting-started', 'gpt-researcher/frontend', + 'gpt-researcher/linux-deployment', ] }, @@ -32,20 +33,28 @@ collapsible: true, collapsed: false, items: [ - 'gpt-researcher/pip-package', - 'gpt-researcher/automated-tests', - 'gpt-researcher/linux-deployment', 'gpt-researcher/example', + 'gpt-researcher/automated-tests', 'gpt-researcher/troubleshooting', ], }, + { + type: 'category', + label: 'Custom Context', + collapsible: true, + collapsed: false, + items: [ + 'gpt-researcher/tailored-research', + 'gpt-researcher/vector-stores', + ] + }, { - type: 'llms', - label: 'Large Langeuage Models', + type: 'category', + label: 'Large Language Models', collapsible: true, collapsed: false, items: [ @@ -53,17 +62,17 @@ 'gpt-researcher/deploy-llm-on-elestio.md' ] }, + + { type: 'category', - label: 'Customization', + label: 'More Customization', collapsible: true, collapsed: true, items: [ 'gpt-researcher/config', - 'gpt-researcher/tailored-research', 'gpt-researcher/retrievers', - 'gpt-researcher/vector-stores', ] }, { From 11d793444068a04ca33fc18af6c4d0ce17d8777a Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:07:23 +0000 Subject: [PATCH 18/41] docs on local docs in custom context section --- .../gpt-researcher/getting-started-with-docker.md | 2 +- docs/docs/gpt-researcher/local-docs.md | 14 ++++++++++++++ docs/sidebars.js | 2 +- 3 files changed, 16 insertions(+), 2 deletions(-) create mode 100644 docs/docs/gpt-researcher/local-docs.md diff --git a/docs/docs/gpt-researcher/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started-with-docker.md index 88ec621e5..2efcecfac 100644 --- a/docs/docs/gpt-researcher/getting-started-with-docker.md +++ b/docs/docs/gpt-researcher/getting-started-with-docker.md @@ -1,4 +1,4 @@ -## Try it with Docker +# Docker: Path of least resistance > **Step 1** - Install & Open Docker Desktop diff --git a/docs/docs/gpt-researcher/local-docs.md b/docs/docs/gpt-researcher/local-docs.md new file mode 100644 index 000000000..62b20cd3f --- /dev/null +++ 
b/docs/docs/gpt-researcher/local-docs.md @@ -0,0 +1,14 @@ + +# ๐Ÿ“„ Research on Local Documents + +You can instruct the GPT Researcher to run research tasks based on your local documents. Currently supported file formats are: PDF, plain text, CSV, Excel, Markdown, PowerPoint, and Word documents. + +Step 1: Add the env variable `DOC_PATH` pointing to the folder where your documents are located. + +```bash +export DOC_PATH="./docs/my-docs" +``` + +Step 2: + - If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options. + - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "documents" when you instantiate the `GPTResearcher` class [code sample here](https://docs.tavily.com/docs/gpt-researcher/tailored-research). diff --git a/docs/sidebars.js b/docs/sidebars.js index bbe02a853..a537f183a 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -47,6 +47,7 @@ collapsed: false, items: [ 'gpt-researcher/tailored-research', + 'gpt-researcher/local-docs', 'gpt-researcher/vector-stores', ] }, @@ -64,7 +65,6 @@ }, - { type: 'category', label: 'More Customization', From 1f3b90e36a4eea13fe6c5e7c0e0ecb636117aeb3 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:09:23 +0000 Subject: [PATCH 19/41] running on linux --- docs/docs/gpt-researcher/linux-deployment.md | 2 +- docs/sidebars.js | 5 ----- 2 files changed, 1 insertion(+), 6 deletions(-) diff --git a/docs/docs/gpt-researcher/linux-deployment.md b/docs/docs/gpt-researcher/linux-deployment.md index 7f37fb49f..d830a0b8a 100644 --- a/docs/docs/gpt-researcher/linux-deployment.md +++ b/docs/docs/gpt-researcher/linux-deployment.md @@ -1,4 +1,4 @@ -# How to Deploy on Linux +# Running on Linux This guide will walk you through the process of deploying GPT Researcher on a Linux server. 
diff --git a/docs/sidebars.js b/docs/sidebars.js index a537f183a..8d9979e5a 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -12,7 +12,6 @@ module.exports = { docsSidebar: [ 'welcome', - { type: 'category', label: 'Getting Started', @@ -26,7 +25,6 @@ 'gpt-researcher/linux-deployment', ] }, - { type: 'category', label: 'GPT Researcher', @@ -39,7 +37,6 @@ 'gpt-researcher/troubleshooting', ], }, - { type: 'category', label: 'Custom Context', @@ -51,8 +48,6 @@ 'gpt-researcher/vector-stores', ] }, - - { type: 'category', label: 'Large Language Models', From 5a4b224fe6131dac9913897677e2714ef2223f2a Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:11:15 +0000 Subject: [PATCH 20/41] spacing --- docs/docs/gpt-researcher/automated-tests.md | 1 - docs/docs/gpt-researcher/deploy-llm-on-elestio.md | 1 - docs/docs/gpt-researcher/getting-started.md | 1 + docs/docs/gpt-researcher/langgraph.md | 1 + docs/docs/gpt-researcher/llms.md | 1 + docs/docs/gpt-researcher/tailored-research.md | 1 + docs/docs/gpt-researcher/troubleshooting.md | 1 + docs/docs/gpt-researcher/vector-stores.md | 1 + 8 files changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/automated-tests.md index 728596c2e..334e993d5 100644 --- a/docs/docs/gpt-researcher/automated-tests.md +++ b/docs/docs/gpt-researcher/automated-tests.md @@ -1,4 +1,3 @@ - # Automated Tests ## Automated Testing with Github Actions diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md index 409d71b37..a9da93b3d 100644 --- a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md +++ b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md @@ -1,4 +1,3 @@ - # Deploy LLM on Elestio Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. diff --git a/docs/docs/gpt-researcher/getting-started.md b/docs/docs/gpt-researcher/getting-started.md index c8ab2defe..57e47c12c 100644 --- a/docs/docs/gpt-researcher/getting-started.md +++ b/docs/docs/gpt-researcher/getting-started.md @@ -1,4 +1,5 @@ # Getting Started + > **Step 0** - Install Python 3.11 or later. [See here](https://www.tutorialsteacher.com/python/install-python) for a step-by-step guide. > **Step 1** - Download the project and navigate to its directory diff --git a/docs/docs/gpt-researcher/langgraph.md b/docs/docs/gpt-researcher/langgraph.md index a3b6f5e7b..e6014932f 100644 --- a/docs/docs/gpt-researcher/langgraph.md +++ b/docs/docs/gpt-researcher/langgraph.md @@ -1,4 +1,5 @@ # LangGraph + [LangGraph](https://python.langchain.com/docs/langgraph) is a library for building stateful, multi-actor applications with LLMs. This example uses Langgraph to automate the process of an in depth research on any given topic. diff --git a/docs/docs/gpt-researcher/llms.md b/docs/docs/gpt-researcher/llms.md index 13fcd7b15..89f7976d9 100644 --- a/docs/docs/gpt-researcher/llms.md +++ b/docs/docs/gpt-researcher/llms.md @@ -1,4 +1,5 @@ # Configure LLM + As described in the [introduction](/docs/gpt-researcher/config), the default LLM is OpenAI due to its superior performance and speed. With that said, GPT Researcher supports various open/closed source LLMs, and you can easily switch between them by adding the `LLM_PROVIDER` env variable and corresponding configuration params. 
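In practice the switch is just a handful of environment variables set before GPT Researcher starts. The sketch below mirrors the Ollama values used elsewhere in these docs and is only an example; swap in the variables that your chosen provider needs.

```python
# Example only: point GPT Researcher at Ollama instead of the default OpenAI.
# Set these before the gpt_researcher modules are imported.
import os

os.environ["LLM_PROVIDER"] = "ollama"
os.environ["EMBEDDING_PROVIDER"] = "ollama"
os.environ["OLLAMA_BASE_URL"] = "http://127.0.0.1:11434/"
os.environ["FAST_LLM_MODEL"] = "qwen2:1.5b"       # model names are examples
os.environ["SMART_LLM_MODEL"] = "qwen2:1.5b"
os.environ["OLLAMA_EMBEDDING_MODEL"] = "all-minilm:22m"
```
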
Current supported LLMs are `openai`, `google` (gemini), `azure_openai`, `ollama`, `anthropic`, `mistral`, `huggingface` and `groq`. diff --git a/docs/docs/gpt-researcher/tailored-research.md b/docs/docs/gpt-researcher/tailored-research.md index 71f4ec5e7..38ea52ad1 100644 --- a/docs/docs/gpt-researcher/tailored-research.md +++ b/docs/docs/gpt-researcher/tailored-research.md @@ -1,4 +1,5 @@ # Tailored Research + The GPT Researcher package allows you to tailor the research to your needs such as researching on specific sources or local documents, and even specify the agent prompt instruction upon which the research is conducted. ### Research on Specific Sources ๐Ÿ“š diff --git a/docs/docs/gpt-researcher/troubleshooting.md b/docs/docs/gpt-researcher/troubleshooting.md index d02092c3f..86a50e1aa 100644 --- a/docs/docs/gpt-researcher/troubleshooting.md +++ b/docs/docs/gpt-researcher/troubleshooting.md @@ -1,4 +1,5 @@ # Troubleshooting + We're constantly working to provide a more stable version. If you're running into any issues, please first check out the resolved issues or ask us via our [Discord community](https://discord.gg/QgZXvJAccX). ### model: gpt-4 does not exist diff --git a/docs/docs/gpt-researcher/vector-stores.md b/docs/docs/gpt-researcher/vector-stores.md index 5a9fc9496..bd6100109 100644 --- a/docs/docs/gpt-researcher/vector-stores.md +++ b/docs/docs/gpt-researcher/vector-stores.md @@ -1,4 +1,5 @@ # Vector Stores + The GPT Researcher package allows you to integrate with existing langchain vector stores that have been populated. For a complete list of supported langchain vector stores, please refer to this [link](https://python.langchain.com/v0.2/docs/integrations/vectorstores/). From ba51a9b51a0f473eaeb02b5568a3ff5a3dff5c05 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:12:52 +0000 Subject: [PATCH 21/41] clean --- docs/docs/gpt-researcher/local-docs.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/docs/gpt-researcher/local-docs.md b/docs/docs/gpt-researcher/local-docs.md index 62b20cd3f..9c8115c5a 100644 --- a/docs/docs/gpt-researcher/local-docs.md +++ b/docs/docs/gpt-researcher/local-docs.md @@ -1,4 +1,3 @@ - # ๐Ÿ“„ Research on Local Documents You can instruct the GPT Researcher to run research tasks based on your local documents. Currently supported file formats are: PDF, plain text, CSV, Excel, Markdown, PowerPoint, and Word documents. 
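When driving this from the PIP package rather than the frontend, the whole flow fits in a few lines. The sketch below is a non-authoritative example: it assumes `DOC_PATH` points at your folder of files and uses the `report_source="documents"` argument described earlier in this guide.

```python
# Sketch: run research over local files instead of the web.
import asyncio
import os

from gpt_researcher import GPTResearcher

os.environ["DOC_PATH"] = "./docs/my-docs"  # folder containing your documents


async def main():
    researcher = GPTResearcher(
        query="What do my documents say about project X?",  # example query
        report_type="research_report",
        report_source="documents",  # read from DOC_PATH rather than the web
    )
    await researcher.conduct_research()
    print(await researcher.write_report())


asyncio.run(main())
```
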
From ce8e4819181c08f4e844c1ce777f673f4365a5c8 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:16:16 +0000 Subject: [PATCH 22/41] link to docker tutorial video --- docs/docs/gpt-researcher/getting-started-with-docker.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/docs/docs/gpt-researcher/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started-with-docker.md index 2efcecfac..ab8b819bc 100644 --- a/docs/docs/gpt-researcher/getting-started-with-docker.md +++ b/docs/docs/gpt-researcher/getting-started-with-docker.md @@ -5,10 +5,7 @@ Follow instructions at https://www.docker.com/products/docker-desktop/ -> **Step 2** - Follow this flow: - - - +> **Step 2** - [Follow this flow](https://www.youtube.com/watch?v=x1gKFt_6Us4) This mainly includes cloning the '.env.example' file, adding your API Keys to the cloned file and saving the file as '.env' From 24dee8d7d9a744fd04c79a615d773c9193ba6cd6 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Tue, 27 Aug 2024 20:30:58 +0000 Subject: [PATCH 23/41] ollama catalogue --- docs/docs/gpt-researcher/deploy-llm-on-elestio.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md index a9da93b3d..245733af9 100644 --- a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md +++ b/docs/docs/gpt-researcher/deploy-llm-on-elestio.md @@ -8,6 +8,13 @@ After deploying the Elestio server, you'll want to enter the [Open WebUI Admin A For our example, let's choose to download the `gemma2:2b` model. +Choose a model from [Ollama's Library of LLM's](https://ollama.com/library?sort=popular) + +Paste the model name & size into the Web UI: + +Screen Shot 2024-08-27 at 23 26 28 + + This model now automatically becomes available via your Server's out-of-the-box API. @@ -26,6 +33,9 @@ LLM_PROVIDER=openai EMBEDDING_PROVIDER=ollama ``` +Replace FAST_LLM_MODEL & SMART_LLM_MODEL with the model you downloaded from the Elestio Web UI in the previous step. 
+ + #### Disable Elestio Authentication or Added Auth Headers To remove the basic auth you have to follow the below steps: From 4742c20a4c3d5841a3f853908df0ee66f246fffd Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:17:20 +0000 Subject: [PATCH 24/41] webhooks docs --- .../gpt-researcher/{ => frontend}/frontend.md | 0 .../frontend/playing-with-webhooks.md | 23 ++++++++++++++ .../getting-started-with-docker.md | 0 .../{ => getting-started}/getting-started.md | 0 .../{ => getting-started}/introduction.md | 0 .../{ => getting-started}/linux-deployment.md | 0 .../{ => gptr}/automated-tests.md | 0 .../docs/gpt-researcher/{ => gptr}/example.md | 0 .../gpt-researcher/{ => gptr}/pip-package.md | 0 .../{ => gptr}/troubleshooting.md | 0 docs/sidebars.js | 31 +++++++++++++------ 11 files changed, 44 insertions(+), 10 deletions(-) rename docs/docs/gpt-researcher/{ => frontend}/frontend.md (100%) create mode 100644 docs/docs/gpt-researcher/frontend/playing-with-webhooks.md rename docs/docs/gpt-researcher/{ => getting-started}/getting-started-with-docker.md (100%) rename docs/docs/gpt-researcher/{ => getting-started}/getting-started.md (100%) rename docs/docs/gpt-researcher/{ => getting-started}/introduction.md (100%) rename docs/docs/gpt-researcher/{ => getting-started}/linux-deployment.md (100%) rename docs/docs/gpt-researcher/{ => gptr}/automated-tests.md (100%) rename docs/docs/gpt-researcher/{ => gptr}/example.md (100%) rename docs/docs/gpt-researcher/{ => gptr}/pip-package.md (100%) rename docs/docs/gpt-researcher/{ => gptr}/troubleshooting.md (100%) diff --git a/docs/docs/gpt-researcher/frontend.md b/docs/docs/gpt-researcher/frontend/frontend.md similarity index 100% rename from docs/docs/gpt-researcher/frontend.md rename to docs/docs/gpt-researcher/frontend/frontend.md diff --git a/docs/docs/gpt-researcher/frontend/playing-with-webhooks.md b/docs/docs/gpt-researcher/frontend/playing-with-webhooks.md new file mode 100644 index 000000000..310613ccb --- /dev/null +++ b/docs/docs/gpt-researcher/frontend/playing-with-webhooks.md @@ -0,0 +1,23 @@ +# Playing with Webhooks + +The GPTR Frontend is powered by Webhooks streaming back from the Backend. This allows for real-time updates on the status of your research tasks, as well as the ability to interact with the Backend directly from the Frontend. + + +## Inspecting Webhooks + +When running reports via the frontend, you can inspect the websocket messages in the Network Tab. + +Here's how: + +![image](https://github.com/user-attachments/assets/15fcb5a4-77ea-4b3b-87d7-55d4b6f80095) + + +### Am I polling the right URL? + +If you're concerned that your frontend isn't hitting the right API Endpoint, you can check the URL in the Network Tab. + +Click into the WS request & go to the "Headers" tab + +![image](https://github.com/user-attachments/assets/dbd58c1d-3506-411a-852b-e1b133b6f5c8) + +For debugging, have a look at the getHost function. 
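Beyond the browser dev tools, you can also watch the same stream from a small script. The sketch below is illustrative only: it assumes the backend is listening on `localhost:8000/ws`, that the `websockets` package is installed, and that the payload fields shown match what your backend version expects, so compare them against the messages you see in the Network tab.

```python
# Sketch: connect to the backend websocket and print every event it streams back.
import asyncio
import json

import websockets  # pip install websockets


async def inspect_stream():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        # Field names below are illustrative.
        await ws.send("start " + json.dumps({
            "task": "What is GPT Researcher?",
            "report_type": "research_report",
        }))
        async for message in ws:
            print(message)


asyncio.run(inspect_stream())
```
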
\ No newline at end of file diff --git a/docs/docs/gpt-researcher/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md similarity index 100% rename from docs/docs/gpt-researcher/getting-started-with-docker.md rename to docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md diff --git a/docs/docs/gpt-researcher/getting-started.md b/docs/docs/gpt-researcher/getting-started/getting-started.md similarity index 100% rename from docs/docs/gpt-researcher/getting-started.md rename to docs/docs/gpt-researcher/getting-started/getting-started.md diff --git a/docs/docs/gpt-researcher/introduction.md b/docs/docs/gpt-researcher/getting-started/introduction.md similarity index 100% rename from docs/docs/gpt-researcher/introduction.md rename to docs/docs/gpt-researcher/getting-started/introduction.md diff --git a/docs/docs/gpt-researcher/linux-deployment.md b/docs/docs/gpt-researcher/getting-started/linux-deployment.md similarity index 100% rename from docs/docs/gpt-researcher/linux-deployment.md rename to docs/docs/gpt-researcher/getting-started/linux-deployment.md diff --git a/docs/docs/gpt-researcher/automated-tests.md b/docs/docs/gpt-researcher/gptr/automated-tests.md similarity index 100% rename from docs/docs/gpt-researcher/automated-tests.md rename to docs/docs/gpt-researcher/gptr/automated-tests.md diff --git a/docs/docs/gpt-researcher/example.md b/docs/docs/gpt-researcher/gptr/example.md similarity index 100% rename from docs/docs/gpt-researcher/example.md rename to docs/docs/gpt-researcher/gptr/example.md diff --git a/docs/docs/gpt-researcher/pip-package.md b/docs/docs/gpt-researcher/gptr/pip-package.md similarity index 100% rename from docs/docs/gpt-researcher/pip-package.md rename to docs/docs/gpt-researcher/gptr/pip-package.md diff --git a/docs/docs/gpt-researcher/troubleshooting.md b/docs/docs/gpt-researcher/gptr/troubleshooting.md similarity index 100% rename from docs/docs/gpt-researcher/troubleshooting.md rename to docs/docs/gpt-researcher/gptr/troubleshooting.md diff --git a/docs/sidebars.js b/docs/sidebars.js index 8d9979e5a..47317ad37 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -18,11 +18,10 @@ collapsible: true, collapsed: false, items: [ - 'gpt-researcher/introduction', - 'gpt-researcher/getting-started-with-docker', - 'gpt-researcher/getting-started', - 'gpt-researcher/frontend', - 'gpt-researcher/linux-deployment', + 'gpt-researcher/getting-started/introduction', + 'gpt-researcher/getting-started/getting-started-with-docker', + 'gpt-researcher/getting-started/getting-started', + 'gpt-researcher/getting-started/linux-deployment', ] }, { @@ -31,12 +30,25 @@ collapsible: true, collapsed: false, items: [ - 'gpt-researcher/pip-package', - 'gpt-researcher/example', - 'gpt-researcher/automated-tests', - 'gpt-researcher/troubleshooting', + 'gpt-researcher/gptr/pip-package', + 'gpt-researcher/gptr/example', + 'gpt-researcher/gptr/automated-tests', + 'gpt-researcher/gptr/troubleshooting', ], }, + + { + type: 'category', + label: 'Frontend', + collapsible: true, + collapsed: false, + items: [ + 'gpt-researcher/frontend', + 'gpt-researcher/playing-with-webhooks', + ], + }, + + { type: 'category', label: 'Custom Context', @@ -58,7 +70,6 @@ 'gpt-researcher/deploy-llm-on-elestio.md' ] }, - { type: 'category', From 38a9e2b75b76b4bbbf0329ce5754cd9a4bd4263a Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:24:22 +0000 Subject: [PATCH 25/41] structured docs --- .../{ => context}/local-docs.md | 0 .../{ 
=> context}/tailored-research.md | 0 .../{ => context}/vector-stores.md | 0 .../{ => customization}/config.md | 0 .../{ => customization}/retrievers.md | 0 .../{ => llms}/deploy-llm-on-elestio.md | 0 docs/docs/gpt-researcher/{ => llms}/llms.md | 0 .../{ => multi_agents}/langgraph.md | 0 docs/docs/{gpt-researcher => }/roadmap.md | 0 docs/sidebars.js | 21 ++++++++++--------- 10 files changed, 11 insertions(+), 10 deletions(-) rename docs/docs/gpt-researcher/{ => context}/local-docs.md (100%) rename docs/docs/gpt-researcher/{ => context}/tailored-research.md (100%) rename docs/docs/gpt-researcher/{ => context}/vector-stores.md (100%) rename docs/docs/gpt-researcher/{ => customization}/config.md (100%) rename docs/docs/gpt-researcher/{ => customization}/retrievers.md (100%) rename docs/docs/gpt-researcher/{ => llms}/deploy-llm-on-elestio.md (100%) rename docs/docs/gpt-researcher/{ => llms}/llms.md (100%) rename docs/docs/gpt-researcher/{ => multi_agents}/langgraph.md (100%) rename docs/docs/{gpt-researcher => }/roadmap.md (100%) diff --git a/docs/docs/gpt-researcher/local-docs.md b/docs/docs/gpt-researcher/context/local-docs.md similarity index 100% rename from docs/docs/gpt-researcher/local-docs.md rename to docs/docs/gpt-researcher/context/local-docs.md diff --git a/docs/docs/gpt-researcher/tailored-research.md b/docs/docs/gpt-researcher/context/tailored-research.md similarity index 100% rename from docs/docs/gpt-researcher/tailored-research.md rename to docs/docs/gpt-researcher/context/tailored-research.md diff --git a/docs/docs/gpt-researcher/vector-stores.md b/docs/docs/gpt-researcher/context/vector-stores.md similarity index 100% rename from docs/docs/gpt-researcher/vector-stores.md rename to docs/docs/gpt-researcher/context/vector-stores.md diff --git a/docs/docs/gpt-researcher/config.md b/docs/docs/gpt-researcher/customization/config.md similarity index 100% rename from docs/docs/gpt-researcher/config.md rename to docs/docs/gpt-researcher/customization/config.md diff --git a/docs/docs/gpt-researcher/retrievers.md b/docs/docs/gpt-researcher/customization/retrievers.md similarity index 100% rename from docs/docs/gpt-researcher/retrievers.md rename to docs/docs/gpt-researcher/customization/retrievers.md diff --git a/docs/docs/gpt-researcher/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/llms/deploy-llm-on-elestio.md similarity index 100% rename from docs/docs/gpt-researcher/deploy-llm-on-elestio.md rename to docs/docs/gpt-researcher/llms/deploy-llm-on-elestio.md diff --git a/docs/docs/gpt-researcher/llms.md b/docs/docs/gpt-researcher/llms/llms.md similarity index 100% rename from docs/docs/gpt-researcher/llms.md rename to docs/docs/gpt-researcher/llms/llms.md diff --git a/docs/docs/gpt-researcher/langgraph.md b/docs/docs/gpt-researcher/multi_agents/langgraph.md similarity index 100% rename from docs/docs/gpt-researcher/langgraph.md rename to docs/docs/gpt-researcher/multi_agents/langgraph.md diff --git a/docs/docs/gpt-researcher/roadmap.md b/docs/docs/roadmap.md similarity index 100% rename from docs/docs/gpt-researcher/roadmap.md rename to docs/docs/roadmap.md diff --git a/docs/sidebars.js b/docs/sidebars.js index 47317ad37..c0b669b3c 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -43,8 +43,8 @@ collapsible: true, collapsed: false, items: [ - 'gpt-researcher/frontend', - 'gpt-researcher/playing-with-webhooks', + 'gpt-researcher/frontend/frontend', + 'gpt-researcher/frontend/playing-with-webhooks', ], }, @@ -55,9 +55,9 @@ collapsible: true, collapsed: false, items: [ - 
'gpt-researcher/tailored-research', - 'gpt-researcher/local-docs', - 'gpt-researcher/vector-stores', + 'gpt-researcher/context/tailored-research', + 'gpt-researcher/context/local-docs', + 'gpt-researcher/context/vector-stores', ] }, { @@ -66,8 +66,8 @@ collapsible: true, collapsed: false, items: [ - 'gpt-researcher/llms', - 'gpt-researcher/deploy-llm-on-elestio.md' + 'gpt-researcher/llms/llms', + 'gpt-researcher/llms/deploy-llm-on-elestio.md' ] }, @@ -77,8 +77,8 @@ collapsible: true, collapsed: true, items: [ - 'gpt-researcher/config', - 'gpt-researcher/retrievers', + 'gpt-researcher/customization/config', + 'gpt-researcher/customization/retrievers', ] }, { @@ -87,11 +87,12 @@ collapsible: true, collapsed: true, items: [ - 'gpt-researcher/langgraph', + 'gpt-researcher/multi_agents/langgraph', ] }, {'Examples': [{type: 'autogenerated', dirName: 'examples'}]}, 'contribute', + 'roadmap', ], // pydoc-markdown auto-generated markdowns from docstrings referenceSideBar: [require("./docs/reference/sidebar.json")] From 1995c9e6d24b54e6af5c3092075b87245411314a Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:25:15 +0000 Subject: [PATCH 26/41] cleanup --- docs/sidebars.js | 4 ---- 1 file changed, 4 deletions(-) diff --git a/docs/sidebars.js b/docs/sidebars.js index c0b669b3c..e01c3e322 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -36,7 +36,6 @@ 'gpt-researcher/gptr/troubleshooting', ], }, - { type: 'category', label: 'Frontend', @@ -47,8 +46,6 @@ 'gpt-researcher/frontend/playing-with-webhooks', ], }, - - { type: 'category', label: 'Custom Context', @@ -70,7 +67,6 @@ 'gpt-researcher/llms/deploy-llm-on-elestio.md' ] }, - { type: 'category', label: 'More Customization', From 79a46a8d342411ae13bf5f987aabf4edfacc2197 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:26:39 +0000 Subject: [PATCH 27/41] FAQ --- docs/docs/faq.md | 7 +++++++ docs/sidebars.js | 1 + 2 files changed, 8 insertions(+) diff --git a/docs/docs/faq.md b/docs/docs/faq.md index 9d8e54902..b6e979920 100644 --- a/docs/docs/faq.md +++ b/docs/docs/faq.md @@ -6,20 +6,27 @@ It really depends on what you're aiming for. If you're looking to connect your AI application to the internet with Tavily tailored API, check out the [Tavily API](https://docs.tavily.com/docs/tavily-api/introductionn) documentation. If you're looking to build and deploy our open source autonomous research agent GPT Researcher, please see [GPT Researcher](/docs/gpt-researcher/introduction) documentation. You can also check out demos and examples for inspiration [here](/docs/examples/examples). + ### What is GPT Researcher? + GPT Researcher is a popular open source autonomous research agent that takes care of the tedious task of research for you, by scraping, filtering and aggregating over 20+ web sources per a single research task. GPT Researcher is built with best practices for leveraging LLMs (prompt engineering, RAG, chains, embeddings, etc), and is optimized for quick and efficient research. It is also fully customizable and can be tailored to your specific needs. To learn more about GPT Researcher, check out the [documentation page](/docs/gpt-researcher/introduction). + ### How much does each research run cost? + A research task using GPT Researcher costs around $0.01 per a single run (for GPT-4 usage). We're constantly optimizing LLM calls to reduce costs and improve performance. + ### How do you ensure the report is factual and accurate? + we do our best to ensure that the information we provide is factual and accurate. 
We do this by using multiple sources, and by using proprietary AI to score and rank the most relevant and accurate information. We also use proprietary AI to filter out irrelevant information and sources. Lastly, by using RAG and other techniques, we ensure that the information is relevant to the context of the research task, leading to more accurate generative AI content and reduced hallucinations. ### What are your plans for the future? + We're constantly working on improving our products and services. We're currently working on improving our search API together with design partners, and adding more data sources to our search engine. We're also working on improving our research agent GPT Researcher, and adding more features to it while growing our amazing open source community. If you're interested in our roadmap or looking to collaborate, check out our [roadmap page](https://trello.com/b/3O7KBePw/gpt-researcher-roadmap). diff --git a/docs/sidebars.js b/docs/sidebars.js index e01c3e322..ea2a47fba 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -89,6 +89,7 @@ {'Examples': [{type: 'autogenerated', dirName: 'examples'}]}, 'contribute', 'roadmap', + 'faq', ], // pydoc-markdown auto-generated markdowns from docstrings referenceSideBar: [require("./docs/reference/sidebar.json")] From c6009523a4b9eb6b0385c50ece4de74032053aaf Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:32:47 +0000 Subject: [PATCH 28/41] cleaned ollama docs - with example env of running ollama locally --- ...m-on-elestio.md => running-with-ollama.md} | 41 ++++++++++++++----- 1 file changed, 30 insertions(+), 11 deletions(-) rename docs/docs/gpt-researcher/llms/{deploy-llm-on-elestio.md => running-with-ollama.md} (71%) diff --git a/docs/docs/gpt-researcher/llms/deploy-llm-on-elestio.md b/docs/docs/gpt-researcher/llms/running-with-ollama.md similarity index 71% rename from docs/docs/gpt-researcher/llms/deploy-llm-on-elestio.md rename to docs/docs/gpt-researcher/llms/running-with-ollama.md index 245733af9..13474a502 100644 --- a/docs/docs/gpt-researcher/llms/deploy-llm-on-elestio.md +++ b/docs/docs/gpt-researcher/llms/running-with-ollama.md @@ -1,10 +1,28 @@ -# Deploy LLM on Elestio +# Running with Ollama -Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. +Ollama is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Ollama. + +Read on to understand how to install a Custom LLM with the Ollama WebUI, and how to query it with GPT-Researcher. -You can deploy an [Open WebUI](https://github.com/open-webui/open-webui/tree/main) server with [Elestio](https://elest.io/open-source/ollama) -After deploying the Elestio server, you'll want to enter the [Open WebUI Admin App](https://github.com/open-webui/open-webui/tree/main) & download a custom LLM. +## Querying your Custom LLM with GPT-Researcher + +If you deploy ollama locally, a .env like so, should enable powering GPT-Researcher with Ollama: + +```bash +OPENAI_API_KEY="123" +OPENAI_API_BASE="http://127.0.0.1:11434/v1" +OLLAMA_BASE_URL="http://127.0.0.1:11434/" +FAST_LLM_MODEL=gemma2:2b +SMART_LLM_MODEL=gemma2:2b +OLLAMA_EMBEDDING_MODEL=all-minilm +LLM_PROVIDER=openai +EMBEDDING_PROVIDER=ollama +``` + +Replace FAST_LLM_MODEL & SMART_LLM_MODEL with the model you downloaded from the Elestio Web UI in the previous step. 
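+
+As a quick, optional sanity check (assuming the default `http://127.0.0.1:11434` base URL used in the .env above), you can list the models your Ollama instance currently serves before wiring them into the config:
+
+```bash
+# List the models Ollama has pulled, via its model-listing endpoint
+curl http://127.0.0.1:11434/api/tags
+
+# Or, if the Ollama CLI is installed locally:
+ollama list
+```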
+ +After deploying Ollama WebUI, you'll want to enter the [Open WebUI Admin App](https://github.com/open-webui/open-webui/tree/main) & download a custom LLM. For our example, let's choose to download the `gemma2:2b` model. @@ -18,9 +36,13 @@ Paste the model name & size into the Web UI: This model now automatically becomes available via your Server's out-of-the-box API. -### Querying your Custom LLM with GPT-Researcher +## Deploy Ollama on Elestio -Here's the .env file you'll need to query your custom LLM with GPT-Researcher: +Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. + +You can deploy an [Open WebUI](https://github.com/open-webui/open-webui/tree/main) server with [Elestio](https://elest.io/open-source/ollama) + +Here's an example .env file that will enable powering GPT-Researcher with Elestio: ```bash OPENAI_API_KEY="123" @@ -33,10 +55,7 @@ LLM_PROVIDER=openai EMBEDDING_PROVIDER=ollama ``` -Replace FAST_LLM_MODEL & SMART_LLM_MODEL with the model you downloaded from the Elestio Web UI in the previous step. - - -#### Disable Elestio Authentication or Added Auth Headers +#### Disable Elestio Authentication or Add Auth Headers To remove the basic auth you have to follow the below steps: Go to your service -> Security -> at last Nginx -> in that find the below code: @@ -50,7 +69,7 @@ auth_basic_user_file /etc/nginx/conf.d/.htpasswd; Comment these both these lines out and click the button "Update & Restart" to reflect the changes. -#### Run LLM Test Script for GPTR +## Run LLM Test Script for GPTR And here's a custom python script you can use to query your custom LLM: From 73bac92c8a7030405f845a2cdf1c0733faff0e8c Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:33:37 +0000 Subject: [PATCH 29/41] ollama docs --- docs/sidebars.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sidebars.js b/docs/sidebars.js index ea2a47fba..e6cc8813d 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -64,7 +64,7 @@ collapsed: false, items: [ 'gpt-researcher/llms/llms', - 'gpt-researcher/llms/deploy-llm-on-elestio.md' + 'gpt-researcher/llms/running-with-ollama.md' ] }, { From 21d22b9e3a02d09a0af8e1070e5b0e2a83228228 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:39:51 +0000 Subject: [PATCH 30/41] faster ollama examples in docs --- docs/docs/gpt-researcher/llms/running-with-ollama.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/docs/gpt-researcher/llms/running-with-ollama.md b/docs/docs/gpt-researcher/llms/running-with-ollama.md index 13474a502..c33ba2153 100644 --- a/docs/docs/gpt-researcher/llms/running-with-ollama.md +++ b/docs/docs/gpt-researcher/llms/running-with-ollama.md @@ -13,9 +13,9 @@ If you deploy ollama locally, a .env like so, should enable powering GPT-Researc OPENAI_API_KEY="123" OPENAI_API_BASE="http://127.0.0.1:11434/v1" OLLAMA_BASE_URL="http://127.0.0.1:11434/" -FAST_LLM_MODEL=gemma2:2b -SMART_LLM_MODEL=gemma2:2b -OLLAMA_EMBEDDING_MODEL=all-minilm +FAST_LLM_MODEL=qwen2:1.5b +SMART_LLM_MODEL=qwen2:1.5b +OLLAMA_EMBEDDING_MODEL=all-minilm:22m LLM_PROVIDER=openai EMBEDDING_PROVIDER=ollama ``` @@ -48,9 +48,9 @@ Here's an example .env file that will enable powering GPT-Researcher with Elesti OPENAI_API_KEY="123" OPENAI_API_BASE="https://.vm.elestio.app:57987/v1" OLLAMA_BASE_URL="https://.vm.elestio.app:57987/" -FAST_LLM_MODEL=gemma2:2b -SMART_LLM_MODEL=gemma2:2b 
-OLLAMA_EMBEDDING_MODEL=all-minilm +FAST_LLM_MODEL=qwen2:1.5b +SMART_LLM_MODEL=qwen2:1.5b +OLLAMA_EMBEDDING_MODEL=all-minilm:22m LLM_PROVIDER=openai EMBEDDING_PROVIDER=ollama ``` From 1d2fee43aaaec94bd7b733f2bed19cc3dc8668b6 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:45:11 +0000 Subject: [PATCH 31/41] added Product Tutorial link --- docs/docs/gpt-researcher/frontend/frontend.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/docs/gpt-researcher/frontend/frontend.md b/docs/docs/gpt-researcher/frontend/frontend.md index 6e3949d7e..10cee0078 100644 --- a/docs/docs/gpt-researcher/frontend/frontend.md +++ b/docs/docs/gpt-researcher/frontend/frontend.md @@ -2,10 +2,12 @@ This frontend project aims to enhance the user experience of GPT-Researcher, providing an intuitive and efficient interface for automated research. It offers two deployment options to suit different needs and environments. +View a Product Tutorial here: [GPT-Researcher Frontend Tutorial](https://www.youtube.com/watch?v=hIZqA6lPusk) + ## NextJS Frontend App -The React app (located in `frontend` directory) is our Frontend 2.0 which we hope will enable us to display the robustness of the backend on the frontend, as well. +The React app (located in the `frontend` directory) is our Frontend 2.0 which we hope will enable us to display the robustness of the backend on the frontend, as well. It comes with loads of added features, such as: - a drag-n-drop user interface for uploading and deleting files to be used as local documents by GPTResearcher. From ccc051a8d46e0bc9e9b3967cdee2ecfd81f98c9a Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 04:53:59 +0000 Subject: [PATCH 32/41] cleaner ollama docs --- .../llms/running-with-ollama.md | 84 ++++++++++--------- 1 file changed, 45 insertions(+), 39 deletions(-) diff --git a/docs/docs/gpt-researcher/llms/running-with-ollama.md b/docs/docs/gpt-researcher/llms/running-with-ollama.md index c33ba2153..cbbe5509b 100644 --- a/docs/docs/gpt-researcher/llms/running-with-ollama.md +++ b/docs/docs/gpt-researcher/llms/running-with-ollama.md @@ -5,49 +5,29 @@ Ollama is a platform that allows you to deploy and manage custom language models Read on to understand how to install a Custom LLM with the Ollama WebUI, and how to query it with GPT-Researcher. -## Querying your Custom LLM with GPT-Researcher - -If you deploy ollama locally, a .env like so, should enable powering GPT-Researcher with Ollama: - -```bash -OPENAI_API_KEY="123" -OPENAI_API_BASE="http://127.0.0.1:11434/v1" -OLLAMA_BASE_URL="http://127.0.0.1:11434/" -FAST_LLM_MODEL=qwen2:1.5b -SMART_LLM_MODEL=qwen2:1.5b -OLLAMA_EMBEDDING_MODEL=all-minilm:22m -LLM_PROVIDER=openai -EMBEDDING_PROVIDER=ollama -``` - -Replace FAST_LLM_MODEL & SMART_LLM_MODEL with the model you downloaded from the Elestio Web UI in the previous step. +## Fetching the Desired LLM Models After deploying Ollama WebUI, you'll want to enter the [Open WebUI Admin App](https://github.com/open-webui/open-webui/tree/main) & download a custom LLM. -For our example, let's choose to download the `gemma2:2b` model. - Choose a model from [Ollama's Library of LLM's](https://ollama.com/library?sort=popular) Paste the model name & size into the Web UI: Screen Shot 2024-08-27 at 23 26 28 +For our example, let's choose to download the `qwen2:1.5b` model. -This model now automatically becomes available via your Server's out-of-the-box API. 
+This model now automatically becomes available via your Server's out-of-the-box API - we'll leverage it within our GPT-Researcher .env file in the next step. -## Deploy Ollama on Elestio - -Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. - -You can deploy an [Open WebUI](https://github.com/open-webui/open-webui/tree/main) server with [Elestio](https://elest.io/open-source/ollama) +## Querying your Custom LLM with GPT-Researcher -Here's an example .env file that will enable powering GPT-Researcher with Elestio: +If you deploy ollama locally, a .env like so, should enable powering GPT-Researcher with Ollama: ```bash OPENAI_API_KEY="123" -OPENAI_API_BASE="https://.vm.elestio.app:57987/v1" -OLLAMA_BASE_URL="https://.vm.elestio.app:57987/" +OPENAI_API_BASE="http://127.0.0.1:11434/v1" +OLLAMA_BASE_URL="http://127.0.0.1:11434/" FAST_LLM_MODEL=qwen2:1.5b SMART_LLM_MODEL=qwen2:1.5b OLLAMA_EMBEDDING_MODEL=all-minilm:22m @@ -55,18 +35,7 @@ LLM_PROVIDER=openai EMBEDDING_PROVIDER=ollama ``` -#### Disable Elestio Authentication or Add Auth Headers - -To remove the basic auth you have to follow the below steps: -Go to your service -> Security -> at last Nginx -> in that find the below code: - -```bash -auth_basic "Authentication"; - -auth_basic_user_file /etc/nginx/conf.d/.htpasswd; -``` - -Comment these both these lines out and click the button "Update & Restart" to reflect the changes. +Replace FAST_LLM_MODEL & SMART_LLM_MODEL with the model you downloaded from the Elestio Web UI in the previous step. ## Run LLM Test Script for GPTR @@ -112,3 +81,40 @@ asyncio.run(test_ollama()) ``` +Replace `OLLAMA_BASE_URL` with the URL of your Ollama instance, and `FAST_LLM_MODEL` with the model you downloaded from the Ollama Web UI. + +Run the script to test the connection with your custom LLM. + + +## Deploy Ollama on Elestio + +Elestio is a platform that allows you to deploy and manage custom language models. This guide will walk you through deploying a custom language model on Elestio. + +You can deploy an [Open WebUI](https://github.com/open-webui/open-webui/tree/main) server with [Elestio](https://elest.io/open-source/ollama) + +Here's an example .env file that will enable powering GPT-Researcher with Elestio: + +```bash +OPENAI_API_KEY="123" +OPENAI_API_BASE="https://.vm.elestio.app:57987/v1" +OLLAMA_BASE_URL="https://.vm.elestio.app:57987/" +FAST_LLM_MODEL=qwen2:1.5b +SMART_LLM_MODEL=qwen2:1.5b +OLLAMA_EMBEDDING_MODEL=all-minilm:22m +LLM_PROVIDER=openai +EMBEDDING_PROVIDER=ollama +``` + +#### Disable Elestio Authentication or Add Auth Headers + +To remove the basic auth you have to follow the below steps: +Go to your service -> Security -> at last Nginx -> in that find the below code: + +```bash +auth_basic "Authentication"; + +auth_basic_user_file /etc/nginx/conf.d/.htpasswd; +``` + +Comment these both these lines out and click the button "Update & Restart" to reflect the changes. 
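+
+After commenting them out, that block of the Nginx config should look roughly like this (a sketch based on the two lines quoted above):
+
+```bash
+# auth_basic "Authentication";
+
+# auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
+```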
+ From b66e99340ec9ee36cea25f227a94ecaed7f9a115 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 05:04:22 +0000 Subject: [PATCH 33/41] consolidate docs --- README.md | 2 +- {examples => docs/docs/examples}/pip-run.ipynb | 0 {examples => docs/docs/examples}/sample_report.py | 0 {examples => docs/docs/examples}/sample_sources_only.py | 0 docs/docs/gpt-researcher/getting-started/introduction.md | 2 +- docs/docs/gpt-researcher/gptr/pip-package.md | 2 +- 6 files changed, 3 insertions(+), 3 deletions(-) rename {examples => docs/docs/examples}/pip-run.ipynb (100%) rename {examples => docs/docs/examples}/sample_report.py (100%) rename {examples => docs/docs/examples}/sample_sources_only.py (100%) diff --git a/README.md b/README.md index f75e4fd00..05d133429 100644 --- a/README.md +++ b/README.md @@ -17,7 +17,7 @@ [![PyPI version](https://img.shields.io/pypi/v/gpt-researcher?logo=pypi&logoColor=white&style=flat)](https://badge.fury.io/py/gpt-researcher) ![GitHub Release](https://img.shields.io/github/v/release/assafelovic/gpt-researcher?style=flat&logo=github) -[![Open In Colab](https://img.shields.io/static/v1?message=Open%20in%20Colab&logo=googlecolab&labelColor=grey&color=yellow&label=%20&style=flat&logoSize=40)](https://colab.research.google.com/github/assafelovic/gpt-researcher/blob/master/examples/pip-run.ipynb) +[![Open In Colab](https://img.shields.io/static/v1?message=Open%20in%20Colab&logo=googlecolab&labelColor=grey&color=yellow&label=%20&style=flat&logoSize=40)](https://colab.research.google.com/github/assafelovic/gpt-researcher/blob/master/docs/docs/examples/pip-run.ipynb) [![Docker Image Version](https://img.shields.io/docker/v/elestio/gpt-researcher/latest?arch=amd64&style=flat&logo=docker&logoColor=white&color=1D63ED)](https://hub.docker.com/r/gptresearcher/gpt-researcher) [![Twitter Follow](https://img.shields.io/twitter/follow/assaf_elovic?style=social)](https://twitter.com/assaf_elovic) diff --git a/examples/pip-run.ipynb b/docs/docs/examples/pip-run.ipynb similarity index 100% rename from examples/pip-run.ipynb rename to docs/docs/examples/pip-run.ipynb diff --git a/examples/sample_report.py b/docs/docs/examples/sample_report.py similarity index 100% rename from examples/sample_report.py rename to docs/docs/examples/sample_report.py diff --git a/examples/sample_sources_only.py b/docs/docs/examples/sample_sources_only.py similarity index 100% rename from examples/sample_sources_only.py rename to docs/docs/examples/sample_sources_only.py diff --git a/docs/docs/gpt-researcher/getting-started/introduction.md b/docs/docs/gpt-researcher/getting-started/introduction.md index 4cdb193c1..0d19b9b4e 100644 --- a/docs/docs/gpt-researcher/getting-started/introduction.md +++ b/docs/docs/gpt-researcher/getting-started/introduction.md @@ -6,7 +6,7 @@ [![GitHub Repo stars](https://img.shields.io/github/stars/assafelovic/gpt-researcher?style=social)](https://github.com/assafelovic/gpt-researcher) [![Twitter Follow](https://img.shields.io/twitter/follow/assaf_elovic?style=social)](https://twitter.com/assaf_elovic) [![PyPI version](https://badge.fury.io/py/gpt-researcher.svg)](https://badge.fury.io/py/gpt-researcher) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/assafelovic/gpt-researcher/blob/master/examples/pip-run.ipynb) +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/assafelovic/gpt-researcher/blob/master/docs/docs/examples/pip-run.ipynb) 
**[GPT Researcher](https://gptr.dev) is an autonomous agent designed for comprehensive online research on a variety of tasks.** diff --git a/docs/docs/gpt-researcher/gptr/pip-package.md b/docs/docs/gpt-researcher/gptr/pip-package.md index ef3fbea1d..0e2093ae0 100644 --- a/docs/docs/gpt-researcher/gptr/pip-package.md +++ b/docs/docs/gpt-researcher/gptr/pip-package.md @@ -1,6 +1,6 @@ # PIP Package [![PyPI version](https://badge.fury.io/py/gpt-researcher.svg)](https://badge.fury.io/py/gpt-researcher) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/assafelovic/gpt-researcher/blob/master/examples/pip-run.ipynb) +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/assafelovic/gpt-researcher/blob/master/docs/docs/examples/pip-run.ipynb) ๐ŸŒŸ **Exciting News!** Now, you can integrate `gpt-researcher` with your apps seamlessly! From f5275e7bb3b799ec25ffac471168cca0143be655 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Wed, 28 Aug 2024 05:11:24 +0000 Subject: [PATCH 34/41] cleanup --- docs/sidebars.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sidebars.js b/docs/sidebars.js index e6cc8813d..88c6e33b1 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -64,7 +64,7 @@ collapsed: false, items: [ 'gpt-researcher/llms/llms', - 'gpt-researcher/llms/running-with-ollama.md' + 'gpt-researcher/llms/running-with-ollama' ] }, { From 742475fa54ad37bc298e835299c2f5fcc6903767 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sat, 7 Sep 2024 19:56:41 +0000 Subject: [PATCH 35/41] fixed main readme links to new docs directory structure --- README.md | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 05d133429..2410a4281 100644 --- a/README.md +++ b/README.md @@ -108,7 +108,7 @@ export TAVILY_API_KEY={Your Tavily API Key here} For a more permanent setup, create a `.env` file in the current `gpt-researcher` directory and input the env vars (without `export`). - The default LLM is [GPT](https://platform.openai.com/docs/guides/gpt), but you can use other LLMs such as `claude`, `ollama3`, `gemini`, `mistral` and more. To learn how to change the LLM provider, see the [LLMs documentation](https://docs.gptr.dev/docs/gpt-researcher/llms) page. Please note: this project is optimized for OpenAI GPT models. -- The default retriever is [Tavily](https://app.tavily.com), but you can refer to other retrievers such as `duckduckgo`, `google`, `bing`, `serper`, `searx`, `arxiv`, `exa` and more. To learn how to change the search provider, see the [retrievers documentation](https://docs.gptr.dev/docs/gpt-researcher/retrievers) page. +- The default retriever is [Tavily](https://app.tavily.com), but you can refer to other retrievers such as `duckduckgo`, `google`, `bing`, `serper`, `searx`, `arxiv`, `exa` and more. To learn how to change the search provider, see the [retrievers documentation](https://docs.gptr.dev/docs/gpt-researcher/customization/retrievers) page. ### Quickstart @@ -128,7 +128,7 @@ python -m uvicorn main:app --reload
-**To learn how to get started with [Poetry](https://docs.gptr.dev/docs/gpt-researcher/getting-started#poetry) or a [virtual environment](https://docs.gptr.dev/docs/gpt-researcher/getting-started#virtual-environment) check out the [documentation](https://docs.gptr.dev/docs/gpt-researcher/getting-started) page.** +**To learn how to get started with [Poetry](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started#poetry) or a [virtual environment](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started#virtual-environment) check out the [documentation](https://docs.gptr.dev/docs/gpt-researcher/getting-started) page.** ### Run as PIP package ```bash @@ -148,21 +148,27 @@ report = await researcher.write_report() ... ``` -**For more examples and configurations, please refer to the [PIP documentation](https://docs.gptr.dev/docs/gpt-researcher/pip-package) page.** +**For more examples and configurations, please refer to the [PIP documentation](https://docs.gptr.dev/docs/gpt-researcher/gptr/pip-package) page.** ## Run with Docker -> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started#try-it-with-docker) +> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started-with-docker) > **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env' > **Step 3** - Within the docker-compose file comment out services that you don't want to run with Docker. ```bash -$ docker-compose up --build +docker-compose up --build ``` +If that doesn't work, try running it without the dash: +```bash +docker compose up --build +``` + + > **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: - the Python server running on localhost:8000
- the React app running on localhost:3000
@@ -193,7 +199,7 @@ By using LangGraph, the research process can be significantly improved in depth An average run generates a 5-6 page research report in multiple formats such as PDF, Docx and Markdown. -Check it out [here](https://github.com/assafelovic/gpt-researcher/tree/master/multi_agents) or head over to our [documentation](https://docs.gptr.dev/docs/gpt-researcher/langgraph) for more information. +Check it out [here](https://github.com/assafelovic/gpt-researcher/tree/master/multi_agents) or head over to our [documentation](https://docs.gptr.dev/docs/gpt-researcher/multi_agents/langgraph) for more information. ## ๐Ÿ–ฅ๏ธ Frontend Applications From 8dd4d26e5e47351d1e42c5f8007888a3b12fe8aa Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sat, 7 Sep 2024 20:03:25 +0000 Subject: [PATCH 36/41] search-engines folder in docs --- README.md | 2 +- docs/docs/gpt-researcher/{customization => gptr}/config.md | 0 .../{customization => search-engines}/retrievers.md | 0 docs/sidebars.js | 6 +++--- 4 files changed, 4 insertions(+), 4 deletions(-) rename docs/docs/gpt-researcher/{customization => gptr}/config.md (100%) rename docs/docs/gpt-researcher/{customization => search-engines}/retrievers.md (100%) diff --git a/README.md b/README.md index 2410a4281..f5379da4b 100644 --- a/README.md +++ b/README.md @@ -108,7 +108,7 @@ export TAVILY_API_KEY={Your Tavily API Key here} For a more permanent setup, create a `.env` file in the current `gpt-researcher` directory and input the env vars (without `export`). - The default LLM is [GPT](https://platform.openai.com/docs/guides/gpt), but you can use other LLMs such as `claude`, `ollama3`, `gemini`, `mistral` and more. To learn how to change the LLM provider, see the [LLMs documentation](https://docs.gptr.dev/docs/gpt-researcher/llms) page. Please note: this project is optimized for OpenAI GPT models. -- The default retriever is [Tavily](https://app.tavily.com), but you can refer to other retrievers such as `duckduckgo`, `google`, `bing`, `serper`, `searx`, `arxiv`, `exa` and more. To learn how to change the search provider, see the [retrievers documentation](https://docs.gptr.dev/docs/gpt-researcher/customization/retrievers) page. +- The default retriever is [Tavily](https://app.tavily.com), but you can refer to other retrievers such as `duckduckgo`, `google`, `bing`, `serper`, `searx`, `arxiv`, `exa` and more. To learn how to change the search provider, see the [retrievers documentation](https://docs.gptr.dev/docs/gpt-researcher/search-engines/retrievers) page. 
### Quickstart diff --git a/docs/docs/gpt-researcher/customization/config.md b/docs/docs/gpt-researcher/gptr/config.md similarity index 100% rename from docs/docs/gpt-researcher/customization/config.md rename to docs/docs/gpt-researcher/gptr/config.md diff --git a/docs/docs/gpt-researcher/customization/retrievers.md b/docs/docs/gpt-researcher/search-engines/retrievers.md similarity index 100% rename from docs/docs/gpt-researcher/customization/retrievers.md rename to docs/docs/gpt-researcher/search-engines/retrievers.md diff --git a/docs/sidebars.js b/docs/sidebars.js index 88c6e33b1..0bf3fe1fc 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -32,6 +32,7 @@ items: [ 'gpt-researcher/gptr/pip-package', 'gpt-researcher/gptr/example', + 'gpt-researcher/gptr/config', 'gpt-researcher/gptr/automated-tests', 'gpt-researcher/gptr/troubleshooting', ], @@ -69,12 +70,11 @@ }, { type: 'category', - label: 'More Customization', + label: 'Search Engines', collapsible: true, collapsed: true, items: [ - 'gpt-researcher/customization/config', - 'gpt-researcher/customization/retrievers', + 'gpt-researcher/search-engines/retrievers', ] }, { From ebb76ac31e0b1f50aa8681fe00ef7cef5c531fb0 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sat, 7 Sep 2024 20:22:46 +0000 Subject: [PATCH 37/41] querying the backend with websockets --- .env.example | 2 +- README.md | 2 +- .../docs/gpt-researcher/context/local-docs.md | 2 +- .../gptr/querying-the-backend.md | 106 ++++++++++++++++++ docs/sidebars.js | 1 + 5 files changed, 110 insertions(+), 3 deletions(-) create mode 100644 docs/docs/gpt-researcher/gptr/querying-the-backend.md diff --git a/.env.example b/.env.example index edb96a977..f86255e5a 100644 --- a/.env.example +++ b/.env.example @@ -1,3 +1,3 @@ OPENAI_API_KEY= TAVILY_API_KEY= -DOC_PATH=./docs/my-docs \ No newline at end of file +DOC_PATH=./my-docs \ No newline at end of file diff --git a/README.md b/README.md index f5379da4b..a8b66818d 100644 --- a/README.md +++ b/README.md @@ -184,7 +184,7 @@ You can instruct the GPT Researcher to run research tasks based on your local do Step 1: Add the env variable `DOC_PATH` pointing to the folder where your documents are located. ```bash -export DOC_PATH="./docs/my-docs" +export DOC_PATH="./my-docs" ``` Step 2: diff --git a/docs/docs/gpt-researcher/context/local-docs.md b/docs/docs/gpt-researcher/context/local-docs.md index 9c8115c5a..31f53277d 100644 --- a/docs/docs/gpt-researcher/context/local-docs.md +++ b/docs/docs/gpt-researcher/context/local-docs.md @@ -5,7 +5,7 @@ You can instruct the GPT Researcher to run research tasks based on your local do Step 1: Add the env variable `DOC_PATH` pointing to the folder where your documents are located. ```bash -export DOC_PATH="./docs/my-docs" +export DOC_PATH="./my-docs" ``` Step 2: diff --git a/docs/docs/gpt-researcher/gptr/querying-the-backend.md b/docs/docs/gpt-researcher/gptr/querying-the-backend.md new file mode 100644 index 000000000..0bee895a6 --- /dev/null +++ b/docs/docs/gpt-researcher/gptr/querying-the-backend.md @@ -0,0 +1,106 @@ +# Querying the Backend + +## Introduction + +In this section, we will discuss how to query the GPTR backend server. The GPTR backend server is a Python server that runs the GPTR Python package. The server listens for WebSocket connections and processes incoming messages to generate reports, streaming back logs and results to the client. + +An example WebSocket client is implemented in the `gptr-webhook.js` file below. 
+ +This function sends a Webhook Message to the GPTR Python backend running on localhost:8000, but this example can also be modified to query a [GPTR Server hosted on Linux](https://docs.gptr.dev/docs/gpt-researcher/getting-started/linux-deployment). + +// gptr-webhook.js + +```javascript + +const WebSocket = require('ws'); + +let socket = null; +let responseCallback = null; + +async function initializeWebSocket() { + if (!socket) { + const host = 'localhost:8000'; + const ws_uri = `ws://${host}/ws`; + + socket = new WebSocket(ws_uri); + + socket.onopen = () => { + console.log('WebSocket connection established'); + }; + + socket.onmessage = (event) => { + const data = JSON.parse(event.data); + console.log('WebSocket data received:', data); + + if (data.content === 'dev_team_result' + && data.output.rubber_ducker_thoughts != undefined + && data.output.tech_lead_review != undefined) { + if (responseCallback) { + responseCallback(data.output); + responseCallback = null; // Clear callback after use + } + } else { + console.log('Received data:', data); + } + }; + + socket.onclose = () => { + console.log('WebSocket connection closed'); + socket = null; + }; + + socket.onerror = (error) => { + console.error('WebSocket error:', error); + }; + } +} + +async function sendWebhookMessage(message) { + return new Promise((resolve, reject) => { + if (!socket || socket.readyState !== WebSocket.OPEN) { + initializeWebSocket(); + } + + const data = { + task: message, + report_type: 'dev_team', + report_source: 'web', + tone: 'Objective', + headers: {}, + repo_name: 'elishakay/gpt-researcher' + }; + + const payload = "start " + JSON.stringify(data); + + responseCallback = (response) => { + resolve(response); // Resolve the promise with the WebSocket response + }; + + if (socket.readyState === WebSocket.OPEN) { + socket.send(payload); + console.log('Message sent:', payload); + } else { + socket.onopen = () => { + socket.send(payload); + console.log('Message sent after connection:', payload); + }; + } + }); +} + +module.exports = { + sendWebhookMessage +}; +``` + +And here's how you can leverage this helper function: + +```javascript +const { sendWebhookMessage } = require('./gptr-webhook'); + +async function main() { + const message = 'What are the thoughts of the rubber duck?'; + const response = await sendWebhookMessage(message); + console.log('Response:', response); +} +``` \ No newline at end of file diff --git a/docs/sidebars.js b/docs/sidebars.js index 0bf3fe1fc..688f47cf0 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -32,6 +32,7 @@ items: [ 'gpt-researcher/gptr/pip-package', 'gpt-researcher/gptr/example', + 'gpt-researcher/gptr/querying-the-backend', 'gpt-researcher/gptr/config', 'gpt-researcher/gptr/automated-tests', 'gpt-researcher/gptr/troubleshooting', From d131301ef3fe7bacf68adb9df078f1bf385c605e Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sat, 7 Sep 2024 20:43:49 +0000 Subject: [PATCH 38/41] filtering by domain & link to hybrid blog in docs --- .../context/filtering-by-domain.md | 22 +++++++++++++++++++ .../docs/gpt-researcher/context/local-docs.md | 9 ++++++++ .../gptr/querying-the-backend.md | 2 +- docs/sidebars.js | 1 + 4 files changed, 33 insertions(+), 1 deletion(-) create mode 100644 docs/docs/gpt-researcher/context/filtering-by-domain.md diff --git a/docs/docs/gpt-researcher/context/filtering-by-domain.md b/docs/docs/gpt-researcher/context/filtering-by-domain.md new file mode 100644 index 000000000..c413c915a --- /dev/null +++ 
b/docs/docs/gpt-researcher/context/filtering-by-domain.md
@@ -0,0 +1,22 @@
+# Filtering by Domain
+
+If you set Google as a Retriever, you can filter web results by site.
+
+For example, passing `query="site:linkedin.com a python web developer to implement my custom gpt-researcher flow"` to the GPTResearcher class instance will limit the results to linkedin.com.
+
+> **Step 1** - Set these environment variables with a .env file in the root folder
+
+TAVILY_API_KEY=
+LANGCHAIN_TRACING_V2=true
+LANGCHAIN_API_KEY=
+OPENAI_API_KEY=
+DOC_PATH=./my-docs
+RETRIEVER=google
+GOOGLE_API_KEY=
+GOOGLE_CX_KEY=
+
+> **Step 2** - From the project root, run:
+
+docker-compose up --build
+
+> **Step 3** - From the frontend input box on localhost:3000, you can append any Google search filter (such as filtering by domain names)
\ No newline at end of file
diff --git a/docs/docs/gpt-researcher/context/local-docs.md b/docs/docs/gpt-researcher/context/local-docs.md
index 31f53277d..d07ae3dc8 100644
--- a/docs/docs/gpt-researcher/context/local-docs.md
+++ b/docs/docs/gpt-researcher/context/local-docs.md
@@ -1,5 +1,7 @@
 # ๐Ÿ“„ Research on Local Documents
 
+## Just Local Docs
+
 You can instruct the GPT Researcher to run research tasks based on your local documents. Currently supported file formats are: PDF, plain text, CSV, Excel, Markdown, PowerPoint, and Word documents.
 
 Step 1: Add the env variable `DOC_PATH` pointing to the folder where your documents are located.
@@ -11,3 +13,10 @@ export DOC_PATH="./my-docs"
 Step 2:
  - If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options.
  - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "documents" when you instantiate the `GPTResearcher` class [code sample here](https://docs.tavily.com/docs/gpt-researcher/tailored-research).
+
+## Local Docs + Web (Hybrid)
+
+![GPT Researcher hybrid research](./gptr-hybrid.png)
+
+Check out the blog post on [Hybrid Research](https://docs.gptr.dev/blog/gptr-hybrid) to learn more about how to combine local documents with web research.
+``` \ No newline at end of file diff --git a/docs/docs/gpt-researcher/gptr/querying-the-backend.md b/docs/docs/gpt-researcher/gptr/querying-the-backend.md index 0bee895a6..b499ef6cb 100644 --- a/docs/docs/gpt-researcher/gptr/querying-the-backend.md +++ b/docs/docs/gpt-researcher/gptr/querying-the-backend.md @@ -99,7 +99,7 @@ And here's how you can leverage this helper function: const { sendWebhookMessage } = require('./gptr-webhook'); async function main() { - const message = 'What are the thoughts of the rubber duck?'; + const message = 'How do I get started with GPT-Researcher Websockets?'; const response = await sendWebhookMessage(message); console.log('Response:', response); } diff --git a/docs/sidebars.js b/docs/sidebars.js index 688f47cf0..448b8a3a7 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -56,6 +56,7 @@ items: [ 'gpt-researcher/context/tailored-research', 'gpt-researcher/context/local-docs', + 'gpt-researcher/context/filtering-by-domain', 'gpt-researcher/context/vector-stores', ] }, From 563ccface21c0ad1d94b6bb831bd02c9e3223f68 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sat, 7 Sep 2024 21:01:36 +0000 Subject: [PATCH 39/41] Docker: Quickstart --- .../getting-started/getting-started-with-docker.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md index ab8b819bc..2478df9f5 100644 --- a/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md +++ b/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md @@ -1,4 +1,4 @@ -# Docker: Path of least resistance +# Docker: Quickstart > **Step 1** - Install & Open Docker Desktop From 8be1e8b1d6657b9c0dedaa1635c09a87d93a8e25 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sun, 8 Sep 2024 04:08:08 +0000 Subject: [PATCH 40/41] fix links & docs for running with report_source = 'local' --- README.md | 2 +- docs/docs/gpt-researcher/context/local-docs.md | 2 +- .../getting-started/getting-started-with-docker.md | 7 ++++++- 3 files changed, 8 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index a8b66818d..a92d7f173 100644 --- a/README.md +++ b/README.md @@ -189,7 +189,7 @@ export DOC_PATH="./my-docs" Step 2: - If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options. - - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "documents" when you instantiate the `GPTResearcher` class [code sample here](https://docs.tavily.com/docs/gpt-researcher/tailored-research). + - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "local" when you instantiate the `GPTResearcher` class [code sample here](https://docs.gptr.dev/docs/gpt-researcher/context/tailored-research). ## ๐Ÿ‘ช Multi-Agent Assistant diff --git a/docs/docs/gpt-researcher/context/local-docs.md b/docs/docs/gpt-researcher/context/local-docs.md index d07ae3dc8..7652eacff 100644 --- a/docs/docs/gpt-researcher/context/local-docs.md +++ b/docs/docs/gpt-researcher/context/local-docs.md @@ -12,7 +12,7 @@ export DOC_PATH="./my-docs" Step 2: - If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options. 
- - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "documents" when you instantiate the `GPTResearcher` class [code sample here](https://docs.tavily.com/docs/gpt-researcher/tailored-research). + - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "local" when you instantiate the `GPTResearcher` class [code sample here](https://docs.gptr.dev/docs/gpt-researcher/context/tailored-research). ## Local Docs + Web (Hybrid) diff --git a/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md b/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md index 2478df9f5..e2928c856 100644 --- a/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md +++ b/docs/docs/gpt-researcher/getting-started/getting-started-with-docker.md @@ -12,7 +12,12 @@ This mainly includes cloning the '.env.example' file, adding your API Keys to th > **Step 3** - Within root, run with Docker. ```bash -$ docker-compose up --build +docker-compose up --build +``` + +If that doesn't work, try running it without the dash: +```bash +docker compose up --build ``` > **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes: From 4e56eaad9edb8c473a5e1d1df2f3d3e3f4985834 Mon Sep 17 00:00:00 2001 From: ElishaKay Date: Sun, 8 Sep 2024 04:28:48 +0000 Subject: [PATCH 41/41] more link fixes for docs restructuring (based on repo search for 'docs/gpt-researcher') --- README-ko_KR.md | 10 +++++----- docs/docs/faq.md | 4 ++-- docs/docs/gpt-researcher/context/local-docs.md | 2 +- docs/docs/gpt-researcher/context/tailored-research.md | 2 +- docs/docs/gpt-researcher/frontend/frontend.md | 9 +++++++-- docs/docs/gpt-researcher/llms/llms.md | 2 +- docs/docs/gpt-researcher/multi_agents/langgraph.md | 2 +- docs/docs/welcome.md | 2 +- docs/src/components/HomepageFeatures.js | 2 +- 9 files changed, 20 insertions(+), 15 deletions(-) diff --git a/README-ko_KR.md b/README-ko_KR.md index f6c24a929..418436958 100644 --- a/README-ko_KR.md +++ b/README-ko_KR.md @@ -128,7 +128,7 @@ python -m uvicorn main:app --reload
-**[Poetry](https://docs.gptr.dev/docs/gpt-researcher/getting-started#poetry) ๋˜๋Š” [๊ฐ€์ƒ ํ™˜๊ฒฝ](https://docs.gptr.dev/docs/gpt-researcher/getting-started#virtual-environment)์— ๋Œ€ํ•ด ๋ฐฐ์šฐ๊ณ  ์‹ถ๋‹ค๋ฉด, [๋ฌธ์„œ](https://docs.gptr.dev/docs/gpt-researcher/getting-started)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.** +**[Poetry](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started#poetry) ๋˜๋Š” [๊ฐ€์ƒ ํ™˜๊ฒฝ](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started#virtual-environment)์— ๋Œ€ํ•ด ๋ฐฐ์šฐ๊ณ  ์‹ถ๋‹ค๋ฉด, [๋ฌธ์„œ](https://docs.gptr.dev/docs/gpt-researcher/getting-started)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.** ### PIP ํŒจํ‚ค์ง€๋กœ ์‹คํ–‰ํ•˜๊ธฐ ```bash @@ -148,11 +148,11 @@ report = await researcher.write_report() ... ``` -**๋” ๋งŽ์€ ์˜ˆ์ œ์™€ ๊ตฌ์„ฑ ์˜ต์…˜์€ [PIP ๋ฌธ์„œ](https://docs.gptr.dev/docs/gpt-researcher/pip-package)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.** +**๋” ๋งŽ์€ ์˜ˆ์ œ์™€ ๊ตฌ์„ฑ ์˜ต์…˜์€ [PIP ๋ฌธ์„œ](https://docs.gptr.dev/docs/gpt-researcher/gptr/pip-package)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.** ## Docker๋กœ ์‹คํ–‰ -> **1๋‹จ๊ณ„** - [Docker ์„ค์น˜](https://docs.gptr.dev/docs/gpt-researcher/getting-started#try-it-with-docker) +> **1๋‹จ๊ณ„** - [Docker ์„ค์น˜](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started-with-docker) > **2๋‹จ๊ณ„** - `.env.example` ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ  API ํ‚ค๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„, ํŒŒ์ผ์„ `.env`๋กœ ์ €์žฅํ•˜์„ธ์š”. @@ -180,7 +180,7 @@ export DOC_PATH="./my-docs" 2๋‹จ๊ณ„: - ํ”„๋ก ํŠธ์—”๋“œ ์•ฑ์„ localhost:8000์—์„œ ์‹คํ–‰ ์ค‘์ด๋ผ๋ฉด, "Report Source" ๋“œ๋กญ๋‹ค์šด ์˜ต์…˜์—์„œ "My Documents"๋ฅผ ์„ ํƒํ•˜์„ธ์š”. - - GPT Researcher๋ฅผ [PIP ํŒจํ‚ค์ง€](https://docs.tavily.com/docs/gpt-researcher/pip-package)๋กœ ์‹คํ–‰ ์ค‘์ด๋ผ๋ฉด, `report_source` ์ธ์ˆ˜๋ฅผ "documents"๋กœ ์„ค์ •ํ•˜์—ฌ `GPTResearcher` ํด๋ž˜์Šค๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜์„ธ์š”. [์ฝ”๋“œ ์˜ˆ์ œ](https://docs.tavily.com/docs/gpt-researcher/tailored-research)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. + - GPT Researcher๋ฅผ [PIP ํŒจํ‚ค์ง€](https://docs.tavily.com/docs/gpt-researcher/pip-package)๋กœ ์‹คํ–‰ ์ค‘์ด๋ผ๋ฉด, `report_source` ์ธ์ˆ˜๋ฅผ "local"๋กœ ์„ค์ •ํ•˜์—ฌ `GPTResearcher` ํด๋ž˜์Šค๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜์„ธ์š”. [์ฝ”๋“œ ์˜ˆ์ œ](https://docs.gptr.dev/docs/gpt-researcher/context/tailored-research)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ๐Ÿ‘ช ๋‹ค์ค‘ ์—์ด์ „ํŠธ ์–ด์‹œ์Šคํ„ดํŠธ @@ -190,7 +190,7 @@ LangGraph๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์—ฌ๋Ÿฌ ์—์ด์ „ํŠธ์˜ ์ „๋ฌธ ๊ธฐ์ˆ ์„ ํ™œ์šฉํ•˜์—ฌ ํ‰๊ท  ์‹คํ–‰์€ 5-6 ํŽ˜์ด์ง€ ๋ถ„๋Ÿ‰์˜ ์—ฐ๊ตฌ ๋ณด๊ณ ์„œ๋ฅผ PDF, Docx, Markdown ํ˜•์‹์œผ๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. -[์—ฌ๊ธฐ](https://github.com/assafelovic/gpt-researcher/tree/master/multi_agents)์—์„œ ํ™•์ธํ•˜๊ฑฐ๋‚˜ [๋ฌธ์„œ](https://docs.gptr.dev/docs/gpt-researcher/langgraph)์—์„œ ์ž์„ธํ•œ ๋‚ด์šฉ์„ ์ฐธ์กฐํ•˜์„ธ์š”. +[์—ฌ๊ธฐ](https://github.com/assafelovic/gpt-researcher/tree/master/multi_agents)์—์„œ ํ™•์ธํ•˜๊ฑฐ๋‚˜ [๋ฌธ์„œ](https://docs.gptr.dev/docs/gpt-researcher/multi_agents)์—์„œ ์ž์„ธํ•œ ๋‚ด์šฉ์„ ์ฐธ์กฐํ•˜์„ธ์š”. ## ๐Ÿ–ฅ๏ธ ํ”„๋ก ํŠธ์—”๋“œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ diff --git a/docs/docs/faq.md b/docs/docs/faq.md index b6e979920..cf4d36337 100644 --- a/docs/docs/faq.md +++ b/docs/docs/faq.md @@ -4,7 +4,7 @@ It really depends on what you're aiming for. If you're looking to connect your AI application to the internet with Tavily tailored API, check out the [Tavily API](https://docs.tavily.com/docs/tavily-api/introductionn) documentation. -If you're looking to build and deploy our open source autonomous research agent GPT Researcher, please see [GPT Researcher](/docs/gpt-researcher/introduction) documentation. 
+If you're looking to build and deploy our open source autonomous research agent GPT Researcher, please see [GPT Researcher](/docs/gpt-researcher/getting-started/introduction) documentation.
 You can also check out demos and examples for inspiration [here](/docs/examples/examples).
 
 ### What is GPT Researcher?
@@ -13,7 +13,7 @@ GPT Researcher is a popular open source autonomous research agent that takes car
 GPT Researcher is built with best practices for leveraging LLMs (prompt engineering, RAG, chains, embeddings, etc), and is optimized for quick and efficient research. It is also fully customizable and can be tailored to your specific needs.
 
-To learn more about GPT Researcher, check out the [documentation page](/docs/gpt-researcher/introduction).
+To learn more about GPT Researcher, check out the [documentation page](/docs/gpt-researcher/getting-started/introduction).
 
 ### How much does each research run cost?
diff --git a/docs/docs/gpt-researcher/context/local-docs.md b/docs/docs/gpt-researcher/context/local-docs.md
index 7652eacff..46c0cf38f 100644
--- a/docs/docs/gpt-researcher/context/local-docs.md
+++ b/docs/docs/gpt-researcher/context/local-docs.md
@@ -12,7 +12,7 @@ export DOC_PATH="./my-docs"
 
 Step 2:
  - If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options.
- - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/pip-package), pass the `report_source` argument as "local" when you instantiate the `GPTResearcher` class [code sample here](https://docs.gptr.dev/docs/gpt-researcher/context/tailored-research).
+ - If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/gptr/pip-package), pass the `report_source` argument as "local" when you instantiate the `GPTResearcher` class [code sample here](https://docs.gptr.dev/docs/gpt-researcher/context/tailored-research).
 
 ## Local Docs + Web (Hybrid)
diff --git a/docs/docs/gpt-researcher/context/tailored-research.md b/docs/docs/gpt-researcher/context/tailored-research.md
index 38ea52ad1..2ff32faed 100644
--- a/docs/docs/gpt-researcher/context/tailored-research.md
+++ b/docs/docs/gpt-researcher/context/tailored-research.md
@@ -89,7 +89,7 @@ You can combine the above methods to conduct hybrid research. For example, you c
 Simply provide the sources and set the `report_source` argument as `"hybrid"` and watch the magic happen.
 
 Please note! You should set the proper retrievers for the web sources and doc path for local documents for this to work.
-To lean more about retrievers check out the [Retrievers](https://docs.gptr.dev/docs/gpt-researcher/retrievers) documentation.
+To learn more about retrievers, check out the [Retrievers](https://docs.gptr.dev/docs/gpt-researcher/search-engines/retrievers) documentation.
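+
+As a minimal sketch of that setup (both variables appear elsewhere in these docs; swap in whichever retriever and document folder you actually use), the environment for a hybrid run might look like:
+
+```bash
+# Web side: credentials for the default Tavily retriever
+export TAVILY_API_KEY={Your Tavily API Key here}
+
+# Local side: the folder whose documents should be included in the research
+export DOC_PATH="./my-docs"
+```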
 ### Research on LangChain Documents ๐Ÿฆœ๏ธ๐Ÿ”—
diff --git a/docs/docs/gpt-researcher/frontend/frontend.md b/docs/docs/gpt-researcher/frontend/frontend.md
index 10cee0078..a68fc284a 100644
--- a/docs/docs/gpt-researcher/frontend/frontend.md
+++ b/docs/docs/gpt-researcher/frontend/frontend.md
@@ -18,14 +18,19 @@ It comes with loads of added features, such as:
 
 ### Run the NextJS React App with Docker
 
-> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started#try-it-with-docker)
+> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started-with-docker)
 
 > **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env'
 
 > **Step 3** - Within the docker-compose file comment out services that you don't want to run with Docker.
 
 ```bash
-$ docker compose up --build
+docker-compose up --build
+```
+
+If that doesn't work, try running it without the dash:
+```bash
+docker compose up --build
 ```
 
 > **Step 4** - By default, if you haven't uncommented anything in your docker-compose file, this flow will start 2 processes:
diff --git a/docs/docs/gpt-researcher/llms/llms.md b/docs/docs/gpt-researcher/llms/llms.md
index 89f7976d9..99829403b 100644
--- a/docs/docs/gpt-researcher/llms/llms.md
+++ b/docs/docs/gpt-researcher/llms/llms.md
@@ -1,6 +1,6 @@
 # Configure LLM
 
-As described in the [introduction](/docs/gpt-researcher/config), the default LLM is OpenAI due to its superior performance and speed.
+As described in the [introduction](/docs/gpt-researcher/gptr/config), the default LLM is OpenAI due to its superior performance and speed.
 With that said, GPT Researcher supports various open/closed source LLMs, and you can easily switch between them by adding the `LLM_PROVIDER` env variable and corresponding configuration params.
 
 Current supported LLMs are `openai`, `google` (gemini), `azure_openai`, `ollama`, `anthropic`, `mistral`, `huggingface` and `groq`.
diff --git a/docs/docs/gpt-researcher/multi_agents/langgraph.md b/docs/docs/gpt-researcher/multi_agents/langgraph.md
index e6014932f..f450931ce 100644
--- a/docs/docs/gpt-researcher/multi_agents/langgraph.md
+++ b/docs/docs/gpt-researcher/multi_agents/langgraph.md
@@ -120,7 +120,7 @@ It comes with loads of added features, such as:
 
 ### Run the NextJS React App with Docker
 
-> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started#try-it-with-docker)
+> **Step 1** - [Install Docker](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started-with-docker)
 
 > **Step 2** - Clone the '.env.example' file, add your API Keys to the cloned file and save the file as '.env'
 
diff --git a/docs/docs/welcome.md b/docs/docs/welcome.md
index e53121762..3831b2fe6 100644
--- a/docs/docs/welcome.md
+++ b/docs/docs/welcome.md
@@ -10,4 +10,4 @@ Quickly accessing relevant and trustworthy information is more crucial than ever
 
 This is why we've built the trending open source **[GPT Researcher](https://github.com/assafelovic/gpt-researcher)**. GPT Researcher is an autonomous agent that takes care of the tedious task of research for you, by scraping, filtering and aggregating over 20+ web sources per a single research task.
 
-To learn more about GPT Researcher, check out the [documentation page](/docs/gpt-researcher/introduction).
+To learn more about GPT Researcher, check out the [documentation page](/docs/gpt-researcher/getting-started/introduction).
diff --git a/docs/src/components/HomepageFeatures.js b/docs/src/components/HomepageFeatures.js index 91281997e..ffd2a6b4f 100644 --- a/docs/src/components/HomepageFeatures.js +++ b/docs/src/components/HomepageFeatures.js @@ -27,7 +27,7 @@ const FeatureList = [ { title: 'Multi-Agent Assistant', Svg: require('../../static/img/multi-agent.png').default, - docLink: './docs/gpt-researcher/langgraph', + docLink: './docs/gpt-researcher/multi_agents', description: ( <> Learn how a team of AI agents can work together to conduct research on a given topic, from planning to publication.