feat: Migrate docs (#646)
* updated docs for readme

* Update index.md

* Update index.md

* added header

* broken link

* sync heading sizes

* fix various broken rel links

* Update index.md

* added webp

* Update index.md

* strip mkdocs/rtk files

* replaced readthedocs references with readme
cpacker authored Dec 19, 2023
1 parent 8d44490 commit c029008
Showing 37 changed files with 308 additions and 284 deletions.
23 changes: 23 additions & 0 deletions .github/workflows/rdme-docs.yml
@@ -0,0 +1,23 @@
```yaml
# This GitHub Actions workflow was auto-generated by the `rdme` cli on 2023-12-18T23:15:45.852Z
# You can view our full documentation here: https://docs.readme.com/docs/rdme
name: ReadMe GitHub Action 🦉

on:
  push:
    branches:
      # This workflow will run every time you push code to the following branch: `migrate-docs`
      # Check out GitHub's docs for more info on configuring this:
      # https://docs.github.com/actions/using-workflows/events-that-trigger-workflows
      - main

jobs:
  rdme-docs:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repo 📚
        uses: actions/checkout@v3

      - name: Run `docs` command 🚀
        uses: readmeio/rdme@v8
        with:
          rdme: docs docs --key=${{ secrets.README_API_KEY }} --version=1.0
```
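
The `rdme` input above maps onto the standalone ReadMe CLI. A minimal local sketch, assuming Node.js is installed and that your ReadMe project key is exported as `README_API_KEY` (an assumption mirroring the secret name above):

```sh
# Sync the local ./docs folder to version 1.0 of the ReadMe project
# (README_API_KEY is an assumed environment variable name, not from the commit)
npx rdme@8 docs ./docs --key="$README_API_KEY" --version=1.0
```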
19 changes: 0 additions & 19 deletions .readthedocs.yaml

This file was deleted.

6 changes: 3 additions & 3 deletions README.md
@@ -6,12 +6,12 @@

<strong>Try out our MemGPT chatbot on <a href="https://discord.gg/9GEQrxmVyE">Discord</a>!</strong>

-<strong>⭐ NEW: You can now run MemGPT with <a href="https://memgpt.readthedocs.io/en/latest/local_llm/">open/local LLMs</a> and <a href="https://memgpt.readthedocs.io/en/latest/autogen/">AutoGen</a>! ⭐ </strong>
+<strong>⭐ NEW: You can now run MemGPT with <a href="https://memgpt.readme.io/docs/local_llm">open/local LLMs</a> and <a href="https://memgpt.readme.io/docs/autogen">AutoGen</a>! ⭐ </strong>


[![Discord](https://img.shields.io/discord/1161736243340640419?label=Discord&logo=discord&logoColor=5865F2&style=flat-square&color=5865F2)](https://discord.gg/9GEQrxmVyE)
[![arxiv 2310.08560](https://img.shields.io/badge/arXiv-2310.08560-B31B1B?logo=arxiv&style=flat-square)](https://arxiv.org/abs/2310.08560)
-[![Documentation](https://img.shields.io/github/v/release/cpacker/MemGPT?label=Documentation&logo=readthedocs&style=flat-square)](https://memgpt.readthedocs.io/en/latest/)
+[![Documentation](https://img.shields.io/github/v/release/cpacker/MemGPT?label=Documentation&logo=readthedocs&style=flat-square)](https://memgpt.readme.io/docs)

</div>

@@ -96,7 +96,7 @@ You can run the following commands in the MemGPT CLI prompt:
Once you exit the CLI with `/exit`, you can resume chatting with the same agent by specifying the agent name in `memgpt run --agent <NAME>`.

## Documentation
-See full documentation at: https://memgpt.readthedocs.io/
+See full documentation at: https://memgpt.readme.io

## Installing from source

13 changes: 0 additions & 13 deletions docs/README.md

This file was deleted.

10 changes: 8 additions & 2 deletions docs/adding_wrappers.md
@@ -1,6 +1,12 @@
-!!! warning "MemGPT + local LLM failure cases"
+---
+title: Adding support for new LLMs
+excerpt: Adding new LLMs via model wrappers
+category: 6580dabb585483000f0e6c7c
+---

-When using open LLMs with MemGPT, **the main failure case will be your LLM outputting a string that cannot be understood by MemGPT**. MemGPT uses function calling to manage memory (e.g. `edit_core_memory(...)`) and interact with the user (`send_message(...)`), so your LLM needs to generate outputs that can be parsed into MemGPT function calls.
+> ⚠️ MemGPT + local LLM failure cases
+>
+> When using open LLMs with MemGPT, **the main failure case will be your LLM outputting a string that cannot be understood by MemGPT**. MemGPT uses function calling to manage memory (e.g. `edit_core_memory(...)`) and interact with the user (`send_message(...)`), so your LLM needs to generate outputs that can be parsed into MemGPT function calls.

### What is a "wrapper"?

Binary file added docs/assets/cozy_llama.webp
Binary file not shown.
90 changes: 48 additions & 42 deletions docs/autogen.md
@@ -1,14 +1,20 @@
-!!! question "Need help?"
-
-If you need help visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.
-
-You can also check the [GitHub discussion page](https://github.com/cpacker/MemGPT/discussions/65), but the Discord server is the official support channel and is monitored more actively.
-
-!!! warning "Tested with `pyautogen` v0.2.0"
-
-The MemGPT+AutoGen integration was last tested using AutoGen version v0.2.0.
-
-If you are having issues, please first try installing the specific version of AutoGen using `pip install pyautogen==0.2.0` (or `poetry install -E autogen` if you are using Poetry).
+---
+title: MemGPT + AutoGen
+excerpt: Creating AutoGen agents powered by MemGPT
+category: 6580dab16cade8003f996d17
+---
+
+> 📘 Need help?
+>
+> If you need help visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.
+>
+> You can also check the [GitHub discussion page](https://github.com/cpacker/MemGPT/discussions/65), but the Discord server is the official support channel and is monitored more actively.
+
+> ⚠️ Tested with `pyautogen` v0.2.0
+>
+> The MemGPT+AutoGen integration was last tested using AutoGen version v0.2.0.
+>
+> If you are having issues, please first try installing the specific version of AutoGen using `pip install pyautogen==0.2.0` (or `poetry install -E autogen` if you are using Poetry).

## Overview

@@ -69,32 +75,31 @@ For the purposes of this example, we're going to serve (host) the LLMs using [oo

### Part 1: Get web UI working

-Install web UI and get a model set up on a local web server. You can use [our instructions on setting up web UI](https://memgpt.readthedocs.io/en/latest/webui/).
+Install web UI and get a model set up on a local web server. You can use [our instructions on setting up web UI](webui).

-!!! info "Choosing an LLM / model to use"
-
-You'll need to decide on an LLM / model to use with web UI.
-
-MemGPT requires an LLM that is good at function calling to work well - if the LLM is bad at function calling, **MemGPT will not work properly**.
-
-Visit [our Discord server](https://discord.gg/9GEQrxmVyE) and check the #model-chat channel for an up-to-date list of recommended LLMs / models to use with MemGPT.
+> 📘 Choosing an LLM / model to use
+>
+> You'll need to decide on an LLM / model to use with web UI.
+>
+> MemGPT requires an LLM that is good at function calling to work well - if the LLM is bad at function calling, **MemGPT will not work properly**.
+>
+> Visit [our Discord server](https://discord.gg/9GEQrxmVyE) and check the #model-chat channel for an up-to-date list of recommended LLMs / models to use with MemGPT.

### Part 2: Get MemGPT working

Before trying to integrate MemGPT with AutoGen, make sure that you can run MemGPT by itself with the web UI backend.

-Try setting up MemGPT with your local web UI backend [using the instructions here](https://memgpt.readthedocs.io/en/latest/local_llm/#using-memgpt-with-local-llms).
+Try setting up MemGPT with your local web UI backend [using the instructions here](local_llm/#using-memgpt-with-local-llms).

Once you've confirmed that you're able to chat with a MemGPT agent using `memgpt configure` and `memgpt run`, you're ready to move on to the next step.
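
As a quick recap sketch of the two commands just mentioned (illustrative only; the interactive prompts vary):

```sh
memgpt configure   # select the `webui` endpoint type and point it at your server
memgpt run         # start a chat to confirm the backend responds
```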

-!!! info "Using RunPod as an LLM backend"
-
-If you're using RunPod to run web UI, make sure that you set your endpoint to the RunPod IP address, **not the default localhost address**.
-
-For example, during `memgpt configure`:
-```text
-? Enter default endpoint: https://yourpodaddresshere-5000.proxy.runpod.net
-```
+> 📘 Using RunPod as an LLM backend
+>
+> If you're using RunPod to run web UI, make sure that you set your endpoint to the RunPod IP address, **not the default localhost address**.
+>
+> For example, during `memgpt configure`:
+> ```text
+> ? Enter default endpoint: https://yourpodaddresshere-5000.proxy.runpod.net
+> ```

### Part 3: Creating a MemGPT AutoGen agent (groupchat example)
@@ -127,7 +132,7 @@ config_list = [
config_list_memgpt = [
{
"preset": DEFAULT_PRESET,
"model": None, # not required for web UI, only required for Ollama, see: https://memgpt.readthedocs.io/en/latest/ollama/
"model": None, # not required for web UI, only required for Ollama, see: https://memgpt.readme.io/docs/ollama
"model_wrapper": "airoboros-l2-70b-2.1", # airoboros is the default wrapper and should work for most models
"model_endpoint_type": "webui",
"model_endpoint": "http://localhost:5000", # notice port 5000 for web UI
@@ -187,7 +192,7 @@ config_list_memgpt = [
```

#### Azure OpenAI example
-Azure OpenAI API setup will be similar to OpenAI API, but requires additional config variables. First, make sure that you've set all the related Azure variables referenced in [our MemGPTAzure setup page](https://memgpt.readthedocs.io/en/latest/endpoints) (`AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_VERSION`, `AZURE_OPENAI_ENDPOINT`, etc). If you have all the variables set correctly, you should be able to create configs by pulling from the env variables:
+Azure OpenAI API setup will be similar to OpenAI API, but requires additional config variables. First, make sure that you've set all the related Azure variables referenced in [our MemGPT Azure setup page](https://memgpt.readme.io/docs/endpoints#azure-openai) (`AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_VERSION`, `AZURE_OPENAI_ENDPOINT`, etc). If you have all the variables set correctly, you should be able to create configs by pulling from the env variables:
```python
# This config is for autogen agents that are not powered by MemGPT
# See Auto
@@ -219,18 +224,19 @@ config_list_memgpt = [
]
```
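
A supplementary sketch, not part of this diff: the Azure variables named above must exist in the environment before running the example. Assuming a POSIX shell and placeholder values:

```sh
# Placeholder values for illustration; substitute your real Azure OpenAI settings
export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_VERSION="2023-05-15"   # API version shown here is an assumed example
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
```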

-!!! info "Making internal monologue visible to AutoGen"
-
-By default, MemGPT's inner monologue and function traces are hidden from other AutoGen agents.
-
-You can modify `interface_kwargs` to change the visibility of inner monologue and function calling:
-```python
-interface_kwargs = {
-    "debug": False,  # this is the equivalent of the --debug flag in the MemGPT CLI
-    "show_inner_thoughts": True,  # this controls if internal monologue will show up in AutoGen MemGPT agent's outputs
-    "show_function_outputs": True,  # this controls if function traces will show up in AutoGen MemGPT agent's outputs
-}
-```
+> 📘 Making internal monologue visible to AutoGen
+>
+> By default, MemGPT's inner monologue and function traces are hidden from other AutoGen agents.
+>
+> You can modify `interface_kwargs` to change the visibility of inner monologue and function calling:
+> ```python
+> interface_kwargs = {
+>     "debug": False,  # this is the equivalent of the --debug flag in the MemGPT CLI
+>     "show_inner_thoughts": True,  # this controls if internal monologue will show up in AutoGen MemGPT agent's outputs
+>     "show_function_outputs": True,  # this controls if function traces will show up in AutoGen MemGPT agent's outputs
+> }
+> ```

The only parts of the `agent_groupchat.py` file you need to modify should be the `config_list` and `config_list_memgpt` (make sure to change `USE_OPENAI` to `True` or `False` depending on if you're trying to use a local LLM server like web UI, or OpenAI's API). Assuming you edited things correctly, you should now be able to run `agent_groupchat.py`:
@@ -307,7 +313,7 @@ User_proxy (to chat_manager):

[examples/agent_docs.py](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/agent_docs.py) contains an example of a groupchat where the MemGPT autogen agent has access to documents.

-First, follow the instructions in [Example - chat with your data - Creating an external data source](../example_data/#creating-an-external-data-source):
+First, follow the instructions in [Example - chat with your data - Creating an external data source](example_data/#creating-an-external-data-source):

To download the MemGPT research paper we'll use `curl` (you can also just download the PDF from your browser):
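A hedged reconstruction of the command hidden by the truncated hunk (the arXiv ID comes from the README badge earlier in this diff; the output filename is an assumption):

```sh
# Hypothetical reconstruction; the exact command is hidden in this diff view
curl -L https://arxiv.org/pdf/2310.08560.pdf -o memgpt_research_paper.pdf
```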
14 changes: 9 additions & 5 deletions docs/cli_faq.md
@@ -1,14 +1,18 @@
-# Frequently asked questions
+---
+title: Frequently asked questions (FAQ)
+excerpt: Check frequently asked questions
+category: 6580d34ee5e4d00068bf2a1d
+---

-!!! note "Open / local LLM FAQ"
-
-Questions specific to running your own open / local LLMs with MemGPT can be found [here](../local_llm_faq).
+> 📘 Open / local LLM FAQ
+>
+> Questions specific to running your own open / local LLMs with MemGPT can be found [here](local_llm_faq).

## MemGPT CLI

### How can I use MemGPT to chat with my docs?

-Check out our [chat with your docs example](../example_data) to get started.
+Check out our [chat with your docs example](example_data) to get started.

### How do I save a chat and continue it later?

8 changes: 7 additions & 1 deletion docs/config.md
@@ -1,4 +1,9 @@
-### Configuring the agent
+---
+title: Configuration
+excerpt: Configuring your MemGPT agent
+category: 6580d34ee5e4d00068bf2a1d
+---

You can set agent defaults by running `memgpt configure`, which will store config information at `~/.memgpt/config` by default.

The `memgpt run` command supports the following optional flags (if set, will override config defaults):
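
An illustrative sketch, not from this diff (the full flag list is hidden in the truncated hunk below; `--persona` is an assumed flag name, while `--agent` appears earlier in this commit's README changes):

```sh
# Resume a named agent but override the configured persona for this run only
# (--persona is an assumption shown for illustration)
memgpt run --agent my_agent --persona sam_pov
```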
@@ -43,3 +48,4 @@ memgpt list [humans/personas]
```

### Custom Presets
+You can customize your MemGPT agent even further with [custom presets](presets) and [custom functions](functions).
6 changes: 6 additions & 0 deletions docs/contributing.md
@@ -1,3 +1,9 @@
+---
+title: Contributing to the codebase
+excerpt: How to contribute to the MemGPT repo
+category: 6580dabb585483000f0e6c7c
+---

## Installing from source

To install MemGPT from source, start by cloning the repo:
14 changes: 8 additions & 6 deletions docs/data_sources.md
@@ -1,4 +1,9 @@
-## Loading External Data
+---
+title: Attaching data sources
+excerpt: Connecting external data to your MemGPT agent
+category: 6580d34ee5e4d00068bf2a1d
+---

MemGPT supports pre-loading data into archival memory. In order to make data accessible to your agent, you must load data in with `memgpt load`, then attach the data source to your agent. You can configure where archival memory is stored by configuring the [storage backend](storage.md).
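
As a quick end-to-end sketch, not from this diff (the directory path, source name, agent name, and the `memgpt load directory` subcommand/flags are illustrative assumptions):

```sh
# Load a folder into a named data source (subcommand/flags assumed for illustration)
memgpt load directory --name memgpt-docs --input-dir ./docs

# Attach that data source to an existing agent (command shown later in this file)
memgpt attach --agent agent_1 --data-source memgpt-docs
```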

### Viewing available data sources
@@ -34,11 +39,8 @@ memgpt attach --agent <AGENT-NAME> --data-source <DATA-SOURCE-NAME>
memgpt-docs
```


-!!! tip "Hint"
-    To encourage your agent to reference its archival memory, we recommend adding phrases like "_search your archival memory..._" for the best results.
+> 👍 Hint
+> To encourage your agent to reference its archival memory, we recommend adding phrases like "_search your archival memory..._" for the best results.

### Loading a file or directory
You can load a file, list of files, or directory into MemGPT with the following command:
6 changes: 5 additions & 1 deletion docs/discord_bot.md
@@ -1,4 +1,8 @@
-## Chatting with the MemGPT Discord Bot
+---
+title: Chatting with MemGPT Bot
+excerpt: Get up and running with the MemGPT Discord Bot
+category: 6580da8eb6feb700166e5016
+---

The fastest way to experience MemGPT is to chat with the MemGPT Discord Bot.

13 changes: 9 additions & 4 deletions docs/embedding_endpoints.md
@@ -1,3 +1,9 @@
+---
+title: Configuring embedding backends
+excerpt: Connecting MemGPT to various endpoint backends
+category: 6580d34ee5e4d00068bf2a1d
+---

MemGPT uses embedding models for retrieval search over archival memory. You can use embeddings provided by OpenAI, Azure, or any model on Hugging Face.

## OpenAI
@@ -47,11 +53,10 @@ MemGPT supports running embeddings with any Hugging Face model using the [Text E
## Local Embeddings

MemGPT can compute embeddings locally using a lightweight embedding model [`BAAI/bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5).
-!!! warning "Local LLM Performance"
-
-The `BAAI/bge-small-en-v1.5` was chose to be lightweight, so you may notice degraded performance with embedding-based retrieval when using this option.
+> 🚧 Local LLM Performance
+>
+> The `BAAI/bge-small-en-v1.5` was chosen to be lightweight, so you may notice degraded performance with embedding-based retrieval when using this option.

To compute embeddings locally, install dependencies with:
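A sketch of the install command hidden by the truncation (the `local` extra is an assumption about MemGPT's optional dependency packaging, not something this diff shows):

```sh
# Install optional local-embedding dependencies (extra name assumed)
pip install 'pymemgpt[local]'
```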
8 changes: 7 additions & 1 deletion docs/endpoints.md
@@ -1,3 +1,9 @@
+---
+title: Configuring LLM backends
+excerpt: Connecting MemGPT to various LLM backends
+category: 6580d34ee5e4d00068bf2a1d
+---

You can use MemGPT with various LLM backends, including the OpenAI API, Azure OpenAI, and various local (or self-hosted) LLM backends.

## OpenAI
@@ -72,4 +78,4 @@ $ memgpt configure
Note: **your Azure endpoint must support functions** or you will get an error. See [this GitHub issue](https://github.com/cpacker/MemGPT/issues/91) for more information.

## Local Models & Custom Endpoints
-MemGPT supports running open source models, both being run locally or as a hosted service. Setting up MemGPT to run with open models requires a bit more setup, follow [the instructions here](../local_llm).
+MemGPT supports running open source models, run either locally or as a hosted service. Setting up MemGPT to run with open models requires a bit more setup; follow [the instructions here](local_llm).
11 changes: 8 additions & 3 deletions docs/example_chat.md
@@ -1,8 +1,13 @@
-!!! note "Note"
-
-Before starting this example, make sure that you've [properly installed MemGPT](../quickstart)
+---
+title: Example - perpetual chatbot
+excerpt: Using MemGPT to create a perpetual chatbot
+category: 6580d34ee5e4d00068bf2a1d
+---
+
+> 📘 Confirm your installation
+>
+> Before starting this example, make sure that you've [properly installed MemGPT](quickstart)

## Using MemGPT to create a perpetual chatbot
In this example, we're going to use MemGPT to create a chatbot with a custom persona. MemGPT chatbots are "perpetual chatbots", meaning that they can be run indefinitely without any context length limitations. MemGPT chatbots are self-aware that they have a "fixed context window", and will manually manage their own memories to get around this problem by moving information in and out of their small memory window and larger external storage.

MemGPT chatbots always keep a reserved space in their "core" memory window to store their `persona` information (describes the bot's personality + basic functionality), and `human` information (which describes the human that the bot is chatting with). The MemGPT chatbot will update the `persona` and `human` core memory blocks over time as it learns more about the user (and itself).