Releases: letta-ai/letta
0.2.8
This release includes major updates to make it easier to get started with MemGPT!
Note: release 0.2.8 superseded by bugfix release 0.2.9
🎄 Free MemGPT Hosted Endpoints
MemGPT can now be used with hosted LLM and embedding endpoints, which are free and do not require an access key! The LLM endpoint is running a variant of the newly released Mixtral model - specifically Dolphin 2.5 Mixtral 8x7b 🐬!
Since the endpoint is still in beta, please expect occasional downtime. You can check for uptime at https://status.memgpt.ai.
⚡ Quickstart Configuration
You can automatically configure MemGPT (for the MemGPT endpoints and OpenAI) with quickstart commands:
```sh
# using MemGPT free endpoint
> memgpt quickstart --latest
# using OpenAI endpoint
> memgpt quickstart --latest --backend openai
```
This will set default options in the file `~/.memgpt/config`, which you can also modify with advanced options via `memgpt configure`.
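Putting the pieces above together, a typical first run looks like the following (a sketch using only the commands mentioned in these notes; the exact interactive prompts may differ):

```sh
# pull the latest recommended settings for the free endpoint
> memgpt quickstart --latest

# optionally fine-tune defaults (model, embeddings, storage, etc.)
> memgpt configure

# start chatting with an agent
> memgpt run
```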
📖 Documentation Updates
MemGPT's documentation has migrated to https://memgpt.readme.io.
✍️ Full Change Log
- API server refactor + REST API by @cpacker in #593
- added `memgpt server` command by @cpacker in #611
- updated local APIs to return usage info by @cpacker in #585
- added autogen as an extra by @cpacker in #616
- Add safeguard on tokens returned by functions by @cpacker in #576
- patch bug where `function_args.copy()` throws runtime error by @cpacker in #617
- allow passing custom host to rest server by @cpacker in #618
- migrate to using completions endpoint by default by @cpacker in #628
- Patch bug with loading of old agents by @cpacker in #629
- fix: poetry add [html2text/docx2txt] by @cpacker in #633
- feat: Add semantic PR checking to enforce prefixes on PRs by @cpacker in #634
- feat: added memgpt folder command by @cpacker in #632
- feat: Add common + custom settings files for completion endpoints by @cpacker in #631
- feat: Migrate docs by @cpacker in #646
- feat: Updated contributing docs by @cpacker in #653
- fix: [446] better gitignore for IDEs and OS. by @agiletechnologist in #651
- feat: updated/added docs assets by @cpacker in #654
- feat: Add `memgpt quickstart` command by @cpacker in #641
- fix: patch ollama bug w/ raw mode by @cpacker in #663
- fix: Patch openai error message + openai quickstart by @cpacker in #665
- fix: added logging of raw response on debug by @cpacker in #666
- feat: added /summarize command by @cpacker in #667
- feat: Add new wrapper defaults by @cpacker in #656
- fix: Throw "env vars not set" early and enhance /attach for KeyboardInterrupt (#669) by @dejardim in #674
- fix: CLI conveniences (add-on to #674) by @cpacker in #675
- feat: pull model list for openai-compatible endpoints by @cpacker in #630
- fix: Update README.md by @cpacker in #676
- docs: patched asset links by @cpacker in #677
- feat: further simplify setup flow by @cpacker in #673
👋 New Contributors
Full Changelog: 0.2.7...0.2.8
0.2.7
Minor bugfix release
What's Changed
- allow passing `skip_verify` to autogen constructors by @cpacker in #581
- Chroma storage integration by @sarahwooders in #285
- Fix `pyproject.toml` chroma version by @sarahwooders in #582
- Remove broken tests from chroma merge by @sarahwooders in #584
- patch load_save test by @cpacker in #586
- Patch azure embeddings + handle azure deployments properly by @cpacker in #594
- AutoGen misc fixes by @cpacker in #603
- Add `lancedb` and `chroma` into default package dependencies by @sarahwooders in #605
- Bump version 0.2.7 by @sarahwooders in #607
Full Changelog: 0.2.6...0.2.7
0.2.6
Bugfix release
What's Changed
- Add docs file for customizing embedding mode by @sarahwooders in #554
- Upgrade to `llama_index=0.9.10` by @sarahwooders in #556
- fix cannot import name 'EmptyIndex' from 'llama_index' by @cpacker in #558
- Fix typo in storage.md by @alxpez in #564
- use a consistent warning prefix across codebase by @cpacker in #569
- Update autogen.md to include Azure config example + patch for `pyautogen>=0.2.0` by @cpacker in #555
- Update autogen.md by @cpacker in #571
- Fix crash from bad key access into response_message by @claucambra in #437
- sort agents by directory-last-modified time by @cpacker in #574
- Add safety check to pop by @cpacker in #575
- Add `pyyaml` package to `pyproject.toml` by @cpacker in #557
- add back dotdict for backcompat by @cpacker in #572
- Bump version to 0.2.6 by @sarahwooders in #573
New Contributors
Full Changelog: 0.2.5...0.2.6
0.2.5
This release includes a number of bugfixes and new integrations:
- Bugfixes for AutoGen integration (including a common OpenAI dependency conflict issue)
- Documentations for how to use MemGPT with vLLM OpenAI compatible endpoints
- Integration with HuggingFace TEI for custom embedding models
This release also fully deprecates and removes legacy commands and configuration options which were no longer being maintained:
- `python main.py` command (replaced by `memgpt run`)
- Usage of `BACKEND_TYPE` and `OPENAI_BASE_URL` to configure local/custom LLMs (replaced by `memgpt configure` and `memgpt run` flags)
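The deprecations above are a straight command-line migration; a before/after sketch (the environment variable values shown are hypothetical examples, not defaults):

```sh
# before (removed in this release): configure via env vars, launch via main.py
> export BACKEND_TYPE=webui
> export OPENAI_BASE_URL=http://localhost:5000
> python main.py

# after: configure interactively, then run
> memgpt configure
> memgpt run
```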
What's Changed
- add new manual json parser meant to catch send_message calls with trailing bad extra chars by @cpacker in #509
- add a longer prefix to the default wrapper by @cpacker in #510
- add core memory char limits to text shown in core memory by @cpacker in #508
- [hotfix] extra arg being passed causing a runtime error by @cpacker in #517
- Add warning if no data sources loaded on `/attach` command by @sarahwooders in #513
- fix doc typo autogem to autogen by @paulasquin in #512
- Update contributing guidelines by @sarahwooders in #516
- Update contributing.md by @cpacker in #518
- Update contributing.md by @cpacker in #520
- Add support for HuggingFace Text Embedding Inference endpoint for embeddings by @sarahwooders in #524
- Update mkdocs theme, small fixes for `mkdocs.yml` by @cpacker in #522
- Update mkdocs.yml by @cpacker in #525
- Clean memory error messages by @cpacker in #523
- Fix class names used in persistence manager logging by @claucambra in #503
- Specify pyautogen dependency by adding install extra for autogen by @sarahwooders in #530
- Add `user` field for vLLM endpoint by @sarahwooders in #531
- Patch JSON parsing code (regex fallback) by @cpacker in #533
- Update bug_report.md by @cpacker in #532
- LanceDB integration bug fixes and improvements by @AyushExel in #528
- Remove `openai` package by @cpacker in #534
- Update contributing.md (typo) by @cpacker in #538
- Run formatting checks with poetry by @sarahwooders in #537
- Removing dead code + legacy commands by @sarahwooders in #536
- Remove usage of `BACKEND_TYPE` by @sarahwooders in #539
- Update AutoGen documentation and notebook example by @cpacker in #540
- Update local_llm.md by @cpacker in #542
- Documentation update by @cpacker in #541
- clean docs by @cpacker in #543
- Update autogen.md by @cpacker in #544
- update docs by @cpacker in #547
- added vLLM doc page since we support it by @cpacker in #545
New Contributors
- @paulasquin made their first contribution in #512
- @claucambra made their first contribution in #503
- @AyushExel made their first contribution in #528
Full Changelog: 0.2.4...0.2.5
0.2.4
This release includes bugfixes (including major bugfixes for autogen) and a number of new features:
- Custom presets, which allow customization of the set of function calls MemGPT can make
- Integration with LanceDB for archival storage contributed by @PrashantDixit0
- Integration with vLLM OpenAI compatible endpoints
What's Changed
- Set service context for llama index in `local.py` by @sarahwooders in #462
- Update functions.md by @cpacker in #461
- Fix linking functions from `~/.memgpt/functions` by @cpacker in #463
- Add d20 function example to readthedocs by @cpacker in #464
- Move `webui` backend to new openai completions endpoint by @cpacker in #468
- updated websocket protocol and server by @cpacker in #473
- Lancedb by @PrashantDixit0 in #455
- Docs: Fix typos by @sahusiddharth in #477
- Remove .DS_Store from agents list by @cpacker in #485
- Fix #487 (summarize call uses OpenAI even with local LLM config) by @cpacker in #488
- patch web UI by @cpacker in #484
- ANNA (an acronym for Adaptive Neural Network Assistant), a personal research assistant by @agiletechnologist in #494
- vLLM support by @cpacker in #492
- Add error handling during linking imports by @cpacker in #495
- Fixes bugs with AutoGen implementation and examples by @cpacker in #498
- [version] bump version to 0.2.4 by @sarahwooders in #497
New Contributors
- @PrashantDixit0 made their first contribution in #455
- @sahusiddharth made their first contribution in #477
- @agiletechnologist made their first contribution in #494
Full Changelog: 0.2.3...0.2.4
0.2.3
Updates
- Updated MemGPT and Agent Configs: This release makes changes to how MemGPT and agent configurations are stored. These changes will help MemGPT keep track of what settings and with what version an agent was saved with, to help improve cross-version compatibility for agents.
  - If you've been using a prior version of MemGPT, you may need to re-run `memgpt configure` to update your configuration settings to be compatible with this version.
- Configurable Presets: Presets have been refactored to allow developers to customize the set of functions and system prompts MemGPT uses.
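As an illustrative sketch only (the file location and key names below are assumptions, not taken from these notes), a custom preset might pair a system prompt with a restricted set of callable functions:

```yaml
# hypothetical preset file, e.g. ~/.memgpt/presets/my_preset.yaml
system_prompt: "memgpt_chat"     # which system prompt the agent uses
functions:                       # the subset of functions the agent may call
  - "send_message"
  - "core_memory_append"
  - "archival_memory_search"
```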
What's Changed
- Configurable presets to support easy extension of MemGPT's function set by @cpacker in #420
- WebSocket interface and basic `server.py` process by @cpacker in #399
- patch `getargspec` error by @cpacker in #440
- always cast `config.context_window` to `int` before use by @cpacker in #444
- Refactor config + determine LLM via `config.model_endpoint_type` by @sarahwooders in #422
- Update config to include `memgpt_version` and re-run configuration for old versions on `memgpt run` by @sarahwooders in #450
- Add load and load_and_attach functions to memgpt autogen agent. by @wrmedford in #430
- Update documentation [local LLMs, presets] by @cpacker in #453
- When default_mode_endpoint has a value, it needs to become model_endp… by @kfsone in #452
- Upgrade workflows to Python 3.11 by @sarahwooders in #441
New Contributors
Full Changelog: 0.2.2...0.2.3
0.2.2
What's Changed
- Fix MemGPTAgent attach docs error by @anjaleeps in #427
- [fix] remove asserts for `OPENAI_API_BASE` by @sarahwooders in #432
- Patch for #434 (context window value not used by `memgpt run`) by @cpacker in #435
- Patch for #428 (context window not passed to summarize calls) by @cpacker in #433
- [version] bump release to 0.2.2 by @cpacker in #436
New Contributors
- @anjaleeps made their first contribution in #427
Full Changelog: 0.2.1...0.2.2
0.2.1
This is a release to replace the yanked 0.2.0 release, which had critical bugs.
What's Changed
- [version] bump version to 0.2.0 by @sarahwooders in #410
- Fix main.yml to not rely on requirements.txt by @vivi in #411
- Hotfix openai create all with context_window kwarg by @vivi in #413
- Fix agent load for old agent config files by @sarahwooders in #412
- Patch local LLMs with context_window by @cpacker in #416
- Fix model configuration for when `config.model == "local"` previously by @sarahwooders in #415
- Throw more informative error when local model envs are/are not set by @sarahwooders in #418
- [version] bump version to 0.2.1 by @sarahwooders in #417
Full Changelog: 0.2.0...0.2.1
0.2.0
This release includes updated documentation, integration with vector databases (pgvector), and many bug fixes!
What's Changed
- Patch runtime error with personas by @cpacker in #221
- Gracefully catch errors when running agent.step() by @vivi in #216
- update max new tokens by @web3wes in #182
- Added db load ability by @mr-sk in #106
- Using SPR to Compress System Prompts by @tractorjuice in #158
- Cli bug fixes (loading human/persona text, azure setup, local setup) by @sarahwooders in #222
- Support for MemGPT + Autogen + Local LLM by @vivi in #231
- len needs to be implemented in all memory classes by @cpacker in #236
- Update README for CLI changes by @sarahwooders in #207
- Allow MemGPT to read/write text files + make HTTP requests by @cpacker in #174
- fixed load loading from wrong directory by @cpacker in #237
- await async_get_embeddings_with_backoff by @vivi in #239
- Add basic tests that are run on PR/main by @cpacker in #228
- fix: LocalArchivalMemory prints ref_doc_info on if not using EmptyIndex by @goetzrobin in #240
- Allow loading in a directory non-recursively by @vivi in #246
- Fix typos in functions spec by @cpacker in #268
- fix typo in the base system prompt by @yubozhao in #189
- Patch summarize when running with local llms by @cpacker in #213
- Improvements to JSON handling for local LLMs by @cpacker in #269
- Update openai_tools.py by @tractorjuice in #159
- Add more stop tokens by @cpacker in #288
- Don't prompt for selecting existing agent if there is a `--persona/human/model` flag by @sarahwooders in #289
- strip '/' and use osp.join (Windows support) by @cpacker in #283
- Make CLI agent flag errors more clear, and dont throw error if flags dont contradict existing agent config by @sarahwooders in #290
- VectorDB support (pgvector) for archival memory by @sarahwooders in #226
- try to patch hanging test by @cpacker in #295
- I made dump showing more messages and added a count (the last x) by @oderwat in #204
- I added commands to shape the conversation: by @oderwat in #218
- I added a "/retry" command to retry for getting another answer. by @oderwat in #188
- make timezone local by default by @cpacker in #298
- FIx #261 by @danx0r in #300
- Add grammar-based sampling (for webui, llamacpp, and koboldcpp) by @cpacker in #293
- fix: import PostgresStorageConnector only if postgres is selected as … by @goetzrobin in #310
- Don't import postgres storage if not specified in config by @sarahwooders in #318
- Aligned code with README for using Azure embeddings to load documents by @dividor in #308
- Fix: imported wrong storage connector by @sarahwooders in #320
- Remove embeddings as argument in archival_memory.insert by @cpacker in #284
- Create docs pages by @cpacker in #328
- patch in-chat command info by @cpacker in #332
- Bug fix grammar_name not being defined causes a crash by @borewik in #326
- cleanup #326 by @cpacker in #333
- Stopping the app to repeat the user message in normal use. by @oderwat in #304
- Remove redundant docs from README by @sarahwooders in #334
- Add autogen+localllm docs by @vivi in #335
- Add `memgpt version` command and package version by @sarahwooders in #336
- add ollama support by @cpacker in #314
- Better interface output for function calls by @vivi in #296
- Better error message printing for function call failing by @vivi in #291
- Fixing some dict value checking for function_call by @nuaimat in #249
- Specify model inference and embedding endpoint separately by @sarahwooders in #286
- Fix config tests by @sarahwooders in #343
- Avoid throwing error for older `~/.memgpt/config` files due to missing section `archival_storage` by @sarahwooders in #344
- Dependency management by @sarahwooders in #337
- Relax verify_first_message_correctness to accept any function call by @vivi in #340
- Update `poetry.lock` by @sarahwooders in #346
- Add autogen example that lets you chat with docs by @vivi in #342
- add gpt-4-turbo by @cpacker in #349
- Revert relaxing verify_first_message_correctness, still add archival_memory_search as an exception by @vivi in #350
- Bump version to 0.1.18 by @vivi in #351
- Remove `requirements.txt` and `requirements_local.txt` by @sarahwooders in #358
- disable pretty exceptions by @cpacker in #367
by @sarahwooders in #358 - disable pretty exceptions by @cpacker in #367
- Updated documentation for users by @cpacker in #365
- Create pull_request_template.md by @cpacker in #368
- Add pymemgpt-nightly workflow by @vivi in #373
- Update lmstudio.md by @cpacker in #382
- Update lmstudio.md to show the Prompt Formatting Option by @MSZ-MGS in #384
- Swap asset location from #384 by @cpacker in #385
- Update poetry with `pg8000` and include `pgvector` in docs by @sarahwooders in #390
- Allow overriding config location with `MEMGPT_CONFIG_PATH` by @sarahwooders in #383
- Always default to local embeddings if not OpenAI or Azure by @sarahwooders in #387
- Add support for larger archival memory stores by @sarahwooders in #359
- Replace `memgpt run` flags error with warning + remove custom embedding endpoint option + add agent create time by @sarahwooders in #364
- Update webui.md by @cpacker in #397
- Update webui.md by @cpacker in #398
- softpass test when keys are missing by @cpacker in #369
- Use `~/.memgpt/config` to set questionary defaults in `memgpt configure` by @sarahwooders in #389
- Simple docker. by @BobKerns in #393
- Return empty list if archival memory search over empty local index by @sarahwooders in #402
- Remove AsyncAgent and async from cli by @vivi in #400
- I added some json repairs that helped me with malformed messages by @oderwat in #341
- Fix max tokens constant by @cpacker in #374
New Contributors
- @web3wes made their first contribution in #182
- @mr-sk made their first contribution in #106
- @goetzrobin made their first contribution in #240
- @yubozhao made their first contribution in #189
- @oderwat made their first contribution in #204
- @danx0r made their first contribution in #300
- @dividor made their first contribution in #308
- @borewik made their first contribution in #326
- @nuaimat made their first contribution in #249
- @MSZ-MGS made their first contribution in #384
- @BobKerns made their first contribution in #393
Full Changelog: 0.1.15...0.2.0