
Your local AI counselor. LLM app that runs offline from a single binary.


🫧 safespace


safespace is a lightweight LLM app that runs on your laptop (or desktop) to help you navigate whatever personal issues or concerns may be troubling you. The underlying model is trained to be non-judgmental and to support you in coming up with solutions on your own.

If you know about ELIZA, you can think of safespace as its modern counterpart.

Main Features

  • Simple: It's just a single binary executable, no frameworks, no installs. Also, no GPU required.
  • Private: Designed to be run locally and traceless - no logs, analytics, etc. Runs offline.
  • Lightweight: Fully conversational at only ~5GB of RAM. See Model Details.
  • Transparent: The entire app is contained in the 100 lines of safespace.py - have a look at what it's doing.
  • Free: Use it as much as you want.

Note: This model, like all LLMs, can hallucinate. It may sometimes refer to past conversations that never took place (no conversations are recorded in any case). On rare occasions, it might also make ill-advised suggestions in spite of its training - use it with common sense.

Video Demo

(asciinema recording of a demo session)

Quickstart

1. Download Binaries

Download the binary appropriate for your platform (~30MB).

2. Run Binaries

Go to the directory where the binary is located (you may move it anywhere), and run the following in your terminal:

```bash
./safespace
```

The application will look for a safespace_models directory alongside the binary. If none is found, it will create one and download the default model (3.8GB) on the first run. If this download is interrupted, you will need to re-run the application with the following command:

```bash
./safespace --force-download
```

If you get a 'Permission Denied' error, try making the binary executable before running:

```bash
chmod +x safespace
```

You might also be able to launch the application by double-clicking the downloaded binary, but this hasn't been tested thoroughly yet.
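The first-run behaviour described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the actual safespace.py code; the model URL and filename below are placeholders.

```python
# Sketch of the first-run model check: look for safespace_models next to the
# binary, create it if missing, and download the model on first run.
# MODEL_URL and the filename are placeholders, not the real values.
import urllib.request
from pathlib import Path

MODEL_URL = "https://example.com/default-model.bin"  # placeholder URL
MODEL_DIR = Path("safespace_models")


def ensure_model(force: bool = False) -> Path:
    """Create safespace_models if missing and fetch the model if absent."""
    MODEL_DIR.mkdir(exist_ok=True)
    model_path = MODEL_DIR / "default-model.bin"
    if force or not model_path.exists():
        # --force-download maps to force=True: re-fetch unconditionally,
        # which recovers from a partial file left by an interrupted download.
        urllib.request.urlretrieve(MODEL_URL, model_path)
    return model_path
```

On later runs the existing file is found and no download happens, which is why only an interrupted first download requires `--force-download`.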

Running from Python

If you don't want to use the binary files, you can run safespace from the code in this repository. It should run faster this way, but requires some setup.

```bash
git clone https://github.com/danlou/safespace.git
cd safespace
pip install -r requirements.txt
python safespace.py
```

While this project has minimal dependencies, it may still be advisable to use a virtual environment. The install above builds the default llama.cpp backend; check the documentation at the llama-cpp-python repository for instructions on taking advantage of platform-specific optimizations (e.g., Apple Silicon). Tested on Python 3.10.

Model Details

safespace uses a Llama v2 7B model that has been 4-bit quantized with llama.cpp so that it runs efficiently on CPUs. Specifically, it is a custom model fine-tuned on a synthetic dataset of transcripts of therapy sessions with a Rogerian therapist. The sessions in this synthetic dataset are derived from a compilation of reddit posts on mental-health-related subreddits (r/adhd, r/depression, r/aspergers, ...). We make our synthetic dataset available here, and the reddit dataset may be found here.
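The download and RAM figures above can be sanity-checked with some back-of-the-envelope arithmetic. The README only says "4-bit quantized"; the ~4.5 effective bits per weight below is an assumption based on llama.cpp's 4-bit block formats, which store per-block scaling factors alongside the 4-bit weights.

```python
# Rough memory arithmetic for a 4-bit quantized 7B model.
params = 7e9           # Llama v2 7B parameter count
bits_per_weight = 4.5  # assumption: ~4 bits + quantization block overhead
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"quantized weights: ~{weight_gb:.1f} GB")  # prints ~3.9 GB
```

This lines up with the 3.8GB model download; the remaining RAM up to the ~5GB figure goes to the context (KV cache) and runtime buffers.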

Our model is fine-tuned from the Samantha-1.11 model (by @ehartford, see blog post), which is in turn a fine-tune of the original Llama v2 checkpoints aimed at developing a companion trained on philosophy, psychology, and personal relationships.

Built with Axolotl

License

The code in this repository is under the fully permissive MIT License, but the models are subject to the Llama 2 Community License.