Limestone is a personalized and highly customizable Telegram bot that allows you to interact with a local instance of an LLM. With Limestone, you can chat, search, generate content, and more, all within the Telegram app.
This project aims to provide an accessible way to interact with GPT-based Language Models. By using Telegram as the frontend, messages travel over Telegram's encrypted transport, which helps protect user data from leaks. The project lets you run your Language Models privately and securely, provided you trust Telegram as a platform.
- Set up and launch SGLang with your preferred model and configuration.
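For example, a typical SGLang launch looks like the following; the model path and port are placeholders, so substitute your own model and configuration:

```sh
# Launch SGLang's OpenAI-compatible server.
# The model path and port below are placeholders, not project defaults.
python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000
```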
- Clone the repository:
git clone https://github.com/bkutasi/limestone
- Create a new virtual environment with Python 3.13
python -m venv env
source env/bin/activate
- Install required packages
pip install -r requirements.txt
- Create a Telegram bot and obtain the token through BotFather. Additional bot documentation is available here and here. Then create your config.yaml file based on the config.example.yml file.
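A sketch of what config.yaml might contain; the key names below are illustrative assumptions, and the authoritative schema is config.example.yml in the repository:

```yaml
# Illustrative config.yaml sketch -- key names are guesses,
# check config.example.yml for the real schema.
telegram_token: "123456:ABC-DEF_your_botfather_token"
```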
- Start the bot:
python main.py
Note: Server-side encryption is not implemented. Not recommended for production use without proper security measures.
For models, pick your choice from the Open LLM Leaderboard.
- Streaming implementation (Completed)
- Multiple personalities (In Progress)
- Code cleanup and refactoring (First pass complete)
- Conversation history implementation (First pass complete)
- Model testing and integration
- Performance optimization
- Long-term memory implementation
- AgentOoba integration
- Testing and CI/CD implementation
- User whitelisting system
- API integration for document retrieval and search
- Langchain integration
- Vector database implementation
- Concurrent request handling (Completed)
- Public deployment with token/message limitations
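The planned user whitelisting could, for example, take the shape of a simple ID check before a handler runs; the IDs and helper name here are hypothetical placeholders, not project code:

```python
# Hypothetical sketch of the planned whitelisting system; the IDs and
# function name are placeholders, not Limestone's actual implementation.
ALLOWED_USER_IDS = {111111111, 222222222}

def is_whitelisted(user_id: int) -> bool:
    """Return True if the Telegram user ID is allowed to talk to the bot."""
    return user_id in ALLOWED_USER_IDS
```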
Current LLM limitations include:
- Potential for generating incorrect or inconsistent responses
- Limited common sense reasoning
- Knowledge constraints based on training data
- Potential training data biases
- Limited emotional understanding
- Context interpretation challenges
- LLaMA
- Self-instruct
- Alpaca
- Vicuna
- Oobabooga
- Additional upcoming models
- Community contributors