An intelligent terminal assistant that combines shell command execution with AI agent capabilities. Available in multiple implementations.
TxzShell is a next-generation terminal experience that seamlessly blends traditional shell commands with AI assistance. Type regular commands or ask questions in natural language - the shell automatically detects what you mean.
Key Features:
- Dual-Mode Operation: Run shell commands OR ask AI questions
- Multiple LLM Providers: Ollama, Groq, OpenAI, Anthropic, and more
- Flexible Configuration: YAML config file + environment variables
- Smart Command Detection: Automatically distinguishes shell commands from natural-language questions (a sketch of one possible heuristic follows this list)
- Tab Completion: Full command, path, and alias completion
- Session Persistence: History and aliases saved across sessions
- Multi-Step Workflows: Agent plans tasks and asks approval for each step
- Safe Execution: Dangerous command detection and approval
- No API Server Required: Direct LLM integration
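Command detection is what makes dual-mode operation work. Here is a minimal sketch of one way such a heuristic could be built; this is illustrative only, the real logic lives in python/src/ and may differ:

```python
import shutil

def looks_like_command(line: str) -> bool:
    """Heuristic sketch: treat input as a shell command if its first
    word is a known builtin or resolves to an executable on PATH;
    otherwise route the line to the AI agent as a question."""
    if not line.strip():
        return False
    first_word = line.strip().split()[0]
    builtins = {"cd", "alias", "history", "help", "session", "exit"}
    return first_word in builtins or shutil.which(first_word) is not None

# "ls -la"          -> shell command (ls is on PATH)
# "how many files?" -> AI question ("how" is not an executable)
```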
llamalearn/
├── python/                        # Python implementation (production-ready)
│   ├── src/                       # Core Python source code
│   ├── examples/                  # Python examples
│   ├── mcp_servers/               # MCP servers
│   ├── tests/                     # Tests
│   ├── scripts/                   # Utility scripts
│   ├── requirements*.txt          # Dependencies
│   ├── Dockerfile                 # Python container
│   └── README.md                  # Python docs
├── typescript/                    # TypeScript implementation (coming soon)
│   └── README.md                  # Roadmap
├── docs/                          # Shared documentation
│   ├── TXZSHELL_README.md         # Full user guide
│   ├── TXZSHELL_QUICKREF.md       # Quick reference
│   ├── TXZSHELL_IMPLEMENTATION.md # Technical details
│   └── TXZSHELL_BEFORE_AFTER.md   # Improvements
├── k8s/                           # Kubernetes manifests
│   ├── k8s-deployment.yaml        # Full deployment with Ollama
│   └── k8s-minimal.yaml           # Minimal deployment
├── .env.example                   # Configuration template
├── docker-compose.yml             # Multi-service setup
├── Makefile                       # Convenience commands
└── README.md                      # This file
Install TxzShell as a shell command you can run from anywhere:
Python Package:
cd python
./scripts/install-package.sh # Install in development mode
# or
./scripts/install-package.sh --user # Install to user site-packages
# or
./scripts/install-package.sh --all # Install with all extras (MCP, RAG, dev tools)
# After installation, run from anywhere:
txzshell # Start TxzShell
txzshell --init-config # Create default configuration
txzshell --help # Show help

TypeScript Package:
cd typescript
./scripts/install-package.sh # Link for development
# or
./scripts/install-package.sh --global # Install globally
# After installation, run from anywhere:
txzshell # Start TxzShell
txzshell --init-config # Create default configuration
txzshell --help # Show help

Using Make:
# Python
make install-python-package # Development mode
make install-python-package-user # User installation
make install-python-package-all # With all extras
# TypeScript
make install-typescript-package # Link for development
make install-typescript-package-global # Global installation

# 1. Navigate to Python directory
cd python
# 2. Setup environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# 3. Configure (optional - defaults work!)
cp ../.env.example ../.env
# 4. Run TxzShell (uses Ollama by default)
./scripts/run-txzshell.sh

That's it! No API server needed.
# Default: Ollama (local, free)
./scripts/run-txzshell.sh
# Groq (cloud, fast, free tier available)
export GROQ_API_KEY=gsk_your_key_here
./scripts/run-txzshell.sh --provider groq
# OpenAI GPT-4
export OPENAI_API_KEY=sk_your_key_here
./scripts/run-txzshell.sh --provider openai
# Anthropic Claude
export ANTHROPIC_API_KEY=sk_ant_your_key_here
./scripts/run-txzshell.sh --provider anthropic

Configuration is stored in ~/.txzshell/config.yaml (auto-created on first run).
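A hedged sketch of what that file might contain (the key names below are illustrative assumptions, not the authoritative schema; run txzshell --init-config and inspect the generated file for the real layout):

```yaml
# ~/.txzshell/config.yaml (illustrative layout, not the actual schema)
provider: ollama                   # ollama | groq | openai | anthropic
model: qwen2.5-coder:3b
providers:
  ollama:
    base_url: http://localhost:11434
  groq:
    api_key_env: GROQ_API_KEY      # read from the environment, not stored here
```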
See Configuration Guide and Provider Guide for details.
Available! See typescript/README.md for full details.
cd typescript
npm install
npm run build
# Run directly
node dist/index.js
# Or install as package
./scripts/install-package.sh
txzshell # Now available globally!

txzshell> ls -la
txzshell> cd ~/projects
txzshell> git status
txzshell> pwd

txzshell> how many Python files are in src?
🤖 Agent thinking about: how many Python files are in src?

─── Agent Response ───
There are 12 Python files in the src directory.
──────────────────────
txzshell> find all files larger than 10MB
🤖 Agent thinking about: find all files larger than 10MB

─── Agent Response ───
I found 3 files larger than 10MB:
1. venv/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so (125MB)
2. .git/objects/pack/pack-abc123.pack (15MB)
3. docs/images/demo.mp4 (12MB)
──────────────────────
txzshell> organize my downloads by file type
📋 Execution Plan:
Step 1: Create directories for different file types
Command: mkdir -p ~/Downloads/{images,documents,videos,archives}
Step 2: Move image files
Command: mv ~/Downloads/*.{jpg,png,gif} ~/Downloads/images/
...
Execute? [y/n/s/e/a]: y
✅ Success

txzshell> history 20 # Show last 20 commands
txzshell> alias ll='ls -la' # Create alias
txzshell> session # Show session info
txzshell> help # Show help

Edit the .env file:
# Backend: ollama or vllm
LLM_BACKEND=ollama
# Ollama settings
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5-coder:3b
# vLLM settings (alternative)
VLLM_BASE_URL=http://localhost:8000
VLLM_MODEL=Qwen/Qwen2.5-Coder-3B-Instruct

- LlamaIndex ReAct Agent - Intelligent tool-using agent
- LiteLLM Integration - Unified interface for multiple LLM backends
- Configurable Backend - Easy switching between Ollama and vLLM
- REST API - FastAPI with /chat, /query, /reset, /health endpoints
- Multiple Modes - Run as API, CLI, Docker, or Kubernetes
- GPU Support - Optimized for NVIDIA GPU offloading
- Extensible - Easy to add custom tools and RAG capabilities
- Package Installation Guide - Complete installation reference
- Getting Started - Quick start guide
- Full Documentation - Complete feature reference
- Kubernetes Guide - K8s deployment
- GPU Setup - Ollama with GPU offloading
- Project Summary - Architecture overview
# Health check
curl http://localhost:8000/health
# Interactive testing
python -m tests.test_client
# Automated tests
python -m tests.test_suite
# API documentation
open http://localhost:8000/docs

# API mode
make run-api
# or
python -m src.main --mode api
# CLI mode
make run-cli
# or
python -m src.main --mode cli

See examples/example_custom_tools.py:
from llama_index.core.tools import FunctionTool

from src.agent import LlamaLearnAgent
from src.config import settings

def my_tool(param: str) -> str:
    """Your tool description."""
    return f"Result: {param}"

# Wrap the plain function as a LlamaIndex tool and hand it to the agent
tool = FunctionTool.from_defaults(fn=my_tool)
agent = LlamaLearnAgent(settings, tools=[tool])

See examples/example_rag_agent.py for document search and retrieval.
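The RAG example follows the same tool pattern: build an index over your documents and expose it to the agent as a query tool. Roughly, it has this shape (a sketch assuming LlamaIndex's standard query-engine tooling; the path, tool name, and description are illustrative, and the actual example may differ):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool

from src.agent import LlamaLearnAgent
from src.config import settings

# Index local documents (requires an embedding model to be configured,
# e.g. via llama_index.core.Settings)
documents = SimpleDirectoryReader("docs/").load_data()
index = VectorStoreIndex.from_documents(documents)

# Expose the index to the agent as a tool it can call while reasoning
rag_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="docs_search",
    description="Search the project documentation",
)
agent = LlamaLearnAgent(settings, tools=[rag_tool])
```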
# Build
make docker-build
# Run
make docker-run
# Or manually
docker build -t llamalearn-agent:latest .
docker-compose up

make k8s-deploy-auto

# Build and load image
docker build -t llamalearn-agent:latest .
minikube image load llamalearn-agent:latest # or kind load
# Deploy
kubectl apply -f k8s/k8s-minimal.yaml
# Access
kubectl port-forward svc/llamalearn-service 8000:8000

make help # Show all commands
make setup # Run setup script
make install # Install dependencies
make run-api # Run API service
make run-cli # Run CLI mode
make test # Interactive test client
make test-suite # Automated tests
make docker-build # Build Docker image
make docker-run # Run with Docker Compose
make k8s-deploy # Deploy to K8s (minimal)
make k8s-deploy-auto # Automated K8s deployment
make clean # Clean up files

- GET /health - Health check
- POST /chat - Chat with agent (stateful)
- POST /query - Query agent (stateless)
- POST /reset - Reset conversation history
- GET /docs - Interactive API documentation
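As an example, a stateful chat request from Python might look like this (a sketch: the JSON field name is an assumption, so check the interactive docs at /docs for the actual request schema):

```python
import requests

# Assumes the API is running locally (make run-api); the "message"
# field is illustrative - see http://localhost:8000/docs for the
# authoritative schema.
resp = requests.post(
    "http://localhost:8000/chat",
    json={"message": "What tools do you have available?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```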
This is a starting point for your LlamaIndex agent service; feel free to customize and extend it as needed.
Optimized for:
- RAM: 8GB total
- CPU: 4 cores
- GPU: NVIDIA with 5GB VRAM (optional, for GPU offloading)
- Model: Qwen2.5-Coder:3B (~2GB VRAM)
New! Connect your agent to external tools and data sources via MCP:
# Quick setup
./scripts/setup-mcp.sh
# Try the developer assistant (filesystem access)
python examples/example_developer_mcp.py

Available MCP capabilities:
- 📁 Filesystem - Code analysis, documentation generation
- 🐙 GitHub - Repository search, issue management
- 🗄️ PostgreSQL - Natural language database queries
- 🔍 Brave Search - Web search integration
- 🧠 Memory - Persistent agent memory
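To give a feel for the wiring, here is a rough sketch of connecting MCP tools to the agent. It assumes the llama-index-tools-mcp integration package; the class names, server URL, and method used are assumptions, so consult the MCP Integration Guide for the supported approach:

```python
import asyncio

# Assumption: pip install llama-index-tools-mcp provides these classes
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

from src.agent import LlamaLearnAgent
from src.config import settings

async def main():
    # Connect to a running MCP server over SSE (URL is illustrative)
    client = BasicMCPClient("http://localhost:3000/sse")
    tools = await McpToolSpec(client=client).to_tool_list_async()
    # Hand the discovered MCP tools to the agent like any other tools
    agent = LlamaLearnAgent(settings, tools=tools)

asyncio.run(main())
```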
Learn more:
- MCP Quick Start - 5-minute intro
- MCP for Developers - Educational guide
- MCP Integration Guide - Complete reference
- Check GETTING_STARTED.md for quick setup
- See KUBERNETES_QUICKSTART.md for K8s deployment
- Review OLLAMA_GPU_SETUP.md for GPU configuration
- Try MCP_QUICKSTART.md for agent tool expansion
- Check the examples in the examples/ directory
Ready to start? Run make setup or check docs/GETTING_STARTED.md! 🚀