
TxzShell - AI-Powered Terminal Assistant

An intelligent terminal assistant that combines shell command execution with AI agent capabilities. Available in multiple implementations.

🌟 What is TxzShell?

TxzShell is a next-generation terminal experience that seamlessly blends traditional shell commands with AI assistance. Type regular commands or ask questions in natural language - the shell automatically detects what you mean.

Key Features:

  • Dual-Mode Operation: Run shell commands OR ask AI questions
  • Multiple LLM Providers: Ollama, Groq, OpenAI, Anthropic, and more
  • Flexible Configuration: YAML config file + environment variables
  • Smart Command Detection: Automatically detects whether you typed a shell command or a natural-language question (see the sketch after this list)
  • Tab Completion: Full command, path, and alias completion
  • Session Persistence: History and aliases saved across sessions
  • Multi-Step Workflows: Agent plans tasks and asks approval for each step
  • Safe Execution: Dangerous command detection and approval
  • No API Server Required: Direct LLM integration
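
To make the dual-mode idea concrete, here is a minimal sketch of the kind of heuristic a dual-mode shell can use. It is illustrative only - the names and rules below are assumptions, not TxzShell's actual detector:

import shutil

# Leading words that usually signal natural language, not a command.
QUESTION_WORDS = {"how", "what", "why", "where", "which", "who", "can", "does"}

def looks_like_command(line: str) -> bool:
    """Rough heuristic: run the input as a shell command only if it
    does not read like a question and its first word resolves to an
    executable on PATH. Illustrative only."""
    words = line.strip().split()
    if not words:
        return False
    if line.rstrip().endswith("?") or words[0].lower() in QUESTION_WORDS:
        return False
    return shutil.which(words[0]) is not None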

πŸ“ Project Structure

llamalearn/
├── python/                      # Python implementation (production-ready)
│   ├── src/                     # Core Python source code
│   ├── examples/                # Python examples
│   ├── mcp_servers/             # MCP servers
│   ├── tests/                   # Tests
│   ├── scripts/                 # Utility scripts
│   ├── requirements*.txt        # Dependencies
│   ├── Dockerfile               # Python container
│   └── README.md                # Python docs
├── typescript/                  # TypeScript implementation
│   └── README.md                # TypeScript docs
├── docs/                        # Shared documentation
│   ├── TXZSHELL_README.md             # Full user guide
│   ├── TXZSHELL_QUICKREF.md           # Quick reference
│   ├── TXZSHELL_IMPLEMENTATION.md     # Technical details
│   └── TXZSHELL_BEFORE_AFTER.md       # Improvements
├── k8s/                         # Kubernetes manifests
│   ├── k8s-deployment.yaml      # Full deployment with Ollama
│   └── k8s-minimal.yaml         # Minimal deployment
├── .env.example                 # Configuration template
├── docker-compose.yml           # Multi-service setup
├── Makefile                     # Convenience commands
└── README.md                    # This file

🚀 Quick Start

Option 1: Install as Package (Recommended)

Install TxzShell as a shell command you can run from anywhere:

Python Package:

cd python
./scripts/install-package.sh         # Install in development mode
# or
./scripts/install-package.sh --user  # Install to user site-packages
# or
./scripts/install-package.sh --all   # Install with all extras (MCP, RAG, dev tools)

# After installation, run from anywhere:
txzshell                  # Start TxzShell
txzshell --init-config    # Create default configuration
txzshell --help           # Show help

TypeScript Package:

cd typescript
./scripts/install-package.sh         # Link for development
# or
./scripts/install-package.sh --global # Install globally

# After installation, run from anywhere:
txzshell                    # Start TxzShell
txzshell --init-config      # Create default configuration
txzshell --help             # Show help

Using Make:

# Python
make install-python-package          # Development mode
make install-python-package-user     # User installation
make install-python-package-all      # With all extras

# TypeScript
make install-typescript-package       # Link for development
make install-typescript-package-global # Global installation

Option 2: Run from Source (Development)

Python Implementation (Production-Ready)

# 1. Navigate to Python directory
cd python

# 2. Setup environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# 3. Configure (optional - defaults work!)
cp ../.env.example ../.env

# 4. Run TxzShell (uses Ollama by default)
./scripts/run-txzshell.sh

That's it! No API server needed.

Using Different LLM Providers

# Default: Ollama (local, free)
./scripts/run-txzshell.sh

# Groq (cloud, fast, free tier available)
export GROQ_API_KEY=gsk_your_key_here
./scripts/run-txzshell.sh --provider groq

# OpenAI GPT-4
export OPENAI_API_KEY=sk_your_key_here
./scripts/run-txzshell.sh --provider openai

# Anthropic Claude
export ANTHROPIC_API_KEY=sk_ant_your_key_here
./scripts/run-txzshell.sh --provider anthropic

Configuration is stored in ~/.txzshell/config.yaml (auto-created on first run).

See Configuration Guide and Provider Guide for details.
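A rough sketch of how a YAML config file plus environment-variable overrides typically fit together - the key and variable names below are assumptions for illustration, not TxzShell's actual schema (see the guides above for that):

import os
from pathlib import Path

import yaml  # PyYAML

CONFIG_PATH = Path.home() / ".txzshell" / "config.yaml"

# Load the YAML file if present, then let environment variables win.
config = yaml.safe_load(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
provider = os.environ.get("TXZSHELL_PROVIDER", config.get("provider", "ollama"))
model = os.environ.get("TXZSHELL_MODEL", config.get("model", "qwen2.5-coder:3b"))
print(f"Using {provider} with model {model}")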

TypeScript Implementation

Available! See typescript/README.md for full details.

cd typescript
npm install
npm run build

# Run directly
node dist/index.js

# Or install as package
./scripts/install-package.sh
txzshell  # Now available globally!

💡 Usage Examples

Shell Commands (Just Works)

txzshell> ls -la
txzshell> cd ~/projects
txzshell> git status
txzshell> pwd

AI Agent (Natural Language)

txzshell> how many Python files are in src?

🤖 Agent thinking about: how many Python files are in src?

━━━ Agent Response ━━━
There are 12 Python files in the src directory.
━━━━━━━━━━━━━━━━━━━━━

txzshell> find all files larger than 10MB

🤖 Agent thinking about: find all files larger than 10MB

━━━ Agent Response ━━━
I found 3 files larger than 10MB:
1. venv/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so (125MB)
2. .git/objects/pack/pack-abc123.pack (15MB)
3. docs/images/demo.mp4 (12MB)
━━━━━━━━━━━━━━━━━━━━━

txzshell> organize my downloads by file type

📋 Execution Plan:
  Step 1: Create directories for different file types
    Command: mkdir -p ~/Downloads/{images,documents,videos,archives}
  Step 2: Move image files
    Command: mv ~/Downloads/*.{jpg,png,gif} ~/Downloads/images/
  ...

Execute? [y/n/s/e/a]: y
✓ Success

Built-in Commands

txzshell> history 20              # Show last 20 commands
txzshell> alias ll='ls -la'       # Create alias
txzshell> session                 # Show session info
txzshell> help                    # Show help

βš™οΈ Configuration

Edit the .env file:

# Backend: ollama or vllm
LLM_BACKEND=ollama

# Ollama settings
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5-coder:3b

# vLLM settings (alternative)
VLLM_BASE_URL=http://localhost:8000
VLLM_MODEL=Qwen/Qwen2.5-Coder-3B-Instruct
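
Under the hood, LiteLLM lets one completion call serve both backends - only the model prefix and base URL change. A sketch of that wiring, assuming the settings above (the exact code in src/ may differ):

import os

from litellm import completion

backend = os.environ.get("LLM_BACKEND", "ollama")

if backend == "ollama":
    model = "ollama/" + os.environ.get("OLLAMA_MODEL", "qwen2.5-coder:3b")
    api_base = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
else:
    # vLLM exposes an OpenAI-compatible API under /v1.
    model = "openai/" + os.environ.get("VLLM_MODEL", "Qwen/Qwen2.5-Coder-3B-Instruct")
    api_base = os.environ.get("VLLM_BASE_URL", "http://localhost:8000") + "/v1"

response = completion(
    model=model,
    api_base=api_base,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)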

🎯 Key Features

  • LlamaIndex ReAct Agent - Intelligent tool-using agent
  • LiteLLM Integration - Unified interface for multiple LLM backends
  • Configurable Backend - Easy switching between Ollama and vLLM
  • REST API - FastAPI with /chat, /query, /reset, /health endpoints
  • Multiple Modes - Run as API, CLI, Docker, or Kubernetes
  • GPU Support - Optimized for NVIDIA GPU offloading
  • Extensible - Easy to add custom tools and RAG capabilities

📚 Documentation

  • docs/TXZSHELL_README.md - Full user guide
  • docs/TXZSHELL_QUICKREF.md - Quick reference
  • docs/TXZSHELL_IMPLEMENTATION.md - Technical details
  • docs/TXZSHELL_BEFORE_AFTER.md - Improvements

🧪 Testing

# Health check
curl http://localhost:8000/health

# Interactive testing
python -m tests.test_client

# Automated tests
python -m tests.test_suite

# API documentation
open http://localhost:8000/docs

πŸ› οΈ Development

Running the Application

# API mode
make run-api
# or
python -m src.main --mode api

# CLI mode
make run-cli
# or
python -m src.main --mode cli

Adding Custom Tools

See examples/example_custom_tools.py:

from llama_index.core.tools import FunctionTool
from src.agent import LlamaLearnAgent
from src.config import settings

# Any plain function can become a tool; the docstring is what the
# LLM sees as the tool's description, so make it descriptive.
def my_tool(param: str) -> str:
    """Your tool description."""
    return f"Result: {param}"

# Wrap the function and pass it to the agent alongside the built-in tools.
tool = FunctionTool.from_defaults(fn=my_tool)
agent = LlamaLearnAgent(settings, tools=[tool])

Adding RAG Capabilities

See examples/example_rag_agent.py for document search and retrieval.
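
For orientation, a minimal LlamaIndex RAG setup looks roughly like the following - the directory path and tool name are illustrative and default embedding settings are assumed; the repository's example may differ:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool

# Index a folder of documents (path is illustrative).
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)

# Expose the index as a tool the agent can call like any other.
rag_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="doc_search",
    description="Search the project documentation.",
)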

🐳 Docker

# Build
make docker-build

# Run
make docker-run

# Or manually
docker build -t llamalearn-agent:latest .
docker-compose up

☸️ Kubernetes

Quick Deploy

make k8s-deploy-auto

Manual Deploy

# Build and load image
docker build -t llamalearn-agent:latest .
minikube image load llamalearn-agent:latest  # or kind load

# Deploy
kubectl apply -f k8s/k8s-minimal.yaml

# Access
kubectl port-forward svc/llamalearn-service 8000:8000

📋 Makefile Commands

make help           # Show all commands
make setup          # Run setup script
make install        # Install dependencies
make run-api        # Run API service
make run-cli        # Run CLI mode
make test           # Interactive test client
make test-suite     # Automated tests
make docker-build   # Build Docker image
make docker-run     # Run with Docker Compose
make k8s-deploy     # Deploy to K8s (minimal)
make k8s-deploy-auto # Automated K8s deployment
make clean          # Clean up files

🔧 API Endpoints

  • GET /health - Health check
  • POST /chat - Chat with agent (stateful)
  • POST /query - Query agent (stateless)
  • POST /reset - Reset conversation history
  • GET /docs - Interactive API documentation
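
As a quick smoke test once the API is up - note the {"message": ...} body shape is an assumption, so confirm the real request schema on the interactive /docs page:

import requests

BASE = "http://localhost:8000"

# Stateful chat: the server keeps conversation history between calls.
print(requests.post(f"{BASE}/chat", json={"message": "What tools can you use?"}).json())

# Stateless query: no history is kept.
print(requests.post(f"{BASE}/query", json={"message": "What is 2 + 2?"}).json())

# Clear the stateful conversation history.
requests.post(f"{BASE}/reset")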

🤝 Contributing

This is a starting point for your LlamaIndex agent service. Feel free to customize and extend!

📄 License

Created for your use. Modify and extend as needed.

🎯 Hardware Requirements

Optimized for:

  • RAM: 8GB total
  • CPU: 4 cores
  • GPU: NVIDIA with 5GB VRAM (optional, for GPU offloading)
  • Model: Qwen2.5-Coder:3B (~2GB VRAM)

🔌 MCP Integration (Model Context Protocol)

New! Connect your agent to external tools and data sources via MCP:

# Quick setup
./scripts/setup-mcp.sh

# Try the developer assistant (filesystem access)
python examples/example_developer_mcp.py

Available MCP capabilities:

  • πŸ“ Filesystem - Code analysis, documentation generation
  • πŸ™ GitHub - Repository search, issue management
  • πŸ—„οΈ PostgreSQL - Natural language database queries
  • πŸ” Brave Search - Web search integration
  • 🧠 Memory - Persistent agent memory
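
To talk to one of these servers directly, here is a minimal sketch using the official mcp Python SDK - this is an assumed wiring for illustration, not necessarily how mcp_servers/ or the examples do it:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the reference filesystem server over stdio (path is illustrative).
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())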

Learn more: see MCP_QUICKSTART.md.

🆘 Need Help?

  1. Check GETTING_STARTED.md for quick setup
  2. See KUBERNETES_QUICKSTART.md for K8s deployment
  3. Review OLLAMA_GPU_SETUP.md for GPU configuration
  4. Try MCP_QUICKSTART.md for agent tool expansion
  5. Check examples in examples/ directory

Ready to start? Run make setup or check docs/GETTING_STARTED.md! 🚀
