
GAMMA

Game Analyzing Model Methods Attentively

An interactive game that teaches you how LLMs work by letting you predict what they'll say next.

The project has since evolved to provide tools for experimenting with and benchmarking local models in a variety of ways.


Main features

The Game:


Try to guess which word the AI will choose next. See the probabilities in real time. Learn how temperature, top-k, and sampling actually work by playing with them.
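The knobs the game exposes are standard LLM sampling controls. A minimal sketch of how they combine (illustrative only, not GAMMA's actual code): divide the raw logits by the temperature, optionally mask everything outside the k highest-scoring tokens, then sample from the resulting softmax distribution.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Sample a token id from raw logits using temperature and top-k.
    (Illustrative sketch of the sampling controls, not GAMMA's code.)"""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if top_k is not None:
        # Mask everything outside the k highest-scoring tokens.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    # Numerically stable softmax, then sample.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Low temperature sharpens the distribution toward the model's top pick; high temperature flattens it, which is exactly what you can watch happen in the game's probability display.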

Mind Meld (Experimental):


Watch multiple models collaborate on the same response, swapping control dynamically based on confidence, patterns, or strategy.
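The confidence-based handoff idea can be sketched in a few lines: at each step, every model proposes a next token with a confidence score (e.g. its top-token probability), and whichever model is most confident gets control. This is a hypothetical illustration of the concept, not GAMMA's actual implementation; the `models` callables are stand-ins for real engine backends.

```python
def meld_generate(models, steps=50):
    """Confidence-based mind meld sketch. Each entry in `models` is a
    callable taking the text so far and returning (token, confidence);
    the most confident model emits the next token. (Illustrative only.)"""
    text = ""
    for _ in range(steps):
        proposals = [model(text) for model in models]
        token, _confidence = max(proposals, key=lambda p: p[1])
        text += token
    return text
```

Other strategies slot into the same loop: a fixed-interval strategy, for instance, would ignore confidence and simply rotate control every N tokens.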

Natural Language Commands:


Describe what you want to do, and GAMMA generates the command for you (either with a local model or an agentic CLI such as Claude Code).

"I want to play with Gemma 2B using temperature 0.9"

python gamma.py game --engine pytorch --model google/gemma-2-2b-it --temperature 0.9

"Compare Qwen and DeepSeek on a coding prompt"

python gamma.py game --comparison \
  --comparison-models \
    ollama:qwen3-coder:30b \
    ollama:deepseek-r1:32b \
  --prompt "Write a Python function to calculate fibonacci"

"Meld Gemma 2B and Qwen 7B, swapping every 10 tokens"

python gamma.py mind-meld \
  --models \
    pytorch:google/gemma-2-2b-it \
    pytorch:Qwen/Qwen2-7B-Instruct \
  --strategy fixed \
  --interval 10

Get Started

# Install
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -r requirements-pytorch.txt  # or requirements-llamacpp.txt

# Play
python gamma.py game

GAMMA also auto-detects your Ollama models and HuggingFace cache.
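One plausible way such auto-detection can work (a sketch under assumed defaults, not GAMMA's actual logic): query the `ollama list` CLI for installed models, and scan the standard HuggingFace hub cache directory, whose `models--org--name` folder names encode the repo id.

```python
import subprocess
from pathlib import Path

def detect_local_models(hub_dir=None):
    """Hypothetical model auto-detection sketch: list Ollama models via
    its CLI and scan the default HuggingFace hub cache. Paths and commands
    are the standard defaults, not necessarily what GAMMA uses."""
    found = []
    try:
        out = subprocess.run(["ollama", "list"], capture_output=True,
                             text=True, check=True)
        # Skip the header row; the first column is the model name.
        for line in out.stdout.splitlines()[1:]:
            if line.strip():
                found.append("ollama:" + line.split()[0])
    except (FileNotFoundError, subprocess.CalledProcessError):
        pass  # Ollama not installed or not running
    hub = Path(hub_dir) if hub_dir else Path.home() / ".cache" / "huggingface" / "hub"
    for entry in hub.glob("models--*"):
        # "models--google--gemma-2-2b-it" -> "google/gemma-2-2b-it"
        found.append("hf:" + entry.name.removeprefix("models--").replace("--", "/"))
    return found
```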

See Game Documentation for more details.


Engines & Models

GAMMA supports multiple engines (llamacpp, pytorch, vllm, ollama) and auto-detects models from Ollama, HuggingFace, and local GGUF files.

See Engine Documentation and Core Documentation for details.


More Example Usage

# Interactive menu (recommended)
python gamma.py game

# Quick game with defaults
python gamma.py game --engine llamacpp --model models/model.gguf

# Chat
python gamma.py game --chat --model qwen3-coder:30b

# Compare models
python gamma.py game --comparison \
  --comparison-models model1 model2

# Mind meld
python gamma.py mind-meld \
  --models pytorch:gemma-2-2b-it pytorch:qwen2-1.5b \
  --strategy confidence \
  --steps 50

# Other common options
--help                     # Detailed explanation of commands
--temperature 0.7          # Sampling randomness (0.1-2.0)
--top-k 40                 # Top-K filtering
--top-p 0.95               # Nucleus sampling
--steps 50                 # Max generation steps
--show-attention           # Show attention heatmaps
--verbose                  # Detailed explanations

Additional Features


License

MIT - See LICENSE
