# SimpleTinyLlama

TinyLlama is a relatively small (1.1B-parameter) large language model with impressive capabilities for its size. This project aims to be a simpler implementation of TinyLlama; the only required dependency is PyTorch.

## Installation and usage

1. Install PyTorch.
2. Download and extract this repository.
3. Run `main.py` to chat with the llama.
4. Press `CTRL+C` to interrupt the response.
5. Press `CTRL+C` again to exit the program.
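The two-stage `CTRL+C` behavior described above can be sketched in plain Python. This is an illustrative pattern, not the repository's actual code; `generate` is a hypothetical stand-in for the model's generation loop:

```python
def generate(prompt):
    # Hypothetical placeholder for token-by-token generation; the real
    # loop would produce tokens until an end-of-sequence token or CTRL+C.
    return f"echo: {prompt}"

def chat_once(prompt):
    # The first CTRL+C raises KeyboardInterrupt inside the generation
    # loop and only stops the current response; a second CTRL+C at the
    # input prompt propagates up and exits the program.
    try:
        return generate(prompt)
    except KeyboardInterrupt:
        return "[response interrupted]"
```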

## Example

*Screenshot: example of asking how to steal a duck.*

## Notes

- CUDA will be used if available, but requires approximately 3 GB of VRAM. If you do not have that much VRAM, you can set the computation device manually in `main.py`.
- Only inference is supported; training is not.
- Chat history is currently not supported.
- This project includes a pure Python implementation of a subset of the SentencePiece tokenizer. It is not as fast as the C++ implementation, but it is sufficient for this project.
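The device-selection logic mentioned above can be expressed as a small decision function. This is a hedged sketch of the idea only; `main.py` may structure it differently (typically via `torch.cuda.is_available()` and `torch.device`):

```python
def pick_device(cuda_available, free_vram_gb):
    # Use CUDA only when it is available and there is enough memory for
    # the roughly 3 GB the model needs; otherwise fall back to the CPU.
    # (The threshold and the function itself are illustrative, not the
    # repository's actual code.)
    if cuda_available and free_vram_gb >= 3:
        return "cuda"
    return "cpu"
```

To force CPU execution in the real project, you would edit the device assignment in `main.py` rather than call a helper like this.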
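The core job of a SentencePiece-style tokenizer is mapping text onto known subword pieces. The project's pure-Python implementation follows the actual SentencePiece algorithm; the toy greedy longest-match tokenizer below (with a made-up `vocab`) only illustrates the general idea of subword segmentation:

```python
def greedy_tokenize(text, vocab):
    # Greedily match the longest vocabulary piece at each position,
    # falling back to a single character when nothing longer matches.
    # Real SentencePiece scores segmentations instead of matching
    # greedily, so this is a simplification for illustration.
    pieces = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab or j == i + 1:
                pieces.append(piece)
                i = j
                break
    return pieces
```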