Merge pull request #141 from Anindyadeep/anindya/tinygrad-fix
TinyGrad Readme and small fixes.
nsosio authored Jan 31, 2024
2 parents 3cbac98 + 03e78e1 commit 372a80d
Showing 3 changed files with 84 additions and 21 deletions.
34 changes: 34 additions & 0 deletions bench_tinygrad/README.md
@@ -0,0 +1,34 @@
# TinyGrad

[![GitHub Repo](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/tinygrad/tinygrad)

TinyGrad is a minimalistic deep learning framework, very similar to [PyTorch](https://github.com/pytorch/pytorch). Its simplicity is inspired by the [micrograd](https://github.com/karpathy/micrograd) implementation by [Andrej Karpathy](https://karpathy.ai/). TinyGrad uses techniques such as lazy computation and kernel fusion to run its operations, and it supports various accelerators out of the box, including CPU and GPU. This benchmark implementation uses the [Llama 2 example](https://github.com/tinygrad/tinygrad/blob/master/examples/llama.py) written inside tinygrad/examples.
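Under the hood, the benchmark script shells out to that example. As a rough sketch, the direct call looks like the following; the `--model`, `--prompt`, and `--count` flags are the ones `bench.sh` passes, while the script path and weights path shown here are illustrative and may differ in your checkout:

```bash
# Illustrative direct invocation of the vendored Llama 2 example.
# The --model / --prompt / --count flags mirror what bench.sh passes;
# adjust the script path and the weights path to your local setup.
python bench_tinygrad/tinygrad/examples/llama.py \
  --model ./models/llama-2-7b-raw \
  --prompt "Write an essay about the transformer model architecture" \
  --count 512
```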


### 🚀 Running the TinyGrad Benchmark.

You can run the TinyGrad benchmark using the following command:

```bash
./bench_tinygrad/bench.sh \
--prompt <value> \ # Prompt string for the benchmark
--max_tokens <value> \ # Maximum number of tokens to generate
--repetitions <value> \ # Number of repetitions to run for the prompt
--log_file <file_path> \ # Path of the .log file the results are written to
--device <cpu/cuda/metal> \ # Device on which to run the benchmark
--models_dir <path_to_models> # Directory containing the model weights
```

To get started quickly you can simply run:

```bash
./bench_tinygrad/bench.sh -d cuda
```
This will use all the default values (see the [bench.sh](/bench_tinygrad/bench.sh) file) and run the benchmarks. You can find all the benchmark results for TinyGrad [here](/docs/llama2.md).
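If you want to override the defaults, an illustrative full invocation looks like the following (the flag names come from the script's usage text; the values, including the log file path, are only examples):

```bash
# Example invocation with every flag set explicitly; adjust values as needed.
./bench_tinygrad/bench.sh \
  --prompt "Write an essay about the transformer model architecture" \
  --repetitions 10 \
  --max_tokens 512 \
  --device cuda \
  --log_file ./Logs/benchmark_tinygrad.log \
  --models_dir ./models
```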


### 👀 Some points to note:

1. The current implementation of TinyGrad only supports Float16 for CUDA, CPU and Metal.
2. This benchmark implementation expects the raw Llama 2 weights from Meta AI to run the Llama 2 model, so it assumes that you have already accepted the [terms and conditions](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) before running it (see the layout sketch after this list).
3. Please note that the current implementation cannot be reproduced as-is; there are certain conflicts with the main tinygrad repo that will be fixed in upcoming versions.
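Regarding point 2, `bench.sh` looks for the raw checkpoint under `<models_dir>/llama-2-7b-raw`. A minimal sketch of the assumed layout is shown below; the individual file names are what Meta's raw 7B download typically contains and are an assumption, not something this repository verifies:

```bash
# Assumed layout of the raw Llama 2 7B weights; file names are typical of
# Meta's raw download and may differ for your copy.
ls ./models/llama-2-7b-raw
# consolidated.00.pth  params.json  tokenizer.model
```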
56 changes: 36 additions & 20 deletions bench_tinygrad/bench.sh
@@ -2,30 +2,31 @@

########################################################################################################
# Script: bench.sh
# Description: This script runs benchmarks tinygrad llama benchmark.
# Description: This script runs the TinyGrad Llama benchmark.
#
# Usage: ./bench.sh [OPTIONS]
# OPTIONS:
# -p, --prompt Prompt for benchmarks (default: 'Explain what is a transformer')
# -r, --repetitions Number of repetitions for benchmarks (default: 2)
# -m, --max_tokens Maximum number of tokens for benchmarks (default: 100)
# -d, --device Device for benchmarks (possible values: 'metal', 'gpu', and 'cpu', default: 'cpu')
# -p, --prompt Prompt for benchmarks (default: 'Write an essay about the transformer model architecture')
# -r, --repetitions Number of repetitions for benchmarks (default: 10)
# -m, --max_tokens Maximum number of tokens for benchmarks (default: 512)
# -d, --device Device for benchmarks (possible values: 'metal', 'cuda', and 'cpu', default: 'cuda')
# -lf, --log_file Logging file name.
# -md, --models_dir Models directory.
# -h, --help Show this help message
########################################################################################################

set -euo pipefail

CURRENT_DIR="$(pwd)"
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

print_usage() {
echo "Usage: $0 [OPTIONS]"
echo "OPTIONS:"
echo " -p, --prompt Prompt for benchmarks (default: 'Explain what is a transformer')"
echo " -p, --prompt Prompt for benchmarks (default: 'Write an essay about the transformer model architecture')"
echo " -r, --repetitions Number of repetitions for benchmarks (default: 10)"
echo " -m, --max_tokens Maximum number of tokens for benchmarks (default: 100)"
echo " -d, --device Device for benchmarks (possible values: 'metal', 'gpu', and 'cpu', default: 'cpu')"
echo " -m, --max_tokens Maximum number of tokens for benchmarks (default: 512)"
echo " -d, --device Device for benchmarks (possible values: 'metal', 'cuda', and 'cpu', default: 'cuda')"
echo " -lf, --log_file Logging file name."
echo " -md, --models_dir Models directory."
echo " -h, --help Show this help message"
@@ -57,16 +58,29 @@ check_platform() {
}

check_python() {
if command -v python &> /dev/null
then
echo -e "\nUsing $(python --version)."
if command -v python &> /dev/null; then
PYTHON_CMD="python"
elif command -v python3 &> /dev/null; then
PYTHON_CMD="python3"
else
echo -e "\nPython does not exist."
echo "Python is not installed."
exit 1
fi
}

setup() {

# Create the Logs folder if it does not already exist
LOGS_FOLDER="$CURRENT_DIR/Logs"

if [ -d "$LOGS_FOLDER" ]; then
echo "Folder '$LOGS_FOLDER' already exists. Skipping."
else
# Create the folder
mkdir "$LOGS_FOLDER"
echo "'$LOGS_FOLDER' created."
fi

echo -e "\nSetting up with $SCRIPT_DIR/setup.sh..."
bash "$SCRIPT_DIR"/setup.sh "$1"
}
@@ -86,7 +100,7 @@ run_llama_experiment() {
declare -a tokens_per_second_array=()

for ((i=1; i<=repetitions; i++)); do
tokens_per_second=$(python "$script_dir/tinygrad/examples/llama.py" \
tokens_per_second=$("$PYTHON_CMD" "$script_dir/tinygrad/examples/tiny.py" \
--model "$models_dir/llama-2-7b-raw" \
--prompt "$prompt" \
--count "$max_tokens" \
@@ -179,15 +193,17 @@ while [ "$#" -gt 0 ]; do
;;
esac
done
# Set default values if not provided
PROMPT="${PROMPT:-"Explain what is a transformer"}"
REPETITIONS="${REPETITIONS:-10}"
MAX_TOKENS="${MAX_TOKENS:-100}"
DEVICE="${DEVICE:-'cpu'}"
LOG_FILENAME="${LOG_FILENAME:-"benchmark_$(date +'%Y%m%d%H%M%S').log"}"
MODELS_DIR="${MODELS_DIR:-"./models"}"

check_platform
check_python
setup "$DEVICE"

# Set default values if not provided
PROMPT="${PROMPT:-"Write an essay about the transformer model architecture"}"
REPETITIONS="${REPETITIONS:-10}"
MAX_TOKENS="${MAX_TOKENS:-512}"
DEVICE="${DEVICE:-cuda}"
LOG_FILENAME="${LOG_FILENAME:-"$LOGS_FOLDER/benchmark_tinygrad_$(date +'%Y%m%d%H%M%S').log"}"
MODELS_DIR="${MODELS_DIR:-"./models"}"

run_benchmarks "$PROMPT" "$REPETITIONS" "$MAX_TOKENS" "$DEVICE" "$LOG_FILENAME" "$MODELS_DIR"
15 changes: 14 additions & 1 deletion bench_tinygrad/setup.sh
@@ -12,8 +12,21 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_DIR="$SCRIPT_DIR/venv"

check_python() {
if command -v python &> /dev/null; then
PYTHON_CMD="python"
elif command -v python3 &> /dev/null; then
PYTHON_CMD="python3"
else
echo "Python is not installed."
exit 1
fi
}

check_python

if [ ! -d "$VENV_DIR" ]; then
python -m venv "$VENV_DIR"
"$PYTHON_CMD" -m venv "$VENV_DIR"
echo "Virtual environment '$VENV_DIR' created."
# shellcheck disable=SC1091
source "$VENV_DIR/bin/activate"
