Merge pull request #6 from yusufcanb/release/1.0-rc3
Release 1.0-rc3
yusufcanb authored Feb 27, 2024
2 parents a32466f + 6c70d95 commit 0a8ca82
Showing 11 changed files with 108 additions and 70 deletions.
4 changes: 3 additions & 1 deletion CONTRIBUTORS
@@ -1 +1,3 @@
Yusuf Can Bayrak
Yusuf Can Bayrak
Slim Abid
Ermin Omeragic
37 changes: 32 additions & 5 deletions README.md
@@ -1,10 +1,12 @@
# tlm - Local terminal companion, powered by CodeLLaMa.

tlm is your CLI companion which requires nothing then your workstation. It uses most efficient and powerful [CodeLLaMa](https://ai.meta.com/blog/code-llama-large-language-model-coding/) in your local environment to provide you the best possible command line suggestions.
tlm is your CLI companion which requires nothing except your workstation. It uses the most efficient and powerful [CodeLLaMa](https://ai.meta.com/blog/code-llama-large-language-model-coding/) in your local environment to provide you with the best possible command-line suggestions.

![Suggest](./assets/suggest.gif)

![Suggest](./assets/explain.gif)
![Explain](./assets/explain.gif)

![Config](./assets/config.gif)

## Features

@@ -23,8 +25,33 @@ tlm is your CLI companion which requires nothing then your workstation. It uses

Installation can be done in two ways:

- Installation script (recommended)
- Go Install
- [Installation script](#installation-script) (recommended)
- [Go Install](#go-install)

### Prerequisites

[Ollama](https://ollama.com/) is required to download the necessary models.
It can be installed with the following methods on different platforms.

- On Linux and macOS:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

- On Windows:

Follow the download instructions at [https://ollama.com/download](https://ollama.com/download).

- Or using the official Docker images 🐳:

```bash
# CPU Only
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# With GPU (Nvidia only)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
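
Whichever method you choose, you can verify that Ollama is reachable before installing tlm; this is the same probe `install.sh` performs:

```bash
# Verify the Ollama server is reachable on its default port (11434)
curl -fsSL http://localhost:11434 && echo "Ollama is reachable"
```
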
### Installation Script
@@ -50,7 +77,7 @@ Invoke-RestMethod -Uri https://raw.githubusercontent.com/yusufcanb/tlm/main/inst
### Go Install
If you Go 1.21 or higher installed on your system, you can easily use the following command to install tlm;
If you have Go 1.21 or higher installed on your system, you can install tlm with the following command:
```bash
go install github.com/yusufcanb/tlm@latest
```
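
If the `tlm` command is not found afterwards, the Go bin directory is likely missing from your `PATH`; a quick fix, assuming a default Go setup:

```bash
# Make sure the Go bin directory is on PATH so the installed tlm binary is found
export PATH="$PATH:$(go env GOPATH)/bin"
tlm
```
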
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.0-rc2
1.0-rc3
Binary file modified assets/config.gif
Binary file removed assets/macos.png
Binary file removed assets/powershell.png
26 changes: 11 additions & 15 deletions assets/tapes/config.tape
@@ -1,41 +1,37 @@
Output config.gif

Set Theme "GitHub Dark"
Set Margin 60
Set MarginFill "#4e8eff"
Set Theme "Cyberdyne"

Set Width 1400
Set Height 1000
Set FontSize 26
Set Width 1200
Set Height 600
Set FontSize 22

Type "tlm config"
Sleep 1s
Sleep 500ms
Enter
Sleep 1s
Sleep 2s

# host
Sleep 500ms
Enter
Sleep 500ms
Sleep 1s

# shell
Up
Sleep 500ms
Enter
Sleep 500ms
Sleep 1s

# suggest
Sleep 500ms
Up
Sleep 500ms
Enter
Sleep 500ms
Sleep 1s

# explain
Sleep 500ms
Down
Sleep 500ms
Sleep 3s
Enter
Sleep 500ms

Sleep 4s
Sleep 2s
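
The tape above is a [VHS](https://github.com/charmbracelet/vhs) script; assuming the `vhs` CLI is installed, the demo GIF can be re-rendered with:

```bash
# Re-render config.gif from the tape (requires charmbracelet/vhs)
vhs assets/tapes/config.tape
```
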
Binary file removed assets/tlm-in-action.png
4 changes: 2 additions & 2 deletions config/form.go
@@ -29,7 +29,7 @@ func (c *ConfigForm) Run() error {

huh.NewSelect[string]().
Title("Suggestion Preference").
Description("This sets how suggestions should be in placed").
Description("Sets preference for command suggestions").
Options(
huh.NewOption("Stable", "stable"),
huh.NewOption("Balanced", "balanced"),
@@ -39,7 +39,7 @@

huh.NewSelect[string]().
Title("Explain Preference").
Description("This configuration sets explain responses").
Description("Sets preference for command explanations").
Options(
huh.NewOption("Stable", "stable"),
huh.NewOption("Balanced", "balanced"),
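These `huh` select prompts drive the interactive `tlm config` command demonstrated in the config tape above; to adjust the preferences from a shell:

```bash
# Open the interactive configuration form (host, shell, suggest/explain preferences)
tlm config
```
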
2 changes: 1 addition & 1 deletion install.ps1
@@ -16,7 +16,7 @@ if ($env:PROCESSOR_ARCHITECTURE -eq 'AMD64') {
}

# Download URL Construction
$version = "1.0-rc2"
$version = "1.0-rc3"
$base_url = "https://github.com/yusufcanb/tlm/releases/download"
$download_url = "${base_url}/${version}/tlm_${version}_${os}_${arch}.exe"
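# Illustrative example: assuming $os is "windows" and $arch is "amd64", this resolves to:
#   https://github.com/yusufcanb/tlm/releases/download/1.0-rc3/tlm_1.0-rc3_windows_amd64.exe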

103 changes: 58 additions & 45 deletions install.sh
@@ -1,12 +1,19 @@
#!/bin/bash

set -eu

status() { echo ">>> $*" >&2; }
error() { echo "ERROR: $*" >&2; }
warning() { echo "WARNING: $*"; }


# OS and Architecture Detection
if [[ "$OSTYPE" == "linux-gnu"* ]]; then
os="linux"
elif [[ "$OSTYPE" == "darwin"* ]]; then
os="darwin"
else
echo "Unsupported operating system. Only Linux and macOS are currently supported."
error "Unsupported operating system. Only Linux and macOS are currently supported."
exit 1
fi

@@ -15,88 +22,94 @@ if [[ "$(uname -m)" == "x86_64" ]]; then
elif [[ "$(uname -m)" == "aarch64" || "$(uname -m)" == "arm64" ]]; then
arch="arm64"
else
echo "Unsupported architecture. tlm requires a 64-bit system (x86_64 or arm64)."
error "Unsupported architecture. tlm requires a 64-bit system (x86_64 or arm64)."
exit 1
fi

# Download URL Construction
version="1.0-rc2"
version="1.0-rc3"
base_url="https://github.com/yusufcanb/tlm/releases/download"
download_url="${base_url}/${version}/tlm_${version}_${os}_${arch}"
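# Illustrative example: on linux/amd64 the constructed URL resolves to
#   https://github.com/yusufcanb/tlm/releases/download/1.0-rc3/tlm_1.0-rc3_linux_amd64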

# Docker check
if ! command -v docker &>/dev/null; then
echo "Docker not found. Please install Docker from https://www.docker.com/get-started"
error "Docker not found. Please install Docker from https://www.docker.com/get-started"
exit 1
fi

# Ollama check
if ! curl -fsSL http://localhost:11434 &> /dev/null; then
echo "Ollama not found."
error "Ollama not found."
if [[ "$os" == "darwin" ]]; then
echo ""
echo "*** On macOS: ***"
echo ""
echo "Ollama can run with GPU acceleration inside Docker containers for Nvidia GPUs."
echo "To get started using the Docker image, please follow these steps:"
echo ""
echo "1. *** CPU only: ***"
echo " docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
echo ""
echo "2. *** GPU Acceleration: ***"
echo " This option requires running Ollama outside of Docker"
echo " To get started, simply download and install Ollama."
echo " https://ollama.com/download"
echo ""
echo ""
echo "Installation aborted. Please install Ollama using the methods above and try again."
status ""
status "*** On macOS: ***"
status ""
status "Ollama can run with GPU acceleration inside Docker containers for Nvidia GPUs."
status "To get started using the Docker image, please follow these steps:"
status ""
status "1. *** CPU only: ***"
status " docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
status ""
status "2. *** GPU Acceleration: ***"
status " This option requires running Ollama outside of Docker"
status " To get started, simply download and install Ollama."
status " https://ollama.com/download"
status ""
status ""
status "Installation aborted. Please install Ollama using the methods above and try again."
exit 1

elif [[ "$os" == "linux" ]]; then
echo ""
echo "*** On Linux: ***"
echo ""
echo "Ollama can run with GPU acceleration inside Docker containers for Nvidia GPUs."
echo "To get started using the Docker image, please follow these steps:"
echo ""
echo "1. *** CPU only: ***"
echo " docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
echo ""
echo "2. *** Nvidia GPU: ***"
echo " docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
echo ""
echo ""
echo "Installation aborted. Please install Ollama using the methods above and try again."
status ""
status "*** On Linux: ***"
status ""
status "Ollama can run with GPU acceleration inside Docker containers for Nvidia GPUs."
status "To get started using the Docker image, please follow these steps:"
status ""
status "1. *** CPU only: ***"
status " docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
status ""
status "2. *** Nvidia GPU: ***"
status " docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
status ""
status ""
status "Installation aborted. Please install Ollama using the methods above and try again."
exit 1

fi
fi

# Download the binary
echo "Downloading tlm version ${version} for ${os}/${arch}..."
status "Downloading tlm version ${version} for ${os}/${arch}..."
if ! curl -fsSL -o tlm ${download_url}; then
echo "Download failed. Please check your internet connection and try again."
error "Download failed. Please check your internet connection and try again."
exit 1
fi

# Make executable
chmod +x tlm

# Move to installation directory
echo "Installing tlm..."
if ! mv tlm /usr/local/bin/; then
echo "Installation requires administrator permissions. Please use sudo or run the script as root."
exit 1
else
echo ""
status "Installing tlm..."

SUDO=
if [ "$(id -u)" -ne 0 ]; then
# Running as root, no need for sudo
if ! available sudo; then
error "This script requires superuser permissions. Please re-run as root."
fi

SUDO="sudo"
fi

$SUDO mv tlm /usr/local/bin/;

if ! tlm deploy; then
error "tlm deploy failed. Please check the output above and try again."
exit 1
else
echo ""
fi

echo "Type 'tlm' to get started."
status "Type 'tlm' to get started."
exit 0
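
For reference, a one-line invocation of this installer (assuming `install.sh` is published on the repository's `main` branch, as the PowerShell snippet above suggests for the Windows script):

```bash
# Download and run the installer in one step (assumed URL)
curl -fsSL https://raw.githubusercontent.com/yusufcanb/tlm/main/install.sh | bash
```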
