yann-y/gpt-subtitle

GPT-Subtitle 💬 🌐

English | 简体中文

[Screenshot: whisper_preview]

View Current Development Task 📋

GPT-Subtitle combines Whisper with OpenAI’s GPT-3 language model 🧠 to give you local transcription and translation for audio and video. It not only transcribes speech into subtitle dialogue but also supports multiple languages, allowing you to conveniently translate subtitles into other languages. 🛰️

✨ Key Features:

By integrating the whisper.cpp model, you can now:

  • Scan videos and audios in a folder and convert them into srt subtitle files 🔍 🎞️ 🎧
  • Utilize optimization algorithms to translate multi-language subtitle files 💬 🌐
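The scan-and-convert step can be sketched roughly as follows; this is a minimal illustration under assumptions (the extension list and output-path convention are not taken from the project’s code):

```typescript
import * as path from "path";

// Assumed set of media extensions the scanner accepts.
const MEDIA_EXTS = new Set([".mp4", ".mkv", ".avi", ".mp3", ".wav", ".flac"]);

// For each media file found in a folder, derive the .srt path that a
// whisper.cpp transcription run would write next to it.
export function planSrtOutputs(files: string[]): Map<string, string> {
  const plan = new Map<string, string>();
  for (const file of files) {
    const ext = path.extname(file).toLowerCase();
    if (MEDIA_EXTS.has(ext)) {
      plan.set(file, file.slice(0, file.length - ext.length) + ".srt");
    }
  }
  return plan;
}
```

The actual speech-to-text conversion is done by whisper.cpp (installed below); this only shows the folder-scanning bookkeeping.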

🔧 Tech Stack

  • NextJS 13 (App Router)
  • NestJS
  • Jotai
  • Framer Motion
  • Radix UI
  • Socket.IO
  • TailwindCSS

Running Environment

This project runs on the Node.js platform, so you need to install Node.js on your local machine first. After installation, open your command-line tool, navigate to the project root directory, and install the dependencies with pnpm (installing pnpm first if you don’t already have it):

pnpm install

Install whisper

sh setup-whisper.sh

Install ffmpeg (the command below is for macOS via Homebrew; on other systems, install it with your platform’s package manager):

brew install ffmpeg

You also need to install Redis and MySQL (again via Homebrew on macOS; on other systems, use your platform’s package manager):

brew install redis
brew install mysql

Usage

Setting up API KEY

Before using the translation feature, you need to register an account on the OpenAI website and apply for an API KEY. Once you have it, copy .env.template in the root directory to .env and fill in the following configuration:

# Frontend Setting
NEXT_PUBLIC_API_URL=http://localhost:3001 # Backend API address (must match SERVER_PORT below)
WEB_PORT=3000                             # Frontend start port

# Backend Setting
OPEN_AUTH=true            # Whether to enable authentication
OPENAI_API_KEY=           # OpenAI API KEY
GOOGLE_TRANSLATE_API_KEY= # Google API KEY(Can be left blank)
BASE_URL=                 # OpenAI API URL

## Database Setting
REDIS_PORT=6379             # Redis port
REDIS_HOST=subtitle_redis   # Redis address
MYSQL_HOST=subtitle_mysql   # MySQL address
MYSQL_PORT=3306             # MySQL port
MYSQL_USER=root             # MySQL user
MYSQL_PASSWORD=123456       # MySQL password
MYSQL_DATABASE=gpt_subtitle # MySQL Database name

## Server Address Setting
SERVER_PORT=3001 # Backend start port

## Auth Setting
### GitHub Auth
GITHUB_CLIENT_ID=           # GitHub client ID
GITHUB_CLIENT_SECRET=       # GitHub client secret
AUTH_SECRET=YOUR_KEY_HERE   # JWT secret; run `openssl rand -base64 32` to generate one

## System Setting. You can edit in Setting
OUTPUT_SRT_THEN_TRANSLATE=true # Whether to output the SRT file first and then translate it
TranslateModel=google          # google or gpt3
LANGUAGE=zh-CN                 # Target language for the translated SRT file
TRANSLATE_DELAY=1500           # Delay between calling translation interface
TRANSLATE_GROUP=4              # Maximum number of sentences translated in one batch

Set OPENAI_API_KEY to your own API key.
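As an illustration of how the TRANSLATE_GROUP and TRANSLATE_DELAY settings might work together, here is a minimal sketch — not the project’s actual implementation; `translateBatch` is a hypothetical stand-in for the real translation call:

```typescript
// Split subtitle lines into batches of at most `size` sentences.
export function chunk<T>(items: T[], size: number): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Translate each batch, pausing TRANSLATE_DELAY milliseconds between calls
// so the translation API is not hit too quickly.
export async function translateAll(
  lines: string[],
  translateBatch: (batch: string[]) => Promise<string[]>, // hypothetical API call
  group = Number(process.env.TRANSLATE_GROUP ?? 4),
  delayMs = Number(process.env.TRANSLATE_DELAY ?? 1500)
): Promise<string[]> {
  const translated: string[] = [];
  for (const batch of chunk(lines, group)) {
    translated.push(...(await translateBatch(batch)));
    await sleep(delayMs);
  }
  return translated;
}
```

With the defaults above, 12 subtitle lines would produce 3 API calls of 4 sentences each, spaced 1.5 seconds apart.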

Running the Program

Deploy the service locally:

npm run deploy:prod

🐳 Docker Deployment

📚 Using docker-compose

  1. Change the arguments inside docker-compose.yml

    args:
      - WEB_PORT=3000
      - SERVER_PORT=3001
      - NEXT_PUBLIC_API_URL=http://localhost:3001

  2. Run the command

    docker-compose up -d
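A compose file matching the environment above might look roughly like this. This is a sketch, not the repository’s actual docker-compose.yml; the service names subtitle_redis and subtitle_mysql come from the .env defaults, while the image tags and port mappings are assumptions:

```yaml
services:
  web:
    build:
      context: .
      args:
        - WEB_PORT=3000
        - SERVER_PORT=3001
        - NEXT_PUBLIC_API_URL=http://localhost:3001
    ports:
      - "3000:3000"
      - "3001:3001"
    depends_on:
      - subtitle_redis
      - subtitle_mysql
  subtitle_redis:
    image: redis:7
  subtitle_mysql:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_DATABASE=gpt_subtitle
```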

setup-whisper

setup-whisper.sh is the script that installs whisper.cpp.

You can choose which model to download by uncommenting the corresponding make line before running the script:

# more info about whisper.cpp: https://github.com/ggerganov/whisper.cpp
# make tiny.en
# make tiny
# make base.en
# make base
# make small.en
# make small
# make medium.en
# make medium
# make large-v1
# make large

The larger the model, the better the transcription quality, but the slower it runs. The large model is recommended for languages other than English.

An Nvidia GPU can accelerate model inference, but CUDA must be installed first; see the whisper.cpp project instructions for details.
