Releases: BBC-Esq/WhisperS2T-transcriber

v1.3.3 - WhisperS2T-transcriber

17 Sep 08:19
67d8d61

Patch release to make setting CUDA-related paths more robust.

v1.3.2 - user-friendliness

29 Aug 15:47
d1905bd

Changes

  • The comboboxes are now populated only with the devices and Whisper model precisions that your system supports, and they update dynamically when you change the compute device.
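The filtering described above can be sketched as follows. The `SUPPORTED` mapping here is a hypothetical stand-in for querying the backend at runtime (ctranslate2, for example, exposes the compute types a device actually supports); the precision lists and the `precision_choices` name are illustrative, not the project's actual API.

```python
# Hypothetical device -> supported-precision mapping; the real app would
# query the hardware/backend instead of using a hard-coded table.
SUPPORTED = {
    "cpu":  ["int8", "float32"],
    "cuda": ["int8", "int8_float16", "float16", "float32"],
}

def precision_choices(device):
    """Return the precision options to show in the combobox for a device."""
    # Fall back to a safe default if the device is unrecognized.
    return SUPPORTED.get(device, ["float32"])
```

When the user switches the compute device, the GUI would clear the combobox and repopulate it from `precision_choices(new_device)`.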

v1.3.1 - better-faster-stronger

02 Aug 08:40
95d5db8

Improvements

  • Added checks to ensure that the Whisper model selected is compatible with the chosen device (CPU or GPU).
  • Adjusted how many CPU cores are utilized for the best performance.

v1.3.0 - better, faster, stronger

01 Aug 04:55
092ee6b

Improvements

  • Added Distil-Whisper models as well as Large-v3 model variants.
  • Significantly improved installation speed by using the uv library.
  • Users now choose between a GPU-supported or CPU-only installation.
  • You no longer have to install CUDA separately on your computer; the GPU installation pip-installs the CUDA libraries into the virtual environment.

Removals

  • Removed Python 3.10 support.

v1.2.1 - user friendlier fixed

12 Mar 12:48
4962912

Minor release to address the case where a user only intends to use the CPU: the installation script no longer installs pynvml, and the scripts no longer attempt to use it (e.g., for the metrics bar).

  • Also added a feature that calculates the number of files to be processed and asks for the user's permission before proceeding.
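A minimal sketch of the pre-flight count described above. The extension set and the `count_candidates` name are illustrative assumptions; the real script's file-type list may differ.

```python
from pathlib import Path

# Hypothetical set of audio extensions the transcriber would accept.
AUDIO_EXTS = {".mp3", ".wav", ".flac", ".m4a"}

def count_candidates(root):
    """Count files under root whose extension matches the accepted set."""
    return sum(
        1
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in AUDIO_EXTS
    )
```

The GUI would show this count and only start processing after the user confirms.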

v1.2.0 - user friendlier

12 Mar 03:05
cab7b1d
  • Revised the GUI.
  • Added an option to process only certain file types.
  • Implemented the ability to stop processing midstream.
  • Offloaded settings to a separate settings.py script.

v1.1.0 - better/faster/stronger

11 Mar 18:26
8af3ef8
  • CPU support with automatic thread management.
  • Each file is now processed individually instead of being sent to ctranslate2 as a single list, which enables error handling for files that can't be processed for any reason. Previously, if one file failed, all other files failed.
  • Simplified the installation script setup_windows.py.
  • Increased the maximum speed setting to a batch size of 200 (USE WITH CAUTION: the large-v2 model only supports a speed of approximately 21 on a GPU with 24 GB of VRAM; you'll have to experiment with the speed setting for smaller Whisper models).
  • Revised the GUI and added more user-friendly messages.
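The per-file processing change above can be sketched as a loop that catches failures individually so one bad file no longer aborts the whole batch. The `transcribe_all` name and the injected `transcribe_file` callable are hypothetical; the real code calls into WhisperS2T.

```python
def transcribe_all(paths, transcribe_file):
    """Transcribe each file independently; collect successes and failures.

    transcribe_file is a stand-in for the actual per-file transcription
    call (illustrative, not the project's real API).
    """
    results, failures = {}, {}
    for path in paths:
        try:
            results[path] = transcribe_file(path)
        except Exception as exc:
            # Record the error and keep going with the remaining files.
            failures[path] = str(exc)
    return results, failures
```

Contrast this with submitting all paths in one list: a single unreadable file would then fail the entire ctranslate2 call.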

TO-DO:

  • Figure out why the model is no longer being released from memory as it was in mid-release versions. You currently have to close the program (not a big deal) to release VRAM.

v1.0.0 - fastest ctranslate2 transcriber

02 Mar 12:51
ece8b10

This is the fastest ctranslate2 transcriber while still maintaining superior quality. Hats off to the WhisperS2T library for finally implementing batch processing with the ctranslate2 library.