Commit 8a1a435

Add granite speech architecture details
1 parent f20adcd commit 8a1a435

1 file changed: +20 -1 lines changed

docs/source/en/model_doc/granite_speech.md

Lines changed: 20 additions & 1 deletion
@@ -21,7 +21,26 @@ rendered properly in your Markdown viewer.
</div>

## Overview

The Granite Speech model is a multimodal language model consisting of a speech encoder, a speech projector, a large language model, and LoRA adapter(s). More details on each component of the current (Granite 3.2 Speech) architecture may be found below, followed by a conceptual sketch of how the pieces fit together.

1. Speech Encoder: A [Conformer](https://arxiv.org/abs/2005.08100) encoder trained with Connectionist Temporal Classification (CTC) on character-level targets over ASR corpora. The encoder uses block attention and self-conditioned CTC from its middle layer.
2. Speech Projector: A query transformer (Q-Former) operating on the outputs of the last encoder block. Together, the encoder and projector temporally downsample the audio features, which are then merged into the multimodal embeddings processed by the LLM.
3. Large Language Model: The Granite Speech model leverages Granite LLMs, which were originally proposed in [this paper](https://arxiv.org/abs/2408.13359).
4. LoRA adapter(s): The Granite Speech model contains a modality-specific LoRA adapter, which is enabled when audio features are provided and disabled otherwise.

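The following is a minimal, illustrative sketch of how these components could fit together, not the actual transformers implementation: a stand-in encoder produces frame-level hidden states, a Q-Former-style projector temporally downsamples them with learned queries, and the resulting audio embeddings are scattered into the placeholder positions of the text embedding sequence before being handed to the LLM. All module names, dimensions, window sizes, and the placeholder scheme are assumptions made for clarity.

```python
import torch
import torch.nn as nn

# Illustrative dimensions -- not the real Granite Speech hyperparameters.
AUDIO_DIM = 160      # acoustic feature size per frame (assumed)
HIDDEN_DIM = 1024    # shared hidden size for projector / LLM embeddings (assumed)
WINDOW = 15          # encoder frames pooled per window by the projector (assumed)
NUM_QUERIES = 3      # learned queries per window -> 5x temporal downsampling (assumed)


class ToySpeechProjector(nn.Module):
    """Q-Former-style projector: learned queries cross-attend over each window
    of encoder frames, reducing T frames to (T / WINDOW) * NUM_QUERIES audio
    embeddings in the LLM embedding space."""

    def __init__(self):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(NUM_QUERIES, HIDDEN_DIM))
        self.cross_attn = nn.MultiheadAttention(HIDDEN_DIM, num_heads=8, batch_first=True)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, T, HIDDEN_DIM); T assumed divisible by WINDOW here.
        b, t, d = encoder_states.shape
        windows = encoder_states.reshape(b * t // WINDOW, WINDOW, d)
        q = self.queries.unsqueeze(0).expand(windows.size(0), -1, -1)
        pooled, _ = self.cross_attn(q, windows, windows)   # (b*T/WINDOW, NUM_QUERIES, d)
        return pooled.reshape(b, -1, d)                    # (b, T/WINDOW*NUM_QUERIES, d)


def merge_audio_into_text(text_embeds, audio_embeds, audio_token_mask):
    """Scatter the projected audio embeddings into the positions of the audio
    placeholder tokens, producing the multimodal sequence the LLM consumes."""
    merged = text_embeds.clone()
    merged[audio_token_mask] = audio_embeds.reshape(-1, audio_embeds.size(-1))
    return merged


# Tiny end-to-end shape check with stand-ins for the encoder and LLM.
encoder = nn.Sequential(nn.Linear(AUDIO_DIM, HIDDEN_DIM), nn.GELU())  # stand-in for the Conformer/CTC encoder
projector = ToySpeechProjector()

audio_frames = torch.randn(1, 150, AUDIO_DIM)        # 150 acoustic frames
audio_embeds = projector(encoder(audio_frames))      # -> (1, 30, HIDDEN_DIM)

text_embeds = torch.randn(1, 40, HIDDEN_DIM)         # embedded prompt with 30 audio placeholders
audio_token_mask = torch.zeros(1, 40, dtype=torch.bool)
audio_token_mask[0, 5:35] = True                     # positions of the audio placeholder tokens
multimodal_embeds = merge_audio_into_text(text_embeds, audio_embeds, audio_token_mask)
print(multimodal_embeds.shape)                       # torch.Size([1, 40, 1024])
```

The key point the sketch illustrates is the downsampling: each window of encoder frames is compressed into a small fixed number of query embeddings, so the LLM sees far fewer audio positions than raw acoustic frames.
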
Note that most of the aforementioned components are implemented generically to enable compatibility and potential integration with other model architectures in transformers.

This model was contributed by [Alexander Brooks](https://huggingface.co/abrooks9944), [Avihu Dekel](https://huggingface.co/Avihu), and [George Saon](https://huggingface.co/gsaon).

## Usage tips
- This model bundles its own LoRA adapter, which will be automatically loaded and enabled/disabled as needed during inference calls. Be sure to install [PEFT](https://github.com/huggingface/peft) to ensure the LoRA is correctly applied! A hypothetical inference sketch is shown below.

<!-- TODO (@alex-jw-brooks) Add an example here once the model compatible with the transformers implementation is released -->
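
Until the official example referenced in the TODO above is added, the snippet below is only a hypothetical sketch of what inference might look like, assuming the standard transformers processor-plus-`generate` pattern. The checkpoint id, class names, prompt format, and processor call signature are all assumptions rather than a documented API.

```python
# Hypothetical inference sketch -- the checkpoint id, class names, prompt
# format, and processor call signature below are assumptions, not a
# documented API. PEFT must be installed so the bundled LoRA can be applied.
import torch
from transformers import AutoProcessor, GraniteSpeechForConditionalGeneration

model_id = "ibm-granite/granite-speech-3.2"  # placeholder checkpoint id (assumed)

processor = AutoProcessor.from_pretrained(model_id)
model = GraniteSpeechForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A 16 kHz mono waveform; in practice this would be loaded with e.g. torchaudio.
waveform = torch.randn(1, 16000 * 4)

# Assumed prompt convention: an audio placeholder token followed by the instruction.
prompt = "<|audio|>can you transcribe the speech into written text?"

inputs = processor(text=prompt, audio=waveform, return_tensors="pt").to(model.device)

# Because audio features are present, the modality-specific LoRA is enabled
# automatically for this call; a text-only prompt would leave it disabled.
output_ids = model.generate(**inputs, max_new_tokens=200)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

As the usage tip above notes, the LoRA adapter is toggled automatically inside the call, so no explicit PEFT setup is needed beyond having the library installed.
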
## GraniteSpeechConfig
