
AssemblyAI/assemblyai-haystack


AssemblyAITranscriber

This custom component integrates AssemblyAI with Haystack (2.x), an open-source Python framework for building custom LLM applications. It connects to the AssemblyAI API and extends Haystack's capabilities with audio intelligence features.

The AssemblyAITranscriber goes beyond simple audio transcription; it also offers summarization and speaker diarization. This lets you not only convert audio to text but also obtain concise summaries and identify who is speaking in the conversation. To use AssemblyAITranscriber, pass your ASSEMBLYAI_API_KEY as an argument when adding the component (see the usage example below).


Installation

First, install the assemblyai-haystack Python package.

pip install assemblyai-haystack

This package installs and uses the AssemblyAI Python SDK. You can find more info about the SDK at the assemblyai-python-sdk GitHub repo.

Usage

The AssemblyAITranscriber needs to be initialized with the AssemblyAI API key. The run method requires at least the file_path argument; audio files can be specified as a URL or a local file path. You can also request summarization and speaker diarization results in the same run call.

import os

from assemblyai_haystack.transcriber import AssemblyAITranscriber
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack import Pipeline
from haystack.components.writers import DocumentWriter

ASSEMBLYAI_API_KEY = os.environ.get("ASSEMBLYAI_API_KEY")

## Use AssemblyAITranscriber in a pipeline
document_store = InMemoryDocumentStore()
file_url = "https://github.com/AssemblyAI-Examples/audio-examples/raw/main/20230607_me_canadian_wildfires.mp3"

indexing = Pipeline()
indexing.add_component("transcriber", AssemblyAITranscriber(api_key=ASSEMBLYAI_API_KEY))
indexing.add_component("writer", DocumentWriter(document_store))
indexing.connect("transcriber.transcription", "writer.documents")
indexing.run(
    {
        "transcriber": {
            "file_path": file_url,
            "summarization": None,  # set to True to request a summary
            "speaker_labels": None,  # set to True to request speaker diarization
        }
    }
)

print("Indexed Document Count:", document_store.count_documents())

Note: Calling indexing.run() blocks until the transcription is finished.

The results of transcription, summarization, and speaker diarization are returned in separate document lists:

  • transcription
  • summarization
  • speaker_labels
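For illustration only, the component's output dictionary has roughly the shape sketched below. The placeholder strings stand in for the Haystack Document objects actually returned; the keys match the list names above.

```python
# Illustrative only: placeholder strings stand in for the Haystack Document
# objects that AssemblyAITranscriber actually returns.
result = {
    "transcription": ["<Document: full transcript text>"],
    "summarization": ["<Document: summary text>"],
    "speaker_labels": ["<Document: one entry per speaker utterance>"],
}

# Each feature's output lives under its own key in the run() result
for key in ("transcription", "summarization", "speaker_labels"):
    print(key, len(result[key]))
```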

The metadata of the transcription document contains the transcription ID and the URL of the uploaded audio file.

{
   "transcript_id":"73089e32-...-4ae9-97a4-eca7fe20a8b1",
   "audio_url":"https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
}
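Assuming a metadata dict shaped like the JSON above (the values here are placeholders copied from that example, not real API output), the fields can be read directly:

```python
# Placeholder values mirroring the metadata structure shown above
meta = {
    "transcript_id": "73089e32-...-4ae9-97a4-eca7fe20a8b1",
    "audio_url": "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
}

transcript_id = meta["transcript_id"]
audio_url = meta["audio_url"]
print(transcript_id, audio_url)
```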