Recall.ai #133
8 changes: 8 additions & 0 deletions community/recall/.env.Example
@@ -0,0 +1,8 @@
RECALL_API_KEY=<your-recall-api-key>
GEMINI_API_KEY=<your-gemini-api-key>
# Restack Cloud (Optional)

# RESTACK_ENGINE_ID=<your-engine-id>
# RESTACK_ENGINE_API_KEY=<your-engine-api-key>
# RESTACK_ENGINE_ADDRESS=<your-engine-address>
# RESTACK_ENGINE_API_ADDRESS=<your-engine-api-address>
6 changes: 6 additions & 0 deletions community/recall/.gitignore
@@ -0,0 +1,6 @@
__pycache__
.pytest_cache
venv
.env
.vscode
poetry.lock
22 changes: 22 additions & 0 deletions community/recall/Dockerfile
@@ -0,0 +1,22 @@
FROM python:3.12-slim

WORKDIR /app

# Update package lists; add system packages here if any are needed
RUN apt-get update && rm -rf /var/lib/apt/lists/*

RUN pip install poetry

# Copy application code (pyproject.toml included)
COPY . .

# Configure poetry to not create a virtual environment
RUN poetry config virtualenvs.create false

# Install dependencies
RUN poetry install --no-interaction --no-ansi

# Expose port 80
EXPOSE 80

CMD ["poetry", "run", "python", "-m", "src.services"]
162 changes: 162 additions & 0 deletions community/recall/README.md
@@ -0,0 +1,162 @@
# Restack AI - Recall Example

This repository demonstrates how to build a production-ready AI backend using [Restack](https://docs.restack.io) and [Recall](https://docs.recall.ai). It combines Recall’s universal API for capturing meeting data in real time with Restack’s framework for building resilient AI workflows that handle concurrency, retries, and scheduling at scale.

## Overview

This example shows how to reliably scale workflows on a local machine, capturing meeting audio, video, and metadata via Recall, then processing it using Restack. You can define concurrency limits, automatically retry failed steps, and focus on building robust logic without managing manual locks or queues.

## Walkthrough Video


## Motivation

When building AI meeting-related workflows, you want to handle real-time data ingestion (Recall) along with safe, scalable processing (Restack). Restack ensures steps that call LLM APIs or other services adhere to concurrency constraints, automatically queueing and retrying operations to maintain reliability.

### Workflow Steps

Below is an example of 50 workflows running in parallel, each using Recall data and calling LLM functions that are rate-limited to one concurrent call per second.

| Step | Workflow 1 | Workflow 2 | ... | Workflow 50 |
| ---- | ---------- | ---------- | --- | ----------- |
| 1 | Recall | Recall | ... | Recall |
| 2 | Recall | Recall | ... | Recall |
| 3 | LLM | LLM | ... | LLM |

### Rate Limit Management

When processing data from Recall in parallel, you might rely on LLM or other external services. Managing concurrency is crucial:

1. **Task Queue**: Traditional approach using Celery or RabbitMQ.
2. **Rate Limiting Middleware**: Custom logic to hold requests in a queue.
3. **Semaphore or Locking**: Single shared lock to ensure serial processing.

### With Restack

Restack automates rate-limit management and concurrency controls:

```python
client.start_service(
    task_queue="llm",
    functions=[llm_generate, llm_evaluate],
    options=ServiceOptions(
        rate_limit=1,
        max_concurrent_function_runs=1,
    ),
)
```

Combine your Recall steps (fetch meeting transcripts, metadata, etc.) with LLM calls, and Restack ensures each step is handled in order without manual synchronization.

## On Restack UI

You can see how long each workflow or step stayed in the queue and the execution details:

![Parent Workflow](./ui-parent.png)

For each child workflow, you can see how many retries occurred and how long each function took to execute:

![Child Workflow](./ui-child.png)

## Prerequisites

- Python 3.10 or higher
- Poetry (for dependency management)
- Docker (for running Restack)
- Recall account and API key
- (Optional) Gemini LLM API key

## Prepare Environment

Create a `.env` file from `.env.Example`:

```
RECALL_API_KEY=<your-recall-api-key>
GEMINI_API_KEY=<your-gemini-api-key>
...
```

## Start Restack

```bash
docker run -d --pull always --name restack -p 5233:5233 -p 6233:6233 -p 7233:7233 ghcr.io/restackio/restack:main
```

## Start Python Shell

```bash
poetry env use 3.10 && poetry shell
```

## Install Dependencies

```bash
poetry install
poetry env info
```

## Development

```bash
poetry run dev
```

This will start the Restack services locally, using your configured environment.

## Run Workflows

### From UI

Access http://localhost:5233 to see your workflows. Click “Run” to start them.

![Run workflows from UI](./ui-endpoints.png)

### From API

Use the generated endpoints for your workflows:

`POST http://localhost:6233/api/workflows/ChildWorkflow`

or

`POST http://localhost:6233/api/workflows/ExampleWorkflow`

### From CLI

```bash
poetry run workflow
```

Triggers `ChildWorkflow`.

```bash
poetry run scale
```

Triggers `ExampleWorkflow` 50 times in parallel.
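The fan-out behind `scale` can be sketched with plain asyncio. In this sketch `schedule_one` is a hypothetical stand-in for the `client.schedule_workflow` call shown in `schedule_workflow.py`, so the pattern runs without a Restack engine:

```python
import asyncio
import time

# Hypothetical stand-in for client.schedule_workflow; the real project
# would call the Restack client as in schedule_workflow.py
async def schedule_one(index: int) -> str:
    workflow_id = f"{int(time.time() * 1000)}-{index}-ExampleWorkflow"
    await asyncio.sleep(0)  # yield, simulating the scheduling round-trip
    return workflow_id

async def schedule_all(count: int = 50) -> list[str]:
    # Fan out: schedule all workflows concurrently and collect their IDs
    return await asyncio.gather(*(schedule_one(i) for i in range(count)))

if __name__ == "__main__":
    ids = asyncio.run(schedule_all())
    print(f"scheduled {len(ids)} workflows")
```

Restack then enforces the service-level rate limits regardless of how many workflows are scheduled at once.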

```bash
poetry run interval
```

Schedules `ChildWorkflow` every second.

## Deploy on Restack Cloud

Create an account at [https://console.restack.io](https://console.restack.io). You can deploy your workflows to Restack Cloud for automated scaling and monitoring.

## Project Structure

- `src/`
- `client.py`: Initializes Restack client
- `functions/`: Contains function definitions
- `workflows/`: Contains workflow definitions (including steps that leverage Recall data)
- `services.py`: Sets up Restack services
- `schedule_workflow.py`: Scheduling a single workflow
- `schedule_interval.py`: Scheduling a workflow repeatedly
- `schedule_scale.py`: Scheduling 50 workflows at once
- `.env.Example`: Environment variable template for Recall and Gemini keys

## Conclusion

With Recall providing real-time meeting data and Restack handling durable, concurrent workflows, you can build a powerful AI-backed system for processing, summarizing, and analyzing meetings at scale. This setup dramatically reduces operational overhead, allowing you to focus on delivering meaningful product features without worrying about rate limits or concurrency.
34 changes: 34 additions & 0 deletions community/recall/pyproject.toml
@@ -0,0 +1,34 @@
# Project metadata
[tool.poetry]
name = "community_recall"
version = "0.0.1"
description = "A simple example to show how to build a resilient backend with Recall to transcribe meetings"
authors = [
    "Restack Team <service@restack.io>",
]
readme = "README.md"
packages = [{include = "src"}]

[tool.poetry.dependencies]
python = ">=3.10,<4.0"
restack-ai = "^0.0.52"
watchfiles = "^1.0.0"
google-generativeai = "0.8.3"
pydantic = "^2.10.5"
requests = "^2.32.3"

[tool.poetry.dev-dependencies]
pytest = "^6.2" # Optional: add if you want to include tests in your example

# Build system configuration
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

# CLI command configuration
[tool.poetry.scripts]
dev = "src.services:watch_services"
services = "src.services:run_services"
workflow = "schedule_workflow:run_schedule_workflow"
interval = "schedule_interval:run_schedule_interval"
scale = "schedule_scale:run_schedule_scale"
26 changes: 26 additions & 0 deletions community/recall/schedule_workflow.py
@@ -0,0 +1,26 @@
import asyncio
import time

from restack_ai import Restack

async def main():
    client = Restack()

    workflow_id = f"{int(time.time() * 1000)}-ChildWorkflow"
    run_id = await client.schedule_workflow(
        workflow_name="ChildWorkflow",
        workflow_id=workflow_id
    )

    await client.get_workflow_result(
        workflow_id=workflow_id,
        run_id=run_id
    )

def run_schedule_workflow():
    asyncio.run(main())

if __name__ == "__main__":
    run_schedule_workflow()
Empty file.
21 changes: 21 additions & 0 deletions community/recall/src/client.py
@@ -0,0 +1,21 @@
import os
from restack_ai import Restack
from restack_ai.restack import CloudConnectionOptions
from dotenv import load_dotenv

# Load environment variables from a .env file
load_dotenv()


engine_id = os.getenv("RESTACK_ENGINE_ID")
address = os.getenv("RESTACK_ENGINE_ADDRESS")
api_key = os.getenv("RESTACK_ENGINE_API_KEY")
api_address = os.getenv("RESTACK_ENGINE_API_ADDRESS")

connection_options = CloudConnectionOptions(
    engine_id=engine_id,
    address=address,
    api_key=api_key,
    api_address=api_address
)
client = Restack(connection_options)
Empty file.
43 changes: 43 additions & 0 deletions community/recall/src/functions/create_meet_bot.py
@@ -0,0 +1,43 @@
import os
from typing import Optional

import requests
from pydantic import BaseModel
from restack_ai.function import function, FunctionFailure, log

class CreateMeetBotInput(BaseModel):
    meeting_url: str = "https://meet.google.com/jgv-jvev-jhe"
    bot_name: Optional[str] = "Recall Bot"
    transcription_options: Optional[dict] = {"provider": "meeting_captions"}

@function.defn()
async def create_meet_bot(input: CreateMeetBotInput) -> dict:
    try:
        headers = {
            "Authorization": f"Token {os.getenv('RECALL_API_KEY')}",
            "Content-Type": "application/json"
        }

        payload = {
            "meeting_url": input.meeting_url,
            "transcription_options": input.transcription_options,
            "bot_name": input.bot_name,
            "google_meet": {
                "login_required": False
            }
        }

        response = requests.post(
            "https://us-west-2.recall.ai/api/v1/bot",
            headers=headers,
            json=payload,
            timeout=30
        )

        response.raise_for_status()
        return response.json()

    except requests.exceptions.RequestException as e:
        log.error(f"Failed to create meet bot: {e}")
        raise FunctionFailure(f"Failed to create meet bot: {e}", non_retryable=True) from e
    except Exception as e:
        log.error(f"Unexpected error creating meet bot: {e}")
        raise FunctionFailure(f"Unexpected error: {e}", non_retryable=True) from e
31 changes: 31 additions & 0 deletions community/recall/src/functions/get_bot_transcript.py
@@ -0,0 +1,31 @@
import os

import requests
from pydantic import BaseModel
from restack_ai.function import function, FunctionFailure, log

class GetBotTranscriptInput(BaseModel):
    bot_id: str

@function.defn()
async def get_bot_transcript(input: GetBotTranscriptInput) -> dict:
    try:
        headers = {
            "Authorization": f"Token {os.getenv('RECALL_API_KEY')}",
            "Content-Type": "application/json"
        }

        response = requests.get(
            f"https://us-west-2.recall.ai/api/v1/bot/{input.bot_id}/transcript/",
            headers=headers,
            timeout=30
        )

        response.raise_for_status()
        return {"segments": response.json()}

    except requests.exceptions.RequestException as e:
        log.error(f"Failed to get bot transcript: {e}")
        raise FunctionFailure(f"Failed to get bot transcript: {e}", non_retryable=True) from e
    except Exception as e:
        log.error(f"Unexpected error getting bot transcript: {e}")
        raise FunctionFailure(f"Unexpected error: {e}", non_retryable=True) from e