107 changes: 94 additions & 13 deletions .github/workflows/build.yml
@@ -7,17 +7,43 @@ on:

jobs:
build-and-push:
runs-on: ubuntu-latest-8-cores
strategy:
fail-fast: false
matrix:
include:
# AMD64 builds on x86 runners
- dockerfile: ./docker/app.Dockerfile
image: ghcr.io/simstudioai/simstudio
platform: linux/amd64
arch: amd64
runner: linux-x64-8-core
- dockerfile: ./docker/db.Dockerfile
image: ghcr.io/simstudioai/migrations
platform: linux/amd64
arch: amd64
runner: linux-x64-8-core
- dockerfile: ./docker/realtime.Dockerfile
image: ghcr.io/simstudioai/realtime
platform: linux/amd64
arch: amd64
runner: linux-x64-8-core
# ARM64 builds on native ARM64 runners
- dockerfile: ./docker/app.Dockerfile
image: ghcr.io/simstudioai/simstudio
platform: linux/arm64
arch: arm64
runner: linux-arm64-8-core
- dockerfile: ./docker/db.Dockerfile
image: ghcr.io/simstudioai/migrations
platform: linux/arm64
arch: arm64
runner: linux-arm64-8-core
- dockerfile: ./docker/realtime.Dockerfile
image: ghcr.io/simstudioai/realtime
platform: linux/arm64
arch: arm64
runner: linux-arm64-8-core
runs-on: ${{ matrix.runner }}
permissions:
contents: read
packages: write
@@ -26,9 +52,6 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v4

- name: Set up QEMU
uses: docker/setup-qemu-action@v3

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

@@ -46,21 +69,79 @@ jobs:
with:
images: ${{ matrix.image }}
tags: |
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}.{{minor}}.{{patch}}
type=sha,format=long
type=raw,value=latest-${{ matrix.arch }},enable=${{ github.ref == 'refs/heads/main' }}
type=ref,event=pr,suffix=-${{ matrix.arch }}
type=semver,pattern={{version}},suffix=-${{ matrix.arch }}
type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.arch }}
type=semver,pattern={{major}}.{{minor}}.{{patch}},suffix=-${{ matrix.arch }}
type=sha,format=long,suffix=-${{ matrix.arch }}

- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: linux/amd64,linux/arm64
platforms: ${{ matrix.platform }}
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: type=gha,scope=build-v2
cache-to: type=gha,mode=max,scope=build-v2

create-manifests:
runs-on: ubuntu-latest
needs: build-and-push
if: github.event_name != 'pull_request'
strategy:
matrix:
include:
- image: ghcr.io/simstudioai/simstudio
- image: ghcr.io/simstudioai/migrations
- image: ghcr.io/simstudioai/realtime
permissions:
contents: read
packages: write

steps:
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Extract metadata for manifest
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ matrix.image }}
tags: |
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}.{{minor}}.{{patch}}
type=sha,format=long

- name: Create and push manifest
run: |
# Extract the tags from metadata
TAGS="${{ steps.meta.outputs.tags }}"

# Create manifest for each tag
for tag in $TAGS; do
echo "Creating manifest for $tag"
echo "Looking for images: ${tag}-amd64 and ${tag}-arm64"

# Check if both architecture images exist before creating manifest
if docker manifest inspect ${tag}-amd64 >/dev/null 2>&1 && docker manifest inspect ${tag}-arm64 >/dev/null 2>&1; then
docker manifest create $tag \
${tag}-amd64 \
${tag}-arm64
docker manifest push $tag
echo "Successfully created and pushed manifest for $tag"
else
echo "Warning: One or both architecture images not found for $tag"
echo "Skipping manifest creation for this tag"
fi
done
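
For context: each matrix job above pushes a single-architecture image under an arch-suffixed tag (for example `latest-amd64` and `latest-arm64` on `main`), and the `create-manifests` job then combines the pair into one multi-arch tag. A minimal sketch of the equivalent manual flow for one tag, using `ghcr.io/simstudioai/simstudio:latest` as an example (real tag names come from `docker/metadata-action`):

```sh
# Illustration only: what the create-manifests loop does for a single tag.
# Assumes both single-arch images were already pushed by the build matrix:
#   ghcr.io/simstudioai/simstudio:latest-amd64   (linux/amd64)
#   ghcr.io/simstudioai/simstudio:latest-arm64   (linux/arm64)

# Combine the two single-arch images into one manifest list under the plain tag.
docker manifest create ghcr.io/simstudioai/simstudio:latest \
  ghcr.io/simstudioai/simstudio:latest-amd64 \
  ghcr.io/simstudioai/simstudio:latest-arm64

# Publish the manifest list.
docker manifest push ghcr.io/simstudioai/simstudio:latest

# Consumers keep pulling the plain tag; Docker resolves the entry that
# matches their platform (amd64 or arm64) from the manifest list.
docker pull ghcr.io/simstudioai/simstudio:latest
```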
30 changes: 30 additions & 0 deletions apps/docs/components/ui/video.tsx
@@ -0,0 +1,30 @@
import { getVideoUrl } from '@/lib/utils'

interface VideoProps {
src: string
className?: string
autoPlay?: boolean
loop?: boolean
muted?: boolean
playsInline?: boolean
}

export function Video({
src,
className = 'w-full -mb-2 rounded-lg',
autoPlay = true,
loop = true,
muted = true,
playsInline = true,
}: VideoProps) {
return (
<video
autoPlay={autoPlay}
loop={loop}
muted={muted}
playsInline={playsInline}
className={className}
src={getVideoUrl(src)}
/>
)
}
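
`getVideoUrl` is imported from `@/lib/utils` and its implementation is not part of this diff. A minimal sketch of what such a helper could look like, assuming the docs videos are served from a single configurable base URL (the `NEXT_PUBLIC_VIDEO_BASE_URL` variable below is hypothetical, not taken from the repository):

```ts
// Hypothetical sketch only; the real getVideoUrl lives in @/lib/utils and may differ.
// Assumption: docs videos sit under one base URL (e.g. a CDN bucket), which would
// explain why call sites now pass "models.mp4" rather than "/models.mp4".
export function getVideoUrl(path: string): string {
  const base = process.env.NEXT_PUBLIC_VIDEO_BASE_URL ?? '' // hypothetical env var
  const cleaned = path.replace(/^\/+/, '') // accept "models.mp4" or "/models.mp4"
  return base ? `${base.replace(/\/+$/, '')}/${cleaned}` : `/${cleaned}`
}
```

Usage then matches the component above, for example `<Video src="connections.mp4" />`.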
3 changes: 2 additions & 1 deletion apps/docs/content/docs/blocks/evaluator.mdx
@@ -7,6 +7,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
import { Video } from '@/components/ui/video'

The Evaluator block uses AI to score and assess content quality based on metrics you define. Perfect for quality control, A/B testing, and ensuring your AI outputs meet specific standards.

@@ -63,7 +64,7 @@ Choose an AI model to perform the evaluation:
**Local Models**: Any model running on Ollama

<div className="w-full max-w-2xl mx-auto overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/models.mp4"></video>
<Video src="models.mp4" />
</div>

**Recommendation**: Use models with strong reasoning capabilities like GPT-4o or Claude 3.7 Sonnet for more accurate evaluations.
5 changes: 3 additions & 2 deletions apps/docs/content/docs/blocks/index.mdx
@@ -7,11 +7,12 @@ import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { BlockTypes } from '@/components/ui/block-types'
import { Video } from '@/components/ui/video'

Blocks are the building components you connect together to create AI workflows. Think of them as specialized modules that each handle a specific task—from chatting with AI models to making API calls or processing data.

<div className="w-full max-w-2xl mx-auto overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/connections.mp4"></video>
<Video src="connections.mp4" />
</div>

## Core Block Types
@@ -62,7 +63,7 @@ You create workflows by connecting blocks together. The output of one block beco
- **Branching paths**: Some blocks can route to different paths based on conditions

<div className="w-full max-w-2xl mx-auto overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/connections.mp4"></video>
<Video src="connections.mp4" />
</div>

## Common Patterns
3 changes: 2 additions & 1 deletion apps/docs/content/docs/blocks/router.mdx
@@ -8,6 +8,7 @@ import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Accordion, Accordions } from 'fumadocs-ui/components/accordion'
import { ThemeImage } from '@/components/ui/theme-image'
import { Video } from '@/components/ui/video'

The Router block uses AI to intelligently decide which path your workflow should take next. Unlike Condition blocks that use simple rules, Router blocks can understand context and make smart routing decisions based on content analysis.

@@ -103,7 +104,7 @@ Choose an AI model to power the routing decision:
**Local Models**: Any model running on Ollama

<div className="w-full max-w-2xl mx-auto overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/router-model-dropdown.mp4"></video>
<Video src="router-model-dropdown.mp4" />
</div>

**Recommendation**: Use models with strong reasoning capabilities like GPT-4o or Claude 3.7 Sonnet for more accurate routing decisions.
3 changes: 2 additions & 1 deletion apps/docs/content/docs/connections/index.mdx
@@ -6,6 +6,7 @@ description: Connect your blocks to one another.
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { ConnectIcon } from '@/components/icons'
import { Video } from '@/components/ui/video'

Connections are the pathways that allow data to flow between blocks in your workflow. They define how information is passed from one block to another, enabling you to create sophisticated, multi-step processes.

@@ -15,7 +16,7 @@ Connections are the pathways that allow data to flow between blocks in your work
</Callout>

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/connections.mp4"></video>
<Video src="connections.mp4" />
</div>

## Connection Types
3 changes: 2 additions & 1 deletion apps/docs/content/docs/connections/tags.mdx
@@ -4,11 +4,12 @@ description: Using connection tags to reference data between blocks
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Video } from '@/components/ui/video'

Connection tags are visual representations of the data available from connected blocks. They provide an easy way to reference outputs from previous blocks in your workflow.

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/connections.mp4"></video>
<Video src="connections.mp4" />
</div>

### What Are Connection Tags?
11 changes: 6 additions & 5 deletions apps/docs/content/docs/execution/basics.mdx
@@ -20,6 +20,7 @@ import {
LoopIcon,
ParallelIcon,
} from '@/components/icons'
import { Video } from '@/components/ui/video'

When you run a workflow in Sim Studio, the execution engine follows a systematic process to ensure blocks are executed in the correct order with proper data flow.

@@ -162,7 +163,7 @@ Run workflows on-demand through the Sim Studio interface by clicking the "Run" b
- Workflows that need human supervision

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/input-format.mp4"></video>
<Video src="input-format.mp4" />
</div>

### Scheduled Execution
@@ -175,7 +176,7 @@ Configure workflows to run automatically on a specified schedule:
- Set minimum and maximum execution intervals

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/configure-schedule.mp4"></video>
<Video src="configure-schedule.mp4" />
</div>

### API Endpoints
@@ -188,15 +189,15 @@ Each workflow can be exposed as an API endpoint:
- Receive execution results as JSON responses

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/api-deployment.mp4"></video>
<Video src="api-deployment.mp4" />
</div>

#### Viewing Deployed APIs

Monitor your deployed workflow APIs and their current state:

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/api-redeployment.mp4"></video>
<Video src="api-redeployment.mp4" />
</div>

This shows how to view the deployed state and compare with the original deployed API configuration.
@@ -211,7 +212,7 @@ Configure workflows to execute in response to external events:
- Support for specialized webhooks (GitHub, Stripe, etc.)

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/webhooks.mp4"></video>
<Video src="webhooks.mp4" />
</div>

<Callout type="info">
11 changes: 6 additions & 5 deletions apps/docs/content/docs/getting-started/index.mdx
@@ -23,6 +23,7 @@ import {
PerplexityIcon,
SlackIcon,
} from '@/components/icons'
import { Video } from '@/components/ui/video'

This tutorial will guide you through building your first AI workflow in Sim Studio. We'll create a people research agent that can find information about individuals using state-of-the-art LLM-Search tools.

@@ -63,7 +64,7 @@ A people research agent that:
- **User Prompt**: Drag the connection from the Start block's output into this field (this connects `<start.input>` to the user prompt)

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/static/examples/started/started-2.mp4"></video>
<Video src="examples/started-2.mp4" />
</div>
</Step>

@@ -77,7 +78,7 @@ A people research agent that:
- Add your API keys for both tools (this allows the agent to search the web and access additional information)

<div className="mx-auto w-3/5 overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/static/examples/started/started-3.mp4"></video>
<Video src="examples/started-3.mp4" />
</div>
</Step>

@@ -92,7 +93,7 @@ A people research agent that:
You should see the agent's response analyzing the person described in your text.

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/static/examples/started/started-4.mp4"></video>
<Video src="examples/started-4.mp4" />
</div>
</Step>

@@ -105,7 +106,7 @@ A people research agent that:
- The AI will generate a JSON schema for you automatically

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/static/examples/started/started-5.mp4"></video>
<Video src="examples/started-5.mp4" />
</div>
</Step>

@@ -120,7 +121,7 @@ A people research agent that:
You should now see structured JSON output with the person's information organized into location, profession, and education fields.

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/static/examples/started/started-6.mp4"></video>
<Video src="examples/started-6.mp4" />
</div>
</Step>
</Steps>
3 changes: 2 additions & 1 deletion apps/docs/content/docs/triggers/webhook.mdx
@@ -6,11 +6,12 @@ description: Trigger workflow execution from external webhooks
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
import { Video } from '@/components/ui/video'

The Webhook block allows external services to automatically trigger your workflow execution through HTTP webhooks.

<div className="mx-auto w-full overflow-hidden rounded-lg">
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/webhooks.mp4"></video>
<Video src="webhooks.mp4" />
</div>

## Supported Providers