diff --git a/docs/docs/docker-compose-setup.md b/docs/docs/docker-compose-setup.md index d6119a34..cbae1f48 100644 --- a/docs/docs/docker-compose-setup.md +++ b/docs/docs/docker-compose-setup.md @@ -119,6 +119,16 @@ After running the Docker Compose command: - **MongoDB** (if using with-mongodb): `mongodb://localhost:27017` (not HTTP - use MongoDB clients like MongoDB Compass or mongosh) - **API Documentation**: `http://localhost:8000/docs` +## Next Steps + +Now that your Exosphere services are running, continue with these guides: + +- **[Create a Node](./exosphere/register-node.md)** – Learn how to define and register your own node. +- **[Create a Runtime](./exosphere/create-runtime.md)** – Set up and configure your runtime environment. +- **[Create a Graph](./exosphere/create-graph.md)** – Build workflows by connecting nodes together. +- **[Trigger a Graph](./exosphere/trigger-graph.md)** – Execute your workflows and monitor their progress. + + ## Development Commands === "Cloud Mongodb" @@ -257,6 +267,7 @@ You can validate your docker-compose configuration before starting services: | `NEXT_PUBLIC_DEFAULT_NAMESPACE` | Default namespace for workflows | `default` | > **🔒 Security Note**: The dashboard now uses **Server-Side Rendering (SSR)** for enhanced security: +> > - **API keys are never exposed** to the browser > - **All API calls go through** secure server-side routes > - **Production-ready security** architecture @@ -458,27 +469,6 @@ alias 'docker compose'='docker-compose' 4. **SDK connection issues**: Make sure `EXOSPHERE_STATE_MANAGER_URI` points to the correct URL and `EXOSPHERE_API_KEY` matches your `STATE_MANAGER_SECRET`. The `EXOSPHERE_API_KEY` value is checked for equality with the `STATE_MANAGER_SECRET` value when making API requests. -## Next Steps - -Once your Exosphere instance is running: - -1. 
**Set up your SDK environment variables**: - ```bash - export EXOSPHERE_STATE_MANAGER_URI=http://localhost:8000 - export EXOSPHERE_API_KEY=exosphere@123 - ``` - -2. **Install the Python SDK**: - ```bash - uv add exospherehost - ``` - -3. **Create your first workflow** following the [Getting Started Guide](https://docs.exosphere.host/getting-started) - -4. **Explore the dashboard** at `http://localhost:3000` - -5. **Check out the API documentation** at `http://localhost:8000/docs` - ## Support - [Documentation](https://docs.exosphere.host) diff --git a/docs/docs/exosphere/api-changes.md b/docs/docs/exosphere/api-changes.md deleted file mode 100644 index da33fb12..00000000 --- a/docs/docs/exosphere/api-changes.md +++ /dev/null @@ -1,189 +0,0 @@ -# API Changes (Beta) - -This document outlines the latest beta API changes and enhancements in ExosphereHost. - -## StateManager.upsert_graph() - Model-Based Parameters (Beta) - -The `upsert_graph` method now supports model-based parameters for improved type safety, validation, and developer experience. - -### New Signature - -```python -async def upsert_graph( - self, - graph_name: str, - graph_nodes: list[GraphNodeModel], - secrets: dict[str, str], - retry_policy: RetryPolicyModel | None = None, - store_config: StoreConfigModel | None = None, - validation_timeout: int = 60, - polling_interval: int = 1 -): -``` - -### Key Changes - -1. **Model-Based Nodes**: `graph_nodes` parameter now expects a list of `GraphNodeModel` objects instead of raw dictionaries -2. **Retry Policy Model**: Optional `retry_policy` parameter using `RetryPolicyModel` with enum-based strategy selection -3. **Store Configuration**: Optional `store_config` parameter using `StoreConfigModel` for graph-level key-value store -4. 
**Validation Control**: New `validation_timeout` and `polling_interval` parameters for better control over graph validation - -### Migration Guide - -#### Before (Traditional) -```python -# Old dictionary-based approach -graph_nodes = [ - { - "node_name": "DataProcessor", - "namespace": "MyProject", - "identifier": "processor", - "inputs": {"data": "initial"}, - "next_nodes": [] - } -] - -retry_policy = { - "max_retries": 3, - "strategy": "EXPONENTIAL", - "backoff_factor": 2000 -} -``` - -#### After (Beta Model-Based) -```python -from exospherehost import GraphNodeModel, RetryPolicyModel, RetryStrategyEnum - -# New model-based approach -graph_nodes = [ - GraphNodeModel( - node_name="DataProcessor", - namespace="MyProject", - identifier="processor", - inputs={"data": "initial"}, - next_nodes=[] - ) -] - -retry_policy = RetryPolicyModel( - max_retries=3, - strategy=RetryStrategyEnum.EXPONENTIAL, # Use enum instead of string - backoff_factor=2000 -) -``` - -### Available Models - -#### GraphNodeModel - -- **node_name** (str): Class name of the node -- **namespace** (str): Namespace where node is registered -- **identifier** (str): Unique identifier in the graph -- **inputs** (dict[str, Any]): Input values for the node -- **next_nodes** (Optional[List[str]]): List of next node identifiers -- **unites** (Optional[UnitesModel]): Unite configuration for parallel execution - -#### RetryPolicyModel (Beta) - -- **max_retries** (int): Maximum number of retry attempts (default: 3) -- **strategy** (RetryStrategyEnum): Retry strategy using enum values (default: EXPONENTIAL) -- **backoff_factor** (int): Base delay in milliseconds (default: 2000) -- **exponent** (int): Exponential multiplier (default: 2) -- **max_delay** (int | None): Maximum delay cap in milliseconds (optional) - -#### StoreConfigModel (Beta) - -- **required_keys** (list[str]): Keys that must be present in the store -- **default_values** (dict[str, str]): Default values for store keys - -### Retry Strategy Enums 
- -- `RetryStrategyEnum.EXPONENTIAL`: Pure exponential backoff -- `RetryStrategyEnum.EXPONENTIAL_FULL_JITTER`: Exponential with full randomization -- `RetryStrategyEnum.EXPONENTIAL_EQUAL_JITTER`: Exponential with 50% randomization - -- `RetryStrategyEnum.LINEAR`: Linear backoff -- `RetryStrategyEnum.LINEAR_FULL_JITTER`: Linear with full randomization -- `RetryStrategyEnum.LINEAR_EQUAL_JITTER`: Linear with 50% randomization - -- `RetryStrategyEnum.FIXED`: Fixed delay -- `RetryStrategyEnum.FIXED_FULL_JITTER`: Fixed with full randomization -- `RetryStrategyEnum.FIXED_EQUAL_JITTER`: Fixed with 50% randomization - -### Complete Example - -```python -from exospherehost import ( - StateManager, - GraphNodeModel, - RetryPolicyModel, - StoreConfigModel, - RetryStrategyEnum -) - -async def create_advanced_graph(): - state_manager = StateManager(namespace="MyProject") - - # Define nodes using models - graph_nodes = [ - GraphNodeModel( - node_name="DataLoader", - namespace="MyProject", - identifier="loader", - inputs={"source": "initial"}, - next_nodes=["processor"] - ), - GraphNodeModel( - node_name="DataProcessor", - namespace="MyProject", - identifier="processor", - inputs={"data": "${{ loader.outputs.data }}"}, - next_nodes=[] - ) - ] - - # Define retry policy with enum - retry_policy = RetryPolicyModel( - max_retries=5, - strategy=RetryStrategyEnum.EXPONENTIAL_FULL_JITTER, - backoff_factor=1000, - exponent=2, - max_delay=30000 - ) - - # Define store configuration - store_config = StoreConfigModel( - required_keys=["cursor", "batch_id"], - default_values={ - "cursor": "0", - "batch_size": "100" - } - ) - - # Create graph with all beta features - result = await state_manager.upsert_graph( - graph_name="advanced-workflow", - graph_nodes=graph_nodes, - secrets={"api_key": "your-key"}, - retry_policy=retry_policy, # beta - store_config=store_config, # beta - validation_timeout=120, - polling_interval=2 - ) - - return result -``` - -### Benefits - -1. 
**Type Safety**: Pydantic models catch configuration errors at definition time -2. **IDE Support**: Better autocomplete, error detection, and documentation -3. **Validation**: Automatic validation of parameters and relationships -4. **Consistency**: Standardized parameter names and types across the SDK -5. **Extensibility**: Easy to add new fields and maintain backward compatibility - -### Beta Status - -These features are currently in beta and the API may change based on user feedback. The traditional dictionary-based approach will continue to work alongside the new model-based approach. - -For questions or feedback about these beta features, please reach out through our [Discord community](https://discord.com/invite/zT92CAgvkj). \ No newline at end of file diff --git a/docs/docs/exosphere/architecture.md b/docs/docs/exosphere/architecture.md index 51b7838a..18087cea 100644 --- a/docs/docs/exosphere/architecture.md +++ b/docs/docs/exosphere/architecture.md @@ -13,6 +13,8 @@ Exosphere is built around a **state-based execution model** where workflows are - **Graph Templates**: Declarative workflow definitions - **States**: Individual execution units with inputs, outputs, and metadata +> **📚 Core Concepts**: For a comprehensive overview of Exosphere's unique features, see **[Exosphere Concepts](./concepts.md)**. + ## State Execution Model ### State Lifecycle @@ -29,150 +31,49 @@ CREATED → QUEUED → EXECUTED/ERRORED → SUCCESS 4. **ERRORED**: State failed during execution 5. **SUCCESS**: Workflow/branch-level success once all dependent states complete +### State Execution Flow -## Fanout Mechanism - -### Single vs Multiple Outputs - -Exosphere supports two execution patterns: - -1. **Single Output**: A state produces one output and continues to the next stage -2.
**Multiple Outputs (Fanout)**: A state produces multiple outputs, creating parallel execution paths +The following diagram illustrates how states flow through the execution system using the actual StateStatusEnum values: +```mermaid +stateDiagram-v2 + [*] --> CREATED : Graph Triggered and Dependencies Met + + CREATED --> QUEUED : Runtime picked task + + QUEUED --> EXECUTED : Runtime Executes + QUEUED --> ERRORED : Runtime Fails + QUEUED --> PRUNED : State pruned + + EXECUTED --> SUCCESS : Executed and children created + EXECUTED --> NEXT_CREATED_ERROR : Error on creating next state + + ERRORED --> RETRY_CREATED : Retry Policy Allows -### Fanout Example -Consider a data processing workflow: - -```python hl_lines="9-11" -class DataSplitterNode(BaseNode): - async def execute(self) -> list[Outputs]: - data = json.loads(self.inputs.data) - chunk_size = 100 - - outputs = [] - for i in range(0, len(data), chunk_size): - chunk = data[i:i + chunk_size] - outputs.append(self.Outputs( - chunk=json.dumps(chunk) - )) - - return outputs # This creates fanout on each output ``` -When this node executes: -1. **Original state** gets the first chunk as output -2. **Additional states** are created for each remaining chunk -3. **All states** are marked as EXECUTED -4. **Next stages** are created for each state independently - -**This enables parallel processing of data chunks across multiple runtime instances.** - -## Unites Keyword - -### Purpose - -The `unites` keyword is a powerful mechanism for **synchronizing parallel execution paths**. It allows a node to wait for multiple parallel states to complete before executing. - -### Unites Logic - -When a node has a `unites` configuration: - -1. **Execution is deferred** until all states with the specified identifier are complete -2. **State fingerprinting** ensures only one unites state is created per unique combination -3. 
**Dependency validation** ensures the unites node depends on the specified identifier - -### Unites Strategy (Beta) - -The `unites` keyword supports different strategies to control when the uniting node should execute. This feature is currently in **beta**. - -#### Available Strategies - -- **`ALL_SUCCESS`** (default): The uniting node executes only when all states with the specified identifier have reached `SUCCESS` status. If any state fails or is still processing, the uniting node will wait. - -- **`ALL_DONE`**: The uniting node executes when all states with the specified identifier have reached any terminal status (`SUCCESS`, `ERRORED`, `CANCELLED`, `NEXT_CREATED_ERROR`, or `PRUNED`). This strategy allows the uniting node to proceed even if some states have failed. - -#### Strategy Configuration - -You can specify the strategy in your unites configuration: - -```json hl_lines="22-25" -{ - "nodes": [ - { - "node_name": "DataSplitterNode", - "identifier": "data_splitter", - "next_nodes": ["processor_1"] - }, - { - "node_name": "DataProcessorNode", - "identifier": "processor_1", - "inputs":{ - "x":"${{data_splitter.outputs.data_chunk}}" - }, - "next_nodes": ["result_merger"] - }, - { - "node_name": "ResultMergerNode", - "identifier": "result_merger", - "inputs":{ - "x_processed":"${{processor_1.outputs.processed_data}}" - }, - "unites": { - "identifier": "data_splitter", - "strategy": "ALL_SUCCESS" - }, - "next_nodes": [] - } - ] -} -``` - -#### Use Cases - -- **`ALL_SUCCESS`**: Use when you need all parallel processes to complete successfully before proceeding. Ideal for data processing workflows where partial failures are not acceptable. **Caution**: This strategy can block indefinitely if any parallel branch never reaches a SUCCESS terminal state. Consider adding timeouts, explicit failure-to-success fallbacks, or using ALL_DONE when partial results are acceptable. Implement watchdogs or retry/timeout policies in workflows to prevent permanent blocking. 
- -- **`ALL_DONE`**: Use when you want to proceed with partial results or when you have error handling logic in the uniting node. Useful for scenarios where you want to aggregate results from successful processes while handling failures separately. - -### Unites Example - -```json hl_lines="22-24" -{ - "nodes": [ - { - "node_name": "DataSplitterNode", - "identifier": "data_splitter", - "next_nodes": ["processor_1"] - }, - { - "node_name": "DataProcessorNode", - "identifier": "processor_1", - "inputs":{ - "x":"${{data_splitter.outputs.data_chunk}}" - }, - "next_nodes": ["result_merger"] - }, - { - "node_name": "ResultMergerNode", - "identifier": "result_merger", - "inputs":{ - "x_processed":"${{processor_1.outputs.processed_data}}" - }, - "unites": { - "identifier": "data_splitter" - }, - "next_nodes": [] - } - ] -} +### Runtime Interaction + +States interact with runtimes through a pull-based model: + +```mermaid +sequenceDiagram + participant RT as Runtime + participant SM as State Manager + participant DB as Database + + RT->>SM: Request available states + SM->>DB: Query queued states + DB->>SM: Return eligible states + SM->>RT: Assign state for execution + RT->>RT: Execute state logic + RT->>SM: Report completion/failure + SM->>DB: Update state status + SM->>DB: Store outputs/errors + SM->>DB: Trigger dependent states ``` -In this example: -1. `data_splitter` creates fanout with 3 outputs -2. `processor_1` executes in parallel for all three data chunks -3. `result_merger` waits for all processors to complete (unites with `data_splitter`) -4. Only one `result_merger` state is created due to fingerprinting - ## Architecture Benefits ### Scalability @@ -200,4 +101,13 @@ In this example: - **Performance monitoring**: Track execution times and resource usage -Exosphere's architecture provides a robust foundation for building distributed, scalable workflows. 
The combination of state-based execution, fanout mechanisms, and the unites keyword enables complex parallel processing patterns while maintaining simplicity and reliability. \ No newline at end of file +Exosphere's architecture provides a robust foundation for building distributed, scalable workflows. The combination of state-based execution, fanout mechanisms, and the unites keyword enables complex parallel processing patterns while maintaining simplicity and reliability. + +## Next Steps + +- **[Exosphere Concepts](./concepts.md)** - Explore Exosphere's core concepts and unique features +- **[Fanout](./fanout.md)** - Learn about parallel execution and dynamic scaling +- **[Unite](./unite.md)** - Understand synchronization of parallel paths +- **[Signals](./signals.md)** - Control workflow execution flow +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution \ No newline at end of file diff --git a/docs/docs/exosphere/concepts.md b/docs/docs/exosphere/concepts.md new file mode 100644 index 00000000..6507e5b9 --- /dev/null +++ b/docs/docs/exosphere/concepts.md @@ -0,0 +1,83 @@ +# Exosphere Concepts + +Exosphere is built around several core concepts that make it unique in the workflow orchestration space. This page provides an overview of these key features and how they work together. + +## Core Concepts Overview + +Exosphere's architecture is designed around these fundamental concepts: + +```mermaid +graph TB + A[State-Based Execution] --> B[Fanout] + A --> C[Unite] + A --> D[Signals] + A --> E[Retry Policy] + A --> F[Store] + + B --> G[Parallel Processing] + C --> H[Synchronization] + D --> I[Flow Control] + E --> J[Resilience] + F --> K[Persistence] +``` + +## Unique Features + +### 1. 
**State-Based Execution Model** +- **Discrete States**: Each workflow step is a separate, independently executable state +- **Persistent State**: All states are stored in the database for reliability and recovery +- **Independent Execution**: States can be processed by any available runtime instance + +### 2. **Dynamic Fanout** +- **Runtime Fanout**: Nodes can produce multiple outputs during execution, creating parallel paths +- **Variable Parallelism**: The number of parallel executions is determined at runtime, not design time +- **Automatic Scaling**: More runtime instances automatically handle increased parallel load + +### 3. **Intelligent Unite** +- **Smart Synchronization**: The `unites` keyword synchronizes parallel execution paths +- **State Fingerprinting**: Prevents duplicate unite states for the same parallel branch +- **Flexible Strategies**: Choose between waiting for all success or all completion + +### 4. **Signals System** +- **Flow Control**: Nodes can control workflow execution by raising signals +- **Prune Signal**: Terminate execution branches when conditions aren't met +- **ReQueue Signal**: Schedule retries or polling with custom delays + +### 5. **Advanced Retry Policies** +- **Multiple Strategies**: Exponential, linear, and fixed backoff with jitter variants +- **Jitter Prevention**: Built-in jitter prevents thundering herd problems +- **Delay Capping**: Configurable maximum delays for predictable behavior + +### 6. **Persistent Store** +- **Graph-Level Storage**: Key-value store that persists across the entire workflow execution +- **Runtime Access**: All nodes can read and write to the shared store +- **Automatic Cleanup**: Store data is automatically cleaned up when workflows complete + +## How They Work Together + +These concepts combine to create a powerful workflow system: + +1. **State-based execution** provides the foundation for distributed, reliable processing +2. **Fanout** enables parallel processing and horizontal scaling +3. 
**Unite** synchronizes parallel paths and enables complex workflow patterns +4. **Signals** give nodes control over execution flow and error handling +5. **Retry policies** ensure resilience against transient failures +6. **Store** provides persistent state across workflow executions + +## Benefits + +- **Scalability**: Horizontal scaling with automatic load distribution +- **Reliability**: Persistent state management and automatic retry mechanisms +- **Flexibility**: Dynamic parallelism and runtime flow control +- **Observability**: Complete visibility into execution state and progress +- **Developer Experience**: Simple node-based API with powerful orchestration + +## Next Steps + +Explore each concept in detail: + +- **[Fanout](./fanout.md)** - Learn about parallel execution and dynamic scaling +- **[Unite](./unite.md)** - Understand synchronization of parallel paths +- **[Signals](./signals.md)** - Control workflow execution flow +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/create-graph.md b/docs/docs/exosphere/create-graph.md index b13eac9a..da497d12 100644 --- a/docs/docs/exosphere/create-graph.md +++ b/docs/docs/exosphere/create-graph.md @@ -1,6 +1,6 @@ # Create Graph -Graphs in Exosphere define executions by connecting nodes together. A graph template specifies the nodes, their connections, and how data flows between them. This guide shows you how to create and manage graph templates. +Graphs in Exosphere define executions by connecting nodes together. A graph template specifies the nodes, their connections, and how data flows between them. ## Graph Structure @@ -9,494 +9,82 @@ A graph template consists of: - **Nodes**: The processing units in your workflow with their inputs and next nodes - **Secrets**: Configuration data shared across nodes - **Input Mapping**: How data flows between nodes using `${{ ... 
}}` syntax -- **Retry Policy**: Optional failure handling configuration (beta) -- **Store Configuration**: Optional graph-level key-value store (beta) +- **Retry Policy**: `Optional` failure handling configuration +- **Store Configuration**: `Optional` graph-level key-value store -## Basic Graph Example - -One can define a graph on Exosphere through a simple json config, which specifies the nodes and their relationships on the graph. +## Basic Example of a Graph Template ```json { "secrets": { - "openai_api_key": "your-openai-key", - "database_url": "your-database-url" + "api_key": "your-api-key" }, "nodes": [ { "node_name": "DataLoaderNode", "namespace": "MyProject", - "identifier": "data_loader", - "inputs": { - "source": "initial", - "format": "json" - }, - "next_nodes": ["data_processor"] + "identifier": "loader", + "inputs": {"source": "initial"}, + "next_nodes": ["processor"] }, { "node_name": "DataProcessorNode", - "namespace": "MyProject", - "identifier": "data_processor", - "inputs": { - "raw_data": "${{ data_loader.outputs.processed_data }}", - "config": "initial" - }, - "next_nodes": ["data_validator"] - }, - { - "node_name": "DataValidatorNode", "namespace": "MyProject", - "identifier": "data_validator", - "inputs": { - "data": "${{ data_processor.outputs.processed_data }}", - "validation_rules": "initial" - }, + "identifier": "processor", + "inputs": {"data": "${{ loader.outputs.data }}"}, "next_nodes": [] } - ], - "retry_policy": { - "max_retries": 3, - "strategy": "EXPONENTIAL", - "backoff_factor": 2000, - "exponent": 2 - } -} -``` - -## Components - -### Secrets - -Define secrets as an object with key-value pairs: - -```json -{ - "secrets": { - "openai_api_key": "your-openai-key", - "aws_access_key_id": "your-aws-key", - "aws_secret_access_key": "your-aws-secret", - "aws_region": "us-east-1", - "database_url": "your-database-url" - } -} -``` - -**Fields:** - -- **Keys**: Secret names that will be available to all nodes -- **Values**: The actual 
secret values (in production, these should be encrypted) - -### Nodes - -Define the nodes in your workflow with their inputs and next nodes: - -```json -{ - "nodes": [ - { - "node_name": "NodeClassName", - "namespace": "MyProject", - "identifier": "unique_node_id", - "inputs": { - "input_field": "initial_value", - "mapped_field": "${{ source_node.outputs.output_field }}" - }, - "next_nodes": ["next_node_identifier"] - } ] } ``` -**Fields:** - -- **`node_name`**: The class name of the node (must be registered) -- **`namespace`**: The namespace where the node is registered -- **`identifier`**: Unique identifier for the node in this graph -- **`inputs`**: Input values for the node -- **`next_nodes`**: Array of node identifiers that this node connects to - -### Input Mapping - -Use the `${{ ... }}` syntax to map outputs from previous nodes: - -```json -{ - "inputs": { - "static_value": "initial", - "mapped_value": "${{ source_node.outputs.output_field }}", - "nested_mapping": "${{ source_node.outputs.nested.field }}" - } -} -``` - -**Mapping Syntax:** - -- **`${{ node_identifier.outputs.field_name }}`**: Maps output from a specific node -- **`initial`**: Static value provided when the graph is triggered -- **Direct values**: String values. In v1, numbers/booleans must be string-encoded (e.g., "42", "true"). - -### Retry Policy - -Graphs can include a retry policy to handle transient failures automatically. The retry policy is configured at the graph level and applies to all nodes within the graph. - -```json -{ - "retry_policy": { - "max_retries": 3, - "strategy": "EXPONENTIAL", - "backoff_factor": 2000, // milliseconds - "exponent": 2 - } -} -``` - -For detailed information about retry policies, including all available strategies and configuration options, see the [Retry Policy](retry-policy.md) documentation. 
- -## Creating Graph Templates (Beta) +## Quick Start with Python SDK -The recommended way to create graph templates is using the Exosphere Python SDK with model-based parameters, which provides a clean interface to the State Manager API and includes beta features for enhanced workflow management. - -```python hl_lines="1-3 8-12 15-35 38-44 47-53 56-70" -from exospherehost import StateManager, GraphNodeModel, RetryPolicyModel, StoreConfigModel, RetryStrategyEnum +```python +from exospherehost import StateManager, GraphNodeModel -async def create_graph_template(): - # Initialize the State Manager +async def create_graph(): state_manager = StateManager( namespace="MyProject", state_manager_uri=EXOSPHERE_STATE_MANAGER_URI, key=EXOSPHERE_API_KEY ) - # Define graph nodes using models (beta) graph_nodes = [ GraphNodeModel( node_name="DataLoaderNode", namespace="MyProject", - identifier="data_loader", - inputs={ - "source": "initial", - "format": "json" - }, - next_nodes=["data_processor"] + identifier="loader", + inputs={"source": "initial"}, + next_nodes=["processor"] ), GraphNodeModel( node_name="DataProcessorNode", namespace="MyProject", - identifier="data_processor", - inputs={ - "raw_data": "${{ data_loader.outputs.processed_data }}", - "config": "initial" - }, - next_nodes=["data_validator"] - ), - GraphNodeModel( - node_name="DataValidatorNode", - namespace="MyProject", - identifier="data_validator", - inputs={ - "data": "${{ data_processor.outputs.processed_data }}", - "validation_rules": "initial" - }, + identifier="processor", + inputs={"data": "${{ loader.outputs.data }}"}, next_nodes=[] ) ] - # Define secrets - secrets = { - "openai_api_key": "your-openai-key", - "database_url": "your-database-url" - # Store real values in a secret manager or environment variables, not in code. 
- } - - # Define retry policy using model (beta) - retry_policy = RetryPolicyModel( - max_retries=3, - strategy=RetryStrategyEnum.EXPONENTIAL, - backoff_factor=2000, - exponent=2 - ) - - # Define store configuration (beta) - store_config = StoreConfigModel( - required_keys=["cursor", "batch_id"], - default_values={ - "cursor": "0", - "batch_size": "100" - } + result = await state_manager.upsert_graph( + graph_name="my-workflow", + graph_nodes=graph_nodes, + secrets={"api_key": "your-key"} ) - - try: - # Create or update the graph template (beta) - result = await state_manager.upsert_graph( - graph_name="my-workflow", - graph_nodes=graph_nodes, - secrets=secrets, - retry_policy=retry_policy, # beta - store_config=store_config, # beta - validation_timeout=60, - polling_interval=1 - ) - print("Graph template created successfully!") - print(f"Validation status: {result['validation_status']}") - return result - except Exception as e: - print(f"Error creating graph template: {e}") - raise - -# Run the function -import asyncio -asyncio.run(create_graph_template()) + return result ``` -### Model-Based Parameters (Beta) - -The new `upsert_graph` method uses Pydantic models for better type safety and validation: - -#### GraphNodeModel - -```python -from exospherehost import GraphNodeModel - -node = GraphNodeModel( - node_name="MyNode", # Class name of the node - namespace="MyProject", # Namespace where node is registered - identifier="unique_id", # Unique identifier in this graph - inputs={ # Input values for the node - "field1": "value1", - "field2": "${{ other_node.outputs.field }}" - }, - next_nodes=["next_node_id"] # List of next node identifiers -) -``` - -#### RetryPolicyModel (Beta) - -```python -from exospherehost import RetryPolicyModel, RetryStrategyEnum - -retry_policy = RetryPolicyModel( - max_retries=3, # Maximum number of retry attempts - strategy=RetryStrategyEnum.EXPONENTIAL, # Retry strategy (use enum) - backoff_factor=2000, # Base delay in milliseconds - 
exponent=2, # Exponential multiplier - max_delay=30000 # Maximum delay cap in milliseconds -) -``` - -**Available Retry Strategies:** - -- `RetryStrategyEnum.EXPONENTIAL` -- `RetryStrategyEnum.EXPONENTIAL_FULL_JITTER` -- `RetryStrategyEnum.EXPONENTIAL_EQUAL_JITTER` -- `RetryStrategyEnum.LINEAR` -- `RetryStrategyEnum.LINEAR_FULL_JITTER` -- `RetryStrategyEnum.LINEAR_EQUAL_JITTER` -- `RetryStrategyEnum.FIXED` -- `RetryStrategyEnum.FIXED_FULL_JITTER` -- `RetryStrategyEnum.FIXED_EQUAL_JITTER` - -#### StoreConfigModel (Beta) - -```python -from exospherehost import StoreConfigModel - -store_config = StoreConfigModel( - required_keys=["cursor", "batch_id"], # Keys that must be present - default_values={ # Default values for keys - "cursor": "0", - "batch_size": "100" - } -) -``` - -## Input Mapping Patterns - -=== "Field Mapping" - - ```json - { - "inputs": { - "data": "${{ source_node.outputs.data }}" - } - } - ``` - -=== "Static Values" - - ```json - { - "inputs": { - "config_value": "static_value", - "number_value": "42", - "boolean_value": "true" - } - } - ``` - -## Graph Validation - -The state manager validates your graph template: - -### Node Validation - -- All nodes must be registered in the specified namespace -- Node identifiers must be unique within the graph -- Node names must match registered node classes - -### Input Validation - -- Mapped fields must exist in source node schemas -- Input field names must match node input schemas -- No circular dependencies allowed in `next_nodes` - -### Secret Validation - -- All referenced secrets must be defined in the secrets object -- Secret names must be valid identifiers - -## Graph Management - -=== "Get Graph Template" - - ```python hl_lines="11" - from exospherehost import StateManager - - async def get_graph_template(): - state_manager = StateManager( - namespace="MyProject", - state_manager_uri=EXOSPHERE_STATE_MANAGER_URI, - key=EXOSPHERE_API_KEY - ) - - try: - graph_info = await 
state_manager.get_graph("my-workflow") - print(f"Graph validation status: {graph_info['validation_status']}") - print(f"Number of nodes: {len(graph_info['nodes'])}") - print(f"Validation errors: {graph_info['validation_errors']}") - return graph_info - except Exception as e: - print(f"Error getting graph template: {e}") - raise - - # Get graph information - graph_info = asyncio.run(get_graph_template()) - ``` - -=== "Update Graph Template" - - ```python hl_lines="1-3 8-12 15-35 38-44 47-53 56-70" - from exospherehost import StateManager, GraphNodeModel, RetryPolicyModel, StoreConfigModel, RetryStrategyEnum - - async def update_graph_template(): - state_manager = StateManager( - namespace="MyProject", - state_manager_uri=EXOSPHERE_STATE_MANAGER_URI, - key=EXOSPHERE_API_KEY - ) - - # Updated graph nodes using models (beta) - updated_nodes = [ - GraphNodeModel( - node_name="DataLoaderNode", - namespace="MyProject", - identifier="data_loader", - inputs={ - "source": "initial", - "format": "json", - "batch_size": "200" # Updated parameter - }, - next_nodes=["data_processor"] - ), - GraphNodeModel( - node_name="DataProcessorNode", - namespace="MyProject", - identifier="data_processor", - inputs={ - "raw_data": "${{ data_loader.outputs.processed_data }}", - "config": "initial", - "optimization": "enabled" # New parameter - }, - next_nodes=["data_validator", "data_logger"] # Added new next node - ), - GraphNodeModel( - node_name="DataValidatorNode", - namespace="MyProject", - identifier="data_validator", - inputs={ - "data": "${{ data_processor.outputs.processed_data }}", - "validation_rules": "initial" - }, - next_nodes=[] - ), - GraphNodeModel( - node_name="DataLoggerNode", # New node - namespace="MyProject", - identifier="data_logger", - inputs={ - "log_data": "${{ data_processor.outputs.processed_data }}", - "log_level": "info" - }, - next_nodes=[] - ) - ] - - # Updated secrets - updated_secrets = { - "openai_api_key": "your-openai-key", - "database_url": 
"your-database-url", - "logging_endpoint": "your-logging-endpoint" # Added new secret - } - - # Updated retry policy (beta) - retry_policy = RetryPolicyModel( - max_retries=5, # Increased retries - strategy=RetryStrategyEnum.EXPONENTIAL_FULL_JITTER, - backoff_factor=1500, # Reduced base delay - exponent=2, - max_delay=60000 # Increased max delay - ) - - # Updated store configuration (beta) - store_config = StoreConfigModel( - required_keys=["cursor", "batch_id", "session_id"], # Added session_id - default_values={ - "cursor": "0", - "batch_size": "150", # Updated default - "session_id": "default" # New default - } - ) - - try: - result = await state_manager.upsert_graph( - graph_name="my-workflow", - graph_nodes=updated_nodes, - secrets=updated_secrets, - retry_policy=retry_policy, # beta - store_config=store_config, # beta - validation_timeout=120, # Increased timeout - polling_interval=2 # Increased polling interval - ) - print("Graph template updated successfully!") - print(f"Validation status: {result['validation_status']}") - return result - except Exception as e: - print(f"Error updating graph template: {e}") - raise - - # Update the graph - asyncio.run(update_graph_template()) - ``` - -## Graph Visualization - -The Exosphere dashboard provides visual representation of your graphs. 
Checkout the [Dashboard Guide](./dashboard.md) +## Next Steps -- **Node View**: See all nodes and their connections via `next_nodes` -- **Execution Flow**: Track how data flows through the graph using input mapping -- **State Monitoring**: Monitor execution states in real-time -- **Error Tracking**: Identify and debug failed executions +- **[Graph Components](./graph-components.md)** - Learn about secrets, nodes, and retry policy +- **[Python SDK](./python-sdk-graph.md)** - Use Python SDK with Pydantic models +- **[Graph Validation](./graph-validation.md)** - Learn about validation rules +- **[Trigger Graph](./trigger-graph.md)** - Execute workflows from your created graph -## Next Steps +## Related Concepts -- **[Trigger Graph](./trigger-graph.md)** - Learn how to execute your workflows -- **[Dashboard](./dashboard.md)** - Use the Exosphere dashboard for monitoring +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/create-runtime.md b/docs/docs/exosphere/create-runtime.md index a34fddcc..166fa720 100644 --- a/docs/docs/exosphere/create-runtime.md +++ b/docs/docs/exosphere/create-runtime.md @@ -4,33 +4,8 @@ The `Runtime` class is the core component that manages the execution environment > **📚 Getting Started**: For a complete local setup guide covering both the state manager and dashboard, see our [Local Setup Guide](./local-setup.md). -## Runtime Setup -Before creating a runtime, you need to set up the state manager and configure your environment variables. - -### Prerequisites - -1. **Start the State Manager**: Run the state manager using Docker Compose: - ```bash - docker-compose up -d - ``` - For detailed setup instructions, see [State Manager Setup](./state-manager-setup.md). 
- -> **🔐 Authentication**: When making API requests to the state manager, the `EXOSPHERE_API_KEY` value is compared to the `STATE_MANAGER_SECRET` value in the state manager container. - -2. **Set Environment Variables**: Configure your authentication: - ```bash - export EXOSPHERE_STATE_MANAGER_URI="your-state-manager-uri" - export EXOSPHERE_API_KEY="your-api-key" - ``` - - Or create a `.env` file: - ```bash - EXOSPHERE_STATE_MANAGER_URI=your-state-manager-uri - EXOSPHERE_API_KEY=your-api-key - ``` - -### Creating a Runtime +### Creating a Runtime === "Basic" ```python hl_lines="17-22" @@ -136,17 +111,6 @@ EXOSPHERE_STATE_MANAGER_URI=https://your-state-manager.exosphere.host EXOSPHERE_API_KEY=your-api-key ``` -Then load it in your code: - -```python -from dotenv import load_dotenv -load_dotenv() - -from exospherehost import Runtime, BaseNode - -# Your runtime code here... -``` - ## Runtime Lifecycle ### 1. Initialization @@ -313,6 +277,12 @@ Monitor your runtime using the Exosphere dashboard: ## Next Steps -- **[Register Node](./register-node.md)** - Learn how to create custom nodes - **[Create Graph](./create-graph.md)** - Build workflows by connecting nodes -- **[Trigger Graph](./trigger-graph.md)** - Execute and monitor workflows \ No newline at end of file +- **[Trigger Graph](./trigger-graph.md)** - Execute and monitor workflows + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution \ No newline at end of file diff --git a/docs/docs/exosphere/dashboard.md b/docs/docs/exosphere/dashboard.md index 4ea2897e..d43e09a4 100644 --- a/docs/docs/exosphere/dashboard.md +++ b/docs/docs/exosphere/dashboard.md @@ -203,5 +203,12 @@ For additional help: ## Next Steps -- **[Architecture](./architecture.md)** - Learn about fanout, 
units, inputs, outputs, and secrets +- **[Architecture](./architecture.md)** - Learn about Exosphere's architecture - **[State Manager Setup](./state-manager-setup.md)** - Complete backend setup guide + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/fanout.md b/docs/docs/exosphere/fanout.md new file mode 100644 index 00000000..6a0d9529 --- /dev/null +++ b/docs/docs/exosphere/fanout.md @@ -0,0 +1,169 @@ +# Fanout + +Fanout is Exosphere's mechanism for creating parallel execution paths during workflow execution. It allows nodes to produce multiple outputs, automatically creating parallel states for each output. + +## Overview + +Fanout enables **dynamic parallelism** where the number of parallel executions is determined at runtime, not at design time. This makes Exosphere uniquely powerful for scenarios where you need to process variable amounts of data or create conditional parallel paths. + +```mermaid +graph LR + A[Single Node] --> B[Multiple Outputs] + B --> C[Parallel State 1] + B --> D[Parallel State 2] + B --> E[Parallel State N] + + C --> F[Next Stage 1] + D --> G[Next Stage 2] + E --> H[Next Stage N] +``` + +## How Fanout Works + +### Single vs Multiple Outputs + +Exosphere supports two execution patterns: + +1. **Single Output**: A state produces one output and continues to the next stage +2. **Multiple Outputs (Fanout)**: A state produces multiple outputs, creating parallel execution paths + +### Fanout Execution Flow + +When a node returns multiple outputs: + +1. **Original state** gets the first output and continues execution +2. **Additional states** are created for each remaining output +3. **All states** are marked as EXECUTED +4. 
**Next stages** are created for each state independently + +## Implementation + +### Basic Fanout Example + +```python +import json + +from pydantic import BaseModel + +from exospherehost import BaseNode + +class DataSplitterNode(BaseNode): + class Inputs(BaseModel): + data: str + + class Outputs(BaseModel): + chunk: str + + async def execute(self, inputs: Inputs) -> list[Outputs]: + data = json.loads(inputs.data) + chunk_size = 100 + + outputs = [] + for i in range(0, len(data), chunk_size): + chunk = data[i:i + chunk_size] + outputs.append(self.Outputs( + chunk=json.dumps(chunk) + )) + + return outputs # Returning a list creates fanout: one parallel state per output +``` + +### Graph Configuration + +```json +{ + "nodes": [ + { + "node_name": "DataSplitterNode", + "identifier": "data_splitter", + "next_nodes": ["processor"] + }, + { + "node_name": "DataProcessorNode", + "identifier": "processor", + "inputs": { + "data": "${{ data_splitter.outputs.chunk }}" + }, + "next_nodes": [] + } + ] +} +``` + +## Use Cases + +### Data Processing +- **Batch Processing**: Split large datasets into chunks for parallel processing +- **File Processing**: Process multiple files or file segments concurrently +- **API Pagination**: Handle paginated API responses in parallel + +### Conditional Workflows +- **Multi-Path Logic**: Create different execution paths based on runtime conditions +- **A/B Testing**: Execute different processing logic for different data subsets +- **Dynamic Routing**: Route data to different processors based on content + +### Resource Optimization +- **Load Distribution**: Distribute work across multiple runtime instances +- **Parallel API Calls**: Make multiple API calls simultaneously +- **Concurrent Computations**: Run independent calculations in parallel + +## Benefits + +### Scalability +- **Horizontal Scaling**: Add more runtime instances to handle increased load +- **Automatic Distribution**: Work is automatically distributed across available runtimes +- **Dynamic Load Balancing**: Parallel execution automatically balances load + +### Performance +- **Reduced
Latency**: Parallel processing reduces total execution time +- **Efficient Resource Usage**: Better utilization of available compute resources +- **Predictable Scaling**: Performance scales linearly with runtime instances + +### Flexibility +- **Runtime Decisions**: Parallelism decisions made during execution, not design +- **Adaptive Processing**: Automatically adjust to data size and complexity +- **Conditional Execution**: Create parallel paths based on runtime conditions + +## Integration with Other Concepts + +### Fanout + Unite +Fanout creates parallel paths, and the `unites` keyword synchronizes them: + +```json +{ + "nodes": [ + { + "node_name": "DataSplitterNode", + "identifier": "data_splitter", + "next_nodes": ["processor"] + }, + { + "node_name": "DataProcessorNode", + "identifier": "processor", + "inputs": { + "data": "${{ data_splitter.outputs.chunk }}" + }, + "next_nodes": ["merger"] + }, + { + "node_name": "ResultMergerNode", + "identifier": "merger", + "inputs": { + "processed_data": "${{ processor.outputs.result }}" + }, + "unites": { + "identifier": "data_splitter" + }, + "next_nodes": [] + } + ] +} +``` + +### Fanout + Retry Policy +Each parallel branch benefits from the same retry policy: + +- **Independent Retries**: Each branch retries independently +- **Consistent Behavior**: All branches use the same retry strategy +- **Failure Isolation**: One branch's failures don't affect others + +## Next Steps + +- **[Unite](./unite.md)** - Learn how to synchronize parallel execution paths +- **[Signals](./signals.md)** - Control execution flow in parallel branches +- **[Retry Policy](./retry-policy.md)** - Build resilience into parallel workflows +- **[Store](./store.md)** - Share data across parallel execution paths diff --git a/docs/docs/exosphere/graph-components.md b/docs/docs/exosphere/graph-components.md new file mode 100644 index 00000000..0431671a --- /dev/null +++ b/docs/docs/exosphere/graph-components.md @@ -0,0 +1,110 @@ +# Graph Components 
+ +Brief overview of the main components that make up an Exosphere graph. + +## 1. Secrets + +Configuration data shared across all nodes: + +```json +{ + "secrets": { + "api_key": "your-api-key", + "database_url": "your-database-url" + } +} +``` + +## 2. Nodes + +Processing units with inputs and connections: + +```json +{ + "nodes": [ + { + "node_name": "NodeClassName", + "namespace": "MyProject", + "identifier": "unique_id", + "inputs": { + "field": "value", + "mapped": "${{ source.outputs.field }}" + }, + "next_nodes": ["next_node_id"] + } + ] +} +``` + +**Fields:** +- `node_name`: Class name (must be registered) +- `namespace`: Where node is registered +- `identifier`: Unique ID in this graph +- `inputs`: Input values for the node +- `next_nodes`: Connected node IDs + +## 3. Input Mapping + +Use `${{ ... }}` syntax to map data between nodes: + +```json +{ + "inputs": { + "static": "value", + "mapped": "${{ source_node.outputs.field }}", + "initial": "initial" + } +} +``` + +- `"initial"`: Value provided when graph is triggered +- `${{ node.outputs.field }}`: Maps output from another node + +## 4. Retry Policy + +Handle failures automatically: + +```json +{ + "retry_policy": { + "max_retries": 3, + "strategy": "EXPONENTIAL", + "backoff_factor": 2000, + "exponent": 2 + } +} +``` + +**Strategies:** EXPONENTIAL, LINEAR, FIXED (with jitter variants) + +## 5. Store Configuration + +Graph-level key-value storage for shared state: + +```json +{ + "store_config": { + "required_keys": ["cursor", "batch_id"], + "default_values": { + "cursor": "0", + "batch_size": "100" + } + } +} +``` + +**Fields:** +- `required_keys`: Keys that must be present in the store +- `default_values`: Default values for store keys + +## Next Steps + +- **[Create Graph](./create-graph.md)** - Return to main guide +- **[Graph Models](./python-sdk-graph.md)** - Use Python SDK for type-safe graph creation. 
+ +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/graph-validation.md b/docs/docs/exosphere/graph-validation.md new file mode 100644 index 00000000..f01ff763 --- /dev/null +++ b/docs/docs/exosphere/graph-validation.md @@ -0,0 +1,71 @@ +# Graph Validation + +The Exosphere system validates your graph templates to ensure they can execute successfully. + +## Validation Rules + +### Node Validation +- All nodes must be registered in the specified namespace +- Node identifiers must be unique within the graph +- Node names must match registered node classes exactly + +### Input Validation +- Mapped fields must exist in source node schemas +- Input field names must match node input schemas +- No circular dependencies allowed in `next_nodes` + +### Secret Validation +- All referenced secrets must be defined in the secrets object +- Secret names must be valid identifiers + +## Common Errors + +``` +ValidationError: Node "DataProcessorNode" not found in namespace "MyProject" +``` +**Solution**: Ensure the node is registered or check spelling. + +``` +ValidationError: Duplicate node identifier: "data_processor" +``` +**Solution**: Use unique identifiers for each node. + +``` +ValidationError: Unknown input field: "raw_data" for node "DataValidatorNode" +``` +**Solution**: Check the node's input schema for correct field names. + +``` +ValidationError: Circular dependency detected in graph +``` +**Solution**: Review `next_nodes` configuration to remove circular references. 
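Two of these rules — unique node identifiers and no cycles in `next_nodes` — can also be checked locally before calling `upsert_graph`. The sketch below is a hypothetical client-side pre-check, not part of the Exosphere SDK; the function name `validate_graph` and the plain-dictionary node shape are assumptions for illustration, and the server-side validation remains the source of truth.

```python
# Hypothetical local pre-flight check for two of the validation rules above.
# Not part of the Exosphere SDK; operates on plain graph-template dictionaries.

def validate_graph(nodes: list[dict]) -> list[str]:
    """Return a list of validation errors for a graph template (empty = valid)."""
    errors = []

    # Rule: node identifiers must be unique within the graph
    seen = set()
    for node in nodes:
        ident = node["identifier"]
        if ident in seen:
            errors.append(f'Duplicate node identifier: "{ident}"')
        seen.add(ident)

    # Rule: no circular dependencies in `next_nodes` (DFS with three colors)
    edges = {n["identifier"]: n.get("next_nodes", []) for n in nodes}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {ident: WHITE for ident in edges}

    def has_cycle(ident: str) -> bool:
        color[ident] = GRAY
        for nxt in edges.get(ident, []):
            if color.get(nxt) == GRAY:  # back edge -> cycle
                return True
            if color.get(nxt) == WHITE and has_cycle(nxt):
                return True
        color[ident] = BLACK
        return False

    if any(color[i] == WHITE and has_cycle(i) for i in edges):
        errors.append("Circular dependency detected in graph")

    return errors


acyclic = [
    {"identifier": "loader", "next_nodes": ["processor"]},
    {"identifier": "processor", "next_nodes": []},
]
cyclic = [
    {"identifier": "a", "next_nodes": ["b"]},
    {"identifier": "b", "next_nodes": ["a"]},
]
print(validate_graph(acyclic))  # []
print(validate_graph(cyclic))   # ['Circular dependency detected in graph']
```

A check like this can run in CI before deployment so that most template mistakes are caught without a round-trip to the state manager.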
+ +## Validation in Python SDK + +```python +async def upsert_with_validation(state_manager, graph_nodes, secrets): + try: + result = await state_manager.upsert_graph( + graph_name="my-workflow", + graph_nodes=graph_nodes, + secrets=secrets + ) + print("Graph created successfully!") + return result + except ValidationError as e: + print(f"Validation failed: {e}") + return None +``` + +## Next Steps + +- **[Create Graph](./create-graph.md)** - Return to main guide +- **[Graph Components](./graph-components.md)** - Learn about components +- **[Python SDK](./python-sdk-graph.md)** - Use Python SDK for type-safe graph creation + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/local-setup.md b/docs/docs/exosphere/local-setup.md index 0f135b53..7f060d35 100644 --- a/docs/docs/exosphere/local-setup.md +++ b/docs/docs/exosphere/local-setup.md @@ -66,6 +66,13 @@ With your local Exosphere instance running, you're ready to: 2. **[Create and run workflows](./create-graph.md)** - Build your first workflow 3. **[Monitor execution](./dashboard.md)** - Use the dashboard to track progress +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution + ## Troubleshooting ### Common Issues diff --git a/docs/docs/exosphere/python-sdk-graph.md b/docs/docs/exosphere/python-sdk-graph.md new file mode 100644 index 00000000..51bc649c --- /dev/null +++ b/docs/docs/exosphere/python-sdk-graph.md @@ -0,0 +1,135 @@ +# Python SDK + +Use the Exosphere Python SDK with Pydantic models for type-safe graph creation. 
+ +## Basic Usage + +```python hl_lines="42-48" +from exospherehost import StateManager, GraphNodeModel, RetryPolicyModel, StoreConfigModel, RetryStrategyEnum + +async def create_graph(): + state_manager = StateManager( + namespace="MyProject", + state_manager_uri=EXOSPHERE_STATE_MANAGER_URI, + key=EXOSPHERE_API_KEY + ) + + graph_nodes = [ + GraphNodeModel( + node_name="DataLoaderNode", + namespace="MyProject", + identifier="loader", + inputs={"source": "initial"}, + next_nodes=["processor"] + ), + GraphNodeModel( + node_name="DataProcessorNode", + namespace="MyProject", + identifier="processor", + inputs={"data": "${{ loader.outputs.data }}"}, + next_nodes=[] + ) + ] + + retry_policy = RetryPolicyModel( + max_retries=3, + strategy=RetryStrategyEnum.EXPONENTIAL, + backoff_factor=2000, + exponent=2 + ) + + store_config = StoreConfigModel( + required_keys=["cursor", "batch_id"], + default_values={ + "cursor": "0", + "batch_size": "100" + } + ) + + result = await state_manager.upsert_graph( + graph_name="my-workflow", + graph_nodes=graph_nodes, + secrets={"api_key": "your-key"}, + retry_policy=retry_policy, + store_config=store_config + ) + return result +``` + +## Models + +### GraphNodeModel +```python +GraphNodeModel( + node_name="NodeClassName", # Must be registered + namespace="MyProject", # Node namespace + identifier="unique_id", # Unique in graph + inputs={ # Input values + "field": "value", + "mapped": "${{ other.outputs.field }}" + }, + next_nodes=["next_node_id"] # Connected nodes +) +``` + +**Fields:** + +- **`node_name`** (str): The class name of the node that must be registered in the Exosphere runtime +- **`namespace`** (str): The project namespace for organizing and isolating nodes +- **`identifier`** (str): A unique identifier within the graph, used for referencing this node +- **`inputs`** (dict): Key-value pairs defining the input parameters for the node execution +- **`next_nodes`** (list[str]): List of node identifiers that this node connects to 
in the workflow + +### RetryPolicyModel +```python +RetryPolicyModel( + max_retries=3, # Max retry attempts + strategy=RetryStrategyEnum.EXPONENTIAL, # Strategy enum + backoff_factor=2000, # Base delay (ms) + exponent=2 # Multiplier +) +``` + +**Fields:** + +- **`max_retries`** (int): Maximum number of retry attempts before marking the node as failed +- **`strategy`** (RetryStrategyEnum): The retry strategy to use (EXPONENTIAL, LINEAR, FIXED) +- **`backoff_factor`** (int): Base delay in milliseconds before the first retry attempt +- **`exponent`** (int): Multiplier for exponential backoff calculations + +### StoreConfigModel +```python +StoreConfigModel( + required_keys=["cursor", "batch_id"], # Keys that must be present + default_values={ # Default values for keys + "cursor": "0", + "batch_size": "100" + } +) +``` + +**Fields:** + +- **`required_keys`** (list[str]): List of keys that must be present in the store for the graph to function +- **`default_values`** (dict): Default values for store keys when they are not present + +## Retry Strategies + +Available strategies from `RetryStrategyEnum`: + +- `EXPONENTIAL`, `LINEAR`, `FIXED` +- Add `_FULL_JITTER` or `_EQUAL_JITTER` for jitter variants + +More info: [Retry Policies Guide](./retry-policy.md) + +## Next Steps + +- **[Create Graph](./create-graph.md)** - Return to main guide +- **[Graph Components](./graph-components.md)** - Learn about components + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/register-node.md b/docs/docs/exosphere/register-node.md index 37a70e65..59150dda 100644 --- a/docs/docs/exosphere/register-node.md +++ b/docs/docs/exosphere/register-node.md @@ -91,11 +91,24 @@ class Secrets(BaseModel): encryption_key: 
str ``` +## Node Signals + +Nodes can control workflow execution by raising **signals**. We support two signals today: + +- **Prune**: terminates further execution of that branch +- **Requeue**: requeues the state after a delay, useful for polling and scheduled flows + +These can be triggered by raising a signal in your node's `execute` method. + +> Please raise an issue if you need additional signals in Exosphere. + +See the [Signals from a Node](./signals.md) guide for details. + ## Node Validation The runtime automatically validates your nodes: -```python hl_lines="19" +```python hl_lines="22" from exospherehost import BaseNode from pydantic import BaseModel @@ -173,7 +186,19 @@ Runtime( ).start() ``` ## Next Steps +- **[Create Runtime](./create-runtime.md)** - Set up your runtime to execute tasks - **[Create Graph](./create-graph.md)** - Learn how to connect nodes into workflows - **[Trigger Graph](./trigger-graph.md)** - Execute and monitor your workflows + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/retry-policy.md b/docs/docs/exosphere/retry-policy.md index 05507d8a..4a8a5fae 100644 --- a/docs/docs/exosphere/retry-policy.md +++ b/docs/docs/exosphere/retry-policy.md @@ -1,7 +1,5 @@ # Retry Policy -!!! beta "Beta Feature" - Retry Policy is currently available in beta. The API and functionality may change in future releases. The Retry Policy feature in Exosphere provides sophisticated retry mechanisms for handling transient failures in your workflow nodes. When a node execution fails, the retry policy automatically determines when and how to retry the execution based on configurable strategies. 
@@ -388,14 +386,14 @@ If a retry policy configuration is invalid: - An error will be returned during graph creation - The graph will not be saved until the configuration is corrected -## Model-Based Configuration (Beta) +## Model-Based Configuration With the new Exosphere Python SDK, you can define retry policies using Pydantic models for better type safety and validation: ```python from exospherehost import StateManager, GraphNodeModel, RetryPolicyModel, RetryStrategyEnum -# Define retry policy using model (beta) +# Define retry policy using model retry_policy = RetryPolicyModel( max_retries=5, strategy=RetryStrategyEnum.EXPONENTIAL_FULL_JITTER, @@ -417,12 +415,12 @@ async def create_graph_with_retry_policy(): ) ] - # Apply retry policy to the entire graph (beta) + # Apply retry policy to the entire graph result = await state_manager.upsert_graph( graph_name="resilient-workflow", graph_nodes=graph_nodes, secrets={"api_key": "your-key"}, - retry_policy=retry_policy # beta + retry_policy=retry_policy ) ``` @@ -483,3 +481,10 @@ Retry policies work alongside Exosphere's signaling system: - Nodes can still raise `PruneSignal` to stop retries immediately - Nodes can raise `ReQueueAfterSignal` to re-queue after some time. This will not mark nodes as failures. - When a node is re-queued using `ReQueueAfterSignal`, the `retry_count` is not incremented. The existing count is carried over to the new state. + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Signals](./signals.md)** - Control workflow execution flow +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/signals.md b/docs/docs/exosphere/signals.md index 6796b455..8f35a0fc 100644 --- a/docs/docs/exosphere/signals.md +++ b/docs/docs/exosphere/signals.md @@ -1,7 +1,5 @@ # Signals -!!! beta "Beta Feature" - Signals are currently available in beta. 
The API and functionality may change in future releases. Signals are a mechanism in Exosphere for controlling workflow execution flow and state management. They allow nodes to communicate with the state manager to perform specific actions like pruning states or requeuing them after a delay. @@ -9,6 +7,24 @@ Signals are a mechanism in Exosphere for controlling workflow execution flow and Signals are implemented as exceptions that should be raised from within node execution. When a signal is raised, the runtime automatically handles the communication with the state manager to perform the requested action. +```mermaid +graph TD + A[Node Execution] --> B{Signal Raised?} + B -->|Yes| C[Runtime Catches Signal] + B -->|No| D[Continue Normal Execution] + + C --> E{Signal Type} + E -->|PruneSignal| F[State Manager: Set Status to PRUNED] + E -->|ReQueueAfterSignal| G[State Manager: Schedule Requeue] + + F --> H[Workflow Branch Terminated] + G --> I[State Requeued After Delay] + + D --> J[Return Outputs] + J --> K[State Completed] + +``` + ## Available Signals ### PruneSignal @@ -17,7 +33,7 @@ The `PruneSignal` is used to permanently remove a state from the workflow execut #### Usage -```python +```python hl_lines="13" from exospherehost import PruneSignal class MyNode(BaseNode): @@ -45,13 +61,13 @@ The `ReQueueAfterSignal` is used to requeue a state for execution after a specif #### Usage -```python +```python hl_lines="15" from exospherehost import ReQueueAfterSignal from datetime import timedelta class RetryNode(BaseNode): class Inputs(BaseModel): - retry_count: int + retry_count: str data: str class Outputs(BaseModel): @@ -87,7 +103,7 @@ If signal sending fails (e.g., network issues), the runtime will log the error a ### Conditional Pruning -```python +```python hl_lines="7-12" class ValidationNode(BaseNode): class Inputs(BaseModel): user_id: str @@ -106,7 +122,7 @@ class ValidationNode(BaseNode): ### Polling -```python +```python hl_lines="14" class 
PollingNode(BaseNode): class Inputs(BaseModel): job_id: str @@ -118,14 +134,14 @@ class PollingNode(BaseNode): if job_status == "completed": result = await self._get_job_result(inputs.job_id) return self.Outputs(result=result) - elif job_status == "failed": - # Job failed, prune the state - raise PruneSignal({ - "reason": "job_failed", - "job_id": inputs.job_id, - "poll_count": inputs.poll_count - }) else: # Job still running, poll again in 30 seconds raise ReQueueAfterSignal(timedelta(seconds=30)) -``` \ No newline at end of file +``` + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution \ No newline at end of file diff --git a/docs/docs/exosphere/state-manager-setup.md b/docs/docs/exosphere/state-manager-setup.md index c1c00aaa..4799e04b 100644 --- a/docs/docs/exosphere/state-manager-setup.md +++ b/docs/docs/exosphere/state-manager-setup.md @@ -243,4 +243,11 @@ Response: 200 - **[Dashboard Setup](./dashboard.md)** - Set up the web dashboard for monitoring - **[Node Development](./register-node.md)** - Learn how to create and register nodes + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution - **[Graph Creation](./create-graph.md)** - Build workflows using graph templates diff --git a/docs/docs/exosphere/store.md b/docs/docs/exosphere/store.md new file mode 100644 index 00000000..743e58d9 --- /dev/null +++ b/docs/docs/exosphere/store.md @@ -0,0 +1,133 @@ +# Store + +The Store is Exosphere's graph-level key-value storage system that persists data across the entire workflow 
execution. It provides a way to share state between nodes and maintain persistent data throughout the workflow lifecycle. + +## Overview + +The Store provides **persistent storage** that survives across node executions, enabling complex workflows that need to maintain state, track progress, or share data between different parts of the workflow. + +```mermaid +graph TB + A[Graph Trigger] --> B[Store Initialization] + B --> C[Node 1: Read/Write Store] + C --> D[Node 2: Read/Write Store] + D --> E[Node 3: Read/Write Store] + E --> F[Workflow Complete] + + G[Store Data] -.->|Persists| C + G -.->|Persists| D + G -.->|Persists| E + + style G fill:#e8f5e8 +``` + +## How Store Works + +### Store Lifecycle + +1. **Initialization**: Store is created when the graph is triggered +2. **Persistence**: Data persists across all node executions in the workflow +3. **Cleanup**: Store data is automatically cleaned up when the workflow completes + +### Store Access + +- **All nodes** can read from and write to the store +- **Key-value pairs** are stored as strings +- **Automatic persistence** ensures data survives node restarts and failures + +## Implementation + +### Store Configuration + +Define store requirements in your graph template: + +```json +{ + "store_config": { + "required_keys": ["cursor", "batch_id"], + "default_values": { + "cursor": "0", + "batch_size": "100" + } + }, + "nodes": [ + { + "node_name": "DataProcessorNode", + "identifier": "processor", + "inputs": { + "cursor": "${{ store.cursor }}", + "batch_size": "${{ store.batch_size }}" + }, + "next_nodes": [] + } + ] +} +``` + +### Python SDK Example + +```python +from exospherehost import StoreConfigModel + +store_config = StoreConfigModel( + required_keys=["cursor", "batch_id"], + default_values={ + "cursor": "0", + "batch_size": "100" + } +) + +result = await state_manager.upsert_graph( + graph_name="my-workflow", + graph_nodes=graph_nodes, + store_config=store_config +) +``` + +### Triggering with Store Data + 
+```python +# Trigger graph with initial store values +result = await state_manager.trigger( + "my-workflow", + inputs={"user_id": "123"}, + store={ + "cursor": "0", + "batch_id": "batch_001" + } +) +``` + + +## Store Operations + +### Reading from Store + +Use `${{ store.key }}` syntax to access store values: + +```json +{ + "inputs": { + "current_cursor": "${{ store.cursor }}", + "batch_size": "${{ store.batch_size }}", + "user_id": "${{ store.user_id }}" + } +} +``` + +### Writing to Store + +Store values are updated when nodes complete successfully. The store is automatically updated with any new values that nodes produce. + +### Store Validation + +- **Required keys**: Must be present when the graph is triggered +- **Default values**: Automatically provided if not specified +- **Key constraints**: Keys cannot contain dots (`.`) or be empty + +## Next Steps + +- **[Fanout](./fanout.md)** - Learn how to create parallel execution paths +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Signals](./signals.md)** - Control workflow execution flow +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows diff --git a/docs/docs/exosphere/trigger-graph.md b/docs/docs/exosphere/trigger-graph.md index 4444658f..226605c6 100644 --- a/docs/docs/exosphere/trigger-graph.md +++ b/docs/docs/exosphere/trigger-graph.md @@ -20,11 +20,11 @@ The recommended way to trigger graphs is using the Exosphere Python SDK, which p ) try: - # Trigger the graph with optional store (beta) + # Trigger the graph with optional store result = await state_manager.trigger( "my-graph", inputs={"user_id": "123"}, - store={"cursor": "0"} # persisted across nodes (beta) + store={"cursor": "0"} # persisted across nodes ) print(f"Graph triggered successfully!") print(f"Run ID: {result['run_id']}") @@ -47,4 +47,11 @@ For more details on using the Exosphere dashboard see the **[Dashboard Guide](./ ## Next Steps - **[Dashboard](./dashboard.md)** - Use the Exosphere dashboard 
for monitoring -- **[ARchitecture](./architecture.md)** - Learn about fanout, unites +- **[Architecture](./architecture.md)** - Learn about Exosphere's architecture + +## Related Concepts + +- **[Fanout](./fanout.md)** - Create parallel execution paths dynamically +- **[Unite](./unite.md)** - Synchronize parallel execution paths +- **[Retry Policy](./retry-policy.md)** - Build resilient workflows +- **[Store](./store.md)** - Persist data across workflow execution diff --git a/docs/docs/exosphere/unite.md b/docs/docs/exosphere/unite.md new file mode 100644 index 00000000..20d9daaa --- /dev/null +++ b/docs/docs/exosphere/unite.md @@ -0,0 +1,179 @@ +# Unite + +The `unites` keyword is Exosphere's mechanism for synchronizing parallel execution paths. It allows a node to wait for multiple parallel states to complete before executing, enabling complex workflow patterns and result aggregation. + +## Overview + +Unite provides **intelligent synchronization** that prevents duplicate execution while ensuring all parallel paths complete before proceeding. It's essential for workflows that need to aggregate results from parallel processing or coordinate multiple execution branches. + +```mermaid +graph TB + A[Data Splitter] --> B[Processor 1] + A --> C[Processor 2] + A --> D[Processor 3] + + B --> E[Result Merger] + C --> E + D --> E + + E -.->|unites: data_splitter| A + +``` + +## How Unite Works + +### Basic Mechanism + +When a node has a `unites` configuration: + +1. **Execution is deferred** until all states with the specified identifier are complete +2. **State fingerprinting** ensures only one unites state is created per unique combination +3. 
**Dependency validation** ensures the unites node depends on the specified identifier + +### Unite Strategies + +The `unites` keyword supports different strategies to control when the uniting node should execute: + +#### ALL_SUCCESS (Default) +The uniting node executes only when all states with the specified identifier have reached `SUCCESS` status. + +```json +{ + "unites": { + "identifier": "data_splitter", + "strategy": "ALL_SUCCESS" + } +} +``` + +#### ALL_DONE +The uniting node executes when all states with the specified identifier have reached any terminal status (`SUCCESS`, `ERRORED`, `CANCELLED`, etc.). + +```json +{ + "unites": { + "identifier": "data_splitter", + "strategy": "ALL_DONE" + } +} +``` + +## Implementation + +### Basic Unite Example + +```json +{ + "nodes": [ + { + "node_name": "DataSplitterNode", + "identifier": "data_splitter", + "next_nodes": ["processor"] + }, + { + "node_name": "DataProcessorNode", + "identifier": "processor", + "inputs": { + "data": "${{ data_splitter.outputs.chunk }}" + }, + "next_nodes": ["result_merger"] + }, + { + "node_name": "ResultMergerNode", + "identifier": "result_merger", + "inputs": { + "processed_data": "${{ processor.outputs.result }}" + }, + "unites": { + "identifier": "data_splitter" + }, + "next_nodes": [] + } + ] +} +``` + +### Python SDK Example + +```python +from exospherehost import GraphNodeModel, UnitesModel, UnitesStrategyEnum + +graph_nodes = [ + GraphNodeModel( + node_name="DataSplitterNode", + identifier="data_splitter", + next_nodes=["processor"] + ), + GraphNodeModel( + node_name="DataProcessorNode", + identifier="processor", + inputs={"data": "${{ data_splitter.outputs.chunk }}"}, + next_nodes=["result_merger"] + ), + GraphNodeModel( + node_name="ResultMergerNode", + identifier="result_merger", + inputs={"processed_data": "${{ processor.outputs.result }}"}, + unites=UnitesModel( + identifier="data_splitter", + strategy=UnitesStrategyEnum.ALL_SUCCESS + ), + next_nodes=[] + ) +] +``` + +## Use 
Cases + +- **Data Merging**: Combine results from parallel data processing +- **Batch Completion**: Wait for all parallel batches to finish +- **Summary Generation**: Aggregate results from multiple sources + +## Strategy Selection + +### ALL_SUCCESS Strategy +**Use when:** +- You need all parallel processes to complete successfully +- Partial failures are not acceptable +- Data integrity is critical + +**Example scenarios:** +- Financial transaction processing +- Data validation workflows +- Critical business processes + +**Caution:** This strategy can block indefinitely if any parallel branch never reaches SUCCESS status. + +### ALL_DONE Strategy +**Use when:** +- You want to proceed with partial results +- You have error handling logic in the uniting node +- Some failures are acceptable + +**Example scenarios:** +- Data collection from multiple sources +- Batch processing with error tolerance +- Monitoring and alerting systems + + +## Integration with Other Concepts + +### Unite + Fanout +Unite is most commonly used with fanout to synchronize parallel execution: + +1. **Fanout creates parallel paths** (e.g., processing multiple data chunks) +2. **Unite synchronizes completion** (e.g., merging all processed results) +3. 
**Automatic coordination** ensures proper execution order + +### Unite + Retry Policy +- **Independent retries**: Each parallel branch retries independently +- **Unite waits for final status**: Unite considers the final status after all retries +- **Consistent behavior**: All branches use the same retry policy + + +## Next Steps + +- **[Fanout](./fanout.md)** - Learn how to create parallel execution paths +- **[Signals](./signals.md)** - Control execution flow in parallel branches +- **[Retry Policy](./retry-policy.md)** - Build resilience into parallel workflows +- **[Store](./store.md)** - Share data across parallel execution paths diff --git a/docs/docs/getting-started.md b/docs/docs/getting-started.md index e441604e..00be1a6e 100644 --- a/docs/docs/getting-started.md +++ b/docs/docs/getting-started.md @@ -28,7 +28,33 @@ Refer: [Getting State Manager URI](./exosphere/state-manager-setup.md) ## Overview -Exosphere is built around three core concepts: +Exosphere is built around several core concepts that enable powerful workflow orchestration: + +### Data Flow Architecture + +```mermaid +sequenceDiagram + participant Client as Client Application + participant Runtime as Runtime + participant Node as Node Executor + participant StateMgr as State Manager + participant MongoDB as State Store + + Client->>Runtime: Initialize with nodes + Runtime->>StateMgr: Register runtime & nodes + StateMgr->>MongoDB: Store registration info + + loop Workflow Execution + StateMgr->>Runtime: Trigger node execution + Runtime->>Node: Execute node logic + Node->>Runtime: Return outputs + Runtime->>StateMgr: Update execution state + StateMgr->>MongoDB: Persist state changes + StateMgr->>Runtime: Trigger next node (if any) + end + + StateMgr->>Client: Return final results +``` ### 1. 
Nodes @@ -52,6 +78,18 @@ The `Runtime` class manages the execution environment and coordinates with the E The state manager orchestrates workflow execution, manages state transitions, and provides the dashboard for monitoring and debugging. +### 4. Core Concepts + +Exosphere provides several unique features that make it powerful: + +- **[Fanout](./exosphere/fanout.md)**: Create parallel execution paths dynamically +- **[Unite](./exosphere/unite.md)**: Synchronize parallel execution paths +- **[Signals](./exosphere/signals.md)**: Control workflow execution flow +- **[Retry Policy](./exosphere/retry-policy.md)**: Build resilient workflows +- **[Store](./exosphere/store.md)**: Persist data across workflow execution + +For a comprehensive overview, see **[Exosphere Concepts](./exosphere/concepts.md)**. + ## Quick Start Example Create a simple node that processes data: @@ -181,31 +219,7 @@ Now that you have the basics, explore: - **[Trigger Graph](./exosphere/trigger-graph.md)** - Execute your workflows and monitor their progress -### Data Flow Architecture -```mermaid -sequenceDiagram - participant Client as Client Application - participant Runtime as Runtime - participant Node as Node Executor - participant StateMgr as State Manager - participant MongoDB as State Store - - Client->>Runtime: Initialize with nodes - Runtime->>StateMgr: Register runtime & nodes - StateMgr->>MongoDB: Store registration info - - loop Workflow Execution - StateMgr->>Runtime: Trigger node execution - Runtime->>Node: Execute node logic - Node->>Runtime: Return outputs - Runtime->>StateMgr: Update execution state - StateMgr->>MongoDB: Persist state changes - StateMgr->>Runtime: Trigger next node (if any) - end - - StateMgr->>Client: Return final results -``` ## Data Model (v1) diff --git a/docs/docs/index.md b/docs/docs/index.md index 41db50c8..29ae146a 100644 --- a/docs/docs/index.md +++ b/docs/docs/index.md @@ -23,6 +23,7 @@ Exosphere provides a powerful foundation for building and 
orchestrating AI appli - **Infinite Parallel Agents**: Run multiple AI agents simultaneously across distributed infrastructure - **Dynamic State Management**: Create and manage state at runtime with persistent storage - **Fault Tolerance**: Built-in failure handling and recovery mechanisms for production reliability +- **Core Concepts**: [Fanout](./exosphere/fanout.md), [Unite](./exosphere/unite.md), [Signals](./exosphere/signals.md), [Retry Policy](./exosphere/retry-policy.md), [Store](./exosphere/store.md) ### **Developer Experience** - **Plug-and-Play Nodes**: Create reusable, atomic workflow components that can be mixed and matched diff --git a/docs/docs/stylesheets/extra.css b/docs/docs/stylesheets/extra.css index b9cad792..8b12987c 100644 --- a/docs/docs/stylesheets/extra.css +++ b/docs/docs/stylesheets/extra.css @@ -17,7 +17,7 @@ /* Accent color for links and headings */ --md-accent-fg-color: #66d1b5; - --md-typeset-a-color: #daf5ff; + --md-typeset-a-color: #8bdfff; --md-code-fg-color: #daf5ff; --md-code-bg-color: #263048; @@ -26,6 +26,8 @@ --md-code-hl-color: #66d1b5; --md-code-hl-color--light: #0c246994; + --md-admonition-bg-color: #05184a; + --md-admonition-fg-color: #ffffff; --md-code-hl-number-color: hsla(0, 67%, 50%, 1); --md-code-hl-special-color: hsla(340, 83%, 47%, 1); @@ -62,11 +64,11 @@ --md-default-bg-color: #f7fdff; --md-typeset-color: #1a1a1a; + --md-typeset-a-color: #e4587d; /* Accent color for links and headings */ --md-accent-fg-color: #e4587d; - --md-typeset-a-color: #031035; - + --md-default-bg-color: #f7fdff; --md-code-fg-color: #031035; diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 7792edbd..9ffebd59 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -108,7 +108,6 @@ plugins: Introduction: - index.md - getting-started.md - - exosphere/api-changes.md - exosphere/local-setup.md - docker-compose-setup.md - exosphere/state-manager-setup.md @@ -144,15 +143,24 @@ extra_css: nav: - Introduction: index.md - Getting Started: 
getting-started.md - - Local Setup: exosphere/local-setup.md - - Docker Compose: docker-compose-setup.md - - State Manager Setup: exosphere/state-manager-setup.md + - Exosphere Concepts: + - Overview: exosphere/concepts.md + - Architecture: exosphere/architecture.md + - Fanout: exosphere/fanout.md + - Unite: exosphere/unite.md + - Signals: exosphere/signals.md + - Retry Policy: exosphere/retry-policy.md + - Store: exosphere/store.md + - Local Setup: + - Overview: exosphere/local-setup.md + - Docker Compose: docker-compose-setup.md + - State Manager Setup: exosphere/state-manager-setup.md + - Dashboard: exosphere/dashboard.md - Register Node: exosphere/register-node.md - Create Runtime: exosphere/create-runtime.md - - Create Graph: exosphere/create-graph.md - - API Changes: exosphere/api-changes.md - - Retry Policy: exosphere/retry-policy.md - - Trigger Graph: exosphere/trigger-graph.md - - Dashboard: exosphere/dashboard.md - - Signals: exosphere/signals.md - - Architecture: exosphere/architecture.md \ No newline at end of file + - Create Graph: + - Overview: exosphere/create-graph.md + - Components: exosphere/graph-components.md + - Validation: exosphere/graph-validation.md + - Python SDK: exosphere/python-sdk-graph.md + - Trigger Graph: exosphere/trigger-graph.md \ No newline at end of file