chore: replace Python examples with llama-deploy #701
Conversation
🦋 Changeset detected

Latest commit: 62d82ed

The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Walkthrough

This update introduces a new Python-based deployment template using LlamaDeploy and reorganizes the create-llama package to support it. The changes include new helper modules, environment variable handling, file organization, server run logic, use case configurations, and updated or new workflows, utilities, and documentation for Python use cases. Some FastAPI-based files and HITL examples are removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Server as PythonLlamaDeployServer
    participant LlamaCloud
    participant UI as UI (NextJS/TS Proxy)
    User->>CLI: Run create-llama with Python LlamaDeploy template
    CLI->>Server: Prepare src/, ui/, .env, etc.
    CLI->>Server: Start server (uv run -m llama_deploy.apiserver)
    Server->>LlamaCloud: Upload data files to pipeline
    CLI->>Server: Run llamactl deploy llama_deploy.yml
    User->>UI: Open chat UI in browser
    UI->>Server: Send chat requests
    Server->>LlamaCloud: Retrieve/query data as needed
    Server-->>UI: Stream responses
    UI-->>User: Display chat and results
```
Review threads on:

- packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md
- packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml (outdated)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py (outdated)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/index.py
- packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py
- packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/index.py
- packages/create-llama/templates/types/llamaindexserver/fastapi/llama_deploy.yml
…com:run-llama/create-llama into tp/replace-python-examples-with-llamadeploy
Actionable comments posted: 17
🔭 Outside diff range comments (1)
packages/create-llama/templates/components/use-cases/python/financial_report/query.py (1)
1-48: Extract duplicated code to a shared module

This file is identical to packages/create-llama/templates/components/use-cases/python/agentic_rag/query.py. This violates the DRY principle and will lead to maintenance issues. Consider creating a shared query utilities module that both use cases can import:

- Create a new shared module (e.g., packages/create-llama/templates/components/use-cases/python/shared/query_utils.py)
- Move these functions to the shared module
- Import from the shared module in both use cases

Example usage after refactoring:

```python
from shared.query_utils import create_query_engine, get_query_engine_tool
```
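To make that concrete, a minimal sketch of the shared module, assuming the two helpers wrap an index the way the duplicated files do (the function bodies, tool name, and description below are illustrative, not the PR's actual code):

```python
# shared/query_utils.py -- hypothetical shared module
from llama_index.core.indices.base import BaseIndex
from llama_index.core.tools import QueryEngineTool


def create_query_engine(index: BaseIndex, **kwargs):
    """Build a query engine, forwarding optional kwargs such as similarity_top_k."""
    return index.as_query_engine(**kwargs)


def get_query_engine_tool(index: BaseIndex, **kwargs) -> QueryEngineTool:
    """Wrap the query engine in a tool that agent workflows can call."""
    return QueryEngineTool.from_defaults(
        query_engine=create_query_engine(index, **kwargs),
        name="query_index",
        description="Query the document index for relevant information.",
    )
```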
♻️ Duplicate comments (5)
packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py (1)
14-14: Address the TODO comment about LlamaCloud support

The existing TODO comment indicates that LlamaCloud support needs to be implemented for all use cases.
Based on the past review comments, the plan is to use LlamaCloud from environment variables for all use cases. Would you like me to help implement this functionality?
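One rough sketch of driving that off environment variables (the LLAMA_CLOUD_INDEX_NAME and LLAMA_CLOUD_PROJECT_NAME variable names, the persist_dir, and the fallback logic are assumptions, not the template's confirmed design):

```python
import os

from llama_index.core.indices.base import BaseIndex


def _use_llama_cloud() -> bool:
    # Treat the presence of an API key as the signal to target LlamaCloud.
    return bool(os.getenv("LLAMA_CLOUD_API_KEY"))


def get_index() -> BaseIndex:
    if _use_llama_cloud():
        from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

        return LlamaCloudIndex(
            name=os.getenv("LLAMA_CLOUD_INDEX_NAME", "default"),
            project_name=os.getenv("LLAMA_CLOUD_PROJECT_NAME", "Default"),
        )
    # Otherwise fall back to the local storage-backed index.
    from llama_index.core import StorageContext, load_index_from_storage

    storage_context = StorageContext.from_defaults(persist_dir="storage")
    return load_index_from_storage(storage_context)
```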
packages/create-llama/templates/components/use-cases/python/financial_report/query.py (2)
14-14: Fix docstring parameter name

Same issue as in the agentic_rag version: the docstring mentions "params" but the actual parameter is `**kwargs`.
16-18: Add error handling for environment variable parsing

Same issue as in the agentic_rag version: the `int()` conversion needs error handling.
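A small helper of the kind intended here, as a sketch (the TOP_K name and default value are assumptions):

```python
import os


def _env_int(name: str, default: int) -> int:
    """Parse an integer environment variable, falling back on bad values."""
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        # A malformed value should not crash index setup.
        return default


top_k = _env_int("TOP_K", 3)
```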
packages/create-llama/helpers/env-variables.ts (1)
458-468: Correct handling of .env file location for LlamaDeploy

As noted in the previous discussions, placing the `.env` file inside the `src/` directory for Python LlamaDeploy is the correct approach, since llama-deploy copies the src directory during deployment.

packages/create-llama/helpers/python.ts (1)
479-483: Verify if cleanup logic is still required

The cleanup removes `generate.py` and `index.py` for non-data use cases. There's a past TODO comment suggesting these files might not be needed at all.

```bash
#!/bin/bash
# Check if generate.py and index.py are used by the remaining use cases
echo "Checking usage of generate.py and index.py in Python use cases..."

# List all Python use case directories
fd -t d . packages/create-llama/templates/components/use-cases/python/

# Check which use cases have these files
echo -e "\nUse cases with generate.py:"
fd generate.py packages/create-llama/templates/components/use-cases/python/

echo -e "\nUse cases with index.py:"
fd index.py packages/create-llama/templates/components/use-cases/python/

# Check imports of these modules in workflow files
echo -e "\nChecking imports in workflow files:"
rg -A 2 "from.*generate|import.*generate" packages/create-llama/templates/components/use-cases/python/*/workflow.py
rg -A 2 "from.*index|import.*index" packages/create-llama/templates/components/use-cases/python/*/workflow.py
```
🧹 Nitpick comments (22)
.changeset/good-avocados-try.md (1)
5-5: Drop the “chore:” prefix to keep the changelog entry clean

Changesets uses the body of this file verbatim in the autogenerated CHANGELOG. Prefixes like “chore:” don’t add value there and make the final log noisier. Prefer a concise, capital-case description.

```diff
-chore: replace Python examples with llama-deploy
+Replace Python examples with llama-deploy
```

packages/create-llama/templates/components/ts-proxy/index.ts (1)
1-9: Consider adding error handling and configuration validation

The server setup follows the correct pattern, but consider adding:
- Error handling for server startup failures
- Environment variable configuration
- Graceful shutdown handling
- Configuration validation
```diff
+import { config } from "dotenv";
+config();
+
+const port = process.env.PORT ? parseInt(process.env.PORT) : 3000;
+const deployment = process.env.DEPLOYMENT || "chat";
+const workflow = process.env.WORKFLOW || "workflow";
+
+try {
 new LlamaIndexServer({
   uiConfig: {
     componentsDir: "components",
     layoutDir: "layout",
-    llamaDeploy: { deployment: "chat", workflow: "workflow" },
+    llamaDeploy: { deployment, workflow },
   },
+  port,
 }).start();
+} catch (error) {
+  console.error("Failed to start server:", error);
+  process.exit(1);
+}
```

packages/create-llama/templates/components/use-cases/python/financial_report/utils.py (1)
29-41: Consider adding error handling for streaming responses

While the implementation is correct, consider adding error handling for cases where streaming responses might fail or contain malformed data.

```diff
 if isinstance(res, AsyncGenerator):
     # Handle streaming response (CompletionResponseAsyncGen or ChatResponse AsyncGenerator)
-    async for chunk in res:
+    try:
+        async for chunk in res:
+            ctx.write_event_to_stream(
+                AgentStream(
+                    delta=chunk.delta or "",
+                    response=final_response,
+                    current_agent_name=current_agent_name,
+                    tool_calls=[],
+                    raw=getattr(chunk, 'raw', None) or "",
+                )
+            )
+            final_response += chunk.delta or ""
+    except Exception as e:
+        # Log error and return accumulated response
+        print(f"Error in streaming response: {e}")
+        # Return what we have so far
```

packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py (1)
35-35: Consider lazy initialization for the global workflow

Creating the workflow instance at module level might cause issues if environment variables aren't set when the module is imported. Consider lazy initialization.

```diff
-workflow = create_workflow()
+_workflow = None
+
+def get_workflow() -> AgentWorkflow:
+    global _workflow
+    if _workflow is None:
+        _workflow = create_workflow()
+    return _workflow
```

packages/create-llama/templates/components/use-cases/python/agentic_rag/query.py (1)
14-14: Fix docstring parameter name

The docstring mentions "params" but the actual parameter is `**kwargs`.

```diff
-    params (optional): Additional parameters for the query engine, e.g: similarity_top_k
+    **kwargs: Additional parameters for the query engine, e.g: similarity_top_k
```

packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/generate.py (1)
29-32: Consider making the data directory path configurable

The path "ui/data" is hardcoded, which could make the code less flexible and harder to maintain. Consider extracting this to a configuration variable or environment variable:

```diff
+import os
+
 def generate_index():
     init_settings()
     logger.info("Generate index for the provided data")
     index = get_index(create_if_missing=True)
     if index is None:
         raise ValueError("Index not found and could not be created")
+    # Get data directory from environment or use default
+    data_dir = os.getenv("DATA_DIRECTORY", "ui/data")
+
     # use SimpleDirectoryReader to retrieve the files to process
     reader = SimpleDirectoryReader(
-        "ui/data",
+        data_dir,
         recursive=True,
     )
```

packages/create-llama/templates/components/use-cases/python/document_generator/utils.py (2)
51-51: Use proper logging instead of print statements

Error messages are printed to stdout, which makes it difficult to control log levels and outputs in production. Add logging and use it consistently:

```diff
 import json
 import re
+import logging
 from typing import List, Optional, Any

+logger = logging.getLogger(__name__)

 # ... in the code ...
-    print(f"Failed to parse annotation: {error}")
+    logger.warning(f"Failed to parse annotation: {error}")

 # ... and ...
-    print(
-        f"Failed to parse artifact from annotation: {annotation}. Error: {e}"
-    )
+    logger.error(
+        f"Failed to parse artifact from annotation: {annotation}. Error: {e}"
+    )
```

Also applies to: 102-104
70-91: Add defensive checks for nested data access

The nested data extraction could fail if the structure doesn't match expectations. While there's a try-except block, it's better to be more defensive. Consider using the `.get()` method with defaults:

```diff
 if artifact_type == "code":
     # Get the nested data object that contains the actual code information
     code_info = artifact_data.get("data", {})
+    if not isinstance(code_info, dict):
+        code_info = {}
     code_data = CodeArtifactData(
         file_name=code_info.get("file_name", ""),
         code=code_info.get("code", ""),
         language=code_info.get("language", ""),
     )
```

The same pattern should be applied to the document artifact handling, as sketched below.
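A possible shape for that document branch, continuing the if/elif chain from the diff above (DocumentArtifactData and its field names are assumed by analogy with CodeArtifactData and should be matched to the real models in utils.py):

```python
elif artifact_type == "document":
    # Same defensive pattern for the document branch.
    doc_info = artifact_data.get("data", {})
    if not isinstance(doc_info, dict):
        doc_info = {}
    # Hypothetical counterpart to CodeArtifactData; align field names
    # with the actual model definitions.
    document_data = DocumentArtifactData(
        title=doc_info.get("title", ""),
        content=doc_info.get("content", ""),
        type=doc_info.get("type", "markdown"),
    )
```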
packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/service.py (3)
59-74: Improve polling mechanism with exponential backoff

The current polling mechanism uses a tight loop with only a 100ms sleep, which could be inefficient and put unnecessary load on the API.

```diff
-    # Wait 2s for the file to be processed
-    max_attempts = 20
-    attempt = 0
-    while attempt < max_attempts:
+    # Wait for the file to be processed with exponential backoff
+    max_attempts = 10
+    attempt = 0
+    sleep_time = 0.5  # Start with 500ms
+
+    while attempt < max_attempts:
         result = client.pipelines.get_pipeline_file_status(
             file_id=file_id, pipeline_id=pipeline_id
         )
         if result.status == ManagedIngestionStatus.ERROR:
             raise Exception(f"File processing failed: {str(result)}")
         if result.status == ManagedIngestionStatus.SUCCESS:
             # File is ingested - return the file id
             return file_id
         attempt += 1
-        time.sleep(0.1)  # Sleep for 100ms
+        time.sleep(sleep_time)
+        sleep_time = min(sleep_time * 1.5, 5.0)  # Cap at 5 seconds
```
32-33: Remove unused constants or document their purpose

The `LOCAL_STORE_PATH` and `DOWNLOAD_FILE_NAME_TPL` constants are defined but never used in the code.

```diff
 class LlamaCloudFileService:
-    LOCAL_STORE_PATH = "output/llamacloud"
-    DOWNLOAD_FILE_NAME_TPL = "{pipeline_id}${filename}"
```

If these constants are intended for future use, please add a comment explaining their purpose.
13-13: Use the configured logger for error reporting

The logger is configured but never used. Consider using it for error reporting instead of just raising generic exceptions.

```diff
 if result.status == ManagedIngestionStatus.ERROR:
-    raise Exception(f"File processing failed: {str(result)}")
+    error_msg = f"File processing failed: {str(result)}"
+    logger.error(error_msg)
+    raise Exception(error_msg)
```

packages/create-llama/templates/components/use-cases/python/agentic_rag/README-template.md (2)
97-97: Fix typo in "configuration"

There's a typo in "configration" which should be "configuration".

```diff
-- `llamaDeploy`: The LlamaDeploy configration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
+- `llamaDeploy`: The LlamaDeploy configuration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
```
32-45: Add language specifiers to code blocks

Code blocks should have language specifiers for better syntax highlighting and clarity.

````diff
-```
+```bash
 $ uv run -m llama_deploy.apiserver
````

````diff
-```
+```bash
 $ uv run llamactl deploy llama_deploy.yml
````

Also applies to: 42-45
packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md (2)
90-90: Fix typo in "configuration"

There's a typo in "configration" which should be "configuration".

```diff
-- `llamaDeploy`: The LlamaDeploy configration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
+- `llamaDeploy`: The LlamaDeploy configuration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
```
32-38: Add language specifiers to code blocks

Code blocks should have language specifiers for better syntax highlighting and clarity.

````diff
-```
+```bash
 $ uv run -m llama_deploy.apiserver
````

````diff
-```
+```bash
 $ uv run llamactl deploy llama_deploy.yml
````

Also applies to: 42-45
packages/create-llama/templates/components/use-cases/python/deep_research/README-template.md (3)
5-5: Fix the typo at the end of the sentence

The sentence ends with a forward slash instead of a period.

```diff
-LlamaDeploy is a system for deploying and managing LlamaIndex workflows, while LlamaIndexServer provides a pre-built TypeScript server with an integrated chat UI that can connect directly to LlamaDeploy deployments. This example shows how you can quickly set up a complete chat application by combining these two technologies/
+LlamaDeploy is a system for deploying and managing LlamaIndex workflows, while LlamaIndexServer provides a pre-built TypeScript server with an integrated chat UI that can connect directly to LlamaDeploy deployments. This example shows how you can quickly set up a complete chat application by combining these two technologies.
```
39-39: Add language specifiers to fenced code blocks

Fenced code blocks should have a language specified for proper syntax highlighting.

For line 39, add `bash` as the language specifier:

````diff
-```
+```bash
 $ uv run -m llama_deploy.apiserver
````

For line 49, add `bash` as the language specifier:

````diff
-```
+```bash
 $ uv run llamactl deploy llama_deploy.yml
````

Also applies to: 49-49
97-97: Fix the spelling error

"configration" should be "configuration".

```diff
-- `llamaDeploy`: The LlamaDeploy configration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
+- `llamaDeploy`: The LlamaDeploy configuration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
```

packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md (2)
89-89: Fix typo in documentation

```diff
-- `llamaDeploy`: The LlamaDeploy configration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
+- `llamaDeploy`: The LlamaDeploy configuration (deployment name and workflow name that defined in the [llama_deploy.yml](llama_deploy.yml) file)
```
31-31: Add language specifiers to code blocks for better syntax highlighting

For the code block at line 31:

````diff
-```
+```bash
````

For the code block at line 41:

````diff
-```
+```bash
````

Also applies to: 41-41
packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py (1)
111-147: Consider making embedding model configuration more flexible

The current implementation has two limitations:

- It only supports OpenAI embeddings for new pipeline creation, which might be restrictive for users of other embedding providers.
- Direct use of `os.getenv("OPENAI_API_KEY")` on line 134 could return None.

Consider validating that the API key exists before using it:

```diff
 client.pipelines.upsert_pipeline(
     request={
         "name": pipeline_name,
         "embedding_config": {
             "type": "OPENAI_EMBEDDING",
             "component": {
-                "api_key": os.getenv("OPENAI_API_KEY"),  # editable
+                "api_key": os.getenv("OPENAI_API_KEY") or Settings.llm.api_key,  # editable
                 "model_name": os.getenv("EMBEDDING_MODEL"),
             },
         },
```

packages/create-llama/templates/components/use-cases/python/financial_report/agent_tool.py (1)
224-255: Clever async generator pattern for early tool call detection

The implementation uses a boolean indicator to signal tool calls early in the stream, which is a good design for responsive UIs. Consider adding a docstring to explain the yielding pattern:

```python
async def _tool_call_generator(
    llm: FunctionCallingLLM,
    tools: list[BaseTool],
    chat_history: list[ChatMessage],
) -> AsyncGenerator[ChatResponse | bool, None]:
    """
    Async generator that yields:
    - First yield: bool indicating if response contains tool calls
    - Subsequent yields: ChatResponse chunks for streaming
    - Final yield: Complete ChatResponse (for tool calls)
    """
```
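For reference, a hypothetical consumer of that protocol (not code from this PR) would first await the boolean indicator, then branch on it:

```python
async def handle_llm_turn(llm, tools, chat_history):
    gen = _tool_call_generator(llm, tools, chat_history)
    has_tool_calls = await gen.__anext__()  # first yield: the bool indicator
    if has_tool_calls:
        # Drain the stream; the final item is the complete ChatResponse
        # carrying the tool calls.
        response = None
        async for item in gen:
            response = item
        return response
    # No tool calls: the remaining yields are streaming ChatResponse chunks.
    async for chunk in gen:
        print(chunk.delta or "", end="", flush=True)
```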
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (47)
- .changeset/good-avocados-try.md (1 hunks)
- .github/workflows/e2e.yml (0 hunks)
- packages/create-llama/e2e/python/resolve_dependencies.spec.ts (2 hunks)
- packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts (5 hunks)
- packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts (2 hunks)
- packages/create-llama/helpers/env-variables.ts (6 hunks)
- packages/create-llama/helpers/index.ts (5 hunks)
- packages/create-llama/helpers/python.ts (5 hunks)
- packages/create-llama/helpers/run-app.ts (3 hunks)
- packages/create-llama/helpers/types.ts (1 hunks)
- packages/create-llama/helpers/use-case.ts (1 hunks)
- packages/create-llama/questions/index.ts (3 hunks)
- packages/create-llama/templates/components/ts-proxy/index.ts (1 hunks)
- packages/create-llama/templates/components/ts-proxy/package.json (1 hunks)
- packages/create-llama/templates/components/ui/layout/header.tsx (1 hunks)
- packages/create-llama/templates/components/use-cases/python/agentic_rag/README-template.md (1 hunks)
- packages/create-llama/templates/components/use-cases/python/agentic_rag/citation.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/agentic_rag/query.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py (2 hunks)
- packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md (1 hunks)
- packages/create-llama/templates/components/use-cases/python/code_generator/utils.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (4 hunks)
- packages/create-llama/templates/components/use-cases/python/deep_research/README-template.md (1 hunks)
- packages/create-llama/templates/components/use-cases/python/deep_research/utils.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/deep_research/workflow.py (4 hunks)
- packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md (1 hunks)
- packages/create-llama/templates/components/use-cases/python/document_generator/utils.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (8 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/agent_tool.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/document_generator.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/events.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/interpreter.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/query.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/utils.py (1 hunks)
- packages/create-llama/templates/components/use-cases/python/financial_report/workflow.py (6 hunks)
- packages/create-llama/templates/components/use-cases/python/hitl/README-template.md (0 hunks)
- packages/create-llama/templates/components/use-cases/python/hitl/events.py (0 hunks)
- packages/create-llama/templates/components/use-cases/python/hitl/workflow.py (0 hunks)
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/generate.py (2 hunks)
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py (1 hunks)
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/service.py (1 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/generate.py (0 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/llama_deploy.yml (1 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/main.py (0 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml (3 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py (1 hunks)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/index.py (1 hunks)
💤 Files with no reviewable changes (6)
- .github/workflows/e2e.yml
- packages/create-llama/templates/components/use-cases/python/hitl/README-template.md
- packages/create-llama/templates/types/llamaindexserver/fastapi/main.py
- packages/create-llama/templates/types/llamaindexserver/fastapi/generate.py
- packages/create-llama/templates/components/use-cases/python/hitl/events.py
- packages/create-llama/templates/components/use-cases/python/hitl/workflow.py
🧰 Additional context used
📓 Path-based instructions (9)
`packages/create-llama/templates/**/*`: Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.

📄 Source: CodeRabbit Inference Engine (CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/templates/components/ui/layout/header.tsx
- packages/create-llama/templates/components/ts-proxy/index.ts
- packages/create-llama/templates/components/ts-proxy/package.json
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py
- packages/create-llama/templates/types/llamaindexserver/fastapi/llama_deploy.yml
- packages/create-llama/templates/components/use-cases/python/financial_report/utils.py
- packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/index.py
- packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml
- packages/create-llama/templates/components/use-cases/python/deep_research/utils.py
- packages/create-llama/templates/components/use-cases/python/financial_report/query.py
- packages/create-llama/templates/components/use-cases/python/agentic_rag/query.py
- packages/create-llama/templates/components/use-cases/python/code_generator/utils.py
- packages/create-llama/templates/components/use-cases/python/agentic_rag/citation.py
- packages/create-llama/templates/components/use-cases/python/deep_research/README-template.md
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/generate.py
- packages/create-llama/templates/components/use-cases/python/deep_research/workflow.py
- packages/create-llama/templates/components/use-cases/python/financial_report/workflow.py
- packages/create-llama/templates/components/use-cases/python/agentic_rag/README-template.md
- packages/create-llama/templates/components/use-cases/python/document_generator/utils.py
- packages/create-llama/templates/components/use-cases/python/financial_report/document_generator.py
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/service.py
- packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py
- packages/create-llama/templates/components/use-cases/python/financial_report/events.py
- packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py
- packages/create-llama/templates/components/use-cases/python/financial_report/interpreter.py
- packages/create-llama/templates/components/use-cases/python/financial_report/agent_tool.py
- packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md
- packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md
`**/*.{ts,tsx}`: TypeScript code should be linted using ESLint and formatted with Prettier, as enforced by 'pnpm lint' and 'pnpm format' at the root level.

📄 Source: CodeRabbit Inference Engine (CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/templates/components/ui/layout/header.tsx
- packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts
- packages/create-llama/templates/components/ts-proxy/index.ts
- packages/create-llama/e2e/python/resolve_dependencies.spec.ts
- packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts
- packages/create-llama/questions/index.ts
- packages/create-llama/helpers/types.ts
- packages/create-llama/helpers/use-case.ts
- packages/create-llama/helpers/run-app.ts
- packages/create-llama/helpers/index.ts
- packages/create-llama/helpers/python.ts
- packages/create-llama/helpers/env-variables.ts
`packages/create-llama/templates/**/*`: Project templates for different frameworks and use cases should be stored in the `templates/` directory. Templates should be organized by framework, type, and components within the `templates/` directory.

📄 Source: CodeRabbit Inference Engine (packages/create-llama/CLAUDE.md)

List of files the instruction was applied to: the same 30 template files listed under the previous `packages/create-llama/templates/**/*` instruction.
`packages/create-llama/e2e/**/*`: Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.

📄 Source: CodeRabbit Inference Engine (CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts
- packages/create-llama/e2e/python/resolve_dependencies.spec.ts
- packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts
`packages/create-llama/e2e/**/*`: End-to-end tests using Playwright should be placed in the `e2e/` directory.

📄 Source: CodeRabbit Inference Engine (packages/create-llama/CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts
- packages/create-llama/e2e/python/resolve_dependencies.spec.ts
- packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts
`packages/create-llama/**/index.ts`: The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing. The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.

📄 Source: CodeRabbit Inference Engine (packages/create-llama/CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/templates/components/ts-proxy/index.ts
- packages/create-llama/questions/index.ts
- packages/create-llama/helpers/index.ts
`packages/create-llama/**/package.json`: The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`. Build scripts should be defined in `package.json` for building, cleaning, and developing the CLI. Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`. The package should support a `pack-install` script in `package.json` for creating and installing the local package for testing.

📄 Source: CodeRabbit Inference Engine (packages/create-llama/CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/templates/components/ts-proxy/package.json
`packages/create-llama/questions/**/*`: Interactive prompts for user configuration should be implemented in the `questions/` directory.

📄 Source: CodeRabbit Inference Engine (packages/create-llama/CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/questions/index.ts
`packages/create-llama/helpers/**/*`: Utility functions for package management, file operations, and configuration should be placed in the `helpers/` directory. Helper modules should include installation, data sources, providers, tools, and environment configuration logic, and be located in the `helpers/` directory.

📄 Source: CodeRabbit Inference Engine (packages/create-llama/CLAUDE.md)

List of files the instruction was applied to:

- packages/create-llama/helpers/types.ts
- packages/create-llama/helpers/use-case.ts
- packages/create-llama/helpers/run-app.ts
- packages/create-llama/helpers/index.ts
- packages/create-llama/helpers/python.ts
- packages/create-llama/helpers/env-variables.ts
🧠 Learnings (39)
📓 Common learnings
- Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
- Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
- Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
- Applies to python/llama-index-server/{examples,docs}/**/*.{ipynb,md} : Jupyter notebooks and markdown files should be used for examples and documentation.
- Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
- Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
- Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
- Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
- Applies to python/llama-index-server/**/*test*.py : Python unit tests should use pytest and provide comprehensive API and service coverage.
- Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
- Applies to packages/create-llama/templates/**/* : Templates should be organized by framework, type, and components within the `templates/` directory.
- Applies to packages/create-llama/templates/**/* : Project templates for different frameworks and use cases should be stored in the `templates/` directory.
- Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
- Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
- Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
- CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
.changeset/good-avocados-try.md (11)
- Applies to python/llama-index-server/{examples,docs}/**/*.{ipynb,md} : Jupyter notebooks and markdown files should be used for examples and documentation.
- Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
- Applies to packages/create-llama/**/package.json : The package should support a `pack-install` script in `package.json` for creating and installing the local package for testing.
- Applies to packages/create-llama/**/package.json : Build scripts should be defined in `package.json` for building, cleaning, and developing the CLI.
- Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
- Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
- Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
- Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
- Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
- Applies to python/llama-index-server/**/*test*.py : Python unit tests should use pytest and provide comprehensive API and service coverage.
- Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
packages/create-llama/templates/components/ui/layout/header.tsx (10)
- Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
- Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
- Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
- Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
- Applies to packages/server/src/utils/gen-ui.ts : The generateEventComponent function, responsible for using LLMs to auto-generate React components, should be implemented in src/utils/gen-ui.ts.
- Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
- Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
- Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
- Applies to packages/server/examples/layout/**/*.tsx : Place custom React layout components in the `layout/` directory, e.g., `layout/header.tsx`.
- Applies to packages/server/src/server.ts : The LlamaIndexServer class should be implemented in src/server.ts and serve as the main server implementation that wraps Next.js.
packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts (21)
- Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
- Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
- Target ES2022 and use bundler module resolution in TypeScript configuration.
- Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
- Applies to packages/server/**/*.{ts,tsx} : TypeScript should be used throughout the codebase for type safety.
- Applies to packages/create-llama/e2e/**/* : End-to-end tests using Playwright should be placed in the `e2e/` directory.
- From PR run-llama/create-llama#108 (templates/components/engines/typescript/agent/tools/index.ts): since `OpenAPIActionToolSpec` is instantiated in the code rather than used only as a type, it should be imported normally so that it is included in the runtime JavaScript bundle.
- Applies to packages/server/examples/{node_modules/**,dist/**} : Exclude `node_modules` and `dist` directories from TypeScript compilation.
- Applies to packages/server/examples/**/*.{ts,tsx} : Demonstrate proper async/await patterns and error handling for LLM operations.
- Version synchronization must be maintained between TypeScript and Python packages.
- Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
- Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
- CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
- Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
- Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
- Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
- Applies to packages/server/examples/**/*.ts : Use Zod for schema validation when defining tool parameters.
- Applies to packages/server/examples/**/*.ts : Define tools using the `tool()` function with Zod schemas for parameters, including `name`, `description`, `parameters`, and `execute` implementation.
- Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
- Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.
- From PR run-llama/create-llama#300 (templates/components/multiagent/typescript/workflow/factory.ts): refactoring TypeScript code to reduce duplication must consider the impact on type safety, especially when dealing with different output types like streaming and non-streaming responses; introducing casting to handle different types can lead to less maintainable code.
packages/create-llama/templates/components/ts-proxy/index.ts (14)
- Applies to packages/server/src/server.ts : The LlamaIndexServer class should be implemented in src/server.ts and serve as the main server implementation that wraps Next.js.
- Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
- Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
- Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
- Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
- Applies to packages/server/src/utils/gen-ui.ts : The generateEventComponent function, responsible for using LLMs to auto-generate React components, should be implemented in src/utils/gen-ui.ts.
- Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
- Applies to packages/server/src/events.ts : Event system logic, including source, agent, and artifact events, as well as helper functions for converting LlamaIndex data to UI events, should be implemented in src/events.ts.
- Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
- Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
- The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
- Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
- Applies to python/llama-index-server/components/**/* : Structure custom UI components in dedicated directories.
- Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
packages/create-llama/templates/components/ts-proxy/package.json (16)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package should support a `pack-install` script in `package.json` for creating and installing the local package for testing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Build scripts should be defined in `package.json` for building, cleaning, and developing the CLI.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/server.ts : The LlamaIndexServer class should be implemented in src/server.ts and serve as the main server implementation that wraps Next.js.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
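The shapes these learnings imply might look roughly like this (inferred sketches, not the actual definitions in src/types.ts):

```ts
// Hypothetical type sketches inferred from the learnings above.
interface UIConfig {
  starterQuestions?: string[];
  layoutDir?: string;
  devMode?: boolean;
  suggestNextQuestions?: boolean;
}

type Workflow = object; // placeholder for the workflow instance type

type WorkflowFactory = (requestBody?: unknown) => Workflow | Promise<Workflow>;

interface LlamaIndexServerOptions {
  workflow: WorkflowFactory;
  uiConfig?: UIConfig;
  port?: number;
}
```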
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
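Put together, the standard setup pattern would read roughly like this (a sketch assuming the `@llamaindex/server` package name and the option shapes described in these learnings):

```ts
import { LlamaIndexServer } from "@llamaindex/server";
import { workflowFactory } from "./src/app/workflow";

new LlamaIndexServer({
  workflow: workflowFactory,
  uiConfig: {
    starterQuestions: ["What is in the documents?"],
    suggestNextQuestions: true,
  },
  port: 3000,
}).start();
```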
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.{ts,tsx} : Demonstrate proper async/await patterns and error handling for LLM operations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/next/**/*.{js,jsx,ts,tsx} : Tailwind CSS should be used for styling UI components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/gen-ui.ts : The generateEventComponent function, responsible for using LLMs to auto-generate React components, should be implemented in src/utils/gen-ui.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/next/**/*.{js,jsx,ts,tsx} : UI components for the chat interface, including message history, streaming responses, canvas panel, and custom layouts, should be implemented in the next/ directory using shadcn/ui components and Tailwind CSS.
packages/create-llama/e2e/python/resolve_dependencies.spec.ts (20)
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Target ES2022 and use bundler module resolution in TypeScript configuration.
Learnt from: thucpn
PR: run-llama/create-llama#108
File: templates/components/engines/typescript/agent/tools/index.ts:4-4
Timestamp: 2024-07-26T21:06:39.705Z
Learning: Since `OpenAPIActionToolSpec` (from `templates/components/engines/typescript/agent/tools/openapi-action.ts`) is instantiated in the code rather than used only as a type, it should be imported with a regular import, not a type-only import, so that it is included in the runtime JavaScript bundle.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/e2e/**/* : End-to-end tests using Playwright should be placed in the `e2e/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Version synchronization must be maintained between TypeScript and Python packages.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.{ts,tsx} : Demonstrate proper async/await patterns and error handling for LLM operations.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Utility functions for package management, file operations, and configuration should be placed in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be type-checked using mypy with strict settings.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/test_*.py : Tests should be implemented using pytest, pytest-asyncio, and pytest-mock.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*test*.py : Python unit tests should use pytest and provide comprehensive API and service coverage.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
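For example (a sketch; the `tool()` helper and its zod-based parameter schema are assumptions about the surrounding llamaindex API):

```ts
import { agent } from "@llamaindex/workflow";
import { tool } from "llamaindex";
import { z } from "zod";

// A trivial calculator tool, passed to agent() as part of a tool array.
const add = tool({
  name: "add",
  description: "Add two numbers",
  parameters: z.object({ a: z.number(), b: z.number() }),
  execute: ({ a, b }) => String(a + b),
});

const calculatorAgent = agent({ tools: [add] });
```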
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be linted using ruff and pylint.
packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py (11)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/**/* : FastAPI routers, models, and request handling should be implemented within the api/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
packages/create-llama/templates/types/llamaindexserver/fastapi/llama_deploy.yml (22)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Follow LlamaIndex patterns for external service connections in tool integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be linted using ruff and pylint.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/server.ts : The LlamaIndexServer class should be implemented in src/server.ts and serve as the main server implementation that wraps Next.js.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/resources/**/* : Static assets and bundled UI files should be placed in the resources/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/next/**/*.{js,jsx,ts,tsx} : UI components for the chat interface, including message history, streaming responses, canvas panel, and custom layouts, should be implemented in the next/ directory using shadcn/ui components and Tailwind CSS.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts (22)
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*test*.py : Python unit tests should use pytest and provide comprehensive API and service coverage.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/test_*.py : Tests should be implemented using pytest, pytest-asyncio, and pytest-mock.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript should be used throughout the codebase for type safety.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/e2e/**/* : End-to-end tests using Playwright should be placed in the `e2e/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Changes to templates require rebuilding the CLI and should be validated with end-to-end tests.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/templates/**/* : Project templates for different frameworks and use cases should be stored in the `templates/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Templates should use a component-based system allowing mix-and-match of frameworks, vector databases, observability tools, and integrations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.{ts,tsx} : Demonstrate proper async/await patterns and error handling for LLM operations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package should support a `pack-install` script in `package.json` for creating and installing the local package for testing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: The application generation flow should include project validation, interactive questioning, template installation, environment setup, dependency installation, and post-install actions.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
packages/create-llama/questions/index.ts (8)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/questions/**/* : Interactive prompts for user configuration should be implemented in the `questions/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
Learnt from: thucpn
PR: run-llama/create-llama#0
File: :0-0
Timestamp: 2024-10-16T13:04:24.943Z
Learning: For the AstraDB integration in `create-llama`, errors related to missing environment variables in `checkRequiredEnvVars` are intended to be thrown to the server API, not handled by exiting the process.
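A minimal sketch of that behavior, assuming a hypothetical helper signature:

```ts
// Throw (rather than calling process.exit) so the error surfaces
// through the server API response instead of killing the process.
function checkRequiredEnvVars(requiredVars: string[]): void {
  const missing = requiredVars.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(", ")}`,
    );
  }
}
```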
packages/create-llama/helpers/types.ts (8)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript should be used throughout the codebase for type safety.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Utility functions for package management, file operations, and configuration should be placed in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
packages/create-llama/templates/components/use-cases/python/financial_report/utils.py (2)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py (16)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.
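In code, that factory pattern is roughly the following (a sketch reusing the hypothetical `agent()` usage from above):

```ts
import { agent } from "@llamaindex/workflow";

// Optionally async factory: each call builds a fresh agent instance,
// keeping workflow creation stateless across requests.
export const workflowFactory = async () => {
  return agent({ tools: [] });
};
```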
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Use factory functions for stateless workflow creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/workflow.ts : The runWorkflow function should execute workflows with proper event handling and be implemented in src/utils/workflow.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Agent workflows should implement the startAgentEvent/stopAgentEvent contract for event handling.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Follow LlamaIndex patterns for external service connections in tool integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/events.ts : Event system logic, including source, agent, and artifact events, as well as helper functions for converting LlamaIndex data to UI events, should be implemented in src/events.ts.
packages/create-llama/templates/types/llamaindexserver/fastapi/src/index.py (22)
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/**/* : FastAPI routers, models, and request handling should be implemented within the api/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Follow LlamaIndex patterns for external service connections in tool integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/events.ts : Event system logic, including source, agent, and artifact events, as well as helper functions for converting LlamaIndex data to UI events, should be implemented in src/events.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/{data,output}/** : Data and output folders for file integration should be mounted and served as static assets via Next.js.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: thucpn
PR: run-llama/create-llama#0
File: :0-0
Timestamp: 2024-10-16T13:04:24.943Z
Learning: For the AstraDB integration in `create-llama`, errors related to missing environment variables in `checkRequiredEnvVars` are intended to be thrown to the server API, not handled by exiting the process.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Helper modules should include installation, data sources, providers, tools, and environment configuration logic, and be located in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be linted using ruff and pylint.
packages/create-llama/helpers/use-case.ts (12)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript should be used throughout the codebase for type safety.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Utility functions for package management, file operations, and configuration should be placed in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Target ES2022 and use bundler module resolution in TypeScript configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Helper modules should include installation, data sources, providers, tools, and environment configuration logic, and be located in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Shared UI components and styling should be used across both TypeScript and Python implementations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml (20)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be type-checked using mypy with strict settings.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/**/* : FastAPI routers, models, and request handling should be implemented within the api/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/gen-ui.ts : The generateEventComponent function, responsible for using LLMs to auto-generate React components, should be implemented in src/utils/gen-ui.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/.ui/**/* : Downloaded UI static files should be placed in the .ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/test_*.py : Tests should be implemented using pytest, pytest-asyncio, and pytest-mock.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be linted using ruff and pylint.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be formatted using black.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*test*.py : Python unit tests should use pytest and provide comprehensive API and service coverage.
Learnt from: leehuwuj
PR: run-llama/create-llama#324
File: templates/components/multiagent/python/app/api/routers/vercel_response.py:0-0
Timestamp: 2024-10-09T02:27:13.710Z
Learning: The project only supports Python 3.11 and Python 3.12.
packages/create-llama/templates/components/use-cases/python/deep_research/utils.py (2)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
packages/create-llama/templates/components/use-cases/python/financial_report/query.py (1)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
packages/create-llama/templates/components/use-cases/python/agentic_rag/query.py (2)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
packages/create-llama/helpers/run-app.ts (21)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/server.ts : The LlamaIndexServer class should be implemented in src/server.ts and serve as the main server implementation that wraps Next.js.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.{ts,tsx} : Demonstrate proper async/await patterns and error handling for LLM operations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package should support a `pack-install` script in `package.json` for creating and installing the local package for testing.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Follow LlamaIndex patterns for external service connections in tool integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Server package builds should follow a multi-step process: prebuild (clean), build (bunchee compilation), postbuild (Next.js/static assets), and prepare:py-static (Python integration assets).
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: The build process should include prebuild (cleaning), build (compilation with bunchee), postbuild (preparing TypeScript server and Python static assets), prepare:ts-server (Next.js app and API routes), and prepare:py-static (Python integration).
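Several learnings in this group reference the `workflow_factory` configuration pattern; a minimal sketch, assuming the parameter names given in the learnings (`env` and `ui_config` values are illustrative):

```python
from llama_index.server import LlamaIndexServer, UIConfig

def create_workflow(chat_request):
    """Return a fresh Workflow per request (factory contract)."""
    ...

app = LlamaIndexServer(
    workflow_factory=create_workflow,
    env="dev",
    ui_config=UIConfig(starter_questions=["What can you do?"]),
)
```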
packages/create-llama/templates/components/use-cases/python/deep_research/README-template.md (13)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Changes to templates require rebuilding the CLI and should be validated with end-to-end tests.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{examples,docs}/**/*.{ipynb,md} : Jupyter notebooks and markdown files should be used for examples and documentation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/generate.py (19)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/components/**/* : Structure custom UI components in dedicated directories.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
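A minimal sketch of the .env pattern this learning describes, assuming python-dotenv as the loader:

```python
import os

from dotenv import load_dotenv  # assumed dependency: python-dotenv

load_dotenv()  # reads key=value pairs from ./.env into the environment
api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")
```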
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/*.py : All Python code should be linted using ruff and pylint.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/test_*.py : Tests should be implemented using pytest, pytest-asyncio, and pytest-mock.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/.ui/**/* : Downloaded UI static files should be placed in the .ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
packages/create-llama/templates/components/use-cases/python/deep_research/workflow.py (21)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Use factory functions for stateless workflow creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/next/**/*.{js,jsx,ts,tsx} : UI components for the chat interface, including message history, streaming responses, canvas panel, and custom layouts, should be implemented in the next/ directory using shadcn/ui components and Tailwind CSS.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use Zod for schema validation when defining tool parameters.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/{data,output}/** : Data and output folders for file integration should be mounted and served as static assets via Next.js.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{node_modules/**,dist/**} : Exclude `node_modules` and `dist` directories from TypeScript compilation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript should be used throughout the codebase for type safety.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/public/config.js : Static assets, including client-side config, should be placed in the public/ directory (e.g., public/config.js).
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Design clear Pydantic models for UI event schemas.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: leehuwuj
PR: run-llama/create-llama#630
File: python/llama-index-server/llama_index/server/api/utils/workflow.py:22-28
Timestamp: 2025-05-30T03:43:07.617Z
Learning: The ChatRequest model in python/llama-index-server/llama_index/server/api/models.py has a validate_id method that restricts the id field to alphanumeric characters, underscores, and hyphens only, preventing path traversal attacks. The chat_id parameter used in WorkflowService methods comes from this validated request.id, so no additional validation is needed in the WorkflowService.get_storage_path method.
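Pulling together the factory-contract learnings in this group, a minimal sketch (the workflow body and request handling are illustrative; only the accept-a-ChatRequest, return-a-Workflow shape is taken from the learnings):

```python
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class DeepResearchWorkflow(Workflow):
    @step
    async def run_research(self, ev: StartEvent) -> StopEvent:
        # A real workflow would plan, retrieve, and synthesize here.
        user_msg = getattr(ev, "user_msg", "")
        return StopEvent(result=f"Echo: {user_msg}")

def create_workflow(chat_request) -> Workflow:
    # chat_request is the validated ChatRequest from the server layer.
    return DeepResearchWorkflow(timeout=120)
```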
packages/create-llama/templates/components/use-cases/python/financial_report/workflow.py (11)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Use factory functions for stateless workflow creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/workflow.ts : The runWorkflow function should execute workflows with proper event handling and be implemented in src/utils/workflow.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Process chat history appropriately in workflow logic.
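For the "process chat history appropriately" learning, a minimal conversion sketch (the incoming message shape is an assumption):

```python
from llama_index.core.llms import ChatMessage

def to_chat_history(messages: list[dict]) -> list[ChatMessage]:
    """Convert request messages (role/content dicts, assumed shape)
    into llama-index ChatMessage history."""
    return [ChatMessage(role=m["role"], content=m["content"]) for m in messages]
```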
packages/create-llama/templates/components/use-cases/python/agentic_rag/README-template.md (17)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Changes to templates require rebuilding the CLI and should be validated with end-to-end tests.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{examples,docs}/**/*.{ipynb,md} : Jupyter notebooks and markdown files should be used for examples and documentation.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Process chat history appropriately in workflow logic.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
packages/create-llama/helpers/index.ts (13)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Utility functions for package management, file operations, and configuration should be placed in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Helper modules should include installation, data sources, providers, tools, and environment configuration logic, and be located in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package should support a `pack-install` script in `package.json` for creating and installing the local package for testing.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Data source handling should support local files, web URLs, and database connections with flexible configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.{ts,tsx} : Demonstrate proper async/await patterns and error handling for LLM operations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Build scripts should be defined in `package.json` for building, cleaning, and developing the CLI.
packages/create-llama/templates/components/use-cases/python/financial_report/document_generator.py (2)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/service.py (10)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/**/* : FastAPI routers, models, and request handling should be implemented within the api/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{data,output}/**/* : Static files from data/ and output/ directories must be served at /api/files/data/* and /api/files/output/* endpoints.
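A minimal sketch of the static mounts this learning requires, using FastAPI's StaticFiles (directory and endpoint names taken from the learning):

```python
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()
# Serve data/ and output/ at the endpoints named in the learning above.
app.mount("/api/files/data", StaticFiles(directory="data"), name="data")
app.mount("/api/files/output", StaticFiles(directory="output"), name="output")
```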
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*test*.py : Python unit tests should use pytest and provide comprehensive API and service coverage.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
packages/create-llama/helpers/python.ts (26)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/types.ts : Type definitions for WorkflowFactory, UIConfig, and LlamaIndexServerOptions should be implemented in src/types.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The main CLI entry point should be implemented in `index.ts` using Commander.js for argument parsing.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/server/**/*.{ts,tsx} : TypeScript server code should be located in 'packages/server/' and use Next.js as the framework, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/templates/**/* : Templates should be organized by framework, type, and components within the `templates/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : The package configuration and binary entry point should be defined in `package.json`, with the binary pointing to `./dist/index.js`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Templates should use a component-based system allowing mix-and-match of frameworks, vector databases, observability tools, and integrations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Target ES2022 and use bundler module resolution in TypeScript configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/templates/**/* : Project templates for different frameworks and use cases should be stored in the `templates/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/workflow.ts : The runWorkflow function should execute workflows with proper event handling and be implemented in src/utils/workflow.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: The build process should include prebuild (cleaning), build (compilation with bunchee), postbuild (preparing TypeScript server and Python static assets), prepare:ts-server (Next.js app and API routes), and prepare:py-static (Python integration).
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/gen-ui.ts : The generateEventComponent function, responsible for using LLMs to auto-generate React components, should be implemented in src/utils/gen-ui.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/next/**/*.{js,jsx,ts,tsx} : UI components for the chat interface, including message history, streaming responses, canvas panel, and custom layouts, should be implemented in the next/ directory using shadcn/ui components and Tailwind CSS.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/.ui/**/* : Downloaded UI static files should be placed in the .ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/components/**/* : Structure custom UI components in dedicated directories.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/layout/**/*.tsx : Place custom React layout components in the `layout/` directory, e.g., `layout/header.tsx`.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (8)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Use factory functions for stateless workflow creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
Learnt from: leehuwuj
PR: run-llama/create-llama#630
File: python/llama-index-server/llama_index/server/api/utils/workflow.py:22-28
Timestamp: 2025-05-30T03:43:07.617Z
Learning: The ChatRequest model in python/llama-index-server/llama_index/server/api/models.py has a validate_id method that restricts the id field to alphanumeric characters, underscores, and hyphens only, preventing path traversal attacks. The chat_id parameter used in WorkflowService methods comes from this validated request.id, so no additional validation is needed in the WorkflowService.get_storage_path method.
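A sketch of the id validation this learning describes, written as a Pydantic v2 validator (the exact pattern in models.py may differ):

```python
import re

from pydantic import BaseModel, field_validator

class ChatRequest(BaseModel):
    id: str

    @field_validator("id")
    @classmethod
    def validate_id(cls, v: str) -> str:
        # Alphanumerics, underscores, and hyphens only: blocks path traversal.
        if not re.fullmatch(r"[A-Za-z0-9_-]+", v):
            raise ValueError("id may only contain letters, digits, '_' and '-'")
        return v
```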
packages/create-llama/templates/components/use-cases/python/financial_report/events.py (4)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/events.ts : Event system logic, including source, agent, and artifact events, as well as helper functions for converting LlamaIndex data to UI events, should be implemented in src/events.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Design clear Pydantic models for UI event schemas.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Custom events should be defined using Zod schemas and UI components generated with LLMs.
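A minimal Pydantic sketch of a structured UI event along these lines (field names are illustrative, not the actual api/models.py schema):

```python
from typing import Any

from pydantic import BaseModel

class UIEvent(BaseModel):
    type: str = "ui_event"
    data: dict[str, Any]

event = UIEvent(data={"stage": "analyze", "progress": 0.4})
print(event.model_dump_json())
```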
packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (9)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/models.py : Structured event types for workflow communication, including UIEvent, ArtifactEvent, SourceNodesEvent, and AgentRunEvent, should be defined in api/models.py using Pydantic data models.
Learnt from: leehuwuj
PR: run-llama/create-llama#630
File: python/llama-index-server/llama_index/server/api/utils/workflow.py:22-28
Timestamp: 2025-05-30T03:43:07.617Z
Learning: The ChatRequest model in python/llama-index-server/llama_index/server/api/models.py has a validate_id method that restricts the id field to alphanumeric characters, underscores, and hyphens only, preventing path traversal attacks. The chat_id parameter used in WorkflowService methods comes from this validated request.id, so no additional validation is needed in the WorkflowService.get_storage_path method.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Use factory functions for stateless workflow creation.
packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py (12)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/__init__.py : Package exports, including LlamaIndexServer, UIConfig, and UIEvent, should be defined in llama_index/server/__init__.py.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to python/llama-index-server/**/*.py : Python server code should be located in 'python/llama-index-server/' and use FastAPI, with the core server logic implemented in a 'LlamaIndexServer' class.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/server.py : The main LlamaIndexServer class should be implemented in llama_index/server/server.py and extend FastAPI.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/llama_index/server/resources/**/* : Bundled UI assets should be included in llama_index/server/resources for package distribution.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/server.ts : The LlamaIndexServer class should be implemented in src/server.ts and serve as the main server implementation that wraps Next.js.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/**/* : FastAPI routers, models, and request handling should be implemented within the api/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Follow LlamaIndex patterns for external service connections in tool integration.
packages/create-llama/templates/components/use-cases/python/financial_report/interpreter.py (1)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
packages/create-llama/templates/components/use-cases/python/financial_report/agent_tool.py (5)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: The tool should support multiple AI providers with a unified `ModelConfig` interface for provider selection, API key management, model specification, and embedding configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md (15)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{examples,docs}/**/*.{ipynb,md} : Jupyter notebooks and markdown files should be used for examples and documentation.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Changes to templates require rebuilding the CLI and should be validated with end-to-end tests.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
packages/create-llama/helpers/env-variables.ts (19)
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/index.ts : The CLI should accept command-line options for framework selection, template type, model providers, vector databases, data sources, tools, and observability options.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/.env : Environment variables should be managed using .env files for API keys and configuration.
Learnt from: thucpn
PR: run-llama/create-llama#0
File: :0-0
Timestamp: 2024-10-16T13:04:24.943Z
Learning: For the AstraDB integration in `create-llama`, errors related to missing environment variables in `checkRequiredEnvVars` are intended to be thrown to the server API, not handled by exiting the process.
Learnt from: thucpn
PR: run-llama/create-llama#0
File: :0-0
Timestamp: 2024-07-26T21:06:39.705Z
Learning: For the AstraDB integration in `create-llama`, errors related to missing environment variables in `checkRequiredEnvVars` are intended to be thrown to the server API, not handled by exiting the process.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/package.json : Testing scripts for end-to-end, Python-specific, and TypeScript-specific templates should be defined in `package.json`.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/helpers/**/* : Utility functions for package management, file operations, and configuration should be placed in the `helpers/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/**/create-app.ts : Core application creation logic and orchestration should be implemented in `create-app.ts`.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Changes to templates require rebuilding the CLI and should be validated with end-to-end tests.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/services/**/* : Business logic for file handling, LlamaCloud integration, and UI generation should be implemented within the services/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Applies to packages/create-llama/e2e/**/* : End-to-end tests using Playwright should be placed in the `e2e/` directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/e2e/**/* : Playwright end-to-end tests should be placed in 'packages/create-llama/e2e/' and validate both Python and TypeScript generated projects.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: Templates should use a component-based system allowing mix-and-match of frameworks, vector databases, observability tools, and integrations.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `@llamaindex/workflow` with tool arrays for agent creation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Target ES2022 and use bundler module resolution in TypeScript configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/create-llama/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:39.549Z
Learning: The tool should support multiple AI providers with a unified `ModelConfig` interface for provider selection, API key management, model specification, and embedding configuration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: CLI build artifacts and template caches should be cleaned using the 'npm run clean' script in 'packages/create-llama/'.
packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md (16)
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/main.py : AI-powered component generation using LLM workflows should be implemented in gen_ui/main.py, including GenUIWorkflow, planning, aggregation, code generation, and validation.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/gen_ui/**/* : AI-powered UI component generation system should be implemented within the gen_ui/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Changes to templates require rebuilding the CLI and should be validated with end-to-end tests.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{examples,docs}/**/*.{ipynb,md} : Jupyter notebooks and markdown files should be used for examples and documentation.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/utils/gen-ui.ts : The generateEventComponent function, responsible for using LLMs to auto-generate React components, should be implemented in src/utils/gen-ui.ts.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/**/pyproject.toml : Package configuration, dependencies, and build settings must be specified in pyproject.toml.
Learnt from: CR
PR: run-llama/create-llama#0
File: CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:26.711Z
Learning: Applies to packages/create-llama/templates/**/* : Templates for the CLI should be organized under 'packages/create-llama/templates/', with 'types/' for base project structures and 'components/' for reusable framework components.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/tools/**/* : Document generation, interpreter tools, and index querying utilities should be implemented within the tools/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/{components,layout}/**/* : Custom UI components should be placed in the components/ directory, and custom layout sections in the layout/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: The LlamaIndexServer should be configured using the workflow_factory parameter, with environment and UI configuration options as shown in the provided example.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/examples/**/* : Sample workflows demonstrating different features should be placed in the examples/ directory.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint must support streaming responses compatible with Vercel, background tasks for file downloads, and LlamaCloud integration if enabled.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/api/routers/chat.py : The /api/chat endpoint should be implemented in api/routers/chat.py and support streaming responses, message format conversion, background tasks, and optional LlamaCloud integration.
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Configure UI with options such as `starterQuestions`, `layoutDir`, `devMode`, and `suggestNextQuestions` in the server setup.
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Applies to python/llama-index-server/layout/**/* : Use shared layout components across workflows for layout consistency.
🧬 Code Graph Analysis (12)
packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts (1)
- packages/create-llama/helpers/use-case.ts (1)
  - `ALL_TYPESCRIPT_USE_CASES` (3-10)

packages/create-llama/e2e/python/resolve_dependencies.spec.ts (2)
- packages/create-llama/helpers/types.ts (1)
  - `TemplateUseCase` (44-50)
- packages/create-llama/helpers/use-case.ts (1)
  - `ALL_PYTHON_USE_CASES` (12-18)

packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py (1)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/settings.py (1)
  - `init_settings` (8-12)

packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts (1)
- packages/create-llama/helpers/use-case.ts (2)
  - `ALL_TYPESCRIPT_USE_CASES` (3-10)
  - `ALL_PYTHON_USE_CASES` (12-18)

packages/create-llama/helpers/use-case.ts (1)
- packages/create-llama/helpers/types.ts (3)
  - `TemplateUseCase` (44-50)
  - `EnvVar` (93-97)
  - `Dependency` (99-104)

packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/generate.py (3)
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py (1)
  - `get_index` (87-103)
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/service.py (2)
  - `LLamaCloudFileService` (31-74)
  - `add_file_to_pipeline` (36-74)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/settings.py (1)
  - `init_settings` (8-12)

packages/create-llama/templates/components/use-cases/python/financial_report/document_generator.py (2)
- python/llama-index-server/llama_index/server/settings.py (1)
  - `file_server_url_prefix` (20-21)
- packages/create-llama/templates/components/use-cases/python/financial_report/interpreter.py (1)
  - `to_tool` (278-280)

packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/service.py (1)
- packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py (1)
  - `get_client` (106-108)

packages/create-llama/helpers/python.ts (3)
- packages/create-llama/helpers/types.ts (2)
  - `InstallTemplateArgs` (77-91)
  - `Dependency` (99-104)
- packages/create-llama/helpers/use-case.ts (1)
  - `USE_CASE_CONFIGS` (20-84)
- packages/create-llama/helpers/copy.ts (1)
  - `copy` (13-49)

packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (4)
- python/llama-index-server/llama_index/server/models/artifacts.py (1)
  - `ArtifactEvent` (58-66)
- packages/create-llama/templates/components/use-cases/python/code_generator/utils.py (1)
  - `get_last_artifact` (128-131)
- packages/create-llama/templates/types/llamaindexserver/fastapi/src/settings.py (1)
  - `init_settings` (8-12)
- packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (2)
  - `prepare_chat_history` (94-124)
  - `PlanEvent` (43-45)

packages/create-llama/templates/components/use-cases/python/financial_report/interpreter.py (1)
- packages/create-llama/templates/components/use-cases/python/financial_report/document_generator.py (1)
  - `to_tool` (250-252)

packages/create-llama/helpers/env-variables.ts (2)
- packages/create-llama/helpers/types.ts (5)
  - `ModelConfig` (13-20)
  - `TemplateFramework` (22-22)
  - `TemplateType` (21-21)
  - `TemplateUseCase` (44-50)
  - `EnvVar` (93-97)
- packages/create-llama/helpers/use-case.ts (1)
  - `USE_CASE_CONFIGS` (20-84)
🪛 LanguageTool
packages/create-llama/templates/components/use-cases/python/deep_research/README-template.md
[grammar] ~97-~97: Ensure spelling is correct
Context: ...ration - llamaDeploy: The LlamaDeploy configration (deployment name and workflow name that...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
packages/create-llama/templates/components/use-cases/python/agentic_rag/README-template.md
[grammar] ~97-~97: Ensure spelling is correct
Context: ...ration - llamaDeploy: The LlamaDeploy configration (deployment name and workflow name that...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md
[grammar] ~89-~89: Ensure spelling is correct
Context: ...ration - llamaDeploy: The LlamaDeploy configration (deployment name and workflow name that...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md
[grammar] ~90-~90: Ensure spelling is correct
Context: ...ration - llamaDeploy: The LlamaDeploy configration (deployment name and workflow name that...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.17.2)
packages/create-llama/templates/components/use-cases/python/deep_research/README-template.md
39-39: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
49-49: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
packages/create-llama/templates/components/use-cases/python/agentic_rag/README-template.md
32-32: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
42-42: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md
31-31: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
41-41: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md
32-32: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
42-42: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, none)
- GitHub Check: typescript (22, macos-latest, nextjs, llamacloud)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: typescript (22, ubuntu-22.04, nextjs, llamacloud)
- GitHub Check: typescript (22, ubuntu-22.04, nextjs, none)
- GitHub Check: Unit Tests (ubuntu-latest, 3.9)
- GitHub Check: typescript (22, macos-latest, nextjs, none)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, none)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, llamacloud)
- GitHub Check: typescript (22, windows-latest, nextjs, none)
- GitHub Check: typescript (22, windows-latest, nextjs, llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, none)
- GitHub Check: lint
🔇 Additional comments (53)
packages/create-llama/templates/components/ui/layout/header.tsx (1)
24-24: Good fix for deployment compatibility! This change correctly addresses the past review comment by using an absolute URL instead of a relative path, ensuring the logo displays properly across different deployment environments, including LlamaDeploy servers.
packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts (2)
11-11: Well-organized refactoring for use case separation. Good change to import TypeScript-specific use cases, making the test suite more targeted and maintainable.
24-24: Consistent usage of TypeScript-specific use cases. The iteration correctly uses the new `ALL_TYPESCRIPT_USE_CASES` array, maintaining consistency with the updated import.

packages/create-llama/templates/components/ts-proxy/package.json (1)
1-18: Well-structured package configuration. The package.json is properly configured for a TypeScript proxy server with appropriate dependencies and development tools.
packages/create-llama/templates/types/llamaindexserver/fastapi/llama_deploy.yml (1)
19-24: Clean UI service configuration. The UI service configuration correctly specifies the port and source directory for the TypeScript proxy server.
packages/create-llama/e2e/python/resolve_dependencies.spec.ts (2)
6-7: LGTM! Import consolidation aligns with the new modular structure. The import changes reflect the separation of use cases from general types, which improves modularity and maintainability.
17-19: Confirm that the LlamaCloud subset exercises all vectorDB behaviors

I've checked `ALL_PYTHON_USE_CASES` in packages/create-llama/helpers/use-case.ts and it includes additional entries, e.g. "code_generator", "document_generator", and "hitl". The conditional in packages/create-llama/e2e/python/resolve_dependencies.spec.ts (lines 17–19) limits tests to:

- "agentic_rag"
- "deep_research"
- "financial_report"

Please verify that these three use cases adequately cover all LlamaCloud-specific vector database features. If any of the omitted cases expose unique behaviors or edge cases, consider adding them to this subset.
• File: packages/create-llama/e2e/python/resolve_dependencies.spec.ts (17–19)
packages/create-llama/e2e/shared/llamaindexserver_template.spec.ts (4)
20-24: LGTM! Framework-specific use case selection improves test accuracy. The conditional logic properly separates TypeScript and Python use cases, and the `isPythonLlamaDeploy` flag provides a clear way to differentiate between deployment types.
43-43: Verify the postInstallAction logic for Python LlamaDeploy. The change from always running the app to only installing dependencies for Python LlamaDeploy is appropriate since the deployment model is different.
Please confirm that the Python LlamaDeploy setup doesn't require running the app during the test setup phase.
58-61: Appropriate frontend test skip for Python LlamaDeploy. Frontend tests are correctly skipped for Python LlamaDeploy since the architecture separates UI components from the backend Python service.
73-76: Enhanced chat test skip logic is comprehensive. The skip logic now covers both specific use cases and the Python framework in general, which aligns with the new deployment architecture.
packages/create-llama/templates/types/llamaindexserver/fastapi/src/generate.py (2)
26-29: LGTM! Proper environment variable handling with sensible defaults. The use of `os.environ.get("DATA_DIR", "ui/data")` provides a good default while allowing for configuration flexibility. The recursive directory reading is appropriate for comprehensive document indexing.
21-22: Verify environment variable loading order. The code loads environment variables after importing settings, which might cause issues if the settings module depends on environment variables during import.
Consider moving `load_dotenv()` before the settings import to ensure environment variables are available during module initialization:

```diff
 def generate_index():
     """
     Index the documents in the data directory.
     """
+    load_dotenv()
     from src.index import STORAGE_DIR
     from src.settings import init_settings
     from llama_index.core.indices import (
         VectorStoreIndex,
     )
     from llama_index.core.readers import SimpleDirectoryReader

-    load_dotenv()
     init_settings()
```

packages/create-llama/helpers/types.ts (2)
93-97: LGTM! Well-structured EnvVar type definition. The `EnvVar` type provides a clean interface for environment variable management with appropriate optional fields for flexible usage across different contexts.
99-104: LGTM! Comprehensive Dependency interface. The `Dependency` interface supports advanced package management with version constraints, extras, and custom constraints, which is essential for the Python dependency resolution system.

packages/create-llama/templates/types/llamaindexserver/fastapi/src/index.py (3)
9-9: LGTM! Storage directory update aligns with new structure. The change from `"storage"` to `"src/storage"` is consistent with the new directory layout for Python LlamaDeploy deployments.
12-12: LGTM! Function signature simplification improves clarity. Removing the unused `chat_request` parameter simplifies the interface and aligns with the new workflow architecture where environment loading and settings initialization are handled explicitly.
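Combined with the direct `StorageContext` usage noted in the next comment, the simplified `get_index` reduces to roughly this sketch (a condensation with assumed constants, not the template's exact code):

```python
from llama_index.core import StorageContext, load_index_from_storage

STORAGE_DIR = "src/storage"


def get_index():
    # No chat_request parameter: the index is loaded purely from persisted storage
    storage_context = StorageContext.from_defaults(persist_dir=STORAGE_DIR)
    return load_index_from_storage(storage_context)
```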
18-18: LGTM! Direct StorageContext usage is appropriate. The direct use of `StorageContext.from_defaults(persist_dir=STORAGE_DIR)` eliminates unnecessary abstraction and makes the code more straightforward.

packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml (5)
31-31: LGTM: Script entry point updated correctly. The script entry point has been updated from `generate:generate_index` to `src.generate:generate_index` to reflect the new source directory structure.
49-49: LGTM: MyPy configuration updated consistently. The MyPy module override has been updated from `app.*` to `src.*`, which is consistent with the project restructuring.
59-60: LGTM: Build configuration updated for new structure. The wheel build target now correctly specifies `packages = ["src"]` to include the new source directory structure.
15-17: All new LlamaIndex dependencies are available and compatible

- File: packages/create-llama/templates/types/llamaindexserver/fastapi/pyproject.toml (lines 15–17)
- `llama-index-readers-file>=0.4.6,<1.0.0` → latest available 0.4.11
- `llama-index-indices-managed-llama-cloud>=0.6.3,<1.0.0` → latest available 0.7.10
- `llama-deploy` → latest available 0.8.1

No further changes needed.
20-21: Verified llama-deploy source

The Git repository at https://github.com/run-llama/llama_deploy is accessible (HTTP 200) and contains a valid Python package (pyproject.toml + llama_deploy module). No further action required.

packages/create-llama/questions/index.ts (3)
24-70: LGTM: Use case selection separated correctly. The use case selection has been properly separated into its own prompt, improving the user experience by making the selection flow more logical.
72-88: LGTM: Framework selection logic is correct. The conditional logic correctly excludes Python FastAPI for the "hitl" use case, with a helpful comment referencing the chat-ui example. This makes sense given the architectural differences.
114-118: LGTM: LlamaCloud exclusion logic is appropriate. The logic correctly excludes LlamaCloud prompts for use cases that don't require data sources ("code_generator", "document_generator", "hitl"), which prevents unnecessary configuration steps.
packages/create-llama/templates/components/use-cases/python/financial_report/utils.py (1)
11-46: LGTM: Well-designed streaming response handler. The `write_response_to_stream` function provides a clean abstraction for handling both streaming and non-streaming responses (a minimal sketch follows the list below). The implementation correctly:
- Uses type hints with Union types for different response types
- Handles AsyncGenerator for streaming responses
- Properly extracts delta and raw data from response chunks
- Provides fallback for non-streaming responses
- Includes comprehensive documentation
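A minimal sketch of such a handler, assuming chunks expose a `delta` attribute and using a writer callback as a stand-in for the workflow's stream; this illustrates the pattern rather than reproducing the file:

```python
from collections.abc import AsyncGenerator
from typing import Any, Callable, Union


async def write_response_to_stream(
    response: Union[AsyncGenerator, Any],
    write: Callable[[str], None],
) -> str:
    # Streaming case: accumulate deltas chunk by chunk
    if isinstance(response, AsyncGenerator):
        final_text = ""
        async for chunk in response:
            delta = getattr(chunk, "delta", "") or ""
            final_text += delta
            write(delta)
        return final_text
    # Fallback for non-streaming responses
    text = str(response)
    write(text)
    return text
```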
packages/create-llama/templates/components/use-cases/python/agentic_rag/workflow.py (3)
1-9: LGTM: Import structure updated correctly. The imports have been properly updated to use local `src` modules instead of external dependencies, which aligns with the new project structure using LlamaDeploy.
12-15: LGTM: Explicit environment initialization. The explicit calls to `load_dotenv()` and `init_settings()` ensure proper environment configuration. The function signature simplification (removing the optional `chat_request` parameter) makes the API cleaner.
21-26: LGTM: Citation functionality integration. The integration of citation functionality with the query tool and system prompt is well-implemented, providing better traceability for the agent's responses.
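Taken together, a hedged sketch of how such a workflow factory fits: the `src.index` and `src.settings` module names follow the template layout described above, and the tool wiring and prompt are illustrative, not the template's exact code:

```python
from dotenv import load_dotenv
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.tools import QueryEngineTool

from src.index import get_index          # assumed template module
from src.settings import init_settings   # assumed template module


def create_workflow() -> AgentWorkflow:
    # Environment and model settings are initialized here, not at import time
    load_dotenv()
    init_settings()
    index = get_index()
    query_tool = QueryEngineTool.from_defaults(
        index.as_query_engine(),
        name="query_index",
        description="Query the document index",
    )
    # The template additionally wraps the tool and prompt for citations
    return AgentWorkflow.from_tools_or_functions(
        [query_tool],
        system_prompt="Answer using the index and cite retrieved sources.",
    )
```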
packages/create-llama/helpers/use-case.ts (4)
3-10: LGTM: TypeScript use cases properly defined. The TypeScript use cases array includes all supported use cases, including "hitl", which is TypeScript-only based on the framework selection logic in the questions file.
12-18: LGTM: Python use cases correctly exclude "hitl". The Python use cases array correctly excludes "hitl", which aligns with the framework selection logic that prevents Python FastAPI for the "hitl" use case.
34-58: LGTM: Financial report configuration is comprehensive. The financial_report use case configuration properly includes:
- Relevant starter questions
- Required E2B_API_KEY environment variable with description
- All necessary dependencies (e2b-code-interpreter, markdown, xhtml2pdf) with appropriate version constraints
20-27: Well-structured configuration interface. The configuration interface provides a clean structure for each use case with:
- Required starter questions
- Optional additional environment variables
- Optional additional dependencies
This design allows for flexible configuration while maintaining type safety.
packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/generate.py (1)
47-53: Verify the impact of not waiting for file processing

Setting `wait_for_processing=False` means files are added to the pipeline without waiting for completion. This could cause issues if downstream processes expect the files to be fully ingested.

Please ensure that:
- Downstream processes handle partially processed files correctly
- There's a mechanism to check processing status if needed
- Error handling accounts for files that may fail processing asynchronously
Consider documenting this behavior or adding a configuration option to control whether to wait for processing.
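If a configuration option is added, it can be as small as threading the flag through the upload loop. Everything below except the `wait_for_processing` keyword is an assumption about the template's service API (names taken from the code-graph section above):

```python
from src.service import LLamaCloudFileService  # import path assumed


def upload_data_files(client, pipeline_id: str, files, wait: bool = False) -> None:
    # wait=True blocks until LlamaCloud finishes ingesting each file;
    # wait=False (the current behavior) returns as soon as the upload is queued.
    for file_path in files:
        LLamaCloudFileService.add_file_to_pipeline(  # signature partly assumed
            client,
            pipeline_id,
            file_path,
            wait_for_processing=wait,
        )
```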
packages/create-llama/helpers/index.ts (2)
117-148: Well-structured conditional logic for Python LlamaDeploy support. The implementation cleanly separates the data directory logic between traditional deployments and Python LlamaDeploy, making the code more maintainable and the intent clear.
190-195: Good architectural decision to skip output directories for LlamaDeploy. Conditionally skipping the output directory creation for Python LlamaDeploy deployments is a sensible optimization that avoids creating unnecessary directories.
packages/create-llama/templates/components/use-cases/python/financial_report/workflow.py (3)
123-130: Good normalization of chat history. The change to convert chat history dictionaries into `ChatMessage` objects ensures consistent message formatting throughout the workflow. This prevents the AttributeError mentioned in past review comments.
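The fix amounts to the usual dict-to-`ChatMessage` conversion; a minimal standalone sketch of that normalization:

```python
from llama_index.core.llms import ChatMessage


def normalize_chat_history(raw_history: list[dict]) -> list[ChatMessage]:
    # Plain dicts like {"role": "user", "content": "..."} become ChatMessage
    # objects, so downstream code can rely on attribute access without
    # hitting an AttributeError
    return [
        ChatMessage(role=message["role"], content=message["content"])
        for message in raw_history
    ]
```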
162-163: Proper handling of streaming responses. Good update to use the `write_response_to_stream` utility for handling streaming LLM responses. This ensures consistent stream handling across the codebase.
52-52: DocumentGenerator instantiation confirmed safe

The default `FILE_SERVER_URL_PREFIX` is defined in document_generator.py as `"/deployments/chat/ui/api/files/output/tools"`, so calling `DocumentGenerator()` without arguments will never trigger the `ValueError`. This change is backward-compatible.

packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (2)
33-35: Good centralization of environment and settings initialization. Moving the environment loading and settings initialization into `create_workflow()` improves encapsulation and ensures consistent initialization across all usages.
104-115: Proper chat history handling with artifact extraction. Excellent refactoring to:

- Normalize chat history into `ChatMessage` objects
- Use the new `get_last_artifact` utility for extracting artifacts from chat history

This ensures consistent message handling and proper artifact tracking.
packages/create-llama/templates/components/use-cases/python/agentic_rag/citation.py (1)
1-107: Well-structured citation implementation! The citation functionality is cleanly implemented with:

- Clear and instructive prompt template for citation generation
- Proper separation of concerns with dedicated processor and synthesizer classes
- Comprehensive validation in the `enable_citation` function
- Good use of type hints throughout
packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (1)
32-36: Good refactoring for centralized settings management! The workflow initialization has been properly simplified:

- Environment variables and settings are now centrally managed
- Chat history construction correctly builds `ChatMessage` objects from event data
- The global workflow instance follows the consistent pattern across other use cases
Also applies to: 90-92, 101-108, 359-359
packages/create-llama/templates/components/use-cases/python/financial_report/document_generator.py (1)
111-253: Well-structured document generation implementation! The DocumentGenerator class is well-designed with (sketched after the list below):
- Clear separation of HTML and PDF generation logic
- Proper error handling for missing dependencies
- Clean file URL generation for serving documents
- Good use of CSS styling for both HTML and PDF outputs
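The HTML/PDF split can be sketched with the `markdown` and `xhtml2pdf` dependencies this use case declares; the function names and styling below are illustrative, not the class's actual methods:

```python
import markdown
from xhtml2pdf import pisa


def render_html(md_text: str) -> str:
    # Markdown -> HTML, wrapped with a small inline stylesheet
    body = markdown.markdown(md_text)
    return f"<html><body style='font-family: sans-serif'>{body}</body></html>"


def render_pdf(md_text: str, out_path: str) -> None:
    # HTML -> PDF via xhtml2pdf; pisa reports errors on the returned status
    with open(out_path, "wb") as pdf_file:
        status = pisa.CreatePDF(render_html(md_text), dest=pdf_file)
    if status.err:
        raise RuntimeError(f"PDF generation failed for {out_path}")
```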
packages/create-llama/templates/components/use-cases/python/financial_report/interpreter.py (1)
41-281: Excellent implementation of the E2B code interpreter! The E2BCodeInterpreter class is very well-designed with (see the retry sketch after this list):

- Proper lifecycle management including cleanup in `__del__`
- Comprehensive error handling for file operations
- Smart retry logic with a maximum of 3 attempts
- Clear separation of concerns for file handling, execution, and result parsing
- Good logging throughout for debugging
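The retry behavior can be pictured as a small loop around the sandbox call; `execute` below is a stand-in for the class's actual execution method, which this sketch does not assume:

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)
MAX_RETRIES = 3


def run_with_retries(execute, code: str):
    # Try the sandbox call up to MAX_RETRIES times, logging each failure
    last_error: Optional[Exception] = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return execute(code)
        except Exception as exc:  # sandbox errors vary widely
            last_error = exc
            logger.warning("Attempt %d/%d failed: %s", attempt, MAX_RETRIES, exc)
    raise RuntimeError("Code execution failed after retries") from last_error
```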
packages/create-llama/helpers/env-variables.ts (2)
228-266: LGTM! Well-structured conditional environment variable handling. The refactoring properly handles the different environment variable requirements between Python LlamaDeploy and other frameworks. The use of `NEXT_PUBLIC_STARTER_QUESTIONS` for Python LlamaDeploy aligns with Next.js conventions for client-side accessible variables.
392-399: Good conditional exclusion of framework-specific variables. The exclusion of `APP_HOST` and `APP_PORT` for Python LlamaDeploy templates is appropriate since these are managed by the llama-deploy configuration.

packages/create-llama/helpers/python.ts (2)
13-35: Clean refactoring to use options object pattern. The refactoring improves the function signature by using a single options object with the `Pick` utility type, making it more maintainable and aligned with TypeScript best practices.
421-458: Well-organized directory structure for LlamaDeploy. The separation of source files into `src/` and UI files into `ui/` provides a clean architecture that aligns with LlamaDeploy's deployment model.

packages/create-llama/templates/components/vectordbs/llamaindexserver/llamacloud/python/index.py (1)
17-85: Excellent configuration management with proper validation. The Pydantic models provide robust configuration handling with:
- Environment variable fallbacks
- Field validation with clear error messages
- Proper encapsulation of sensitive fields with `exclude=True`

packages/create-llama/templates/components/use-cases/python/financial_report/agent_tool.py (2)
29-33: Potential method signature conflict with parent class. The `acall` method signature adds a `ctx` parameter, which may conflict with the parent `FunctionTool` class. The `# type: ignore` comment suggests this is a known issue.

Consider documenting why this type ignore is necessary, or explore alternative designs such as (see the sketch after this list):
- Using composition instead of inheritance
- Passing context through a different mechanism
- Creating a wrapper method with a different name
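As an illustration of the composition option, a hypothetical wrapper that keeps the parent signature untouched:

```python
from llama_index.core.tools import FunctionTool


class ContextAwareTool:
    """Wraps a FunctionTool instead of subclassing it (hypothetical sketch)."""

    def __init__(self, tool: FunctionTool):
        self._tool = tool

    async def acall_with_ctx(self, ctx, **kwargs):
        # A distinct method name avoids clashing with FunctionTool.acall,
        # so no `# type: ignore` is needed; ctx is threaded explicitly.
        kwargs["ctx"] = ctx
        return await self._tool.acall(**kwargs)
```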
101-176: Excellent implementation of parallel tool execution with progress tracking. The function handles multiple scenarios well (see the sketch after this list):
- Single vs multiple tool calls
- Progress tracking with event emission
- Graceful error handling for missing tools
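The core of that pattern, reduced to a standalone sketch with assumed call and event shapes:

```python
import asyncio


async def run_tools_in_parallel(tool_calls, tools_by_name, emit_event):
    # Dispatch every tool call concurrently, emitting progress events and
    # skipping unknown tools instead of raising (shapes are illustrative).
    async def run_one(call):
        tool = tools_by_name.get(call["name"])
        if tool is None:
            emit_event({"tool": call["name"], "status": "skipped"})
            return None
        emit_event({"tool": call["name"], "status": "running"})
        result = await tool.acall(**call["kwargs"])
        emit_event({"tool": call["name"], "status": "done"})
        return result

    return await asyncio.gather(*(run_one(call) for call in tool_calls))
```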
Python use-cases structure:
Update use cases:
Human in the Loop (removed)
Update create-llama: