forked from crewAIInc/crewAI
Feat/per user token tracing #2
Open
Devasy wants to merge 14 commits into main from feat/per-user-token-tracing
Changes from all commits (14 commits)
56b538c  feat: add detailed token metrics tracking for agents and tasks (Devasy)
8586061  feat: enhance per-agent token metrics accuracy by aggregating task data (Devasy)
c73b36a  Adding HITL for Flows (#4143) (joaomdmoura)
a0c2662  Merge branch 'main' into feat/per-user-token-tracing (Devasy)
b9dd166  Lorenze/agent executor flow pattern (#3975) (lorenzejay)
467ee29  Improve EventListener and TraceCollectionListener for improved event… (lorenzejay)
f3c17a2  feat: Introduce production-ready Flows and Crews architecture with ne… (lorenzejay)
afea8a5  Fix token tracking issues in async tasks and agent metrics (Devasy)
9bbf53e  Merge branch 'feat/per-user-token-tracing' of https://github.com/Deva… (Devasy)
314642f  Merge branch 'main' into feat/per-user-token-tracing (Devasy)
0f0538c  Fix async task token tracking race condition (Devasy)
4f583fe  Fix late-binding closure in async task wrapper (Devasy)
40f0692  Fix inconsistent per-agent dictionary keys and document threading lim… (Devasy)
f62a5a9  Fix threading race condition for async task token tracking (Devasy)
New file, 154 lines added:

---
title: Production Architecture
description: Best practices for building production-ready AI applications with CrewAI
icon: server
mode: "wide"
---

# The Flow-First Mindset

When building production AI applications with CrewAI, **we recommend starting with a Flow**.

While it's possible to run individual Crews or Agents, wrapping them in a Flow provides the necessary structure for a robust, scalable application.

## Why Flows?

1. **State Management**: Flows provide a built-in way to manage state across different steps of your application. This is crucial for passing data between Crews, maintaining context, and handling user inputs.
2. **Control**: Flows allow you to define precise execution paths, including loops, conditionals, and branching logic (see the sketch after this list). This is essential for handling edge cases and ensuring your application behaves predictably.
3. **Observability**: Flows provide a clear structure that makes it easier to trace execution, debug issues, and monitor performance. We recommend using [CrewAI Tracing](/en/observability/tracing) for detailed insights. Simply run `crewai login` to enable free observability features.

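Branching in particular maps onto the `@router` decorator: a routed method returns a label, and listeners subscribed to that label run next. A minimal sketch (the state fields, route names, and validation check here are illustrative, not part of this guide):

```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel


class RoutedState(BaseModel):
    user_input: str = ""


class RoutedFlow(Flow[RoutedState]):
    @start()
    def gather_input(self):
        # Illustrative placeholder; real input gathering goes here.
        pass

    @router(gather_input)
    def check_input(self):
        # The returned string selects which listener fires next.
        return "valid" if self.state.user_input.strip() else "invalid"

    @listen("valid")
    def execute(self):
        pass

    @listen("invalid")
    def reject(self):
        pass
```
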
## The Architecture

A typical production CrewAI application looks like this:

```mermaid
graph TD
    Start((Start)) --> Flow[Flow Orchestrator]
    Flow --> State{State Management}
    State --> Step1[Step 1: Data Gathering]
    Step1 --> Crew1[Research Crew]
    Crew1 --> State
    State --> Step2{Condition Check}
    Step2 -- "Valid" --> Step3[Step 3: Execution]
    Step3 --> Crew2[Action Crew]
    Step2 -- "Invalid" --> End((End))
    Crew2 --> End
```

### 1. The Flow Class
Your `Flow` class is the entry point. It defines the state schema and the methods that execute your logic.

```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel


class AppState(BaseModel):
    user_input: str = ""
    research_results: str = ""
    final_report: str = ""


class ProductionFlow(Flow[AppState]):
    @start()
    def gather_input(self):
        # ... logic to get input ...
        pass

    @listen(gather_input)
    def run_research_crew(self):
        # ... trigger a Crew ...
        pass
```

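Running the flow is then a single call. A minimal usage sketch (the input key matches the `AppState` field above; `plot` is optional and writes an HTML visualization of the flow graph):

```python
flow = ProductionFlow()
result = flow.kickoff(inputs={"user_input": "Research CrewAI Flows"})

# Optional: render the flow structure for inspection.
flow.plot("production_flow")
```
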
### 2. State Management
Use Pydantic models to define your state. This ensures type safety and makes it clear what data is available at each step.

- **Keep it minimal**: Store only what you need to persist between steps.
- **Use structured data**: Avoid unstructured dictionaries when possible (see the sketch after this list).

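As an illustration (the model and field names below are hypothetical), nesting Pydantic models keeps related data grouped and validated, whereas a bare dictionary offers neither guarantee:

```python
from pydantic import BaseModel


class ResearchFindings(BaseModel):
    summary: str = ""
    sources: list[str] = []


class ReportState(BaseModel):
    topic: str = ""
    findings: ResearchFindings = ResearchFindings()
    final_report: str = ""
```
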
### 3. Crews as Units of Work
Delegate complex tasks to Crews. A Crew should be focused on a specific goal (e.g., "Research a topic", "Write a blog post").

- **Don't over-engineer Crews**: Keep them focused.
- **Pass state explicitly**: Pass the necessary data from the Flow state to the Crew inputs.

```python
@listen(gather_input)
def run_research_crew(self):
    crew = ResearchCrew()
    result = crew.kickoff(inputs={"topic": self.state.user_input})
    self.state.research_results = result.raw
```

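`ResearchCrew` is not defined in this guide; a minimal sketch of what such a focused crew might look like, assuming a single research agent and task (all names and prompts below are illustrative):

```python
from crewai import Agent, Crew, Process, Task


class ResearchCrew:
    """Illustrative wrapper that builds a one-agent crew and exposes kickoff()."""

    def kickoff(self, inputs: dict):
        researcher = Agent(
            role="Research Analyst",
            goal="Research the given topic thoroughly",
            backstory="An analyst who produces concise, well-sourced summaries.",
        )
        research_task = Task(
            description="Research the topic: {topic}",
            expected_output="A concise summary of findings with sources",
            agent=researcher,
        )
        crew = Crew(
            agents=[researcher],
            tasks=[research_task],
            process=Process.sequential,
        )
        # {topic} in the task description is filled from the inputs dict.
        return crew.kickoff(inputs=inputs)
```
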
## Control Primitives

Leverage CrewAI's control primitives to make your Crews more robust and predictable.

### 1. Task Guardrails
Use [Task Guardrails](/en/concepts/tasks#task-guardrails) to validate task outputs before they are accepted. This ensures that your agents produce high-quality results.

```python
from typing import Any, Tuple

from crewai import Task
from crewai.tasks.task_output import TaskOutput


def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
    if len(result.raw) < 100:
        return (False, "Content is too short. Please expand.")
    return (True, result.raw)


task = Task(
    ...,
    guardrail=validate_content
)
```

### 2. Structured Outputs
Always use structured outputs (`output_pydantic` or `output_json`) when passing data between tasks or to your application. This prevents parsing errors and ensures type safety.

```python
from typing import List

from crewai import Task
from pydantic import BaseModel


class ResearchResult(BaseModel):
    summary: str
    sources: List[str]


task = Task(
    ...,
    output_pydantic=ResearchResult
)
```

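Downstream code can then read the validated model instead of parsing raw text. A small sketch, assuming a `crew` whose final task is the one above:

```python
result = crew.kickoff(inputs={"topic": "CrewAI Flows"})

# Populated because the final task sets output_pydantic=ResearchResult.
research: ResearchResult = result.pydantic
print(research.summary)
print(research.sources)
```
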
### 3. LLM Hooks
Use [LLM Hooks](/en/learn/llm-hooks) to inspect or modify messages before they are sent to the LLM, or to sanitize responses.

```python
@before_llm_call
def log_request(context):
    # Runs before every LLM call; see /en/learn/llm-hooks for how hooks are registered.
    print(f"Agent {context.agent.role} is calling the LLM...")
```

## Deployment Patterns

When deploying your Flow, consider the following:

### CrewAI Enterprise
The easiest way to deploy your Flow is using CrewAI Enterprise. It handles the infrastructure, authentication, and monitoring for you.

Check out the [Deployment Guide](/en/enterprise/guides/deploy-crew) to get started.

```bash
crewai deploy create
```

### Async Execution
For long-running tasks, use `kickoff_async` to avoid blocking your API.

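A minimal sketch of calling a flow asynchronously (the surrounding handler is illustrative):

```python
import asyncio


async def handle_request(user_input: str) -> str:
    flow = ProductionFlow()
    # Does not block the event loop, so an async API server stays responsive.
    result = await flow.kickoff_async(inputs={"user_input": user_input})
    return str(result)


if __name__ == "__main__":
    print(asyncio.run(handle_request("Research CrewAI Flows")))
```
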
### Persistence
Use the `@persist` decorator to save the state of your Flow to a database. This allows you to resume execution if the process crashes or if you need to wait for human input.

```python
from crewai.flow.persistence import persist


@persist
class ProductionFlow(Flow[AppState]):
    # ...
```

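Restoring a persisted run is keyed by the state's `id`. A sketch, assuming the default SQLite-backed persistence and the restore-by-id behavior described in the Flows persistence docs (the state model inherits `FlowState` so it carries an `id` field):

```python
from crewai.flow.flow import Flow, FlowState, start
from crewai.flow.persistence import persist


class PersistedState(FlowState):  # FlowState adds a unique `id` field
    user_input: str = ""


@persist()
class PersistedFlow(Flow[PersistedState]):
    @start()
    def gather_input(self):
        pass


flow = PersistedFlow()
flow.kickoff(inputs={"user_input": "Research CrewAI Flows"})
saved_id = flow.state.id

# Later (after a crash or while waiting on human input), pass the same id
# to restore the saved state before execution continues.
PersistedFlow().kickoff(inputs={"id": saved_id})
```
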
## Summary

- **Start with a Flow.**
- **Define a clear State.**
- **Use Crews for complex tasks.**
- **Deploy with an API and persistence.**
[Automated review: verification scripts were run against lib/crewai/src/crewai/flow/human_feedback.py in Devasy/crewAI-telemetry; outputs collapsed.]
Fix listener signature in routed human-feedback example

The `@listen("approved")` method should not expect a `result` parameter. When `emit` is specified, the decorator returns the outcome string (not `HumanFeedbackResult`), and the feedback object is stored on `self.last_human_feedback`. Update the example accordingly.