
Feature: Per-Agent Token Tracking for Cost Attribution #39

Overview

Implement per-agent token tracking to provide granular visibility into token consumption by each CrewAI agent in the workflow.

Current State

✅ Workflow-level token tracking exists via crew.calculate_usage_metrics()
❌ No per-agent breakdown available

Motivation

  • Cost Attribution: Understand which agent consumes the most tokens
  • Optimization Opportunities: Identify agents that need context pruning
  • Debugging: Track token usage anomalies per agent
  • Analytics: Historical trends for each agent's efficiency

Proposed Solution

Use agent.llm.get_token_usage_summary() to capture per-agent token metrics after workflow completion.

Each agent has its own LLM instance that maintains cumulative token counters. By querying each agent's LLM after the crew finishes execution, we can calculate per-agent metrics.
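A minimal sketch of that idea, assuming `agent.llm.get_token_usage_summary()` returns an object exposing `prompt_tokens`, `completion_tokens`, and `total_tokens` (the attribute names and the helper function are assumptions, not confirmed API):

```python
# Hypothetical helper: query each agent's LLM after the crew finishes.
# Assumes the summary object carries prompt/completion/total token counts.
def collect_agent_token_usage(agents: dict) -> dict:
    """Map agent name -> token usage captured from that agent's LLM."""
    usage = {}
    for name, agent in agents.items():
        summary = agent.llm.get_token_usage_summary()
        usage[name] = {
            "prompt_tokens": summary.prompt_tokens,
            "completion_tokens": summary.completion_tokens,
            "total_tokens": summary.total_tokens,
        }
    return usage
```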

Implementation Plan

1. Update crew.py

After crew.kickoff() completes, capture token usage from each agent:

  • Step Planner Agent
  • Element Identifier Agent
  • Code Assembler Agent
  • Code Validator Agent

Return a per_agent_metrics dictionary along with the existing return values, as in the sketch below.
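A rough sketch of what this could look like inside run_crew(); the agent variable names and the exact set of existing return values are assumptions, and collect_agent_token_usage() is the hypothetical helper sketched above:

```python
# Hypothetical shape of the change in crew.py; names are illustrative.
def run_crew(inputs: dict):
    result = crew.kickoff(inputs=inputs)

    # Query each agent's LLM once the workflow has finished.
    agents = {
        "step_planner": step_planner_agent,
        "element_identifier": element_identifier_agent,
        "code_assembler": code_assembler_agent,
        "code_validator": code_validator_agent,
    }
    per_agent_metrics = collect_agent_token_usage(agents)

    # Existing return values stay unchanged; per_agent_metrics is appended.
    return result, crew.calculate_usage_metrics(), per_agent_metrics
```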

2. Update workflow_service.py

  • Capture the new per_agent_metrics return value from run_crew()
  • Log the per-agent breakdown for visibility
  • Pass the metrics to WorkflowMetrics for persistence (see the sketch below)
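A hedged sketch of the service-side change; the logger, the workflow_metrics instance, and run_crew()'s other return values are placeholders for whatever workflow_service.py already defines:

```python
# Illustrative only: names are placeholders, not the project's actual API.
result, usage_metrics, per_agent_metrics = run_crew(inputs)

logger.info("📊 Per-agent token breakdown:")
for agent_name, usage in per_agent_metrics.items():
    logger.info(
        "   - %s: %d tokens (prompt: %d, completion: %d)",
        agent_name,
        usage["total_tokens"],
        usage["prompt_tokens"],
        usage["completion_tokens"],
    )

# Optionally hand the breakdown to WorkflowMetrics for persistence.
workflow_metrics.per_agent_metrics = per_agent_metrics
```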

3. Update WorkflowMetrics model (Optional)

Add a per_agent_metrics field to store the breakdown for historical analysis.
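If WorkflowMetrics is a Pydantic-style model, the field addition might look roughly like this (the model base, field type, and default are assumptions; adapt to whatever workflow_metrics.py actually uses):

```python
from typing import Optional

from pydantic import BaseModel


class WorkflowMetrics(BaseModel):
    # ...existing fields stay as they are...

    # Per-agent breakdown keyed by agent name, e.g.
    # {"step_planner": {"prompt_tokens": 512, "completion_tokens": 133, "total_tokens": 645}}
    per_agent_metrics: Optional[dict[str, dict[str, int]]] = None
```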

Expected Output

📊 Per-agent token breakdown:
   - step_planner: 645 tokens (prompt: 512, completion: 133)
   - element_identifier: 823 tokens (prompt: 598, completion: 225)
   - code_assembler: 754 tokens (prompt: 465, completion: 289)
   - code_validator: 325 tokens (prompt: 248, completion: 77)

Reference

This implementation follows the same approach used in CrewAI PR #4132:
crewAIInc/crewAI#4132

Files to Modify

  • src/backend/crew_ai/crew.py
  • src/backend/services/workflow_service.py
  • src/backend/core/workflow_metrics.py (optional, for persistence)

Acceptance Criteria

  • Per-agent token metrics captured after workflow completion
  • Metrics logged for debugging visibility
  • No impact on existing functionality
  • Metrics optionally persisted in WorkflowMetrics model
