Conversation

@supreme-gg-gg
Contributor

@supreme-gg-gg supreme-gg-gg commented Sep 20, 2025

This PR extends BYO (Bring Your Own) agent support to agents created with the CrewAI framework. The design follows kagent-langgraph.

Changes

  • Create a new kagent-crewai package for creating A2A servers using CrewAI
  • Provide samples/crewai for Crew and Flow mode to show developers how to integrate (<10 lines of change from CrewAI code to set up the integration)

Next steps

  • Implement memory and persistence for CrewAI agents
  • Support OpenTelemetry CrewAI Instrumentation

Testing

To get started, follow the instructions in samples/crewai/research-crew. Alternatively, follow samples/crewai/poem_flow for a Flow mode example. Both examples are taken from the CrewAI QuickStart tutorial and showcase how to create agents, tasks, providers, crews, and flows in CrewAI, then integrate them seamlessly by modifying only the code entry point.
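For orientation, the entry-point change the samples demonstrate looks roughly like this. This is an illustrative sketch, not runnable as-is: the KAgentApp import path, constructor arguments, and build() method are assumptions inferred from this discussion and the kagent-langgraph design, so check the kagent-crewai package for the actual API.

```python
# Illustrative sketch only: the names below (KAgentApp, app.build) are
# assumptions, not the confirmed kagent-crewai API.
import uvicorn

from kagent.crewai import KAgentApp  # hypothetical import path
from research_crew.crew import ResearchCrew  # your existing CrewAI crew


def main() -> None:
    crew = ResearchCrew().crew()  # unchanged CrewAI code
    app = KAgentApp(crew)  # wraps the crew as an A2A server
    uvicorn.run(app.build(), host="0.0.0.0", port=8080)


if __name__ == "__main__":
    main()
```

The point is that only the entry point changes; the agent, task, and crew definitions stay untouched.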

How it looks from the Kagent UI:

[Screenshot: the CrewAI agent running in the Kagent UI]

Signed-off-by: Jet Chiang <jetjiang.ez@gmail.com>
@supreme-gg-gg supreme-gg-gg marked this pull request as ready for review September 22, 2025 04:37
Copilot AI review requested due to automatic review settings September 22, 2025 04:37
Contributor

Copilot AI left a comment

Pull Request Overview

This PR adds support for CrewAI agents to the BYO (Bring Your Own) agent framework in KAgent. It follows a similar design pattern to the existing kagent-langgraph integration but adapts it for CrewAI's multi-agent orchestration capabilities.

  • Creates the kagent-crewai package to enable A2A server integration with CrewAI crews and flows
  • Provides two comprehensive sample implementations: a research crew with multiple agents and a poem generation flow
  • Maintains the standard CrewAI developer experience while adding KAgent integration with minimal code changes

Reviewed Changes

Copilot reviewed 26 out of 29 changed files in this pull request and generated 2 comments.

Summary per file:

  • python/packages/kagent-crewai/: Core package providing CrewAI integration with A2A protocol support and event streaming
  • python/samples/crewai/research-crew/: Sample research crew demonstrating multi-agent collaboration with web search capabilities
  • python/samples/crewai/poem_flow/: Sample CrewAI flow showing state management and sequential execution patterns
  • python/pyproject.toml: Workspace configuration updated to include CrewAI samples
  • python/Makefile: Build targets added for CrewAI sample containers
Comments suppressed due to low confidence (1)

python/packages/kagent-crewai/src/kagent/crewai/_executor.py:1

  • This line appears to be copy-pasted from the poem flow sample and doesn't belong in the generic executor. The log message should be generic or removed entirely.
import asyncio

import uvicorn
from crewai.flow import Flow, listen, start
from kagent.core import configure_tracing
Contributor

What do you think setting this up in as part of the integration, so end users can have it automatically without specifying in each of their crewai agents?

Would you be also interested (either here, or as a follow-up) seeing how leveraging this SDK would improve instrumentation?

Contributor Author

I think it's better if the user manually enables tracing. I can make this an argument when creating the app so the user doesn't need the import: app = KAgentApp(..., tracing=True)

For the OpenTelemetry CrewAI SDK, would you mind elaborating on what features you are looking for? I'm happy to implement this as well, but I saw that there is already tracing in kagent core.

Contributor

Specifically this would be enabling tracing for CrewAI. We bake in tracing for various LLM providers, HTTP calls, and ADK, but we don't want to ALWAYS load tracing libraries for agent libraries we aren't actively using. I think I agree with @supreme-gg-gg about app = KAgentApp(..., tracing=True), but maybe it should be True by default.
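The pattern under discussion can be sketched with stand-ins (neither name below is the real kagent API; configure_tracing here is a placeholder for kagent.core's helper): tracing defaults to on, but the setup only runs when the flag is set, so frameworks that are not in use never load their instrumentation libraries.

```python
# Stand-in sketch of an opt-out tracing flag; KAgentApp and
# configure_tracing are placeholders, not the real kagent API.
from typing import Any


def configure_tracing() -> str:
    # Placeholder for kagent.core's tracing setup; the real helper would
    # install OpenTelemetry exporters and instrumentors.
    return "tracing-configured"


class KAgentApp:
    """Stand-in showing the tracing flag, defaulting to True."""

    def __init__(self, agent: Any = None, *, tracing: bool = True) -> None:
        self.agent = agent
        # Only touch tracing machinery when requested.
        self.tracing_state = configure_tracing() if tracing else None


traced = KAgentApp()  # tracing on by default
untraced = KAgentApp(tracing=False)  # explicit opt-out
```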

Contributor Author

> Specifically this would be enabling tracing for crewai. We bake in tracing for various LLM providers, http calls, and ADK, but we don't want to ALWAYS load tracing libraries for agent libraries we aren't actively using. I think I agree with @supreme-gg-gg about app = KAgentApp(..., tracing=True), but maybe it should be True by default.

Would CrewAI tracing with this SDK be on top of, or replace, the existing provider-specific tracing (e.g. OpenAI, Anthropic) in kagent core? It would be placed in the CrewAI package, is that correct?

Contributor

Not sure it would replace those, as what we have in kagent-core utils covers LLM-specific and auxiliary tracing. I'd expect the CrewAI SDK to add additional, CrewAI-specific spans on top of what we have as core tracing.

Have you had a chance to take a look at what spans you are getting with the current changes? If we have LLM calls then I'd say we are fine w/r/t tracing with this PR, and can add the CrewAI SDK as a potential follow-up, if it's useful.

Contributor Author

@supreme-gg-gg supreme-gg-gg Oct 1, 2025


@krisztianfekete Sorry for the late response. I did a few experiments with CrewAI tracing and validated that the CrewAI spans are indeed sent to the collector. CrewAI has its own internal instrumentor, but I found that adding the CrewAI Instrumentation SDK you suggested makes the spans more comprehensive.

[Screenshot: CrewAI spans visible in the trace collector]

Regarding providers, the CrewAI SDK traces all calls to LiteLLM, but a provider-specific SDK is required for spans containing the actual LLM call content (e.g. prompts, token counts), so I will use an approach similar to LangGraph tracing where both the provider-specific and framework-specific SDKs are used.

I will be adding this in the follow-up PR #951.

Contributor

Thanks for following up!

In the core library we have the Gemini/Anthropic/OpenAI SDKs already, so by adding the CrewAI one we should have support for the most popular options, I think.

Contributor

@EItanya EItanya left a comment


This is looking great so far! Mostly my comments are about the executor and some Docker config.

Comment on lines 106 to 109
loop = asyncio.get_running_loop()

def _enqueue_event(event: Any):
    asyncio.run_coroutine_threadsafe(event_queue.enqueue_event(event), loop)
Contributor

Why do we need this helper func? Why can't we just enqueue the event from the scoped handlers?

Contributor Author

The handlers must be synchronous because they're invoked by the CrewAI event bus. However, the enqueue_event function from A2A is async, so we need this helper to schedule the enqueue on the event loop instead of doing it directly in the handler.
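The thread-to-loop handoff described above can be sketched self-contained like this (EventQueue is a minimal stand-in for the A2A event queue, not the real a2a-sdk class):

```python
# Sketch of bridging synchronous, thread-invoked event-bus handlers to an
# async queue via asyncio.run_coroutine_threadsafe. EventQueue is a
# stand-in for the A2A queue, and sync_handler simulates a CrewAI
# event-bus callback firing on a worker thread.
import asyncio
import threading
from typing import Any


class EventQueue:
    """Stand-in with an async enqueue_event, like the A2A queue."""

    def __init__(self) -> None:
        self.items: list[Any] = []

    async def enqueue_event(self, event: Any) -> None:
        self.items.append(event)


def run_demo() -> list[Any]:
    queue = EventQueue()
    results: list[Any] = []

    async def main() -> None:
        # Capture the running loop so sync handlers can hand work back to it.
        loop = asyncio.get_running_loop()

        def _enqueue_event(event: Any) -> None:
            # Safe from any thread: schedules the coroutine on `loop`.
            asyncio.run_coroutine_threadsafe(queue.enqueue_event(event), loop)

        def sync_handler() -> None:
            # Synchronous callback running off the event-loop thread.
            _enqueue_event({"type": "task_started"})
            _enqueue_event({"type": "task_completed"})

        t = threading.Thread(target=sync_handler)
        t.start()
        t.join()
        # Let the scheduled coroutines run on this loop.
        await asyncio.sleep(0.1)
        results.extend(queue.items)

    asyncio.run(main())
    return results
```

Calls scheduled from one thread are delivered to the loop in FIFO order, so event ordering is preserved.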

Contributor

What will the impact be of running it this way?

Contributor Author

I don't think there are any major risks or downsides to this workaround. The only minor problem I can see is that if the enqueue fails we can't log and report it, since it's somewhat "fire and forget".
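If losing those failures ever matters, one hedge is to attach a done-callback to the Future that run_coroutine_threadsafe returns, so exceptions get logged instead of vanishing. A self-contained sketch (failing_enqueue stands in for an enqueue that raises; the real executor would use its own logger):

```python
# Sketch: recover error visibility from "fire and forget" scheduling by
# attaching a done-callback to the concurrent.futures.Future returned by
# asyncio.run_coroutine_threadsafe.
import asyncio
import logging
from typing import Any

logger = logging.getLogger("kagent.crewai")
failures: list[BaseException] = []


async def failing_enqueue(event: Any) -> None:
    # Stand-in for an enqueue that fails (e.g. the queue was closed).
    raise RuntimeError(f"queue closed, dropped {event!r}")


def demo() -> None:
    async def main() -> None:
        loop = asyncio.get_running_loop()

        def _enqueue_event(event: Any) -> None:
            future = asyncio.run_coroutine_threadsafe(failing_enqueue(event), loop)

            def _log_failure(fut) -> None:
                # Runs once the coroutine finishes; captures any exception.
                exc = fut.exception()
                if exc is not None:
                    failures.append(exc)
                    logger.error("failed to enqueue event: %s", exc)

            future.add_done_callback(_log_failure)

        # Invoke the sync handler off the loop thread, as the event bus would.
        await loop.run_in_executor(None, _enqueue_event, {"type": "tick"})
        await asyncio.sleep(0.1)

    asyncio.run(main())
```

Retrieving the exception in the callback also suppresses asyncio's "exception was never retrieved" warning.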

Comment on lines 4 to 14
FROM python:3.13-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Install uv for fast Python package management
RUN pip install uv
Contributor

We can use the uv image to avoid installing uv separately.
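A sketch of that suggestion, using Astral's published uv image (the tag is an assumption, so pin a specific version in practice, and your_app is a placeholder module name):

```dockerfile
# Sketch: uv ships in the base image, so no `pip install uv` step.
FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim

WORKDIR /app

# Install dependencies from the lockfile first for better layer caching.
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev

COPY . .
CMD ["uv", "run", "python", "-m", "your_app"]
```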

Contributor Author

Should I update the langgraph example to use the uv image as well?

Signed-off-by: Jet Chiang <jetjiang.ez@gmail.com>
@supreme-gg-gg
Contributor Author

@EItanya could you review my changes to the Dockerfiles for the python app and the samples? Thanks!

Comment on lines +13 to +16
kickoff = "poem_flow.main:kickoff"
run_crew = "poem_flow.main:kickoff"
plot = "poem_flow.main:plot"
poem_flow = "poem_flow.main:main"
Contributor

Do we need these to be scripts? Scripts end up being global in a uv workspace. We can fix in a follow-up

Contributor Author

Will remove these in the follow-up; they're generated by CrewAI.

@EItanya EItanya merged commit a3b8d1a into kagent-dev:main Sep 25, 2025
16 checks passed
EItanya pushed a commit that referenced this pull request Oct 9, 2025
This PR is a follow up for #920 

## Features

- [x] Creates go handlers for `/crewai` routes for storing and
retrieving memory items
- [x] Updates the database handler to store and retrieve long term
memory items from CrewAI Crew and state for CrewAI Flow
- [x] Adds custom memory store to agents before execution to allow for
session-based persistence
- [x] Updates samples to show usage of memory by setting `memory=True`
and persistence by `@persist()`
- [x] Support tracing with `opentelemetry-instrumentation-crewai` for
crewai specific spans as suggested in code review

## Tests

An e2e test is added for the CrewAI poem flow sample agent using the mock
LLM server. The test case creates the agent resource, creates the mock
LLM server using the mock response in `invoke_creawi_agent.json`, and
tests synchronous, streaming, and persistence behavior for the agent. It
requires the agent container to be built and pushed to the registry (by
running `make poem-flow-sample` or the Dockerfile directly in
`samples/crewai/poem_flow`).

The following changes are made to helper functions in e2e test:

1. `runSyncTest` accepts an optional `contextID` to be included in the
mock message to test session persistence
2. `runSyncTest` accepts an optional `useArtifacts` argument to indicate
whether the expected output should be checked in the history messages or
in the artifact returned by the A2A server, since the A2A protocol
specifies that `Artifacts` are the standard way to convey the final
outputs of a task

---------

Signed-off-by: Jet Chiang <jetjiang.ez@gmail.com>
jmhbh pushed a commit to jmhbh/kagent that referenced this pull request Oct 10, 2025
dhaifley pushed a commit to dhaifley/kagent that referenced this pull request Oct 14, 2025
dhaifley pushed a commit to dhaifley/kagent that referenced this pull request Oct 15, 2025