[Feature] Support CrewAI for BYO agents #920
Conversation
Pull Request Overview
This PR adds support for CrewAI agents to the BYO (Bring Your Own) agent framework in KAgent. It follows a similar design pattern to the existing kagent-langgraph integration but adapts it for CrewAI's multi-agent orchestration capabilities.
- Creates the `kagent-crewai` package to enable A2A server integration with CrewAI crews and flows
- Provides two comprehensive sample implementations: a research crew with multiple agents and a poem generation flow
- Maintains the standard CrewAI developer experience while adding KAgent integration with minimal code changes
Reviewed Changes
Copilot reviewed 26 out of 29 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `python/packages/kagent-crewai/` | Core package providing CrewAI integration with A2A protocol support and event streaming |
| `python/samples/crewai/research-crew/` | Sample research crew demonstrating multi-agent collaboration with web search capabilities |
| `python/samples/crewai/poem_flow/` | Sample CrewAI flow showing state management and sequential execution patterns |
| `python/pyproject.toml` | Workspace configuration updated to include CrewAI samples |
| `python/Makefile` | Build targets added for CrewAI sample containers |
Comments suppressed due to low confidence (1)
`python/packages/kagent-crewai/src/kagent/crewai/_executor.py:1`

- This line appears to be copy-pasted from the poem flow sample and doesn't belong in the generic executor. The log message should be generic or removed entirely.

```python
import asyncio
```
```python
import uvicorn
from crewai.flow import Flow, listen, start
from kagent.core import configure_tracing
```
What do you think about setting this up as part of the integration, so end users get it automatically without specifying it in each of their CrewAI agents?
Would you also be interested (either here, or as a follow-up) in seeing how leveraging this SDK would improve instrumentation?
I think it's better if the user manually enables tracing. I can put this as an argument when creating the app so the user doesn't need this import: `app = KAgentApp(..., tracing=True)`
For the OpenTelemetry CrewAI SDK, would you mind elaborating on what features you're looking for? I'm happy to implement this as well, but I saw that there is already tracing in kagent-core.
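A minimal sketch of how that flag could work, assuming `KAgentApp` wraps the A2A server setup; the constructor signature is illustrative, and `configure_tracing` is the kagent-core helper shown in the diff above (its exact arguments are an assumption):

```python
# Illustrative sketch only: KAgentApp's real constructor may differ.
from kagent.core import configure_tracing


class KAgentApp:
    def __init__(self, *args, tracing: bool = True, **kwargs):
        # Gate tracing behind a flag so users don't need to call
        # configure_tracing() themselves in every agent entry point.
        if tracing:
            configure_tracing()
```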
Specifically, this would be enabling tracing for CrewAI. We bake in tracing for various LLM providers, HTTP calls, and ADK, but we don't want to *always* load tracing libraries for agent libraries we aren't actively using. I think I agree with @supreme-gg-gg about `app = KAgentApp(..., tracing=True)`, but maybe it should be `True` by default.
Would CrewAI tracing with this SDK sit on top of, or replace, the existing provider-specific tracing (e.g. OpenAI, Anthropic) in kagent-core? It would be placed in the CrewAI package, is that correct?
Not sure if it would replace those, as what we have in kagent-core utils covers LLM-specific and auxiliary tracing. I'd expect the CrewAI SDK to add additional, CrewAI-specific spans on top of what we have as core tracing.
Have you had a chance to look at what spans you're getting with the current changes? If we have LLM calls, then I'd say we're fine w/r/t tracing in this PR, and can add the CrewAI SDK as a potential follow-up, if it's useful.
@krisztianfekete Sorry for the late response. I did a few experiments with CrewAI tracing and validated that the CrewAI spans are indeed sent to the collector. CrewAI has its own internal instrumentor, but I found that adding the CrewAI instrumentation SDK you suggested makes the spans more comprehensive.
Regarding providers, the CrewAI SDK traces all calls to LiteLLM, but a provider-specific SDK is required for spans containing the actual LLM call content (e.g. prompts, token counts), so I will use an approach similar to LangGraph tracing, where both the provider-specific and framework-specific SDKs are used.
I will be adding this in the follow-up PR #951.
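A minimal sketch of that layered setup, assuming openllmetry-style instrumentor entry points for `opentelemetry-instrumentation-crewai` and the OpenAI provider package; the class names should be verified against the installed versions:

```python
# Sketch of combined framework + provider instrumentation; instrumentor
# class names are assumptions based on the openllmetry packages.
from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor


def setup_tracing() -> None:
    # Framework SDK: crew/task/agent spans plus LiteLLM call spans.
    CrewAIInstrumentor().instrument()
    # Provider SDK: adds LLM call content such as prompts and token counts.
    OpenAIInstrumentor().instrument()
```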
Thanks for following up!
In the core library we have Gemini/Anthropic/OpenAI SDKs already, so by adding the CrewAI one, we should have support for the most popular options I think.
EItanya left a comment:
This is looking great so far! Mostly my comments are about the executor and some Docker config.
```python
loop = asyncio.get_running_loop()


def _enqueue_event(event: Any):
    asyncio.run_coroutine_threadsafe(event_queue.enqueue_event(event), loop)
```
Why do we need this helper func? Why can't we just enqueue the event from the scoped handlers?
The handlers must be synchronous because they're invoked by the CrewAI event bus. However, the `enqueue_event` function from A2A is async, so we need this helper to schedule the enqueue on the running loop instead of awaiting it directly in the handler.
What will the impact be of running it this way?
I don't think there are any major risks or downsides to this workaround. The only minor problem I can see is that if the enqueue fails, we can't log or report it, since it's somewhat fire-and-forget.
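For what it's worth, `asyncio.run_coroutine_threadsafe` returns a `concurrent.futures.Future`, so the failure case could still be surfaced with a done callback. A minimal sketch, wrapping the same `event_queue`/`loop` names used in the executor snippet above:

```python
import asyncio
import logging
from concurrent.futures import Future
from typing import Any

logger = logging.getLogger(__name__)


def make_enqueue_handler(event_queue: Any, loop: asyncio.AbstractEventLoop):
    """Build a synchronous handler that reports enqueue failures."""

    def _enqueue_event(event: Any) -> None:
        # Schedule the async enqueue from the synchronous event-bus handler.
        future: Future = asyncio.run_coroutine_threadsafe(
            event_queue.enqueue_event(event), loop
        )

        def _report(f: Future) -> None:
            # Runs once the coroutine finishes; logs failures instead of
            # silently dropping them.
            if f.exception() is not None:
                logger.error("Failed to enqueue event: %s", f.exception())

        future.add_done_callback(_report)

    return _enqueue_event
```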
```dockerfile
FROM python:3.13-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Install uv for fast Python package management
RUN pip install uv
```
We can use the uv image to avoid installing uv separately.
Should I update the langgraph example to use the uv image as well?
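A minimal sketch of the suggested change, assuming uv's official base image (the exact tag is an assumption; pin to whatever tag the project standardizes on):

```dockerfile
# Sketch only: start from uv's official image so `pip install uv` is no
# longer needed. The tag below is an assumption.
FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim

WORKDIR /app

# build-essential is still needed if any dependency compiles native code.
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
```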
@EItanya could you review my changes to the Dockerfiles for the python app and the samples, thanks!
```toml
kickoff = "poem_flow.main:kickoff"
run_crew = "poem_flow.main:kickoff"
plot = "poem_flow.main:plot"
poem_flow = "poem_flow.main:main"
```
Do we need these to be scripts? Scripts end up being global in a uv workspace. We can fix this in a follow-up.
Will remove these in the follow-up; they're generated by CrewAI.
This PR is a follow-up for #920.

## Features
- [x] Creates Go handlers for `/crewai` routes for storing and retrieving memory items
- [x] Updates the database handler to store and retrieve long-term memory items from a CrewAI Crew and state from a CrewAI Flow
- [x] Adds a custom memory store to agents before execution to allow for session-based persistence
- [x] Updates samples to show usage of memory by setting `memory=True` and persistence via `@persist()`
- [x] Supports tracing with `opentelemetry-instrumentation-crewai` for CrewAI-specific spans, as suggested in code review

## Tests
An e2e test is added for the CrewAI poem flow sample agent using the mock LLM server. The test case creates the agent resource, creates the mock LLM server using the mock response in `invoke_creawi_agent.json`, and tests synchronous, streaming, and persistence behavior for the agent. It requires the agent container to be built and pushed to the registry (by running `make poem-flow-sample` or the Dockerfile directly in `samples/crewai/poem_flow`).

The following changes are made to helper functions in the e2e test:
1. `runSyncTest` accepts an optional `contextID` to be included in the mock message, to test session persistence
2. `runSyncTest` accepts an optional `useArtifacts` argument to indicate whether the expected output should be checked in the history messages or in the artifact returned by the A2A server, since the A2A protocol specifies that `Artifacts` are the standard way to convey the final outputs of a task

Signed-off-by: Jet Chiang <jetjiang.ez@gmail.com>
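For context, a minimal sketch of the sample usage those checkboxes describe, based on CrewAI's documented `memory` flag and `@persist()` flow decorator; the KAgent-backed store wiring itself is handled by the package before execution and is not shown here:

```python
from crewai import Agent, Crew, Process, Task
from crewai.flow import Flow, start
from crewai.flow.persistence import persist


# Crew mode: long-term memory is enabled with a flag on the Crew.
def build_crew(agents: list[Agent], tasks: list[Task]) -> Crew:
    return Crew(agents=agents, tasks=tasks, process=Process.sequential, memory=True)


# Flow mode: state persistence across runs is enabled with @persist().
@persist()
class PoemFlow(Flow):
    @start()
    def generate_poem(self):
        ...
```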
This PR extends BYO agent support to agents created with the CrewAI framework. The design follows `kagent-langgraph`.

Changes
- `kagent-crewai` package for creating A2A servers using CrewAI
- `samples/crewai` samples for Crew and Flow mode, showing developers how to integrate (<10 lines of change from CrewAI code to set up the integration; see the sketch after the Testing section below)

Next steps
Testing
To get started, follow the instructions in `samples/crewai/research-crew`. Alternatively, follow `samples/crewai/poem_flow` for a Flow mode example. Both examples are taken from the CrewAI quickstart tutorial, showcasing how to create agents, tasks, providers, crews, flows, etc. in CrewAI and integrate them seamlessly by just modifying the code entry point.

What it looks like from the kagent UI:
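A minimal sketch of the "<10 lines" entry-point change mentioned above, under stated assumptions: the `kagent.crewai` import path, the `KAgentApp` constructor, and its `build()` method are hypothetical, inferred from this conversation rather than the actual kagent-crewai API.

```python
import uvicorn

# Hypothetical API, for illustration only; see the package for the real one.
from kagent.crewai import KAgentApp

# The sample's existing CrewAI flow, unchanged.
from poem_flow.main import PoemFlow


def main() -> None:
    # Wrap the unchanged CrewAI flow in an A2A server and serve it.
    app = KAgentApp(PoemFlow(), tracing=True)
    uvicorn.run(app.build(), host="0.0.0.0", port=8080)


if __name__ == "__main__":
    main()
```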