3 changes: 3 additions & 0 deletions docs/docs.json
@@ -308,6 +308,7 @@
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/human-feedback-in-flows",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
@@ -735,6 +736,7 @@
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/human-feedback-in-flows",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
@@ -1171,6 +1173,7 @@
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/human-feedback-in-flows",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",
49 changes: 49 additions & 0 deletions docs/en/concepts/flows.mdx
@@ -572,6 +572,55 @@ The `third_method` and `fourth_method` listen to the output of the `second_method`

When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.

### Human in the Loop (human feedback)

The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```

When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` method.
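
As a quick usage sketch (assuming the default interactive prompt and the standard `Flow.kickoff()` entry point):

```python Code
flow = ReviewFlow()
flow.kickoff()
# Execution pauses at generate_content's feedback prompt. If the reviewer
# types e.g. "Looks good, ship it", the LLM collapses that free-form reply
# into the "approved" outcome, and on_approval runs next.
```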

You can also use `@human_feedback` without `emit`, simply to collect feedback with no outcome routing:

```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access the feedback via result.feedback
    # Access the original output via result.output
    pass
```

Feedback collected during a flow is available via `self.last_human_feedback` (the most recent feedback) and `self.human_feedback_history` (all feedback collected so far, as a list).
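
For example, a later listener could log everything collected so far (a minimal sketch, assuming both attributes hold `HumanFeedbackResult` objects as documented above):

```python Code
@listen("needs_revision")
def log_feedback(self, result: HumanFeedbackResult):
    # Most recent feedback collected in this run
    print(self.last_human_feedback.feedback)
    # Every piece of feedback collected so far, in order
    for fb in self.human_feedback_history:
        print(fb.feedback)
```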

For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).

## Adding Agents to Flows

Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
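
A minimal sketch of the shape this takes (the topic string is illustrative; this assumes the `Agent` class exported by `crewai` and its `kickoff` method for running an agent directly, without a full Crew):

```python Code
from crewai import Agent
from crewai.flow.flow import Flow, start

class MarketResearchFlow(Flow):
    @start()
    def research(self):
        analyst = Agent(
            role="Market Research Analyst",
            goal="Summarize the competitive landscape for a given product",
            backstory="An analyst experienced in market sizing and trend analysis.",
        )
        # kickoff runs the agent on a single prompt and returns its output
        return analyst.kickoff("Research the market for AI-powered chatbots.")
```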