
cp readme over #2274

Merged 1 commit on Oct 31, 2024
12 changes: 12 additions & 0 deletions libs/langgraph/README.md
@@ -1,4 +1,4 @@
# 🦜🕸️LangGraph

Notice on line 1 in libs/langgraph/README.md, from the GitHub Actions / benchmark job:

**Benchmark results**

| Benchmark | Mean ± std dev |
|---|---|
| fanout_to_subgraph_10x | 48.3 ms ± 0.7 ms |
| fanout_to_subgraph_10x_sync | 44.0 ms ± 0.5 ms |
| fanout_to_subgraph_10x_checkpoint | 77.0 ms ± 1.5 ms |
| fanout_to_subgraph_10x_checkpoint_sync | 85.3 ms ± 0.9 ms |
| fanout_to_subgraph_100x | 472 ms ± 9 ms |
| fanout_to_subgraph_100x_sync | 428 ms ± 4 ms |
| fanout_to_subgraph_100x_checkpoint | 813 ms ± 39 ms |
| fanout_to_subgraph_100x_checkpoint_sync | 847 ms ± 20 ms |
| react_agent_10x | 31.2 ms ± 3.2 ms |
| react_agent_10x_sync | 23.1 ms ± 2.2 ms |
| react_agent_10x_checkpoint | 48.8 ms ± 4.8 ms |
| react_agent_10x_checkpoint_sync | 38.4 ms ± 3.8 ms |
| react_agent_100x | 321 ms ± 7 ms |
| react_agent_100x_sync | 258 ms ± 3 ms |
| react_agent_100x_checkpoint | 950 ms ± 21 ms |
| react_agent_100x_checkpoint_sync | 865 ms ± 18 ms |
| wide_state_25x300 | 18.7 ms ± 0.3 ms |
| wide_state_25x300_sync | 11.1 ms ± 0.1 ms |
| wide_state_25x300_checkpoint | 284 ms ± 17 ms |
| wide_state_25x300_checkpoint_sync | 276 ms ± 17 ms |
| wide_state_15x600 | 21.6 ms ± 0.5 ms |
| wide_state_15x600_sync | 12.7 ms ± 0.2 ms |
| wide_state_15x600_checkpoint | 485 ms ± 16 ms |
| wide_state_15x600_checkpoint_sync | 471 ms ± 17 ms |
| wide_state_9x1200 | 21.7 ms ± 0.4 ms |
| wide_state_9x1200_sync | 12.7 ms ± 0.1 ms |
| wide_state_9x1200_checkpoint | 317 ms ± 16 ms |
| wide_state_9x1200_checkpoint_sync | 304 ms ± 17 ms |

WARNING: the react_agent_10x result may be unstable: the standard deviation (3.25 ms) is 10% of the mean (31.2 ms). Try rerunning the benchmark with more runs, values and/or loops, and run `python -m pyperf system tune` to reduce system jitter. Use pyperf stats, pyperf dump and pyperf hist to analyze results; use --quiet to hide these warnings.

Notice on line 1 in libs/langgraph/README.md, from the GitHub Actions / benchmark job:

**Comparison against main**

| Benchmark | main | changes |
|---|---|---|
| wide_state_9x1200_checkpoint | 323 ms | 317 ms: 1.02x faster |
| wide_state_9x1200_checkpoint_sync | 309 ms | 304 ms: 1.02x faster |
| fanout_to_subgraph_100x | 477 ms | 472 ms: 1.01x faster |
| wide_state_9x1200_sync | 12.8 ms | 12.7 ms: 1.01x faster |
| fanout_to_subgraph_10x_sync | 43.8 ms | 44.0 ms: 1.01x slower |
| wide_state_15x600_sync | 12.6 ms | 12.7 ms: 1.01x slower |
| react_agent_100x | 319 ms | 321 ms: 1.01x slower |
| wide_state_9x1200 | 21.5 ms | 21.7 ms: 1.01x slower |
| fanout_to_subgraph_10x_checkpoint_sync | 84.5 ms | 85.3 ms: 1.01x slower |
| react_agent_100x_sync | 255 ms | 258 ms: 1.01x slower |
| fanout_to_subgraph_10x_checkpoint | 76.2 ms | 77.0 ms: 1.01x slower |
| wide_state_25x300 | 18.6 ms | 18.7 ms: 1.01x slower |
| fanout_to_subgraph_100x_sync | 423 ms | 428 ms: 1.01x slower |
| wide_state_25x300_sync | 10.9 ms | 11.1 ms: 1.02x slower |
| react_agent_10x_sync | 22.6 ms | 23.1 ms: 1.02x slower |
| react_agent_10x_checkpoint_sync | 37.4 ms | 38.4 ms: 1.03x slower |
| react_agent_10x_checkpoint | 47.4 ms | 48.8 ms: 1.03x slower |
| react_agent_100x_checkpoint_sync | 839 ms | 865 ms: 1.03x slower |
| react_agent_100x_checkpoint | 913 ms | 950 ms: 1.04x slower |
| Geometric mean | (ref) | 1.01x slower |

Benchmarks hidden because not significant (9): fanout_to_subgraph_100x_checkpoint_sync, react_agent_10x, wide_state_15x600_checkpoint_sync, fanout_to_subgraph_100x_checkpoint, wide_state_25x300_checkpoint, fanout_to_subgraph_10x, wide_state_15x600, wide_state_15x600_checkpoint, wide_state_25x300_checkpoint_sync.

![Version](https://img.shields.io/pypi/v/langgraph)
[![Downloads](https://static.pepy.tech/badge/langgraph/month)](https://pepy.tech/project/langgraph)
@@ -16,6 +16,8 @@

LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
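The Pregel inspiration is concrete: execution proceeds in discrete "supersteps", where each node processes its incoming messages and the messages it emits are delivered in a batch before the next step begins. A minimal plain-Python sketch of that execution model (illustrative only; `run_supersteps` and the toy nodes are made up for this sketch, not LangGraph's API):

```python
from collections import defaultdict

def run_supersteps(nodes, messages, max_steps=50):
    """Pregel-style loop. nodes: {name: fn(inbox) -> {target: msg}};
    messages: {name: [msg, ...]} for the first superstep."""
    for step in range(max_steps):
        outbox = defaultdict(list)
        # Every node with a non-empty inbox runs "simultaneously" this step.
        for name, inbox in messages.items():
            for target, msg in nodes[name](inbox).items():
                outbox[target].append(msg)
        if not outbox:
            return step  # no messages produced: the graph has converged
        messages = outbox  # deliver all messages at once for the next step
    return max_steps

# Toy graph: "double" keeps doubling its input, handing off to "sink" at >= 100.
nodes = {
    "double": lambda inbox: (
        {"double": inbox[0] * 2} if inbox[0] * 2 < 100 else {"sink": inbox[0] * 2}
    ),
    "sink": lambda inbox: {},  # terminal node: emits nothing
}
steps = run_supersteps(nodes, {"double": [1]})
```

Each outer-loop iteration is one superstep; the run terminates when a step produces no messages, which is also how Pregel-style systems detect convergence.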

[LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform) is infrastructure for deploying LangGraph agents. It is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework. The LangGraph Platform consists of several components that work together to support the development, deployment, debugging, and monitoring of LangGraph applications: [LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server) (APIs), [LangGraph SDKs](https://langchain-ai.github.io/langgraph/concepts/sdk) (clients for the APIs), [LangGraph CLI](https://langchain-ai.github.io/langgraph/concepts/langgraph_cli) (command line tool for building the server), and [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio) (UI/debugger).

To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available for free [here](https://academy.langchain.com/courses/intro-to-langgraph).

### Key Features
@@ -26,6 +28,16 @@
- **Streaming Support**: Stream outputs as they are produced by each node (including token streaming).
- **Integration with LangChain**: LangGraph integrates seamlessly with [LangChain](https://github.com/langchain-ai/langchain/) and [LangSmith](https://docs.smith.langchain.com/) (but does not require them).
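The streaming feature above can be pictured with a plain-Python sketch: a runner that yields each node's state update as soon as it is produced, instead of returning only the final result (hypothetical names throughout; this is not LangGraph's actual `stream` API):

```python
def stream_pipeline(state, nodes):
    """Run nodes in order, yielding the merged state after each one."""
    for name, node in nodes:
        state = {**state, **node(state)}  # merge the node's partial update
        yield name, state  # emit intermediate progress immediately

# Two toy "nodes" standing in for LLM calls.
nodes = [
    ("draft", lambda s: {"text": s["topic"] + ": first draft"}),
    ("edit", lambda s: {"text": s["text"] + " (edited)"}),
]
updates = list(stream_pipeline({"topic": "LangGraph"}, nodes))
```

A consumer can render each `(name, state)` pair as it arrives, which is what makes per-node (and, in the real library, per-token) progress visible before the run finishes.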

### LangGraph Platform

LangGraph Platform is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework.
Here are some common issues that arise in complex deployments, which LangGraph Platform addresses:

- **Streaming support**: LangGraph Server provides [multiple streaming modes](https://langchain-ai.github.io/langgraph/concepts/streaming) optimized for various application needs.
- **Background runs**: Run agents asynchronously in the background.
- **Support for long-running agents**: Infrastructure that can handle long-running processes.
- **[Double texting](https://langchain-ai.github.io/langgraph/concepts/double_texting)**: Handle the case where you get two messages from the user before the agent can respond.
- **Handle burstiness**: A task queue ensures requests are handled consistently without loss, even under heavy load.
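The burstiness bullet describes a standard producer/consumer pattern: requests are enqueued immediately and drained by a fixed pool of workers, so a burst is absorbed rather than dropped. A plain-Python sketch of that idea using the standard library (illustrative only, not LangGraph Platform's internals):

```python
import queue
import threading

tasks = queue.Queue()          # absorbs the burst
results = []
results_lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:       # sentinel: shut this worker down
            tasks.task_done()
            break
        with results_lock:
            results.append(item * 2)  # stand-in for running an agent
        tasks.task_done()

# Two workers drain the queue at a steady rate...
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# ...while a burst of 100 "requests" arrives all at once.
for i in range(100):
    tasks.put(i)
for _ in threads:
    tasks.put(None)            # one sentinel per worker

tasks.join()                   # wait until every request is processed
for t in threads:
    t.join()
```

No request is lost: every item put on the queue is eventually processed, regardless of how fast the burst arrived relative to worker throughput.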

## Installation
