diff --git a/docs/core_docs/docs/how_to/index.mdx b/docs/core_docs/docs/how_to/index.mdx
index aec6c78c15ca..802aa03435df 100644
--- a/docs/core_docs/docs/how_to/index.mdx
+++ b/docs/core_docs/docs/how_to/index.mdx
@@ -73,6 +73,8 @@ These are the core building blocks you can use when building applications.
- [How to: get log probabilities](/docs/how_to/logprobs)
- [How to: stream a response back](/docs/how_to/chat_streaming)
- [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
+- [How to: stream tool calls](/docs/how_to/tool_streaming)
+- [How to: few shot prompt tool behavior](/docs/how_to/tool_calling#few-shotting-with-tools)
### Messages
@@ -164,7 +166,9 @@ LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to p
- [How to: pass tool results back to model](/docs/how_to/tool_results_pass_to_model/)
- [How to: add ad-hoc tool calling capability to LLMs and Chat Models](/docs/how_to/tools_prompting)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
+- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: access the `RunnableConfig` object within a custom tool](/docs/how_to/tool_configure)
+- [How to: stream events from child runs within a custom tool](/docs/how_to/tool_stream_events)
- [How to: return extra artifacts from a custom tool](/docs/how_to/tool_artifacts)
### Agents
diff --git a/docs/core_docs/docs/how_to/tool_stream_events.ipynb b/docs/core_docs/docs/how_to/tool_stream_events.ipynb
new file mode 100644
index 000000000000..34ab59282176
--- /dev/null
+++ b/docs/core_docs/docs/how_to/tool_stream_events.ipynb
@@ -0,0 +1,682 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# How to stream events from child runs within a custom tool\n",
+ "\n",
+ "```{=mdx}\n",
+ ":::info Prerequisites\n",
+ "\n",
+ "This guide assumes familiarity with the following concepts:\n",
+ "- [LangChain Tools](/docs/concepts/#tools)\n",
+ "- [Custom tools](/docs/how_to/custom_tools)\n",
+ "- [Using stream events](/docs/how_to/streaming/#using-stream-events)\n",
+ "- [Accessing RunnableConfig within a custom tool](/docs/how_to/tool_configure/)\n",
+ "\n",
+ ":::\n",
+ "```\n",
+ "\n",
+ "If you have tools that call chat models, retrievers, or other runnables, you may want to access internal events from those runnables or configure them with additional properties. This guide shows you how to manually pass parameters properly so that you can do this using the [`.streamEvents()`](/docs/how_to/streaming/#using-stream-events) method.\n",
+ "\n",
+ "```{=mdx}\n",
+ ":::caution Compatibility\n",
+ "\n",
+ "In order to support a wider variety of JavaScript environments, the base LangChain package does not automatically propagate configuration to child runnables by default. This includes callbacks necessary for `.streamEvents()`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
+ "\n",
+ "You will need to manually propagate the `RunnableConfig` object to the child runnable. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
+ "\n",
+ "This guide also requires `@langchain/core>=0.2.16`.\n",
+ ":::\n",
+ "```\n",
+ "\n",
+ "Say you have a custom tool that calls a chain that condenses its input by prompting a chat model to return only 10 words, then reversing the output. First, define it in a naive way:\n",
+ "\n",
+ "```{=mdx}\n",
+ "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
+ "\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { ChatAnthropic } from \"@langchain/anthropic\";\n",
+ "const model = new ChatAnthropic({\n",
+ " model: \"claude-3-5-sonnet-20240620\",\n",
+ " temperature: 0,\n",
+ "});"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { z } from \"zod\";\n",
+ "import { tool } from \"@langchain/core/tools\";\n",
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+ "import { StringOutputParser } from \"@langchain/core/output_parsers\";\n",
+ "\n",
+ "const specialSummarizationTool = tool(async (input) => {\n",
+ " const prompt = ChatPromptTemplate.fromTemplate(\n",
+ " \"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n{long_text}\"\n",
+ " );\n",
+ " const reverse = (x: string) => {\n",
+ " return x.split(\"\").reverse().join(\"\");\n",
+ " };\n",
+ " const chain = prompt\n",
+ " .pipe(model)\n",
+ " .pipe(new StringOutputParser())\n",
+ " .pipe(reverse);\n",
+ " const summary = await chain.invoke({ long_text: input.long_text });\n",
+ " return summary;\n",
+ "}, {\n",
+ " name: \"special_summarization_tool\",\n",
+ " description: \"A tool that summarizes input text using advanced techniques.\",\n",
+ " schema: z.object({\n",
+ " long_text: z.string(),\n",
+ " }),\n",
+ "});"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Invoking the tool directly works just fine:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ ".yad noitaudarg rof tiftuo sesoohc yrraB ;scisyhp seifed eeB\n"
+ ]
+ }
+ ],
+ "source": [
+ "const LONG_TEXT = `\n",
+ "NARRATOR:\n",
+ "(Black screen with text; The sound of buzzing bees can be heard)\n",
+ "According to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\n",
+ "BARRY BENSON:\n",
+ "(Barry is picking out a shirt)\n",
+ "Yellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\n",
+ "JANET BENSON:\n",
+ "Barry! Breakfast is ready!\n",
+ "BARRY:\n",
+ "Coming! Hang on a second.`;\n",
+ "\n",
+ "await specialSummarizationTool.invoke({ long_text: LONG_TEXT });"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "But if you wanted to access the raw output from the chat model rather than the full tool, you might try to use the [`.streamEvents()`](/docs/how_to/streaming/#using-stream-events) method and look for an `on_chat_model_end` event. Here's what happens:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "const stream = await specialSummarizationTool.streamEvents(\n",
+ " { long_text: LONG_TEXT },\n",
+ " { version: \"v2\" },\n",
+ ");\n",
+ "\n",
+ "for await (const event of stream) {\n",
+ " if (event.event === \"on_chat_model_end\") {\n",
+ " // Never triggers!\n",
+ " console.log(event);\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You'll notice that there are no chat model events emitted from the child run!\n",
+ "\n",
+ "This is because the example above does not pass the tool's config object into the internal chain. To fix this, redefine your tool to take a special parameter typed as `RunnableConfig` (see [this guide](/docs/how_to/tool_configure) for more details). You'll also need to pass that parameter through into the internal chain when executing it:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "const specialSummarizationToolWithConfig = tool(async (input, config) => {\n",
+ " const prompt = ChatPromptTemplate.fromTemplate(\n",
+ " \"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n{long_text}\"\n",
+ " );\n",
+ " const reverse = (x: string) => {\n",
+ " return x.split(\"\").reverse().join(\"\");\n",
+ " };\n",
+ " const chain = prompt\n",
+ " .pipe(model)\n",
+ " .pipe(new StringOutputParser())\n",
+ " .pipe(reverse);\n",
+ " // Pass the \"config\" object as an argument to any executed runnables\n",
+ " const summary = await chain.invoke({ long_text: input.long_text }, config);\n",
+ " return summary;\n",
+ "}, {\n",
+ " name: \"special_summarization_tool\",\n",
+ " description: \"A tool that summarizes input text using advanced techniques.\",\n",
+ " schema: z.object({\n",
+ " long_text: z.string(),\n",
+ " }),\n",
+ "});"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And now try the same `.streamEvents()` call as before with your new tool:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\n",
+ " event: 'on_chat_model_end',\n",
+ " data: {\n",
+ " output: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: 'Bee defies physics; Barry chooses outfit for graduation day.',\n",
+ " name: undefined,\n",
+ " additional_kwargs: [Object],\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: [Object]\n",
+ " },\n",
+ " input: { messages: [Array] }\n",
+ " },\n",
+ " run_id: '27ac7b2e-591c-4adc-89ec-64d96e233ec8',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "const stream = await specialSummarizationToolWithConfig.streamEvents(\n",
+ " { long_text: LONG_TEXT },\n",
+ " { version: \"v2\" },\n",
+ ");\n",
+ "\n",
+ "for await (const event of stream) {\n",
+ " if (event.event === \"on_chat_model_end\") {\n",
+ " // Never triggers!\n",
+ " console.log(event);\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Awesome! This time there's an event emitted.\n",
+ "\n",
+ "For streaming, `.streamEvents()` automatically calls internal runnables in a chain with streaming enabled if possible, so if you wanted to a stream of tokens as they are generated from the chat model, you could simply filter to look for `on_chat_model_stream` events with no other changes:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: 'Bee',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ' def',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: 'ies physics',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ';',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ' Barry',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ' cho',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: 'oses outfit',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ' for',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ' graduation',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: ' day',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n",
+ "{\n",
+ " event: 'on_chat_model_stream',\n",
+ " data: {\n",
+ " chunk: AIMessageChunk {\n",
+ " lc_serializable: true,\n",
+ " lc_kwargs: [Object],\n",
+ " lc_namespace: [Array],\n",
+ " content: '.',\n",
+ " name: undefined,\n",
+ " additional_kwargs: {},\n",
+ " response_metadata: {},\n",
+ " id: undefined,\n",
+ " tool_calls: [],\n",
+ " invalid_tool_calls: [],\n",
+ " tool_call_chunks: [],\n",
+ " usage_metadata: undefined\n",
+ " }\n",
+ " },\n",
+ " run_id: '938c0469-83c6-4dbd-862e-cd73381165de',\n",
+ " name: 'ChatAnthropic',\n",
+ " tags: [ 'seq:step:2' ],\n",
+ " metadata: {\n",
+ " ls_provider: 'anthropic',\n",
+ " ls_model_name: 'claude-3-5-sonnet-20240620',\n",
+ " ls_model_type: 'chat',\n",
+ " ls_temperature: 0,\n",
+ " ls_max_tokens: 2048,\n",
+ " ls_stop: undefined\n",
+ " }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "const stream = await specialSummarizationToolWithConfig.streamEvents(\n",
+ " { long_text: LONG_TEXT },\n",
+ " { version: \"v2\" },\n",
+ ");\n",
+ "\n",
+ "for await (const event of stream) {\n",
+ " if (event.event === \"on_chat_model_stream\") {\n",
+ " // Never triggers!\n",
+ " console.log(event);\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Automatically passing config (Advanced)\n",
+ "\n",
+ "If you've used [LangGraph](https://langchain-ai.github.io/langgraphjs/), you may have noticed that you don't need to pass config in nested calls. This is because LangGraph takes advantage of an API called [`async_hooks`](https://nodejs.org/api/async_hooks.html), which is not supported in many, but not all environments.\n",
+ "\n",
+ "If you wish, you can enable automatic configuration passing by running the following code to import and enable `AsyncLocalStorage` globally:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { AsyncLocalStorageProviderSingleton } from \"@langchain/core/singletons\";\n",
+ "import { AsyncLocalStorage } from \"async_hooks\";\n",
+ "\n",
+ "AsyncLocalStorageProviderSingleton.initializeGlobalInstance(\n",
+ " new AsyncLocalStorage()\n",
+ ");"
+ ]
+ },
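+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "With the global `AsyncLocalStorage` instance initialized, config is propagated automatically, so even the naive `specialSummarizationTool` defined at the start of this guide should now emit child run events. Here's a minimal sketch:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "// Note: this relies on the AsyncLocalStorage setup in the previous cell.\n",
+    "const autoConfigStream = await specialSummarizationTool.streamEvents(\n",
+    "  { long_text: LONG_TEXT },\n",
+    "  { version: \"v2\" },\n",
+    ");\n",
+    "\n",
+    "for await (const event of autoConfigStream) {\n",
+    "  if (event.event === \"on_chat_model_end\") {\n",
+    "    // With automatic config propagation, this should now trigger\n",
+    "    // even without manually passing config to the inner chain.\n",
+    "    console.log(event);\n",
+    "  }\n",
+    "}"
+   ]
+  },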
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Next steps\n",
+ "\n",
+ "You've now seen how to stream events from within a tool. Next, check out the following guides for more on using tools:\n",
+ "\n",
+ "- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
+ "- Pass [tool results back to a model](/docs/how_to/tool_results_pass_to_model)\n",
+ "- [Dispatch custom callback events](/docs/how_to/callbacks_custom_events)\n",
+ "\n",
+ "You can also check out some more specific uses of tool calling:\n",
+ "\n",
+ "- Building [tool-using chains and agents](/docs/how_to#tools)\n",
+ "- Getting [structured outputs](/docs/how_to/structured_output/) from models"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "TypeScript",
+ "language": "typescript",
+ "name": "tslab"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "mode": "typescript",
+ "name": "javascript",
+ "typescript": true
+ },
+ "file_extension": ".ts",
+ "mimetype": "text/typescript",
+ "name": "typescript",
+ "version": "3.7.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/docs/core_docs/docs/how_to/tool_streaming.ipynb b/docs/core_docs/docs/how_to/tool_streaming.ipynb
new file mode 100644
index 000000000000..141fb0674859
--- /dev/null
+++ b/docs/core_docs/docs/how_to/tool_streaming.ipynb
@@ -0,0 +1,451 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# How to stream tool calls\n",
+ "\n",
+ "When tools are called in a streaming context, \n",
+ "[message chunks](https://api.js.langchain.com/classes/langchain_core_messages.AIMessageChunk.html) \n",
+ "will be populated with [tool call chunk](https://api.js.langchain.com/types/langchain_core_messages_tool.ToolCallChunk.html) \n",
+ "objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes \n",
+ "optional string fields for the tool `name`, `args`, and `id`, and includes an optional \n",
+ "integer field `index` that can be used to join chunks together. Fields are optional \n",
+ "because portions of a tool call may be streamed across different chunks (e.g., a chunk \n",
+ "that includes a substring of the arguments may have null values for the tool name and id).\n",
+ "\n",
+ "Because message chunks inherit from their parent message class, an \n",
+ "[`AIMessageChunk`](https://api.js.langchain.com/classes/langchain_core_messages.AIMessageChunk.html) \n",
+ "with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. \n",
+ "These fields are parsed best-effort from the message's tool call chunks.\n",
+ "\n",
+ "Note that not all providers currently support streaming for tool calls. Before we start let's define our tools and our model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { z } from \"zod\";\n",
+ "import { tool } from \"@langchain/core/tools\";\n",
+ "import { ChatOpenAI } from \"@langchain/openai\";\n",
+ "\n",
+ "const addTool = tool(async (input) => {\n",
+ " return input.a + input.b;\n",
+ "}, {\n",
+ " name: \"add\",\n",
+ " description: \"Adds a and b.\",\n",
+ " schema: z.object({\n",
+ " a: z.number(),\n",
+ " b: z.number(),\n",
+ " }),\n",
+ "});\n",
+ "\n",
+ "const multiplyTool = tool(async (input) => {\n",
+ " return input.a * input.b;\n",
+ "}, {\n",
+ " name: \"multiply\",\n",
+ " description: \"Multiplies a and b.\",\n",
+ " schema: z.object({\n",
+ " a: z.number(),\n",
+ " b: z.number(),\n",
+ " }),\n",
+ "});\n",
+ "\n",
+ "const tools = [addTool, multiplyTool];\n",
+ "\n",
+ "const model = new ChatOpenAI({\n",
+ " model: \"gpt-4o\",\n",
+ " temperature: 0,\n",
+ "});\n",
+ "\n",
+ "const modelWithTools = model.bindTools(tools);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now let's define our query and stream our output:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '',\n",
+ " id: 'call_MdIlJL5CAYD7iz9gTm5lwWtJ',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: '{\"a\"',\n",
+ " id: undefined,\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: ': 3, ',\n",
+ " id: undefined,\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: '\"b\": 1',\n",
+ " id: undefined,\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: '2}',\n",
+ " id: undefined,\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '',\n",
+ " id: 'call_ihL9W6ylSRlYigrohe9SClmW',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: '{\"a\"',\n",
+ " id: undefined,\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: ': 11,',\n",
+ " id: undefined,\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: ' \"b\": ',\n",
+ " id: undefined,\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: undefined,\n",
+ " args: '49}',\n",
+ " id: undefined,\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[]\n",
+ "[]\n"
+ ]
+ }
+ ],
+ "source": [
+ "const query = \"What is 3 * 12? Also, what is 11 + 49?\";\n",
+ "\n",
+ "const stream = await modelWithTools.stream(query);\n",
+ "\n",
+ "for await (const chunk of stream) {\n",
+ " console.log(chunk.tool_call_chunks);\n",
+ "}"
+ ]
+ },
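+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Each of the printed chunks above has roughly the following shape (a sketch for illustration, based on the API reference linked earlier, rather than the library's exact type definition):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "// Rough sketch of a tool call chunk. All fields are optional because\n",
+    "// a single tool call can be spread across several chunks.\n",
+    "type ToolCallChunkSketch = {\n",
+    "  name?: string; // tool name, typically only present on the first chunk\n",
+    "  args?: string; // partial JSON substring of the arguments\n",
+    "  id?: string; // tool call id, typically only present on the first chunk\n",
+    "  index?: number; // joins chunks that belong to the same tool call\n",
+    "  type?: \"tool_call_chunk\";\n",
+    "};"
+   ]
+  },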
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n",
+ "\n",
+ "For example, below we accumulate tool call chunks:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\"',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, ',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 1',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '{\"a\"',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '{\"a\": 11,',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '{\"a\": 11, \"b\": ',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '{\"a\": 11, \"b\": 49}',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '{\"a\": 11, \"b\": 49}',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n",
+ "[\n",
+ " {\n",
+ " name: 'multiply',\n",
+ " args: '{\"a\": 3, \"b\": 12}',\n",
+ " id: 'call_0zGpgVz81Ew0HA4oKblG0s0a',\n",
+ " index: 0,\n",
+ " type: 'tool_call_chunk'\n",
+ " },\n",
+ " {\n",
+ " name: 'add',\n",
+ " args: '{\"a\": 11, \"b\": 49}',\n",
+ " id: 'call_ufY7lDSeCQwWbdq1XQQ2PBHR',\n",
+ " index: 1,\n",
+ " type: 'tool_call_chunk'\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { concat } from \"@langchain/core/utils/stream\";\n",
+ "\n",
+ "const stream = await modelWithTools.stream(query);\n",
+ "\n",
+ "let gathered = undefined;\n",
+ "\n",
+ "for await (const chunk of stream) {\n",
+ " gathered = gathered !== undefined ? concat(gathered, chunk) : chunk;\n",
+ " console.log(gathered.tool_call_chunks);\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "At the end, we can see the final aggregated tool call chunks include the fully gathered raw string value:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "string\n"
+ ]
+ }
+ ],
+ "source": [
+ "console.log(typeof gathered.tool_call_chunks[0].args);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And we can also see the fully parsed tool call as an object at the end:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "object\n"
+ ]
+ }
+ ],
+ "source": [
+ "console.log(typeof gathered.tool_calls[0].args);"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "TypeScript",
+ "language": "typescript",
+ "name": "tslab"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "mode": "typescript",
+ "name": "javascript",
+ "typescript": true
+ },
+ "file_extension": ".ts",
+ "mimetype": "text/typescript",
+ "name": "typescript",
+ "version": "3.7.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/docs/core_docs/docs/how_to/tools_error.ipynb b/docs/core_docs/docs/how_to/tools_error.ipynb
new file mode 100644
index 000000000000..fdf2825dd974
--- /dev/null
+++ b/docs/core_docs/docs/how_to/tools_error.ipynb
@@ -0,0 +1,244 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "5d60cbb9-2a6a-43ea-a9e9-f67b16ddd2b2",
+ "metadata": {},
+ "source": [
+ "# How to handle tool errors\n",
+ "\n",
+ "```{=mdx}\n",
+ ":::info Prerequisites\n",
+ "\n",
+ "This guide assumes familiarity with the following concepts:\n",
+ "- [Chat models](/docs/concepts/#chat-models)\n",
+ "- [LangChain Tools](/docs/concepts/#tools)\n",
+ "- [How to use a model to call tools](/docs/how_to/tool_calling)\n",
+ "\n",
+ ":::\n",
+ "```\n",
+ "\n",
+ "Calling tools with an LLM isn't perfect. The model may try to call a tool that doesn't exist or fail to return arguments that match the requested schema. Strategies like keeping schemas simple, reducing the number of tools you pass at once, and having good names and descriptions can help mitigate this risk, but aren't foolproof.\n",
+ "\n",
+ "This guide covers some ways to build error handling into your chains to mitigate these failure modes."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0a50f93a-5d6f-4691-8f98-27239a1c2f95",
+ "metadata": {},
+ "source": [
+ "## Chain\n",
+ "\n",
+ "Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try and trip up the model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "1d20604e-c4d1-4d21-841b-23e4f61aec36",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { z } from \"zod\";\n",
+ "import { ChatOpenAI } from \"@langchain/openai\";\n",
+ "import { tool } from \"@langchain/core/tools\";\n",
+ "\n",
+ "const llm = new ChatOpenAI({\n",
+ " model: \"gpt-3.5-turbo-0125\",\n",
+ " temperature: 0,\n",
+ "});\n",
+ "\n",
+ "const complexTool = tool(async (params) => {\n",
+ " return params.int_arg * params.float_arg;\n",
+ "}, {\n",
+ " name: \"complex_tool\",\n",
+ " description: \"Do something complex with a complex tool.\",\n",
+ " schema: z.object({\n",
+ " int_arg: z.number(),\n",
+ " float_arg: z.number(),\n",
+ " number_arg: z.object({}),\n",
+ " })\n",
+ "});\n",
+ "\n",
+ "const llmWithTools = llm.bindTools([complexTool]);\n",
+ "\n",
+ "const chain = llmWithTools\n",
+ " .pipe((message) => message.tool_calls?.[0].args)\n",
+ " .pipe(complexTool);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c34f005e-63f0-4841-9461-ca36c36607fc",
+ "metadata": {},
+ "source": [
+ "We can see that when we try to invoke this chain the model fails to correctly call the tool:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "d354664c-ac44-4967-a35f-8912b3ad9477",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "Error",
+ "evalue": "Received tool input did not match expected schema",
+ "output_type": "error",
+ "traceback": [
+ "Stack trace:",
+ "Error: Received tool input did not match expected schema",
+ " at DynamicStructuredTool.call (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.2.16/dist/tools/index.js:100:19)",
+ " at eventLoopTick (ext:core/01_core.js:63:7)",
+ " at async RunnableSequence.invoke (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.2.16_1/dist/runnables/base.js:1139:27)",
+ " at async :1:22"
+ ]
+ }
+ ],
+ "source": [
+ "await chain.invoke(\n",
+ " \"use complex tool. the args are 5, 2.1, potato\"\n",
+ ");"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "890d989d-2d39-4571-9a55-d3496b9b5d27",
+ "metadata": {},
+ "source": [
+ "## Try/except tool call\n",
+ "\n",
+ "The simplest way to more gracefully handle errors is to try/except the tool-calling step and return a helpful message on errors:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "8fedb550-683d-45ae-8876-ae7acb332019",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Calling tool with arguments:\n",
+ "\n",
+ "{\"int_arg\":5,\"float_arg\":2.1,\"number_arg\":\"potato\"}\n",
+ "\n",
+ "raised the following error:\n",
+ "\n",
+ "Error: Received tool input did not match expected schema\n"
+ ]
+ }
+ ],
+ "source": [
+ "const tryExceptToolWrapper = async (input, config) => {\n",
+ " try {\n",
+ " const result = await complexTool.invoke(input);\n",
+ " return result;\n",
+ " } catch (e) {\n",
+ " return `Calling tool with arguments:\\n\\n${JSON.stringify(input)}\\n\\nraised the following error:\\n\\n${e}`\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "const chain = llmWithTools\n",
+ " .pipe((message) => message.tool_calls?.[0].args)\n",
+ " .pipe(tryExceptToolWrapper);\n",
+ "\n",
+ "const res = await chain.invoke(\"use complex tool. the args are 5, 2.1, potato\");\n",
+ "\n",
+ "console.log(res);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3b2f6393-cb47-49d0-921c-09550a049fe4",
+ "metadata": {},
+ "source": [
+ "## Fallbacks\n",
+ "\n",
+ "We can also try to fallback to a better model in the event of a tool invocation error. In this case we'll fall back to an identical chain that uses `gpt-4-1106-preview` instead of `gpt-3.5-turbo`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "02cc4223-35fa-4240-976a-012299ca703c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "\u001b[33m10.5\u001b[39m"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "const chain = llmWithTools\n",
+ " .pipe((message) => message.tool_calls?.[0].args)\n",
+ " .pipe(complexTool);\n",
+ "\n",
+ "const betterModel = new ChatOpenAI({\n",
+ " model: \"gpt-4-1106-preview\",\n",
+ " temperature: 0,\n",
+ "}).bindTools([complexTool]);\n",
+ "\n",
+ "const betterChain = betterModel\n",
+ " .pipe((message) => message.tool_calls?.[0].args)\n",
+ " .pipe(complexTool);\n",
+ "\n",
+ "const chainWithFallback = chain.withFallbacks({ fallbacks: [betterChain] });\n",
+ "\n",
+ "await chainWithFallback.invoke(\"use complex tool. the args are 5, 2.1, potato\");"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "412f8c4e-cc83-4d87-84a1-5ba2f8edb1e9",
+ "metadata": {},
+ "source": [
+ "Looking at the [LangSmith trace](https://smith.langchain.com/public/ea31e7ca-4abc-48e3-9943-700100c86622/r) for this chain run, we can see that the first chain call fails as expected and it's the fallback that succeeds."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6b97af9f",
+ "metadata": {},
+ "source": [
+ "## Next steps\n",
+ "\n",
+ "Now you've seen some strategies how to handle tool calling errors. Next, you can learn more about how to use tools:\n",
+ "\n",
+ "- Few shot prompting [with tools](/docs/how_to/tool_calling#few-shotting-with-tools)\n",
+ "- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
+ "- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
+ "\n",
+ "You can also check out some more specific uses of tool calling:\n",
+ "\n",
+ "- Getting [structured outputs](/docs/how_to/structured_output/) from models"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Deno",
+ "language": "typescript",
+ "name": "deno"
+ },
+ "language_info": {
+ "file_extension": ".ts",
+ "mimetype": "text/x.typescript",
+ "name": "typescript",
+ "nb_converter": "script",
+ "pygments_lexer": "typescript",
+ "version": "5.3.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}