Add callout, update chatbot tools guide (langchain-ai#5883)
jacoblee93 authored Jun 24, 2024
1 parent ff0ac6e commit a4a2d11
Showing 2 changed files with 48 additions and 93 deletions.
9 changes: 8 additions & 1 deletion docs/core_docs/docs/concepts.mdx
@@ -160,8 +160,15 @@ For specifics on how to use chat models, see the [relevant how-to guides here](/

<span data-heading-keywords="llm,llms"></span>

:::caution
Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models),
even for non-chat use cases.

You are probably looking for [the section above instead](/docs/concepts/#chat-models).
:::

Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see below).
These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above).

Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).
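As an aside on the `concepts.mdx` hunk above: the claim that LangChain wrappers let a string-in/string-out LLM take messages can be sketched in plain TypeScript. This is a hypothetical illustration (the type and function names are invented, not LangChain's actual internals) of how flattening messages into a prompt string gives a raw LLM the chat-model interface.

```typescript
// Hypothetical sketch, NOT LangChain internals: give a string-in/string-out
// LLM a message-based interface by flattening messages into one prompt.

type Message = { role: "system" | "human" | "ai"; content: string };

// Stand-in for a raw text-completion model: string in, string out.
const rawLLM = (prompt: string): string => `Echo: ${prompt}`;

// Flatten the message list into a single prompt string, call the LLM,
// and wrap the string result back up as an AI message.
function invokeWithMessages(
  llm: (p: string) => string,
  messages: Message[]
): Message {
  const prompt = messages.map((m) => `${m.role}: ${m.content}`).join("\n");
  return { role: "ai", content: llm(prompt) };
}

const reply = invokeWithMessages(rawLLM, [
  { role: "system", content: "You are helpful." },
  { role: "human", content: "Hi!" },
]);
console.log(reply.content);
```

The same message array can then be sent to either a true chat model or a wrapped LLM, which is why the docs steer readers toward the chat-model section first.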
132 changes: 40 additions & 92 deletions docs/core_docs/docs/how_to/chatbots_tools.ipynb
@@ -8,18 +8,19 @@
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following:\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chatbots](/docs/tutorials/chatbot)\n",
"- [Tools](/docs/concepts#tools)\n",
"- [Chatbots](/docs/concepts/#messages)\n",
"- [Agents](/docs/tutorials/agents)\n",
"- [Chat history](/docs/concepts/#chat-history)\n",
"\n",
":::\n",
"\n",
"This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
"\n",
"## Setup\n",
"\n",
"For this guide, we’ll be using an OpenAI tools agent with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.\n",
"For this guide, we’ll be using a [tool calling agent](/docs/how_to/agent_executor) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.\n",
"\n",
"You’ll need to [sign up for an account on the Tavily website](https://tavily.com), and install the following packages:\n",
"\n",
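The package list itself is truncated in this hunk, so it is left as-is. Independent of the exact packages, the Tavily integration conventionally reads its API key from an environment variable, set with a placeholder value like this:

```shell
# Placeholder value shown; substitute the key from your Tavily account.
export TAVILY_API_KEY="your-api-key-here"
```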
@@ -71,7 +72,7 @@
" ChatPromptTemplate,\n",
"} from \"@langchain/core/prompts\";\n",
"\n",
"// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
"// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent\n",
"const prompt = ChatPromptTemplate.fromMessages([\n",
" [\n",
" \"system\",\n",
@@ -95,9 +96,9 @@
"metadata": {},
"outputs": [],
"source": [
"import { AgentExecutor, createOpenAIToolsAgent } from \"langchain/agents\";\n",
"import { AgentExecutor, createToolCallingAgent } from \"langchain/agents\";\n",
"\n",
"const agent = await createOpenAIToolsAgent({\n",
"const agent = await createToolCallingAgent({\n",
" llm,\n",
" tools,\n",
" prompt,\n",
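The executor built around `createToolCallingAgent` above runs a call-tool-observe loop under the hood. Here is a self-contained sketch of that loop in plain TypeScript, with a stand-in model and tool (hypothetical names, not the real `AgentExecutor` implementation):

```typescript
// Hypothetical sketch of an agent executor loop, NOT LangChain's AgentExecutor.

type ToolCall = { name: string; args: string };
type ModelStep = { toolCall?: ToolCall; output?: string };

// Stand-in model: requests the search tool once, then answers from the result.
function model(scratchpad: string[]): ModelStep {
  if (scratchpad.length === 0) {
    return { toolCall: { name: "search", args: "Great Barrier Reef" } };
  }
  return { output: `Answer based on: ${scratchpad[scratchpad.length - 1]}` };
}

// Stand-in tool registry; a real agent would call Tavily here.
const toolRegistry: Record<string, (args: string) => string> = {
  search: (args) => `results for "${args}"`,
};

// The loop: call the model, run any requested tool, feed the observation
// back via the scratchpad, and stop when the model emits a final output.
function runAgent(): string {
  const scratchpad: string[] = [];
  for (let i = 0; i < 5; i++) { // cap iterations to avoid infinite loops
    const step = model(scratchpad);
    if (step.output !== undefined) return step.output;
    const observation = toolRegistry[step.toolCall!.name](step.toolCall!.args);
    scratchpad.push(observation);
  }
  return "Agent stopped after max iterations.";
}

const result = runAgent();
console.log(result); // Answer based on: results for "Great Barrier Reef"
```

The `{agent_scratchpad}` placeholder in the prompt above plays the role of the `scratchpad` array here: it is where prior tool calls and their observations are fed back to the model.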
@@ -139,7 +140,7 @@
" response_metadata: {}\n",
" }\n",
" ],\n",
" output: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
" output: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
"}"
]
},
@@ -187,7 +188,7 @@
" response_metadata: {}\n",
" }\n",
" ],\n",
" output: \u001b[32m\"The current conservation status of the Great Barrier Reef is a cause for concern. The International \"\u001b[39m... 801 more characters\n",
" output: \u001b[32m\"The Great Barrier Reef has recorded its highest amount of coral cover since the Australian Institute\"\u001b[39m... 688 more characters\n",
"}"
]
},
@@ -253,7 +254,8 @@
" additional_kwargs: {},\n",
" response_metadata: {},\n",
" tool_calls: [],\n",
" invalid_tool_calls: []\n",
" invalid_tool_calls: [],\n",
" usage_metadata: \u001b[90mundefined\u001b[39m\n",
" },\n",
" HumanMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
@@ -294,7 +296,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If preferred, you can also wrap the agent executor in a `RunnableWithMessageHistory` class to internally manage history messages. First, we need to slightly modify the prompt to take a separate input variable so that the wrapper can parse which input value to store as history:\n"
"If preferred, you can also wrap the agent executor in a [`RunnableWithMessageHistory`](/docs/how_to/message_history/) class to internally manage history messages. Let's redeclare it this way:"
]
},
{
@@ -303,21 +305,10 @@
"metadata": {},
"outputs": [],
"source": [
"// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
"const prompt2 = ChatPromptTemplate.fromMessages([\n",
" [\n",
" \"system\",\n",
" \"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\",\n",
" ],\n",
" [\"placeholder\", \"{chat_history}\"],\n",
" [\"human\", \"{input}\"],\n",
" [\"placeholder\", \"{agent_scratchpad}\"],\n",
"]);\n",
"\n",
"const agent2 = await createOpenAIToolsAgent({\n",
"const agent2 = await createToolCallingAgent({\n",
" llm,\n",
" tools,\n",
" prompt: prompt2,\n",
" prompt,\n",
"});\n",
"\n",
"const agentExecutor2 = new AgentExecutor({ agent: agent2, tools });"
@@ -332,35 +323,14 @@
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n",
"import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
"\n",
"const demoEphemeralChatMessageHistory = new ChatMessageHistory();\n",
"\n",
"const conversationalAgentExecutor = new RunnableWithMessageHistory({\n",
" runnable: agentExecutor2,\n",
" getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,\n",
" inputMessagesKey: \"input\",\n",
" outputMessagesKey: \"output\",\n",
" historyMessagesKey: \"chat_history\",\n",
"});"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{\n",
" input: \u001b[32m\"I'm Nemo!\"\u001b[39m,\n",
" chat_history: [\n",
" messages: [\n",
" HumanMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
@@ -373,52 +343,46 @@
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" AIMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
" tool_calls: [],\n",
" invalid_tool_calls: [],\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {},\n",
" tool_calls: [],\n",
" invalid_tool_calls: []\n",
" }\n",
" ],\n",
" output: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
" output: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m\n",
"}"
]
},
"execution_count": 10,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n",
"import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
"\n",
"const demoEphemeralChatMessageHistory = new ChatMessageHistory();\n",
"\n",
"const conversationalAgentExecutor = new RunnableWithMessageHistory({\n",
" runnable: agentExecutor2,\n",
" getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,\n",
" inputMessagesKey: \"messages\",\n",
" outputMessagesKey: \"output\",\n",
"});\n",
"\n",
"await conversationalAgentExecutor.invoke(\n",
" { input: \"I'm Nemo!\" },\n",
" { messages: [new HumanMessage(\"I'm Nemo!\")] },\n",
" { configurable: { sessionId: \"unused\" } }\n",
");"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{\n",
" input: \u001b[32m\"What is my name?\"\u001b[39m,\n",
" chat_history: [\n",
" messages: [\n",
" HumanMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
@@ -435,19 +399,20 @@
" AIMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
" content: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
" tool_calls: [],\n",
" invalid_tool_calls: [],\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
" content: \u001b[32m\"Hello Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
" content: \u001b[32m\"Hi Nemo! It's great to meet you. How can I assist you today?\"\u001b[39m,\n",
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {},\n",
" tool_calls: [],\n",
" invalid_tool_calls: []\n",
" invalid_tool_calls: [],\n",
" usage_metadata: \u001b[90mundefined\u001b[39m\n",
" },\n",
" HumanMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
@@ -461,37 +426,20 @@
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" AIMessage {\n",
" lc_serializable: \u001b[33mtrue\u001b[39m,\n",
" lc_kwargs: {\n",
" content: \u001b[32m\"Your name is Nemo!\"\u001b[39m,\n",
" tool_calls: [],\n",
" invalid_tool_calls: [],\n",
" additional_kwargs: {},\n",
" response_metadata: {}\n",
" },\n",
" lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
" content: \u001b[32m\"Your name is Nemo!\"\u001b[39m,\n",
" name: \u001b[90mundefined\u001b[39m,\n",
" additional_kwargs: {},\n",
" response_metadata: {},\n",
" tool_calls: [],\n",
" invalid_tool_calls: []\n",
" }\n",
" ],\n",
" output: \u001b[32m\"Your name is Nemo!\"\u001b[39m\n",
"}"
]
},
"execution_count": 11,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await conversationalAgentExecutor.invoke(\n",
" { input: \"What is my name?\" },\n",
" { messages: [new HumanMessage(\"What is my name?\")] },\n",
" { configurable: { sessionId: \"unused\" } }\n",
");"
]
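The `RunnableWithMessageHistory` pattern exercised in the final cells above can be sketched in plain TypeScript: a wrapper keeps a per-session message store, merges stored history with new input before invoking a stateless runnable, and persists both the input and the reply for the next turn. All names below are invented stand-ins, not LangChain's actual implementation.

```typescript
// Hypothetical sketch, NOT the real RunnableWithMessageHistory: a per-session
// history wrapper around a stateless runnable.

type Message = { role: "human" | "ai"; content: string };
type Runnable = (input: { messages: Message[] }) => { output: string };

// Stand-in "agent": answers the name question only if an earlier message
// in the merged history mentioned it.
const fakeAgent: Runnable = ({ messages }) => {
  const intro = messages.find((m) => m.content.includes("Nemo"));
  if (messages[messages.length - 1].content.includes("name")) {
    return { output: intro ? "Your name is Nemo!" : "I don't know your name." };
  }
  return { output: "Hi! How can I help?" };
};

function withMessageHistory(runnable: Runnable) {
  const sessions = new Map<string, Message[]>();
  return (newMessages: Message[], sessionId: string): string => {
    const history = sessions.get(sessionId) ?? [];
    const merged = [...history, ...newMessages];
    const { output } = runnable({ messages: merged });
    // Persist both the new inputs and the reply for the next turn.
    sessions.set(sessionId, [...merged, { role: "ai", content: output }]);
    return output;
  };
}

const chat = withMessageHistory(fakeAgent);
chat([{ role: "human", content: "I'm Nemo!" }], "unused");
const answer = chat([{ role: "human", content: "What is my name?" }], "unused");
console.log(answer); // Your name is Nemo!
```

This mirrors the two invocations in the cells above: the second call succeeds only because the wrapper replays the first turn's messages into the agent's input.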
