Mix.install([
{:langchain, "~> 0.3.0-rc.0"}
])
This notebook shows how to use the Elixir LangChain library to expose an Elixir function as a tool that can be executed by an LLM like OpenAI's ChatGPT, Anthropic's Claude, and others. The LangChain library wraps this all up, making it easy and portable between different LLMs.
Let's define the Elixir function we want to expose to ChatGPT so we can see how it works.
In this example, we'll create a get_user_info function that takes a user ID and returns the relevant user's information for the current user of a web app.
For simplicity, we're skipping an actual database and storing our fake records on the module.
defmodule MyApp do
  @pretend_db %{
    1 => %{user_id: 1, name: "Michael Johnson", account_type: :trial, favorite_animal: "Horse"},
    2 => %{user_id: 2, name: "Joan Jett", account_type: :member, favorite_animal: "Aardvark"}
  }

  def get_user_info(user_id) do
    @pretend_db[user_id]
  end
end
{:module, MyApp, <<70, 79, 82, 49, 0, 0, 7, ...>>, {:get_user_info, 1}}
It's a simple lookup using a user_id to find and return a map of a user's data.
MyApp.get_user_info(1)
%{name: "Michael Johnson", user_id: 1, account_type: :trial, favorite_animal: "Horse"}
With our function ready, let's cover how we can give the LLM access to this.
With an Elixir function defined, we will wrap it in a LangChain Function structure so it can be easily shared with an LLM. This is what that looks like:
alias LangChain.Function
function =
  Function.new!(%{
    name: "get_user_info",
    description: "Return JSON object of the current user's relevant information.",
    function: fn _args, %{user_id: user_id} = _context ->
      # This uses the user_id provided through the context to call our Elixir function.
      {:ok, Jason.encode!(MyApp.get_user_info(user_id))}
    end
  })
%LangChain.Function{
name: "get_user_info",
description: "Return JSON object of the current users's relevant information.",
display_text: nil,
function: #Function<41.125776118/2 in :erl_eval.expr/6>,
async: true,
parameters_schema: nil,
parameters: []
}
The function name we provide is how the LLM will identify and execute the function if it chooses to call it.
The description is for the LLM to know what the function can do so it can decide which function to call for which purpose.
The function argument is passed an anonymous function whose job is to be the glue that bridges data coming from the LLM with context from our application before calling other functions in our application.
The LangChain.Function acts as a bridge between the LLM and our application. The Elixir function receives 2 arguments. The first is any arguments passed to the function by the LLM, if we defined any as being required. In this example, the LLM doesn't provide any arguments. The second argument is an application context that we'll get to next.
The context is specific to our application and does not go through the LLM at all. Think of this as the current user logged into our Phoenix web application. We want the user's interaction with the LLM to be relevant and limited to only what the current user can see and do.
For returning the final result, since LLM responses must be text, we convert the returned map into JSON. Additionally, we return it inside an {:ok, json} tuple, letting the library know that the function call was successful.
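To make the argument-passing concrete, here's a hypothetical variant that declares a parameter the LLM must supply, using the library's LangChain.FunctionParam. The get_user_field name and its lookup logic are illustration only, reusing MyApp from above; an {:error, reason} tuple is the counterpart to {:ok, json} for reporting a failed call.
alias LangChain.FunctionParam

get_field_function =
  Function.new!(%{
    name: "get_user_field",
    description: "Return a single field of the current user's info as JSON.",
    parameters: [
      FunctionParam.new!(%{name: "field", type: :string, required: true})
    ],
    function: fn %{"field" => field} = _args, %{user_id: user_id} = _context ->
      # The first argument now carries the LLM-provided "field" value.
      case MyApp.get_user_info(user_id) do
        nil ->
          # An :error tuple tells the library the call failed.
          {:error, "unknown user"}

        info ->
          value = Enum.find_value(info, fn {k, v} -> if Atom.to_string(k) == field, do: v end)
          {:ok, Jason.encode!(%{field => value})}
      end
    end
  })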
We need to set up the LangChain library to connect with ChatGPT using our API key. In a real Elixir application, this would be done in the config/config.exs file using something like this:
config :langchain, :openai_key, fn -> System.fetch_env!("OPENAI_API_KEY") end
For the Livebook notebook, use the "Secrets" on the sidebar to create an OPENAI_API_KEY secret with your API key. That is accessible here using "LB_OPENAI_API_KEY".
Application.put_env(:langchain, :openai_key, System.fetch_env!("LB_OPENAI_API_KEY"))
:ok
We'll use the LangChain.Message struct to define the messages for what we want the LLM to do. Our system message instructs the LLM how to behave.
In this example, we want the assistant to generate Haiku poems about the current user's favorite animal. However, we only want it to work for users who are "members" and not "trial" users.
The instructions we're giving the LLM will require it to execute the function to get additional information. Yes, this is both a simple and contrived example. In a real system, we wouldn't even make the API call to the server for a "trial" user and we could pass along the additional information with the first request.
What we're demonstrating here is that the LLM can interact with our Elixir application, use multiple pieces of returned information to make business logic decisions, and fulfill our system requests.
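As an aside, that real-system guard could be as simple as this hypothetical sketch, where run_haiku_chain is a made-up function wrapping the chain call we build below:
# Hypothetical: short-circuit before ever calling the LLM.
case MyApp.get_user_info(user_id) do
  %{account_type: :trial} -> {:error, :upgrade_required}
  %{account_type: :member} = user -> run_haiku_chain(user)
end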
alias LangChain.Message
messages = [
  Message.new_system!(~s(You are a helpful haiku poem generating assistant.
    ONLY generate a haiku for users with an `account_type` of "member".
    If the user has an `account_type` of "trial", say you can't do it,
    but you would love to help them if they upgrade and become a member.)),
  Message.new_user!("The current user is requesting a Haiku poem about their favorite animal.")
]
[
%LangChain.Message{
content: "You are a helpful haiku poem generating assistant.\n ONLY generate a haiku for users with an `account_type` of \"member\".\n If the user has an `account_type` of \"trial\", say you can't do it,\n but you would love to help them if they upgrade and become a member.",
processed_content: nil,
index: nil,
status: :complete,
role: :system,
name: nil,
tool_calls: [],
tool_results: nil
},
%LangChain.Message{
content: "The current user is requesting a Haiku poem about their favorite animal.",
processed_content: nil,
index: nil,
status: :complete,
role: :user,
name: nil,
tool_calls: [],
tool_results: nil
}
]
For this example, we're talking to OpenAI's ChatGPT service. Let's set up that model. At this point, we can also specify which version of ChatGPT we want to talk with.
For the kind of work we're asking it to do, GPT-4o is a recent model with great support for tools.
alias LangChain.ChatModels.ChatOpenAI
chat_model = ChatOpenAI.new!(%{model: "gpt-4o", temperature: 1, stream: false})
%LangChain.ChatModels.ChatOpenAI{
endpoint: "https://api.openai.com/v1/chat/completions",
model: "gpt-4o",
api_key: nil,
temperature: 1.0,
frequency_penalty: 0.0,
receive_timeout: 60000,
seed: nil,
n: 1,
json_response: false,
stream: false,
max_tokens: nil,
stream_options: nil,
callbacks: [],
user: nil
}
Here we'll define some special context that we want passed through to our LangChain.Function when it is executed.
In a real application, this might be session-based user or account information. It's whatever is relevant to our application that changes how a function should operate or the data it should access.
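In a Phoenix app, for instance, it might be derived from the authenticated user on the connection. A minimal sketch, assuming a current_user assign set by an authentication plug (the helper name is made up):
defp llm_context(conn) do
  # Everything the LLM can see or do is scoped to the logged-in user.
  %{user_id: conn.assigns.current_user.id}
end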
context = %{user_id: 2}
%{user_id: 2}
After trying this with user_id: 2, a member who should have a Haiku generated for them, change it to user_id: 1 to see it be politely denied.
We're ready to make the API call!
Notice the custom_context: context setting that is passed in when creating the LLMChain. That information is the application-specific context we want passed to our Function when it is executed.
Also, note the verbose: true setting. That causes a number of IO.inspect calls to be printed, showing what's happening internally.
Additionally, the stream: false option says we want the result only when it's complete. This example isn't set up for receiving a streaming response. We're keeping it simple!
alias LangChain.Chains.LLMChain
{:ok, updated_chain} =
  %{llm: chat_model, custom_context: context, verbose: true}
  |> LLMChain.new!()
  |> LLMChain.add_messages(messages)
  |> LLMChain.add_tools([function])
  # keep running the LLM chain against the LLM if needed to evaluate
  # function calls and provide a response.
  |> LLMChain.run(mode: :while_needs_response)

response = updated_chain.last_message
IO.write(response.content)
response.content
LLM: %LangChain.ChatModels.ChatOpenAI{
endpoint: "https://api.openai.com/v1/chat/completions",
model: "gpt-4o",
api_key: nil,
temperature: 1.0,
frequency_penalty: 0.0,
receive_timeout: 60000,
seed: nil,
n: 1,
json_response: false,
stream: false,
max_tokens: nil,
stream_options: nil,
callbacks: [],
user: nil
}
MESSAGES: [
%LangChain.Message{
content: "You are a helpful haiku poem generating assistant.\n ONLY generate a haiku for users with an `account_type` of \"member\".\n If the user has an `account_type` of \"trial\", say you can't do it,\n but you would love to help them if they upgrade and become a member.",
processed_content: nil,
index: nil,
status: :complete,
role: :system,
name: nil,
tool_calls: [],
tool_results: nil
},
%LangChain.Message{
content: "The current user is requesting a Haiku poem about their favorite animal.",
processed_content: nil,
index: nil,
status: :complete,
role: :user,
name: nil,
tool_calls: [],
tool_results: nil
}
]
TOOLS: [
%LangChain.Function{
name: "get_user_info",
description: "Return JSON object of the current users's relevant information.",
display_text: nil,
function: #Function<41.125776118/2 in :erl_eval.expr/6>,
async: true,
parameters_schema: nil,
parameters: []
}
]
SINGLE MESSAGE RESPONSE: %LangChain.Message{
content: nil,
processed_content: nil,
index: 0,
status: :complete,
role: :assistant,
name: nil,
tool_calls: [
%LangChain.Message.ToolCall{
status: :complete,
type: :function,
call_id: "call_Na42ylypjHbP2IkigPbDQBNV",
name: "get_user_info",
arguments: %{},
index: nil
}
],
tool_results: nil
}
MESSAGE PROCESSED: %LangChain.Message{
content: nil,
processed_content: nil,
index: 0,
status: :complete,
role: :assistant,
name: nil,
tool_calls: [
%LangChain.Message.ToolCall{
status: :complete,
type: :function,
call_id: "call_Na42ylypjHbP2IkigPbDQBNV",
name: "get_user_info",
arguments: %{},
index: nil
}
],
tool_results: nil
}
EXECUTING FUNCTION: "get_user_info"
17:37:37.179 [debug] Executing function "get_user_info"
FUNCTION RESULT: "{\"name\":\"Joan Jett\",\"user_id\":2,\"account_type\":\"member\",\"favorite_animal\":\"Aardvark\"}"
TOOL RESULTS: %LangChain.Message{
content: nil,
processed_content: nil,
index: nil,
status: :complete,
role: :tool,
name: nil,
tool_calls: [],
tool_results: [
%LangChain.Message.ToolResult{
type: :function,
tool_call_id: "call_Na42ylypjHbP2IkigPbDQBNV",
name: "get_user_info",
content: "{\"name\":\"Joan Jett\",\"user_id\":2,\"account_type\":\"member\",\"favorite_animal\":\"Aardvark\"}",
display_text: nil,
is_error: false
}
]
}
SINGLE MESSAGE RESPONSE: %LangChain.Message{
content: "Joan's favorite, \nAardvark graces night with charm, \nSilent earth's delight.",
processed_content: nil,
index: 0,
status: :complete,
role: :assistant,
name: nil,
tool_calls: [],
tool_results: nil
}
MESSAGE PROCESSED: %LangChain.Message{
content: "Joan's favorite, \nAardvark graces night with charm, \nSilent earth's delight.",
processed_content: nil,
index: 0,
status: :complete,
role: :assistant,
name: nil,
tool_calls: [],
tool_results: nil
}
Joan's favorite,
Aardvark graces night with charm,
Silent earth's delight.
"Joan's favorite, \nAardvark graces night with charm, \nSilent earth's delight."
TIP: Try changing the context to user_id: 1 now and see what happens when a different user context is provided.
After a successful call, we can see in the verbose logs that:
- the LLM requested to execute the tool, which is our function
- LLMChain executed the Elixir function attached to the Function struct
- the response of our Elixir function passed through the anonymous function on Function and was re-submitted back to the LLM
- the LLM reacted to the result of our function call
This means it worked! We successfully let an LLM directly interact with our Elixir application!
With this, we could expose functions that allow the LLM to request additional information specific to the current user, or we could even define functions that allow the LLM to change things in the user's account on their behalf!
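For instance, a write-capable tool might look like this sketch, where set_favorite_animal and MyApp.update_favorite_animal/2 are hypothetical names, not functions defined in this notebook:
Function.new!(%{
  name: "set_favorite_animal",
  description: "Update the current user's favorite animal.",
  parameters: [
    FunctionParam.new!(%{name: "animal", type: :string, required: true})
  ],
  function: fn %{"animal" => animal} = _args, %{user_id: user_id} = _context ->
    # Still scoped by the application context: the LLM can only touch
    # the record of the user it is acting for.
    {:ok, _updated} = MyApp.update_favorite_animal(user_id, animal)
    {:ok, "Favorite animal updated to #{animal}."}
  end
})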
The rest is up to us.