⚡ Building applications with LLMs through composability ⚡
👨‍💻👩‍💻 CURRENTLY SEEKING PEOPLE TO FORM THE CORE GROUP OF MAINTAINERS
Langchain.rb is a library that provides an abstraction layer on top of many emerging AI, ML, and other data science tools. The goal is to abstract away complexity and difficult concepts so that building AI/ML-supercharged applications is approachable for traditional software engineers.
Install the gem and add it to the application's Gemfile by executing:
$ bundle add langchainrb
If bundler is not being used to manage dependencies, install the gem by executing:
$ gem install langchainrb
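Alternatively, add the gem to the Gemfile by hand and run bundle install:

# Gemfile
gem "langchainrb"

Once installed, require the library: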
require "langchain"
List of currently supported vector search databases and features:
Database | Querying | Storage | Schema Management | Backups | Rails Integration
---|---|---|---|---|---
Weaviate | ✅ | WIP | WIP | WIP |
Qdrant | ✅ | WIP | WIP | WIP |
Milvus | ✅ | WIP | WIP | WIP |
Pinecone | ✅ | WIP | WIP | WIP |
Choose the LLM provider you'll be using (OpenAI or Cohere) and retrieve the API key.
Pick the vector search database you'll be using and instantiate the client:
client = Vectorsearch::Weaviate.new(
url: ENV["WEAVIATE_URL"],
api_key: ENV["WEAVIATE_API_KEY"],
llm: :openai, # or :cohere
llm_api_key: ENV["OPENAI_API_KEY"]
)
# You can instantiate any other supported vector search database:
client = Vectorsearch::Milvus.new(...)
client = Vectorsearch::Qdrant.new(...)
client = Vectorsearch::Pinecone.new(...)
# Creating the default schema
client.create_default_schema
# Store your documents in your vector search database
client.add_texts(
texts: [
"Begin by preheating your oven to 375°F (190°C). Prepare four boneless, skinless chicken breasts by cutting a pocket into the side of each breast, being careful not to cut all the way through. Season the chicken with salt and pepper to taste. In a large skillet, melt 2 tablespoons of unsalted butter over medium heat. Add 1 small diced onion and 2 minced garlic cloves, and cook until softened, about 3-4 minutes. Add 8 ounces of fresh spinach and cook until wilted, about 3 minutes. Remove the skillet from heat and let the mixture cool slightly.",
"In a bowl, combine the spinach mixture with 4 ounces of softened cream cheese, 1/4 cup of grated Parmesan cheese, 1/4 cup of shredded mozzarella cheese, and 1/4 teaspoon of red pepper flakes. Mix until well combined. Stuff each chicken breast pocket with an equal amount of the spinach mixture. Seal the pocket with a toothpick if necessary. In the same skillet, heat 1 tablespoon of olive oil over medium-high heat. Add the stuffed chicken breasts and sear on each side for 3-4 minutes, or until golden brown."
]
)
# Retrieve similar documents based on the query string passed in
client.similarity_search(
  query: "stuffed chicken", # the query string (illustrative)
  k: 3 # number of results to be retrieved
)
# Retrieve similar documents based on the embedding passed in
client.similarity_search_by_vector(
  embedding: [0.1, 0.2, 0.3], # the embedding vector (illustrative values)
  k: 3 # number of results to be retrieved
)
# Q&A-style querying based on the question passed in
client.ask(
  question: "What temperature should the oven be preheated to?" # illustrative question
)
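Putting it together, a minimal end-to-end flow could look like the sketch below, reusing the calls above; the documents, query, and question are illustrative:

require "langchain"

# Instantiate a client, set up the schema, and index a few documents
client = Vectorsearch::Weaviate.new(
  url: ENV["WEAVIATE_URL"],
  api_key: ENV["WEAVIATE_API_KEY"],
  llm: :openai,
  llm_api_key: ENV["OPENAI_API_KEY"]
)
client.create_default_schema
client.add_texts(texts: ["Preheat your oven to 375°F (190°C).", "Sear the chicken for 3-4 minutes per side."])

# Search the indexed documents, then ask a question over them
results = client.similarity_search(query: "oven temperature", k: 1)
answer = client.ask(question: "What temperature should the oven be?")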
You can also use the LLM clients directly. For OpenAI:

openai = LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
openai.embed(text: "foo bar")
openai.complete(prompt: "What is the meaning of life?")
And for Cohere:

cohere = LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"])
cohere.embed(text: "foo bar")
cohere.complete(prompt: "What is the meaning of life?")
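The embed call pairs with similarity_search_by_vector from earlier. A minimal sketch, assuming embed returns a plain array of floats (depending on the version it may return the raw API response, which would need unwrapping):

# Embed a piece of text, then search the vector database by that embedding.
# Assumption: `embed` returns the embedding vector directly.
embedding = openai.embed(text: "stuffed chicken breast")
client.similarity_search_by_vector(embedding: embedding, k: 3)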
Create a prompt with one input variable:
prompt = Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"])
prompt.format(adjective: "funny") # "Tell me a funny joke."
Create a prompt with multiple input variables:
prompt = Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"])
prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens."
Create a PromptTemplate using just a prompt and no input_variables (they are inferred from the template):
prompt = Prompt::PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
prompt.input_variables # ["adjective", "content"]
prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens."
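Formatted prompts plug straight into the LLM clients shown above, e.g. reusing the openai client from earlier:

prompt = Prompt::PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
openai.complete(prompt: prompt.format(adjective: "funny", content: "chickens"))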
Save the prompt template to a JSON file:
prompt.save(file_path: "spec/fixtures/prompt/prompt_template.json")
Load a prompt template from a JSON file:
prompt = Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json")
prompt.input_variables # ["adjective", "content"]
Create a prompt with few-shot examples:
prompt = Prompt::FewShotPromptTemplate.new(
prefix: "Write antonyms for the following words.",
suffix: "Input: {adjective}\nOutput:",
example_prompt: Prompt::PromptTemplate.new(
input_variables: ["input", "output"],
template: "Input: {input}\nOutput: {output}"
),
examples: [
{ "input": "happy", "output": "sad" },
{ "input": "tall", "output": "short" }
],
input_variables: ["adjective"]
)
prompt.format(adjective: "good")
# Write antonyms for the following words.
#
# Input: happy
# Output: sad
#
# Input: tall
# Output: short
#
# Input: good
# Output:
Save the prompt template to a JSON file:
prompt.save(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json")
Load a prompt template from a JSON file:
prompt = Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json")
prompt.prefix # "Write antonyms for the following words."
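A loaded template behaves just like one built in code, so it can feed a completion call directly (reusing the cohere client from earlier):

cohere.complete(prompt: prompt.format(adjective: "good"))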
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to rubygems.org.
Bug reports and pull requests are welcome on GitHub at https://github.com/andreibondarev/langchain.
The gem is available as open source under the terms of the MIT License.