Initial LCEL Support #568

Merged
merged 4 commits into chaining from lcel-support on Feb 7, 2024

Conversation

CalebCourier
Collaborator

This PR contains a first draft for setting up Guard as a Runnable for use within LCEL.
It is based on the first draft of chaining.

The Guard wrapper presented here lives in a separate class that inherits from our (not really) functional/chaining Guard, so we can still decide whether it should stay in this repo or move to a separate package. The downside of including it here is that it increases our dependency size, since it requires langchain_core.
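
For context on how the wrapper fits into LCEL: the only contract a chain step needs is langchain_core's Runnable interface, i.e. an invoke method, and the `|` composition comes for free. Here is a minimal sketch, where the GuardRunnable name is illustrative and not the actual class in this PR:

```python
from typing import Optional

from langchain_core.runnables import Runnable, RunnableConfig


class GuardRunnable(Runnable[str, str]):
    """Hypothetical sketch of a Guard-like step that can sit in an LCEL chain."""

    def invoke(self, input: str, config: Optional[RunnableConfig] = None) -> str:
        # Run validation on the upstream output here and return the
        # validated text so the next Runnable receives a plain string.
        return input
```

Because Runnable defines `__or__`, an instance of a class like this can be piped between a model and an output parser exactly like the guard in the example below.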

This PR also includes a refactor of the ValidatorError class to ValidationError. The reasons for this refactor are in the comments in guardrails/errors/__init__.py. The refactor is included in this PR because we use ValidationError to raise from invoke when the input fails validation. We raise instead of returning in order to prevent parsing errors, since failed validation means an output of None. If we want to take a different approach, the more specific commits of this PR can be cherry-picked onto a new branch; just let me know via review.
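
To make that error-handling decision concrete, here is a hedged sketch of the behavior described above; the _postprocess helper name is illustrative and not part of this PR:

```python
from typing import Optional

from guardrails.errors import ValidationError


def _postprocess(validated_output: Optional[str]) -> str:
    # Illustrative helper: failed validation leaves no usable output, so we
    # raise rather than hand None to the next Runnable (e.g. StrOutputParser),
    # which would otherwise hit a parsing error.
    if validated_output is None:
        raise ValidationError("The response from the LLM failed validation.")
    return validated_output
```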

Try it out:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from guardrails.functional.chain import Guard
from guardrails.validators import RegexMatch, ReadingTime

topic = "ice cream"

prompt = ChatPromptTemplate.from_template("ELIF: {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

guard = (
    Guard()
    .add(RegexMatch(topic, match_type="search"))
    .add(ReadingTime(1))  # Passes validation
    # .add(ReadingTime(1 / 60))  # Fails validation
)

chain = prompt | model | guard | output_parser

try:
    response = chain.invoke({"topic": topic})

    print("type(response): ", type(response))
    print(response)
except Exception as e:
    from rich import print as rich_print

    rich_print(e)
    rich_print("\n")
    rich_print("status: ", guard.history.last.status)
    rich_print("\n")
    rich_print("failed validations: ", guard.history.last.failed_validations)
```

CalebCourier changed the base branch from main to chaining on February 5, 2024 at 22:00.
zsimjee marked this pull request as ready for review on February 7, 2024 at 01:51.
zsimjee merged commit a6403e0 into chaining on February 7, 2024.
zsimjee deleted the lcel-support branch on February 7, 2024 at 01:51.