What is the difference between Kor and the StructuredOutputs from Langchain? #161
Replies: 1 comment 1 reply
Hi @alberduris!
Kor uses examples. Providing examples helps the model learn how to use the schema and match the encoding correctly, reducing encoding errors. Some guidelines are included here: https://eyurtsev.github.io/kor/guidelines.html
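To make "providing examples" concrete, here is a rough sketch using Kor's Object/Text nodes, where each example is an (input text, expected extraction) pair attached directly to the schema. The ids, descriptions, and example strings below are illustrative, and the exact keys of the returned dict can vary between Kor versions.

from langchain.llms import OpenAI
from kor import create_extraction_chain
from kor.nodes import Object, Text

llm = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Illustrative schema built directly with Kor nodes; the examples are
# (input text, expected extraction) pairs that guide the model toward
# the expected encoding.
music_schema = Object(
    id="music_request",
    description="A user request to control music playback.",
    attributes=[
        Text(
            id="artist",
            description="The artist(s) whose music the user would like to hear.",
            examples=[("Songs by paul simon", "paul simon")],
            many=True,
        ),
        Text(
            id="action",
            description="One of `play`, `stop`, `next`, `previous`.",
            examples=[("Please stop the music", "stop"), ("next song", "next")],
        ),
    ],
    many=False,
)

chain = create_extraction_chain(llm, music_schema, encoder_or_encoder_class="json")
chain.predict_and_parse(text="play a bob dylan song")["data"]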
Zero-shot example

Below is a zero-shot example. I generally recommend using examples when they are easy to specify, as they will likely improve performance.

import enum
from typing import Optional, List

from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field

from kor import from_pydantic, create_extraction_chain
## Model
model_name = 'text-davinci-003'
temperature = 0.0
llm = OpenAI(model_name=model_name, temperature=temperature)

Schema

class Action(enum.Enum):
    play = "play"
    stop = "stop"
    previous = "previous"
    next_ = "next"


class MusicRequest(BaseModel):
    song: Optional[List[str]] = Field(
        description="The song(s) that the user would like to be played."
    )
    album: Optional[List[str]] = Field(
        description="The album(s) that the user would like to be played."
    )
    artist: Optional[List[str]] = Field(
        description="The artist(s) whose music the user would like to hear.",
        examples=[("Songs by paul simon", "paul simon")],
    )
    action: Optional[Action] = Field(
        description="The action that should be taken; one of `play`, `stop`, `next`, `previous`",
        examples=[
            # Commented out so this field stays zero-shot; uncomment to give
            # the model (input text, expected extraction) pairs for this field.
            # ("Please stop the music", "stop"),
            # ("play something", "play"),
            # ("play a song", "play"),
            # ("next song", "next"),
        ],
    )

Kor

query = "play a bob dylan song"
schema, validator = from_pydantic(MusicRequest)
chain = create_extraction_chain(
llm, schema, encoder_or_encoder_class="json", validator=validator
)
chain.predict_and_parse(text=query)['validated_data']

OutputParser

parser = PydanticOutputParser(pydantic_object=MusicRequest)
prompt = PromptTemplate(
template="Determine what music request the user is making.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=query)
output = llm(_input.to_string())
parser.parse(output)
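Following the recommendation above, here is a rough sketch of the few-shot variant of the Kor chain: it subclasses MusicRequest to turn on the commented-out examples for the action field, reusing the llm, from_pydantic, and create_extraction_chain defined earlier (result keys may differ across Kor versions).

# Few-shot variant (sketch): same schema, but with examples enabled on the
# `action` field so the model sees (input text, expected extraction) pairs.
class MusicRequestFewShot(MusicRequest):
    action: Optional[Action] = Field(
        description="The action that should be taken; one of `play`, `stop`, `next`, `previous`",
        examples=[
            ("Please stop the music", "stop"),
            ("play something", "play"),
            ("next song", "next"),
        ],
    )

few_shot_schema, few_shot_validator = from_pydantic(MusicRequestFewShot)
few_shot_chain = create_extraction_chain(
    llm, few_shot_schema, encoder_or_encoder_class="json", validator=few_shot_validator
)
few_shot_chain.predict_and_parse(text="play a bob dylan song")["validated_data"]

The basic contrast with the PydanticOutputParser route stays the same: Kor encodes the schema and any examples into the prompt and validates the result with the returned validator, while the parser injects format instructions into the prompt and parses the raw completion afterwards.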
Can someone clarify the distinctions between Kor and StructuredOutputs in Langchain? I'm trying to understand the nuances and functionalities of both solutions. Any insights or experiences would be greatly appreciated!