This sample uses an OpenAI chat model (ChatGPT/GPT-4) to identify customer intent from a customer's question.

By going through this sample you will learn how to create a flow from existing working code (written with LangChain in this case).
This is the existing code.
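For orientation, the entry function the flow is created from might look roughly like the sketch below. This is a hypothetical, dependency-free stand-in: the real `intent.py` uses LangChain against Azure OpenAI, which is replaced here by a pluggable `chat` callable, and `string.Template` stands in for rendering the `user_intent_zero_shot.jinja2` prompt.

```python
from string import Template


def render_prompt(template: str, customer_info: str, history: str) -> str:
    """Fill the chat prompt template with the flow inputs.

    Stand-in for rendering user_intent_zero_shot.jinja2; string.Template
    is used only to keep this sketch dependency-free.
    """
    return Template(template).substitute(
        customer_info=customer_info, history=history
    )


def extract_intent(chat_prompt, chat=lambda prompt: "get_refund_policy"):
    """Send the rendered prompt to a chat model and return the intent label.

    `chat` is a placeholder for the LangChain / Azure OpenAI call that the
    real intent.py makes.
    """
    return chat(chat_prompt).strip()
```

The parameter names `customer_info` and `history` match the flow inputs used in the column mapping later in this walkthrough.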
Install the promptflow SDK and other dependencies:

```bash
pip install -r requirements.txt
```
Ensure you have put your Azure OpenAI endpoint and API key in the .env file:

```bash
cat .env
```
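A typical .env for this sample might look like the fragment below; the variable names are illustrative assumptions, so match them to whatever keys `intent.py` actually reads.

```
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_API_BASE=<your-azure-openai-endpoint>
```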
- init flow directory: create a promptflow folder from the existing Python file

```bash
pf flow init --flow . --entry intent.py --function extract_intent --prompt-template chat_prompt=user_intent_zero_shot.jinja2
```
The generated files:

- extract_intent_tool.py: wraps the function `extract_intent` in the `intent.py` script into a Python tool.
- flow.dag.yaml: describes the DAG (Directed Acyclic Graph) of this flow.
- .gitignore: files/folders in the flow to be ignored.
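For reference, the generated flow.dag.yaml has roughly the shape sketched below. The field names and values are illustrative, inferred from the init command and the column mapping used later, not a verbatim copy of the generated file.

```yaml
inputs:
  customer_info:
    type: string
  history:
    type: string
outputs:
  output:
    type: string
    reference: ${extract_intent_tool.output}
nodes:
- name: extract_intent_tool
  type: python
  source:
    type: code
    path: extract_intent_tool.py
  inputs:
    customer_info: ${inputs.customer_info}
    history: ${inputs.history}
```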
- create the needed custom connection

```bash
pf connection create -f .env --name custom_connection
```
- test flow with single-line input

```bash
pf flow test --flow . --inputs ./data/sample.json
```
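The single test line in data/sample.json contains the flow inputs by name; a hypothetical example (field values invented for illustration):

```json
{
  "history": [{"role": "user", "content": "I ordered a jacket last week."}],
  "customer_info": "Member since 2021, two open orders."
}
```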
- run with multi-line input

```bash
pf run create --flow . --data ./data --column-mapping history='${data.history}' customer_info='${data.customer_info}'
```

You can skip providing `column-mapping` if the provided data has the same column names as the flow inputs. Refer here for the default behavior when `column-mapping` is not provided in the CLI.
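The `${data.<column>}` syntax maps a column of the input data onto a flow input. As a rough illustration of that resolution and of the default behavior when the mapping is omitted (a hypothetical helper, not promptflow's actual implementation):

```python
import re


def resolve_inputs(row, flow_inputs, column_mapping=None):
    """Illustrative sketch of column-mapping resolution (not promptflow's code).

    With an explicit mapping, '${data.<col>}' pulls that column from the data
    row; with no mapping, each flow input falls back to the data column of
    the same name, which is why the mapping can be skipped when names align.
    """
    if column_mapping is None:
        # Default: match flow input names to data column names directly.
        return {name: row[name] for name in flow_inputs if name in row}
    resolved = {}
    for name, expr in column_mapping.items():
        match = re.fullmatch(r"\$\{data\.(\w+)\}", expr)
        # Values that are not '${data.*}' pass through as literal constants.
        resolved[name] = row[match.group(1)] if match else expr
    return resolved
```

With the sample data above, the explicit mapping and the skipped-mapping default resolve to the same inputs.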
- list/show

```bash
# list created runs
pf run list
# get a sample completed run name
name=$(pf run list | jq '.[] | select(.name | contains("customer_intent_extraction")) | .name' | head -n 1 | tr -d '"')
# show run
pf run show --name $name
# show specific run details, top 3 lines
pf run show-details --name $name -r 3
```
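If jq is not available, the same run-name lookup can be done in Python. The sketch below filters a run list (as `pf run list` would emit in JSON) by substring, assuming the output is a JSON array of objects with a `name` field:

```python
import json


def first_matching_run(run_list_json, substring):
    """Return the first run name containing `substring`, mirroring the
    jq/head/tr pipeline used above; None if nothing matches."""
    for run in json.loads(run_list_json):
        if substring in run.get("name", ""):
            return run["name"]
    return None
```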
- evaluation

```bash
# create evaluation run
pf run create --flow ../../evaluation/eval-classification-accuracy --data ./data --column-mapping groundtruth='${data.intent}' prediction='${run.outputs.output}' --run $name
# get the evaluation run created in the previous step
eval_run_name=$(pf run list | jq '.[] | select(.name | contains("eval_classification_accuracy")) | .name' | head -n 1 | tr -d '"')
# show run
pf run show --name $eval_run_name
# show run output
pf run show-details --name $eval_run_name -r 3
```
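Conceptually, the eval-classification-accuracy flow compares each prediction against its groundtruth label and aggregates an accuracy metric. A minimal sketch of that computation (an illustration of the idea, not the flow's actual code):

```python
def grade(groundtruth, prediction):
    """Per-line grade: 'Correct' when the labels match (case-insensitive)."""
    if groundtruth.strip().lower() == prediction.strip().lower():
        return "Correct"
    return "Incorrect"


def accuracy(grades):
    """Aggregate per-line grades into a single accuracy score."""
    return sum(g == "Correct" for g in grades) / len(grades) if grades else 0.0
```

For example, grading the pairs ("refund", "refund") and ("refund", "exchange") yields one Correct and one Incorrect, for an accuracy of 0.5.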
- visualize

```bash
# visualize in browser
pf run visualize --name $eval_run_name # your evaluation run name
```

- serve the flow as a local test app

```bash
pf flow serve --source . --port 5123 --host localhost
```

Visit http://localhost:5123 to access the test app.

```bash
# pf flow export --source . --format docker --output ./package
```