
docs(evals): document llm_generate with output parser #1741

Merged — 1 commit merged into main on Nov 14, 2023

Conversation

mikeldking (Contributor) commented Nov 14, 2023

Adds documentation for #1736

Targets main so it can go out with the next release
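For context, the feature being documented here (#1736) adds an `output_parser` callback to `llm_generate` for structured data extraction from model responses. The sketch below illustrates the general pattern only — a parser that turns a raw LLM response string into a dictionary — and is not the library's exact API; the function name and fallback shape are illustrative assumptions.

```python
import json
from typing import Any, Dict


def output_parser(response: str) -> Dict[str, Any]:
    """Parse a raw LLM response into structured data.

    Assumes the model was prompted to return JSON; falls back to a
    dict carrying the raw text when the response is unparseable
    (hypothetical error shape, not the library's).
    """
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        return {"__error__": "unparseable response", "raw": response}


# Illustrative usage with a mock model response.
mock_response = '{"topic": "billing", "sentiment": "negative"}'
parsed = output_parser(mock_response)
print(parsed["topic"])  # -> billing
```

A callback like this would be passed to the generation call so each row's raw completion is converted to structured fields instead of being returned as plain text.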

@mikeldking mikeldking merged commit 1e70ec3 into main Nov 14, 2023
12 checks passed
@mikeldking mikeldking deleted the mikeldking/docs-llm-generate branch November 14, 2023 04:21
mikeldking added a commit that referenced this pull request Nov 14, 2023
* Add explanation template

* Spike out explanations

* Ruff 🐶

* Use tailored explanation prompt

* Add explanation templates for all evals

* Wire up prompt template objects

* Update models to use new template object

* Ruff 🐶

* Resolve type and linter issues

* Fix more typing issues

* Address first round of feedback

* Extract `ClassificationTemplate` ABC

* Label extraction belongs to the "template" object

* Add logging for unparseable labels

* Patch in openai key environment variable for tests

* Refactor to address feedback

* Evaluators should use PromptTemplates

* Pair with Mikyo

* Fix for CI

* `PROMPT_TEMPLATE_STR` -> `PROMPT_TEMPLATE`

* Print prompt if verbose

* Add __repr__ to `PromptTemplate`

* fix relevance notebook

* docs: update evals

* Normalize prompt templates in llm_classify

* Ruff 🐶

* feat(evals): add an output_parser to llm_generate (#1736)

* feat(evals): add an output_parser param for structured data extraction

* remove brittle test

* docs(evals): document llm_generate with output parser (#1741)

---------

Co-authored-by: Mikyo King <mikyo@arize.com>
mikeldking added a commit that referenced this pull request Nov 15, 2023 (same commit-message body as above)