LLM_OPENAI_SHOW_RESPONSES=1 debug trick, closes #286
simonw committed Sep 19, 2023
1 parent 4d18da4 commit ae14ce4
Showing 2 changed files with 20 additions and 0 deletions.
10 changes: 10 additions & 0 deletions docs/contributing.md
@@ -18,6 +18,16 @@ To run the tests:

pytest

## Debugging tricks

The default OpenAI plugin has a debugging mechanism for showing the exact responses that came back from the OpenAI API.

Set the `LLM_OPENAI_SHOW_RESPONSES` environment variable like this:
```bash
LLM_OPENAI_SHOW_RESPONSES=1 llm -m chatgpt 'three word slogan for an otter-run bakery'
```
This will output the raw API response (including streaming responses) to standard error, as discussed in [issue 286](https://github.com/simonw/llm/issues/286).
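Because the dumps go to standard error rather than standard output, ordinary shell redirection can split them from the model's reply; a sketch (the `responses.log` filename is illustrative):

```bash
# Keep the model's reply on stdout; send the raw API responses to a file
LLM_OPENAI_SHOW_RESPONSES=1 llm -m chatgpt 'three word slogan for an otter-run bakery' 2> responses.log
```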

## Documentation

Documentation for this project uses [MyST](https://myst-parser.readthedocs.io/) - it is written in Markdown and rendered using Sphinx.
10 changes: 10 additions & 0 deletions llm/default_plugins/openai_models.py
@@ -4,6 +4,7 @@
import click
import datetime
import openai
import os

try:
from pydantic import field_validator, Field # type: ignore
@@ -15,6 +16,15 @@
import json
import yaml

if os.environ.get("LLM_OPENAI_SHOW_RESPONSES"):

    def log_response(response, *args, **kwargs):
        click.echo(response.text, err=True)
        return response

    openai.requestssession = requests.Session()
    openai.requestssession.hooks["response"].append(log_response)


@hookimpl
def register_models(register):
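The hook relies on the `requests` event-hook mechanism: any callable appended to `session.hooks["response"]` is invoked after each HTTP response, and assigning that session to `openai.requestssession` (supported by the legacy 0.x `openai` library) routes the SDK's API calls through it. A minimal standalone sketch of the same pattern; `StubResponse` is a made-up stand-in for `requests.Response`, since no real request is issued here:

```python
import sys

import requests


def log_response(response, *args, **kwargs):
    # Mirror the plugin's hook: dump the raw body to stderr,
    # then pass the response through unchanged.
    print(response.text, file=sys.stderr)
    return response


# requests calls every function in session.hooks["response"]
# after each request made through this session.
session = requests.Session()
session.hooks["response"].append(log_response)


class StubResponse:
    """Hypothetical stand-in for requests.Response, so the hook can run offline."""

    text = '{"choices": []}'


# Invoke the hook directly, as requests would after a response arrives.
result = log_response(StubResponse())
```

Returning the response from the hook matters: requests replaces the response object with whatever the hook returns, so a hook that only logs must hand the object back untouched.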
