LLM plugin for OpenAI models.
This plugin is a preview. LLM currently ships with OpenAI models as part of its default collection, implemented using the Chat Completions API.
This plugin implements those same models using the new Responses API.
Currently the only reason to use this plugin over the LLM defaults is to access o1-pro, which can only be used via the Responses API.
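The practical difference between the two APIs is the shape of the request: Chat Completions takes a list of messages, while Responses takes a single input field. A minimal sketch of the two payloads, built as plain dicts for illustration (no network call; the field names follow OpenAI's published request formats):

```python
prompt = "Convince me that pelicans are the most noble of birds"

# Chat Completions API: POST /v1/chat/completions
chat_completions_payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": prompt}],
}

# Responses API: POST /v1/responses
responses_payload = {
    "model": "o1-pro",
    "input": prompt,
}

# The Responses API is the only one of the two that serves o1-pro.
assert "messages" in chat_completions_payload
assert "input" in responses_payload
```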
Install this plugin in the same environment as LLM.
llm install llm-openai-plugin
To run a prompt against o1-pro, do this:
llm -m openai/o1-pro "Convince me that pelicans are the most noble of birds"
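The same prompt can also be run from Python through LLM's Python API (llm.get_model() and model.prompt() are the documented llm library calls); this sketch assumes the plugin is installed in the same environment and an OpenAI key is configured:

```python
def run_prompt(model_id: str, prompt: str) -> str:
    """Run a prompt through LLM's Python API and return the response text."""
    import llm  # imported here so the sketch only needs llm at call time

    model = llm.get_model(model_id)  # e.g. "openai/o1-pro" from this plugin
    return model.prompt(prompt).text()


if __name__ == "__main__":
    print(run_prompt("openai/o1-pro",
                     "Convince me that pelicans are the most noble of birds"))
```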
Run this to see a full list of models - they start with the openai/ prefix:
llm models -q openai/
Here's the output of that command:
OpenAI: openai/gpt-4o
OpenAI: openai/gpt-4o-mini
OpenAI: openai/o3-mini
OpenAI: openai/o1-mini
OpenAI: openai/o1
OpenAI: openai/o1-pro
Add --options to see a full list of options that can be provided to each model.
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-openai-plugin
python -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
To run the tests:
python -m pytest
This project uses pytest-recording to record OpenAI API responses for the tests, and syrupy to capture snapshots of their results.
If you add a new test that calls the API you can capture the API response and snapshot like this:
PYTEST_OPENAI_API_KEY="$(llm keys get openai)" pytest --record-mode once --snapshot-update
Then review the new snapshots in tests/__snapshots__/ to make sure they look correct.