
Releases: simonw/llm

0.17.1

01 Nov 21:22
  • Fixed a bug where llm chat crashed if a follow-up prompt was provided. #601

0.17

29 Oct 02:39

Support for attachments, allowing multi-modal models to accept images, audio, video and other formats. #587

The default OpenAI gpt-4o and gpt-4o-mini models can both now be prompted with JPEG, GIF, PNG and WEBP images.

Attachments in the CLI can be URLs:

llm -m gpt-4o "describe this image" \
  -a https://static.simonwillison.net/static/2024/pelicans.jpg

Or file paths:

llm -m gpt-4o-mini "extract text" -a image1.jpg -a image2.jpg

Or binary data, which may require --attachment-type to specify the MIME type:

cat image | llm -m gpt-4o-mini "extract text" --attachment-type - image/jpeg
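
The explicit --attachment-type is needed here because data arriving on standard input has no filename to infer a MIME type from. A minimal stdlib sketch (not part of llm itself) of the filename-based inference that stdin lacks:

```python
import mimetypes

# For a file path, the MIME type can be inferred from the extension...
inferred, _ = mimetypes.guess_type("pelican.jpg")
print(inferred)  # image/jpeg

# ...but a bare stdin stream has no name, so inference fails and the
# type has to be supplied explicitly (which is what --attachment-type does).
print(mimetypes.guess_type("-"))  # (None, None)
```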

Attachments are also available in the Python API:

model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Describe these images",
    attachments=[
        llm.Attachment(path="pelican.jpg"),
        llm.Attachment(url="https://static.simonwillison.net/static/2024/pelicans.jpg"),
    ]
)

Plugins that provide alternative models can support attachments, see Attachments for multi-modal models for details.

The latest llm-claude-3 plugin now supports attachments for Anthropic's Claude 3 and 3.5 models. The llm-gemini plugin supports attachments for Google's Gemini 1.5 models.

Also in this release: OpenAI models now record their "usage" data in the database even when the response was streamed. These records can be viewed using llm logs --json. #591

0.17a0

28 Oct 22:49
Pre-release

Alpha support for attachments, allowing multi-modal models to accept images, audio, video and other formats. #578

Attachments in the CLI can be URLs:

llm "describe this image" \
  -a https://static.simonwillison.net/static/2024/pelicans.jpg

Or file paths:

llm "extract text" -a image1.jpg -a image2.jpg

Or binary data, which may require --attachment-type to specify the MIME type:

cat image | llm "extract text" --attachment-type - image/jpeg

Attachments are also available in the Python API:

model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Describe these images",
    attachments=[
        llm.Attachment(path="pelican.jpg"),
        llm.Attachment(url="https://static.simonwillison.net/static/2024/pelicans.jpg"),
    ]
)

Plugins that provide alternative models can support attachments, see Attachments for multi-modal models for details.

0.16

12 Sep 23:20
  • OpenAI models now use the internal self.get_key() mechanism, which means they can be used from Python code in a way that will pick up keys that have been configured using llm keys set or the OPENAI_API_KEY environment variable. #552. This code now works correctly:
    import llm
    print(llm.get_model("gpt-4o-mini").prompt("hi"))
  • New documented API methods: llm.get_default_model(), llm.set_default_model(alias), llm.get_default_embedding_model(), llm.set_default_embedding_model(alias). #553
  • Support for OpenAI's new o1 family of preview models, llm -m o1-preview "prompt" and llm -m o1-mini "prompt". These models are currently only available to tier 5 OpenAI API users, though this may change in the future. #570

0.15

18 Jul 19:33
  • Support for OpenAI's new GPT-4o mini model: llm -m gpt-4o-mini 'rave about pelicans in French' #536
  • gpt-4o-mini is now the default model if you do not specify your own default, replacing GPT-3.5 Turbo. GPT-4o mini is both cheaper and better than GPT-3.5 Turbo.
  • Fixed a bug where llm logs -q 'flourish' -m haiku could not combine both the -q search query and the -m model specifier. #515

0.14

13 May 20:40
  • Support for OpenAI's new GPT-4o model: llm -m gpt-4o 'say hi in Spanish' #490
  • The gpt-4-turbo alias is now a model ID, which indicates the latest version of OpenAI's GPT-4 Turbo text and image model. Your existing logs.db database may contain records under the previous model ID of gpt-4-turbo-preview. #493
  • New llm logs -r/--response option for outputting just the last captured response, without wrapping it in Markdown and accompanying it with the prompt. #431
  • Nine new plugins since version 0.13.

0.13.1

27 Jan 00:28
  • Fix for No module named 'readline' error on Windows. #407

0.13

26 Jan 22:34

See also LLM 0.13: The annotated release notes.

  • Added support for new OpenAI embedding models: 3-small and 3-large and three variants of those with different dimension sizes, 3-small-512, 3-large-256 and 3-large-1024. See OpenAI embedding models for details. #394
  • The default gpt-4-turbo model alias now points to gpt-4-turbo-preview, which uses the most recent OpenAI GPT-4 turbo model (currently gpt-4-0125-preview). #396
  • New OpenAI model aliases gpt-4-1106-preview and gpt-4-0125-preview.
  • OpenAI models now support a -o json_object 1 option which will cause their output to be returned as a valid JSON object. #373
  • New plugins since the last release include llm-mistral, llm-gemini, llm-ollama and llm-bedrock-meta.
  • The keys.json file for storing API keys is now created with 600 file permissions. #351
  • Documented a pattern for installing plugins that depend on PyTorch using the Homebrew version of LLM, despite Homebrew using Python 3.12, for which PyTorch has not yet released a stable package. #397
  • The underlying OpenAI Python library has been upgraded to >1.0. This may cause compatibility issues with LLM plugins that also depend on that library. #325
  • Arrow keys now work inside the llm chat command. #376
  • LLM_OPENAI_SHOW_RESPONSES=1 environment variable now outputs much more detailed information about the HTTP request and response made to OpenAI (and OpenAI-compatible) APIs. #404
  • Dropped support for Python 3.7.
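
The keys.json permissions change above follows a common pattern: pass the restrictive mode to os.open at creation time so the file is never briefly world-readable. A stdlib sketch of that general pattern (not llm's actual code; the path here is a temporary stand-in):

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "keys.json")

# O_CREAT with mode 0o600 applies owner-only read/write permissions
# at creation time, rather than chmod-ing after the fact.
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("{}")

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600 on POSIX systems
```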

0.12

06 Nov 21:33
  • Support for the new GPT-4 Turbo model from OpenAI. Try it using llm chat -m gpt-4-turbo or llm chat -m 4t. #323
  • New -o seed 1 option for OpenAI models, which sets a seed the model can use to attempt to evaluate the prompt deterministically. #324

0.11.2

06 Nov 20:08
  • Pinned to a version of the OpenAI Python library prior to 1.0 to avoid breaking changes. #327