diff --git a/docs/blog/posts/openai-multimodal.md b/docs/blog/posts/openai-multimodal.md
index 4d7a9cb9c..49f3706b3 100644
--- a/docs/blog/posts/openai-multimodal.md
+++ b/docs/blog/posts/openai-multimodal.md
@@ -5,7 +5,7 @@ categories:
 - OpenAI
 - Audio
 comments: true
-date: 2025-10-17
+date: 2024-10-17
 description: Explore the new audio capabilities in OpenAI's Chat Completions API using the gpt-4o-audio-preview model.
 draft: false
 tags:
@@ -33,7 +33,7 @@ The new audio support in the Chat Completions API offers several compelling feat
 
 To demonstrate how to use this new functionality, let's look at a simple example using the `instructor` library:
 
-"""python
+```python
 from openai import OpenAI
 from pydantic import BaseModel
 import instructor
@@ -64,7 +64,7 @@ resp = client.chat.completions.create(
 print(resp)
 # Expected output: Person(name='Jason', age=20)
-"""
+```
 
 In this example, we're using the `gpt-4o-audio-preview` model to extract information from an audio file. The API processes the audio input and returns structured data (a Person object with name and age) based on the content of the audio.
diff --git a/docs/blog/posts/youtube-flashcards.md b/docs/blog/posts/youtube-flashcards.md
index c0057d372..b12f3d5f3 100644
--- a/docs/blog/posts/youtube-flashcards.md
+++ b/docs/blog/posts/youtube-flashcards.md
@@ -23,7 +23,7 @@ Flashcards help break down complex topics and learn anything from biology to a n
 language or lines for a play. This blog will show how to use LLMs to generate flashcards and kickstart your learning!
 
-**Instructor** lets us get structured outputs from LLMs reliably, and **Burr** helps
+**Instructor** lets us get structured outputs from LLMs reliably, and [Burr](https://github.com/dagworks-inc/burr) helps
 create an LLM application that's easy to understand and debug. It comes with **Burr UI**, a free,
 open-source, and local-first tool for observability, annotations, and more!