examples/File_Search_Responses.ipynb (10 changes: 5 additions & 5 deletions)
@@ -5,13 +5,13 @@
 "id": "2dfbaf53-32de-4b8c-bd1c-d27371a87f81",
 "metadata": {},
 "source": [
-"# Using File Search tool in the Responses API\n",
+"# Using file search tool in the Responses API\n",
 "\n",
 "RAG can be overwhelming; searching across PDF files shouldn't be complicated. One of the most widely adopted approaches today is parsing your PDFs, defining a chunking strategy, uploading those chunks to a storage provider, running embeddings on those chunks of text, and storing those embeddings in a vector database. And that's only the setup; we haven't yet reached the retrieval step of our LLM workflow, which itself requires multiple steps.\n",
 "\n",
-"This is where File Search — a hosted tool you can use in the Response API — comes in. It allows you to search your knowledge base and generate an answer based on the retrieved content. In this cookbook, we'll upload those PDFs to a vector store on OpenAI and use File Search to fetch additional context from this vector store to answer the questions we generated in the first step. Then, we'll initially create a small set of questions based on PDFs extracted from OpenAI's blog ([openai.com/news](https://openai.com/news)).\n",
+"This is where file search — a hosted tool you can use in the Responses API — comes in. It allows you to search your knowledge base and generate an answer based on the retrieved content. In this cookbook, we'll first create a small set of questions based on PDFs extracted from OpenAI's blog ([openai.com/news](https://openai.com/news)). Then, we'll upload those PDFs to a vector store on OpenAI and use file search to fetch additional context from this vector store to answer the questions we generated in the first step.\n",
 "\n",
-"_File Search was previously available on the Assistants API, it is now available on a stateless API that is Responses and benefits from new features (e.g: metadata filtering)_\n",
+"_File search was previously available in the Assistants API; it is now available in the stateless Responses API and benefits from new features (e.g., metadata filtering)._\n",
 "\n",
 "### Set up"
 ]
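
As a reference for the setup this cell describes, here is a minimal sketch of creating a vector store and uploading a PDF with the `openai` Python SDK. The store name and file path are illustrative placeholders, and on older SDK versions these methods live under `client.beta.vector_stores` rather than `client.vector_stores`.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Create a vector store to hold the blog-post PDFs
vector_store = client.vector_stores.create(name="openai_blog_store")

# Upload one PDF and poll until it is parsed, chunked, and embedded
with open("openai_blog_post.pdf", "rb") as f:  # placeholder file path
    client.vector_stores.files.upload_and_poll(
        vector_store_id=vector_store.id,
        file=f,
    )

print(vector_store.id)  # keep this ID for the retrieval step
```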
@@ -213,7 +213,7 @@
 "\n",
 "### Integrating search results with LLM in a single API call\n",
 "\n",
-"However instead of querying the vector store and then passing the data into the Response or Chat Completion API call, an even more convenient way to use this search results in an LLM query would be to plug use file_search tool as part of OpenAI Responses API."
+"However, instead of querying the vector store and then passing the data into a Responses or Chat Completions API call, an even more convenient way to use these search results in an LLM query is to plug the file_search tool into the OpenAI Responses API."
 ]
 },
 {
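
A minimal sketch of the single-call pattern this cell describes: the hosted `file_search` tool retrieves relevant chunks from the vector store and the model answers with that context in one Responses API request. The model, question, and vector store ID below are illustrative placeholders, not values taken from the notebook.

```python
from openai import OpenAI

client = OpenAI()

VECTOR_STORE_ID = "vs_your_vector_store_id"  # placeholder: the ID from the setup step

# One call: file_search retrieves relevant chunks from the vector store,
# and the model generates an answer using that retrieved context
response = client.responses.create(
    model="gpt-4o-mini",  # illustrative model choice
    input="What is Deep Research, and when was it announced?",
    tools=[{
        "type": "file_search",
        "vector_store_ids": [VECTOR_STORE_ID],
    }],
)

print(response.output_text)  # answer grounded in the retrieved chunks
```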
@@ -660,7 +660,7 @@
 "- Understand how chunks of text are retrieved, ranked and used as part of the Responses API\n",
 "- Measure accuracy, precision, recall, MRR and MAP on the dataset of evaluations previously generated\n",
 "\n",
-"By using File search with Responses, you can simplify RAG architecture and leverage this in a single API call using the new Responses API. File storage, embeddings, & retrieval all integrated in one tool!\n"
+"By using file search with the Responses API, you can simplify your RAG architecture and handle retrieval in a single API call. File storage, embeddings, and retrieval are all integrated in one tool!\n"
 ]
 }
],
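
The next-steps bullets above mention MRR and MAP. Since the notebook's own evaluation code is not shown in this diff, here is a small self-contained sketch of those metrics under their standard definitions; MAP is simply the mean of average precision across all evaluation queries.

```python
def mrr(ranked_ids, relevant_id):
    """Reciprocal rank of the first relevant result (0 if absent)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

def average_precision(ranked_ids, relevant_ids):
    """Mean of precision@k over the ranks where a relevant doc appears."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

# Toy example: doc "b" is the expected source for this question
print(mrr(["a", "b", "c"], "b"))                       # 0.5
print(average_precision(["a", "b", "c"], {"b", "c"}))  # (1/2 + 2/3) / 2 ~= 0.583
```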