Updated FAQ so that it reflects PR 835 (Azure-Samples#868)
* Updated FAQ so that it reflects PR 835

* Update README.md

* Update README.md

---------

Co-authored-by: Pamela Fox <pamela.fox@gmail.com>
MaciejLitwiniec and pamelafox authored Oct 26, 2023
1 parent d7bbf9f commit 94be632
Showing 1 changed file with 2 additions and 1 deletion.
README.md: 3 changes (2 additions & 1 deletion)
@@ -355,7 +355,8 @@ Chunking allows us to limit the amount of information we send to OpenAI due to token limits.
 <details><a id="ingestion-more-pdfs"></a>
 <summary>How can we upload additional PDFs without redeploying everything?</summary>

-To upload more PDFs, put them in the data/ folder and run `./scripts/prepdocs.sh` or `./scripts/prepdocs.ps1`. To avoid reuploading existing docs, move them out of the data folder. You could also implement checks to see whats been uploaded before; our code doesn't yet have such checks.
+To upload more PDFs, put them in the data/ folder and run `./scripts/prepdocs.sh` or `./scripts/prepdocs.ps1`.
+A [recent change](https://github.com/Azure-Samples/azure-search-openai-demo/pull/835) added checks to see what's been uploaded before. The prepdocs script now writes an .md5 file with an MD5 hash of each file that gets uploaded. Whenever the prepdocs script is re-run, that hash is checked against the current hash and the file is skipped if it hasn't changed.
 </details>

 <details><a id="compare-samples"></a>
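The paragraph added in this diff describes the skip logic only in prose. Below is a minimal Python sketch of that MD5 idea, assuming a `data/` folder of PDFs and a sidecar `<name>.pdf.md5` file next to each document; the `file_has_changed` helper and the file layout are illustrative assumptions, not the actual prepdocs implementation from PR 835.

```python
import hashlib
from pathlib import Path


def file_has_changed(path: Path) -> bool:
    """Return True if the file is new or modified since the last recorded run.

    Hypothetical sketch: a sidecar "<name>.md5" file stores the hash from the
    previous run; a matching hash means the file can be skipped.
    """
    current_hash = hashlib.md5(path.read_bytes()).hexdigest()
    hash_path = path.with_name(path.name + ".md5")

    if hash_path.exists() and hash_path.read_text().strip() == current_hash:
        return False  # hash unchanged since the last run, so skip this file

    hash_path.write_text(current_hash)  # record the hash for the next run
    return True


if __name__ == "__main__":
    for pdf in sorted(Path("data").glob("*.pdf")):
        if file_has_changed(pdf):
            print(f"uploading {pdf}")  # real ingestion would happen here
        else:
            print(f"skipping {pdf} (unchanged)")
```

Keeping the hash in a local sidecar file means a re-run can decide what to skip without querying the search index at all.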
