NDJSON/CSV methods to add and update documents #215

Closed
curquiza opened this issue Oct 19, 2021 · 4 comments · Fixed by #408
Labels: enhancement, good first issue

Comments

@curquiza (Member)

⚠️ This issue is generated, which means the naming might differ in this package (e.g. add_documents_json instead of addDocumentsJson). Keep the already existing naming of this package to stay idiomatic with the language and this repository.

📣 We strongly recommend opening multiple PRs to address all the points of this issue

MeiliSearch v0.23.0 introduces two changes:

  • new valid formats to push data files, in addition to the JSON format: CSV and NDJSON.
  • it enforces the Content-Type header for every route requiring a payload (POST and PUT routes).

Here are the expected changes to completely close the issue:

  • Currently, the SDKs always send Content-Type: application/json with every request. Only the POST and PUT requests should send Content-Type: application/json; the DELETE and GET requests should not.

  • Add the following methods and 🔥 the associated tests 🔥 to ADD the documents. Depending on the format type (csv or ndjson), the SDK should send Content-Type: application/x-ndjson or Content-Type: text/csv.

    • addDocumentsJson(string docs, string primaryKey)
    • addDocumentsCsv(string docs, string primaryKey)
    • addDocumentsCsvInBatches(string docs, int batchSize, string primaryKey)
    • addDocumentsNdjson(string docs, string primaryKey)
    • addDocumentsNdjsonInBatches(string docs, int batchSize, string primaryKey)
  • Add the following methods and 🔥 the associated tests 🔥 to UPDATE the documents. Depending on the format type (csv or ndjson), the SDK should send Content-Type: application/x-ndjson or Content-Type: text/csv.

    • updateDocumentsJson(string docs, string primaryKey)
    • updateDocumentsCsv(string docs, string primaryKey)
    • updateDocumentsCsvInBatches(string docs, int batchSize, string primaryKey)
    • updateDocumentsNdjson(string docs, string primaryKey)
    • updateDocumentsNdjsonInBatches(string docs, int batchSize, string primaryKey)

docs are the documents sent as a String
primaryKey is the primary key of the index
batchSize is the size of each batch. Example: you can send 2000 documents as a raw String in docs and ask for a batchSize of 1000, so your documents will be sent to MeiliSearch in two batches (see the sketch below).
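
For illustration, here is a minimal Go sketch of the batchSize semantics described above. The splitNdjsonBatches helper is hypothetical (not part of any SDK); it only shows how a raw NDJSON string could be chunked client-side before each chunk is sent as one request.

```go
package main

import (
	"fmt"
	"strings"
)

// splitNdjsonBatches chunks a raw NDJSON payload into slices of at most
// batchSize lines; each chunk would be sent to MeiliSearch as one request.
// Hypothetical helper for illustration only.
func splitNdjsonBatches(docs string, batchSize int) []string {
	lines := strings.Split(strings.TrimSpace(docs), "\n")
	var batches []string
	for start := 0; start < len(lines); start += batchSize {
		end := start + batchSize
		if end > len(lines) {
			end = len(lines)
		}
		batches = append(batches, strings.Join(lines[start:end], "\n"))
	}
	return batches
}

func main() {
	docs := `{"id":1,"label":"hoodie"}
{"id":2,"label":"t-shirt"}
{"id":3,"label":"cap"}`
	// With batchSize 2, three documents produce two batches (2 + 1).
	for i, batch := range splitNdjsonBatches(docs, 2) {
		fmt.Printf("batch %d:\n%s\n", i+1, batch)
	}
}
```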

Examples of PRs:


Related to: meilisearch/integration-guides#146

If this issue is partially/completely implemented, feel free to let us know.

@penthaapatel (Contributor)

> Currently, the SDKs always send Content-Type: application/json to every request. Only the POST and PUT requests should send the Content-Type: application/json and not the DELETE and GET ones.

Hi @curquiza! I just created a PR to fix the first subtask of this issue.

bors bot added a commit that referenced this issue Oct 28, 2021
227: Set `Content-Type: application/json` for POST and PUT requests r=alallema a=penthaapatel

Fixes subtask 1 as described in #215

Summary:

1. Added new string field `contentType` in `internalRequest` struct
2. Added new string constants for different possible `Content-Type` headers
3. POST and PUT requests now have `Content-Type: application/json` header
4. GET and DELETE requests do not have `Content-Type` header

Co-authored-by: Penthaa Patel <penthaapatel@gmail.com>
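
As a rough illustration of the rule in this PR summary, here is a self-contained Go sketch. The newRequest helper and the constant names are assumptions for this example, not the SDK's actual internalRequest implementation.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Content-Type values mirroring the constants described in the PR summary;
// the exact identifiers in the SDK may differ.
const (
	contentTypeJSON   = "application/json"
	contentTypeNDJSON = "application/x-ndjson"
	contentTypeCSV    = "text/csv"
)

// newRequest is a hypothetical helper showing the rule from the PR:
// only requests that carry a payload (POST and PUT) get a Content-Type
// header; GET and DELETE requests get none.
func newRequest(method, url string, body []byte, contentType string) (*http.Request, error) {
	req, err := http.NewRequest(method, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	if method == http.MethodPost || method == http.MethodPut {
		req.Header.Set("Content-Type", contentType)
	}
	return req, nil
}

func main() {
	post, _ := newRequest(http.MethodPost, "http://localhost:7700/indexes/movies/documents", []byte(`[{"id":1}]`), contentTypeJSON)
	get, _ := newRequest(http.MethodGet, "http://localhost:7700/indexes/movies/documents", nil, "")
	fmt.Printf("POST Content-Type: %q\n", post.Header.Get("Content-Type")) // "application/json"
	fmt.Printf("GET Content-Type:  %q\n", get.Header.Get("Content-Type"))  // ""
}
```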
@theag3nt (Contributor)

Hi @curquiza!

If I understood correctly, the handling of the batchSize parameter is to be implemented on the client side. In the case of NDJSON this should be trivial, but for CSV it might be a bit more complicated.

A couple of questions regarding this:

  • Is a header mandatory for CSV files?
    I assume it is, because otherwise it might not be possible to tell which value belongs to which field.
  • If CSV header is mandatory, is this requirement documented anywhere? Is it checked on the server side?
    I've only found mentions of the new supported formats in the OpenAPI spec.

If the server already makes sure that all CSVs have a header row, then I think the client can use the first line of the input CSV as the header, split the rest of the lines according to batchSize, and prepend the header to each batch.
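
For what it's worth, here is a minimal Go sketch of that approach, assuming a naive line-based split (it would not handle quoted fields that contain newlines, which is part of why CSV is trickier than NDJSON). The splitCsvBatches helper is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// splitCsvBatches keeps the first line as the CSV header, chunks the
// remaining rows by batchSize, and prepends the header to every chunk.
// Hypothetical helper; a real implementation should use a CSV parser
// to cope with quoted fields containing newlines.
func splitCsvBatches(docs string, batchSize int) []string {
	lines := strings.Split(strings.TrimSpace(docs), "\n")
	if len(lines) < 2 {
		return []string{docs}
	}
	header, rows := lines[0], lines[1:]
	var batches []string
	for start := 0; start < len(rows); start += batchSize {
		end := start + batchSize
		if end > len(rows) {
			end = len(rows)
		}
		batch := append([]string{header}, rows[start:end]...)
		batches = append(batches, strings.Join(batch, "\n"))
	}
	return batches
}

func main() {
	csv := `"id","label"
"1","hoodie"
"2","t-shirt"
"3","cap"`
	// batchSize 2 over three rows yields two batches, each with the header.
	for i, batch := range splitCsvBatches(csv, 2) {
		fmt.Printf("batch %d:\n%s\n\n", i+1, batch)
	}
}
```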

@alallema (Contributor)

alallema commented Dec 8, 2021

Hi @theag3nt,
I'm sorry I didn't answer you sooner; I missed it.

> Is a header mandatory for CSV files?

Yes, every document should be formatted like a CSV file with a CSV header.

> If CSV header is mandatory, is this requirement documented anywhere? Is it checked on the server side?

There are no specific requirements, except that the document/file must be in CSV format.
An example request in curl:

curl \
  -X POST 'http://localhost:7700/indexes/movies/documents' \
  -H 'Content-Type: text/csv' \
  --data-binary '
    "id","label","price:number","colors","description"
    "1","hoodie","19.99","purple","Hey, you will rock at summer time."
  '

> I think the client can use the first line of the input CSV as header, split the rest of the lines according to the batchSize and prepend the header to each batch.

I totally agree with you; that seems to be the best way.

@Azanul (Contributor)

Azanul commented Jan 25, 2023

@curquiza please update the issue: I think point 1 has been completed by #227, and points 2.2, 2.3, 2.4 & 2.5 have been completed by #235.
