[RFC] Parallel & Batch Ingestion #12457

Closed
chishui opened this issue Feb 26, 2024 · 54 comments
Labels: enhancement, ingest-pipeline, RFC, Roadmap:Cost/Performance/Scale, v2.15.0

Comments

@chishui
Contributor

chishui commented Feb 26, 2024

Is your feature request related to a problem? Please describe

Problem Statements

Today, users can use the bulk API to ingest multiple documents in a single request. All documents in the request are handled by one ingest node, and on that node, if an ingest pipeline is configured, documents are processed by the pipeline one at a time, in sequential order (ref). An ingest pipeline is composed of a collection of processors, and the processor is the computing unit of a pipeline. Most processors are fairly lightweight, such as append, uppercase, and lowercase, so processing documents one after another versus in parallel makes no observable difference. But for time-consuming processors such as the neural-search processors, which by their nature require more time to compute, running them in parallel could save users valuable ingest time. Apart from ingestion time, processors like neural search can also benefit from processing documents in batches, since batch APIs reduce the number of requests to remote ML services and help avoid hitting rate-limit restrictions. (Feature request: opensearch-project/ml-commons#1840, rate limit example from OpenAI: https://platform.openai.com/docs/guides/rate-limits)

Due to the lack of parallel ingestion and batch ingestion capabilities in the ingest flow, we propose the solution below to address them.

Describe the solution you'd like

Proposed Features

1. Batch Ingestion

An ingest pipeline is constructed from a list of processors, and a single document flows through each processor one by one before it is stored in an index. Currently, both pipelines and processors can only handle one document at a time, and even with the bulk API, documents are iterated over and handled in sequential order. As shown in figure 1, to ingest doc1, it first flows through ingest pipeline 1, then through pipeline 2. Only then does the next document go through both pipelines.

[Figure 1: documents flow through ingest pipeline 1 and then pipeline 2, one document at a time]

To support batch processing of documents, we'll add a batchExecute API to ingest pipelines and processors that takes multiple documents as input. We will provide a default implementation in the Processor interface that iteratively calls the existing execute API to process documents one by one, so most processors don't need to change anything. Only when there is a reason to batch-process documents (e.g. the text embedding processor) would a processor supply its own implementation; otherwise, even when receiving documents together, it defaults to processing them one by one. A rough sketch of the idea is shown below.
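
The following is an illustrative sketch only, not the actual OpenSearch interface: the real Processor API operates on IngestDocument and is callback-based, so the final signatures (including error handling) may differ; the generic type D stands in for the document type.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the proposed batchExecute default: loop over the existing
// per-document execute API so that processors which don't care about batching need
// no changes, while batching-aware processors (e.g. text_embedding) override it.
interface BatchableProcessor<D> {

    // Existing single-document API (simplified).
    D execute(D document) throws Exception;

    // Proposed batch API with a looping default implementation.
    default List<D> batchExecute(List<D> documents) throws Exception {
        List<D> results = new ArrayList<>(documents.size());
        for (D document : documents) {
            results.add(execute(document));
        }
        return results;
    }
}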

To batch process documents, users need to use the bulk API. We'll add two optional parameters to the bulk API so that users can enable the batch feature and set the batch size. Based on the maximum_batch_size value, documents are split into batches.

Since different documents in a bulk API request could be ingested into different indexes, and indexes could use the same pipelines in different orders (e.g. index “movies” uses pipeline P1 as its default pipeline and P2 as its final pipeline, while index “musics” uses P2 as the default pipeline and P1 as the final pipeline), we would batch documents at the index level to avoid the complexity of cross-index batching (topology sorting). A minimal sketch of this grouping is shown below.
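
Purely for illustration (the names Doc and calculateBatches are hypothetical, not the actual IngestService code), index-level batching could work roughly like this: group documents by target index first, then split each group into slices of at most maximum_batch_size.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of index-level batching: group by target index, then slice.
final class BatchingSketch {
    record Doc(String index, Object source) {}   // stand-in for a bulk document request

    static List<List<Doc>> calculateBatches(List<Doc> docs, int maxBatchSize) {
        Map<String, List<Doc>> byIndex = new LinkedHashMap<>();
        for (Doc doc : docs) {
            byIndex.computeIfAbsent(doc.index(), k -> new ArrayList<>()).add(doc);
        }
        List<List<Doc>> batches = new ArrayList<>();
        for (List<Doc> group : byIndex.values()) {
            for (int from = 0; from < group.size(); from += maxBatchSize) {
                batches.add(group.subList(from, Math.min(from + maxBatchSize, group.size())));
            }
        }
        return batches;   // cross-index topology sorting is never needed
    }
}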

2. Parallel Ingestion

Apart from batch ingestion, we also propose parallel ingestion to accompany batch ingestion and further boost ingestion performance. When a user enables parallel ingestion, documents from the bulk API are split into batches based on the batch size, and the batches are then processed in parallel on threads managed by a thread pool. By limiting the maximum concurrency of parallel ingestion, the thread pool protects host resources from being exhausted by batch ingestion threads.

[Figure 2: batches of documents from a bulk request processed in parallel on thread pool threads]

Ingest flow logic change

The current ingestion flow for documents is shown in the pseudo code below:

for (document in documents) {  
    for (pipeline in pipelines) {  
        for (processor in pipeline.processors) {  
            document = processor.execute(document)  
        }  
    }  
}

We'll change the flow to the logic shown below when the pipeline has the batch option enabled.

if (enabledBatch) {
    batches = calculateBatches(documents);
    for (batch in batches) {
        for (pipeline in pipelines) {
            for (processor in pipeline.processors) {
                batch = processor.batchExecute(batch)
            }
        }
    }
} else if (enabledParallelBatch) {
    batches = calculateBatches(documents);
    for (batch in batches) {
        threadpool.execute(() -> {
            for (pipeline in pipelines) {
                for (processor in pipeline.processors) {
                    batch = processor.batchExecute(batch)
                }
            }
        });
    }
} else {
    // fall back to existing ingestion logic
}
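
For the parallel branch, a hedged sketch of what the batch dispatch could look like is shown below, with a plain Java ExecutorService standing in for OpenSearch's managed thread pool; the pipelineChain parameter is an illustrative stand-in for running the default and final pipelines' processors in order, not an existing API.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.UnaryOperator;

// Sketch only: each batch runs through the whole pipeline chain on a pooled thread,
// and the latch ensures every batch has finished before the processed documents are
// handed off to the shards for indexing.
final class ParallelBatchSketch {
    static <D> void processBatchesInParallel(List<List<D>> batches,
                                             UnaryOperator<List<D>> pipelineChain,
                                             int maxConcurrency) throws InterruptedException {
        ExecutorService threadPool = Executors.newFixedThreadPool(maxConcurrency);
        CountDownLatch done = new CountDownLatch(batches.size());
        for (List<D> batch : batches) {
            threadPool.execute(() -> {
                try {
                    pipelineChain.apply(batch);   // all processors, in pipeline order
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();        // only then are documents dispatched to shards
        threadPool.shutdown();
    }
}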

Update to Bulk API

We propose new parameters for the bulk API; all of them are optional.

| Parameter | Type | Description |
| --- | --- | --- |
| batch_ingestion_option | String | Configures whether batch ingestion is enabled. It has three options: none, enable, and parallel. By default it's none. When set to enable, batch ingestion is enabled and batches are processed in sequential order. When set to parallel, batch ingestion is enabled and batches are processed in parallel. |
| maximum_batch_size | Integer | The batch size in documents. Only takes effect when the batch ingestion option is set to enable or parallel. It's 1 by default. |
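
For illustration, a bulk request using the proposed parameters could look like the example below (the field values are made up, and the parameter names follow this proposal; as noted later in this thread, the implementation that was eventually merged kept only a single batch_size parameter):

POST _bulk?batch_ingestion_option=parallel&maximum_batch_size=10
{ "index": { "_index": "movies" } }
{ "title": "some movie", "plot": "text to be embedded" }
{ "index": { "_index": "musics" } }
{ "title": "some song", "lyrics": "text to be embedded" }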

3. Split and Redistribute Bulk API

Users tend to use the bulk API to ingest many documents, which can sometimes be very time consuming. To achieve a lower ingestion time, they currently have to use multiple clients making multiple bulk requests with smaller document counts so that the requests get distributed to different ingest nodes. To offload this burden from the user side, we can support the split-and-redistribute work on the server side and help distribute the ingest load more evenly.
Note: although brought up here, we think it's better to discuss this topic in a separate RFC doc which will be published later.

Related component

Indexing:Performance

Describe alternatives you've considered

No response

Additional context

No response

@chishui chishui added enhancement Enhancement or improvement to existing feature or request untriaged labels Feb 26, 2024
@peternied peternied added RFC Issues requesting major changes Indexing Indexing, Bulk Indexing and anything related to indexing and removed untriaged labels Feb 28, 2024
@peternied
Member

peternied commented Feb 28, 2024

[Triage - attendees 1 2 3 4 5]
@chishui Thanks for creating this RFC, it looks like this could be related to [1] [2]

@chishui
Contributor Author

chishui commented Feb 29, 2024

@peternied Yes, it looks like the proposed feature 3 in this RFC is very similar in idea to the streaming API, especially the coordinator part that load-balances the ingest load. For feature 3, it just tries to reuse the bulk API.

Features 1 and 2 are different from the streaming API, as they focus on parallel and batch ingestion on a single node, which would happen after the streaming API or feature 3.

@msfroh
Collaborator

msfroh commented Feb 29, 2024

@dbwiddis, @joshpalis -- you may be interested in this, as you've been thinking about parallel execution for search pipelines. For ingest pipelines, the use-case is a little bit more "natural", because we already do parallel execution of _bulk requests (at least across shards).

@chishui, can you confirm where exactly the parallel/batch execution would run? A bulk request is received on one node (that serves as coordinator for the request), then the underlying DocWriteRequests get fanned out to the shards. Does this logic run on the coordinator or on the individual shards? I can never remember where IngestService acts.

@chishui
Contributor Author

chishui commented Mar 1, 2024

@msfroh, the "parallel/batch execution" would be run on the ingest pipeline side. The DocWriteRequests are first processed by ingest pipeline and its processors on a single ingest node, then the processed documents are fanned out to shards to be indexed. To answer your question, the logic would be run on the coordinator.

@chishui
Contributor Author

chishui commented Mar 5, 2024

Additional information about parallel ingestion:

Performance:

Lightweight processors - no improvement

We benchmarked some lightweight processors (lowercase + append) with the current solution and the parallelized batch solution, and we saw no improvement in either latency or throughput. This aligns with our expectation: these processors are already very fast, so parallelization wouldn't help and could add some overhead.

ML processors - already async

ML processors are the processors doing the heavy-lifting work, but they actually run the predict logic in a thread (code), which already makes the ingestion of that document async.

Reasons to have parallel ingestion

  1. A general solution: the parallel ingestion proposed here parallelizes at the document level, so any time-consuming processor, whether existing today or introduced later, can benefit from the parallelization directly without making any changes.
  2. Maximum concurrency: today, if a processor makes its logic async, only that processor and the following processors run in a separate thread; all previous processors still run synchronously in the same thread. Parallel ingestion can make the whole ingestion flow of a document parallel to achieve maximum concurrency.
  3. Gives users control: it provides users the flexibility to control the concurrency level through the batch size, or even to disable parallel ingestion through a request parameter.
  4. Less development effort and resource usage for other processors that want concurrency: today, if a processor wants concurrency, it has to implement its own concurrency logic and may also need to create its own thread pool. This is unnecessary, since for a single document the processors have to run one by one anyway, which wastes resources and adds thread-switching overhead.

Reasons not to have parallel ingestion

  1. There is no urgent need or immediate gain.

@model-collapse

Scenario for a batch processor in neural-search document ingestion:
Since OpenSearch 2.7, ml-commons has shipped its remote connector, allowing OpenSearch to connect to remote inference endpoints. However, while ml-commons can take a list of strings as input, it only supports invoking the inference API on each input text one by one. The pipeline looks like this:
[Figure: the inference API is invoked once per input text]
Intuitively, to use the batch API of many third-party LLM inference providers such as OpenAI and Cohere, we could let ml-commons pass the list of strings through as "a batch" to the API, like this:
[Figure: the list of strings is passed through to the API as a single batch]
However, this approach cannot fully leverage GPU computation power, for two reasons: 1) the batch size is tied to how many fields the processor picks, whereas each API has its own suggested batch size such as 32 or 128; 2) in deep learning for NLP, texts in a batch should have similar lengths to obtain the highest GPU efficiency, but texts from different fields will typically have diverse lengths.
The best option is to implement a "batched" processor and recompose the "batches" by collecting texts from the same field, as sketched after the following figure:
[Figure: the batched processor regroups texts by field before calling the inference API]
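
Purely as an illustration of the regrouping step (the method and type names below are hypothetical, not ml-commons or neural-search APIs), collecting texts per field could look like:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: regroup texts by source field across documents, so each
// inference batch contains texts of similar length; the caller can then further
// split each group into the model's preferred batch size (e.g. 32 or 128).
final class FieldBatchingSketch {
    static Map<String, List<String>> regroupByField(List<Map<String, String>> docs) {
        Map<String, List<String>> byField = new LinkedHashMap<>();
        for (Map<String, String> doc : docs) {
            doc.forEach((field, text) ->
                byField.computeIfAbsent(field, k -> new ArrayList<>()).add(text));
        }
        return byField;   // e.g. {"title": [t1, t2, ...], "description": [d1, d2, ...]}
    }
}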

Alternative Approach
There is an alternative approach called "dynamic batching", which holds flushable queues inside ml-commons. Each queue gathers text inputs of similar lengths from the requests to ml-commons; when a timeout fires or the queue is full, the queue is flushed and the batch API of the inference service is invoked. The con of this approach is that ml-commons would have a fairly large memory footprint to hold the queues, and a timeout-based queue implementation is riskier (deadlocks, blocking calls) than batched processors.

Why do we need the batch API?
The computation model of GPUs is block-wise SIMD (single instruction, multiple data). In AI, running inference on input tensors stacked together (as a batch) effectively increases GPU utilization, making it a more economical choice than a single-request API.

@gaobinlong
Collaborator

@reta , could you also help to take a look at this RFC, thanks!

@reta
Collaborator

reta commented Mar 12, 2024

Thanks @gaobinlong

@reta , could you also help to take a look at this RFC, thanks!

@msfroh @model-collapse @chishui I think the idea of enhancing ingest processors with batch processing is sound in general, but it may have unintended consequences due to the complexity of the bulk APIs in particular:

  • for example, bulk supports scripts and upserts, and combinations of those ... changing the ingestion sequence could lead to very surprising results (by and large, the bulk API has to provide some guarantees on document processing)
  • also, picking the parallelism and batching becomes a nightmare (in my opinion); just today, picking the right batch size for bulk is very difficult, and adding yet more parallelization / internal batching would make it much harder

Making the ingestion API streaming based (apologies again for bribing for #3000) is fundamentally a different approach to ingestion - we would be able to vary the ingestion based on how fast the documents could be ingested at this moment of time, without introducing the complexity of batch / parallelism management.

@nknize I think you might be eager to chime in here :)

@model-collapse


Thanks for the comment. For machine-learning inference, using a batched inference API significantly increases GPU utilization and reduces ingestion time, so batching is very important. You pointed out that "picking the right batch for bulk is very difficult, but adding yet more parallelization / internal batching would make it much harder". Can you elaborate on that and give your suggestions on how to make ingestion faster?

@chishui
Contributor Author

chishui commented Mar 13, 2024

@reta thanks for the feedback

bulk support scripts and upserts, and combination of those ... changing the ingestion sequence could lead to very surprising results

The proposal only targets the ingest pipeline and its processors; it won't touch the indexing part. Even when documents are processed in a batched manner, the following is still ensured:

  1. A single document is processed by processors sequentially, in the same order as the processors are defined in the pipeline.
  2. Only when all documents in a bulk request have been processed by the ingest pipeline are they dispatched to be indexed on shards, which is the same as the current logic.

Whether the action is index, update, upsert, or script, it is processed by the ingest pipeline in the same way. I don't see how the proposal would cause "changing the ingestion sequence"; please let me know if I'm missing a piece of the puzzle.

@chishui
Contributor Author

chishui commented Mar 13, 2024

Due to the aforementioned reasons about "parallel ingestion", namely that we won't see an immediate gain from delivering the feature, we have decided to deprioritize the "parallel ingestion" part of this RFC and mainly focus on "batch ingestion".

@reta
Collaborator

reta commented Mar 13, 2024

I don't see the proposal will cause "changing the ingestion sequence", please let me know if I miss a piece of the puzzle.

@chishui The parallelization (which is mentioned in this proposal) naturally changes the order in which documents are ingested, does that make sense? I think your last comment is a reflection of that, thank you.

Can you elaborate more on that and give your suggestions on how to make ingestion faster?

@model-collapse the problem with batching (at least how it is implemented currently in OS and what we've seen so far with the bulk API) is that choosing the right batch size is difficult, taking into account that there are circuit breakers in place that try to estimate heap usage etc. (as of the moment of ingestion) and may reject the request sporadically.

@chishui
Contributor Author

chishui commented Mar 13, 2024

@reta in the ingest flow, when documents are processed by an ingest pipeline, could one document depend on another? Even today, the text_embedding and sparse_encoding processors run their inference logic in a thread, which already makes the ingestion of those documents run in parallel, right? https://github.com/opensearch-project/ml-commons/blob/020207ecd6322fed424d5d54c897be74623db103/plugin/src/main/java/org/opensearch/ml/task/MLPredictTaskRunner.java#L194

@reta
Collaborator

reta commented Mar 13, 2024

@reta in ingest flow when documents are processed by ingest pipeline, could one document depend on another?

@chishui yes, in general documents could depend on each other (just think about an example of the documents that are ingested out of any CDC or message broker, where the documents are being constructed as a sequence of changes).

Even for today, text_embedding and sparse_encoding processors have their inference logic run in a thread which makes the document ingestion run in parallel, right? https://github.com/opensearch-project/ml-commons/blob/020207ecd6322fed424d5d54c897be74623db103/plugin/src/main/java/org/opensearch/ml/task/MLPredictTaskRunner.java#L194

This is purely plugin specific logic

@gaobinlong
Collaborator

@chishui yes, in general documents could depend on each other (just think about an example of the documents that are ingested out of any CDC or message broker, where the documents are being constructed as a sequence of changes).

In my understanding, in terms of pipeline execution, each document in a bulk request runs independently; no ingest processor can access other in-flight documents in the same bulk request, so during the execution of pipelines a document probably cannot depend on another. Subsequently, for the indexing part (calling the Lucene API to write), we have the write thread_pool and each document is processed in parallel, so the indexing order within a bulk cannot be guaranteed; the client side needs to ensure the indexing order. @reta, correct me if something is wrong, thank you!

@gaobinlong
Collaborator

gaobinlong commented Mar 14, 2024

I think pipeline execution runs before the indexing process: first we use a single transport thread to execute the pipelines for all the documents in a bulk request, and then we use the write thread_pool to process the newly generated documents in parallel, so it seems that when executing pipelines for the documents, the execution order doesn't matter.

@reta
Collaborator

reta commented Mar 14, 2024

Thanks @gaobinlong

In my understanding, in terms of the execution of pipeline, each document in a bulk runs independently, no ingest processor can access other in-flight documents in the same bulk request, so in the process of executing pipelines, maybe a document cannot depend on another?

The documents could logically depend on each other (I am not referring to any sharing that may happen in an ingest processor). Since we are talking about bulk ingestion, where documents could be indexed / updated / deleted, we certainly don't want the deletes to be "visible" before the documents are indexed.

I think executing pipelines run before the indexing process, firstly, we use a single transport thread to execute pipelines for all the documents in a bulk request, and then use the write thread_pool to process the new generated documents in parallel, so it seems that when executing pipelines for the documents, the execution order doesn't matter.

This part is not clear to me: AFAIK we offload processing of bulk requests (batches) to a thread pool, not individual documents. Could you please point out where we parallelize the ingestion of individual documents in the batch? Thank you

@gaobinlong
Collaborator

The documents could logically depend on each other (I am not referring to any sharing that may happen in ingest processor). Since we are talking about bulk ingestion, where document could be indexed / updated / deleted, we certainly don't want to the deletes to be "visible" before documents are indexed.

Yeah, you're correct, but this RFC only focuses on the execution of the ingest pipeline, which happens only on the coordinating node - just the pre-processing part, not the indexing part. The indexing operations will not happen before the execution of the ingest pipeline has completed for all the documents in a bulk request.

This part is not clear to me: AFAIK we offload processing of bulk requests (batches) to thread pool, not individual documents. Could you please point out where we parallelize the ingestion of the individual documents in the batch? Thank you

After the ingest pipeline has executed for all documents in a bulk request, the coordinating node groups these documents by shard and sends them to the different shards, and each shard processes its documents in parallel, so at least at the shard level we process the documents of a bulk request in parallel. But this RFC will not touch the processing logic within each shard, which handles the create/update/delete operations for the same document in order, so it's not harmful.

@model-collapse

@reta What is your estimation of where the circuit breaking will happen? If you mean it will happen inside the batch processor's own processing, that could be, because it is impossible to estimate how much memory its code will consume. Therefore, we need to let users configure the batch_size in the bulk API.

@reta
Collaborator

reta commented Mar 15, 2024

@reta What is your estimation where the circuit breaking will happen?

@model-collapse there are no estimates one could make upfront; this is purely an operational issue (it basically depends on what is going on at the moment)

Therefore, we need to let the users to configure the batch_size in the bulk_api.

Due to the previous comment, users have difficulties with that: the same batch_size may work now and may not 10 minutes from now (if the cluster is under duress). The issue referred to there has all the details.

@chishui
Contributor Author

chishui commented Mar 21, 2024

Benchmark Results on Batch ingestion with Neural Search Processors

We implemented a PoC of batch ingestion locally and enabled sending batches of documents to remote ML servers. We used opensearch-benchmark to benchmark both the batch-enabled and batch-disabled configurations against different ML servers (SageMaker, Cohere, OpenAI); here are the benchmark results.


SageMaker

Environment Setup

  • SageMaker host type: g5.xlarge
  • Processor: Sparse Encoding
  • Benchmark Setup
    • Bulk size: 100
    • client: 1
| Metrics | no batch | batch (batch size=10) |
| --- | --- | --- |
| Min Throughput (docs/s) | 65.51 | 260.1 |
| Mean Throughput (docs/s) | 93.96 | 406.12 |
| Median Throughput (docs/s) | 93.86 | 408.92 |
| Max Throughput (docs/s) | 99.76 | 443.08 |
| Latency P50 (ms) | 1102.16 | 249.544 |
| Latency P90 (ms) | 1207.51 | 279.467 |
| Latency P99 (ms) | 1297.8 | 318.965 |
| Total Benchmark Time (s) | 3095 | 770 |
| Error Rate (%) | 17.10% [1] | 0 |

Cohere

Environment Setup

  • Processor: text embedding
  • Benchmark Setup
    • Bulk size: 100
    • client: 1
| Metrics | no batch | batch (batch size=10) |
| --- | --- | --- |
| Min Throughput (docs/s) | 72.06 | 74.87 |
| Mean Throughput (docs/s) | 80.71 | 103.7 |
| Median Throughput (docs/s) | 80.5 | 103.25 |
| Max Throughput (docs/s) | 83.08 | 107.19 |
| Latency P50 (ms) | 1193.86 | 963.476 |
| Latency P90 (ms) | 1318.48 | 1193.37 |
| Latency P99 (ms) | 1926.17 | 1485.22 |
| Total Benchmark Time (s) | 3756 | 2975 |
| Error Rate (%) | 0.47 | 0.03 |

OpenAI

Environment Setup

  • Processor: text embedding
  • model: text-embedding-ada-002
  • Benchmark Setup
    • Bulk size: 100
    • client: 1
| Metrics | no batch | batch (batch size=10) |
| --- | --- | --- |
| Min Throughput (docs/s) | 49.25 | 48.62 |
| Mean Throughput (docs/s) | 56.71 | 92.2 |
| Median Throughput (docs/s) | 57.53 | 92.84 |
| Max Throughput (docs/s) | 60.22 | 95.32 |
| Latency P50 (ms) | 1491.42 | 945.633 |
| Latency P90 (ms) | 2114.53 | 1388.97 |
| Latency P99 (ms) | 4269.29 | 2845.97 |
| Total Benchmark Time (s) | 5150 | 3275 |
| Error Rate (%) | 0.17 | 0 |

Results

  1. Batch ingestion has significantly higher throughput and lower latency.
  2. Batch ingestion has a much lower error rate compared to the non-batch runs.

[1]: The errors come from SageMaker 4xx responses, which were also reported in ml-commons issue opensearch-project/ml-commons#2249

@gaobinlong
Collaborator

@andrross @sohami could you experts also help take a look at this RFC? Any comments will be appreciated, thank you!

dblock pushed a commit that referenced this issue Apr 30, 2024
* [PoC][issues-12457] Support Batch Ingestion
* Rewrite batch interface and handle error and metrics
* Remove unnecessary change
* Revert some unnecessary test change
* Keep executeBulkRequest main logic untouched
* Add UT
* Add UT & yamlRest test, fix BulkRequest se/deserialization
* Add missing java docs
* Remove Writable from BatchIngestionOption
* Add more UTs
* Fix spotlesscheck
* Rename parameter name to batch_size
* Add more rest yaml tests & update rest spec
* Remove batch_ingestion_option and only use batch_size
* Throw invalid request exception for invalid batch_size
* Update server/src/main/java/org/opensearch/action/bulk/BulkRequest.java
* Remove version constant

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>
Signed-off-by: Liyun Xiu <chishui2@gmail.com>
Co-authored-by: Andriy Redko <drreta@gmail.com>
dblock pushed a commit that referenced this issue Apr 30, 2024
…13462)

* Support batch ingestion in bulk API (#12457) (#13306) (cherry picked from commit 1219c56)

* Adjust changelog item position to trigger CI

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>
Signed-off-by: Liyun Xiu <chishui2@gmail.com>
Co-authored-by: Andriy Redko <drreta@gmail.com>
finnegancarroll pushed a commit to finnegancarroll/OpenSearch that referenced this issue May 10, 2024
…earch-project#13306)

deshsidd pushed a commit to deshsidd/OpenSearch that referenced this issue May 17, 2024
…earch-project#13306)

@getsaurabh02 getsaurabh02 added the v2.15.0 Issues and PRs related to version 2.15.0 label May 28, 2024
@andrross andrross added the Roadmap:Cost/Performance/Scale Project-wide roadmap label label May 29, 2024
@andrross
Member

andrross commented Jun 3, 2024

I know I'm super late here as this has already been implemented and released in 2.14, but I'm questioning the new batch_size parameter in the bulk request. Why did we push this all the way up for the end user to supply in every single request? Is this something that is expected to vary from one request to the next, and do we think the end user is in the best position to configure this value?

I don't know all the details, but to me it would seem better for the optimal batch size to be chosen by each ingest processor implementation. It could be a setting that a system admin provides to each processor, or each processor could make an automatic decision based on a variety of factors. Regardless, it seems to me that both the system itself or a system administrator is in a better position to choose this batch size than the end user.

Also, the "batch_size" name is quite confusing. The user is already responsible for choosing a batch size, i.e. the number of documents they provide in each bulk request. Now there is a new parameter named "batch_size" that only applies to ingest pipelines, and only actually results in a different behavior if the ingest pipeline they are using happens to support batching.

@chishui
Contributor Author

chishui commented Jun 4, 2024

Why did we push this all the way up for the end user to supply in every single request? Is this something that is expected to vary from one request to the next, and do we think the end user is in the best position to configure this value?

To most users, this parameter isn't needed at all. Only users who use remote ML servers to generate embeddings, who are sensitive to ingestion latency, and who want to optimize ingestion performance will use it. They may want to tune it to achieve better performance.

to me it would seem better for the optimal batch size to be chosen by each ingest processor implementation

Agreed, and that's how we implemented it in the neural-search processors: batch_size controls how many documents are fed into the ingest processors, and the processors determine the actual batch size for their own logic.

I think that as a request parameter it provides per-request flexibility and gives users fine-grained control. I'm not against a system setting serving as the default value when the request parameter is absent. I think they can live together, like the pipeline setting, which is both a bulk request parameter and a default setting on the index. If users want it as a system setting, we can support that then.

@andrross
Member

andrross commented Jun 4, 2024

batch_size controls how many documents are fed into ingest processors, processors determine the actual batch size for their logic.

So there are potentially three levels of batching? 1) the user determines how many documents to put into the _bulk request, 2) the user decides how those batches are sub-batched and fed into the ingest processors, and 3) the ingest processors decide how to sub-batch the sub-batches that were fed to them

On the surface it seems this second level of batching might be overly complex and not necessary.

I think as a request parameter, it provides per-request flexibility to give user fine-grain control.

Do you have a concrete use case where this level of flexibility is needed?

@chishui
Contributor Author

chishui commented Jun 5, 2024

My previous statement might have been inaccurate. Batching at the ingest-processor level is not always required; processors only need it if the batch size matters to their logic. For example, the text_embedding and sparse_encoding processors may connect to a remote model hosted in another service, e.g. OpenAI. These services might have a maximum batch size limitation, so to stay within the limit, the text_embedding and sparse_encoding processors can set a maximum batch size (through a connector setting) and, if they receive more docs than that, cut them into sub-batches.

So from the user's perspective, they don't need to consider all three. They could:

  1. reuse the bulk size they are happy with
  2. set the maximum batch size in the remote model connector to ensure the remote server's batch limit is not exceeded (a one-time thing; if the bulk size is known to always be within the limit, this step can be skipped)
  3. tune the batch_size parameter to get good ingest performance; once it's determined, they can reuse it.

Do you have a concrete use case where this level of flexibility is needed?

  1. Users may want to run benchmarks with different batch_size values to find the one that leads to optimal ingestion performance.
  2. Different data may need to be ingested through different models, and these models could reach their optimal performance with different batch_size values.

@andrross
Member

andrross commented Jun 5, 2024

  1. User may want to run benchmark with different batch_size to get the one which leads to the optimal ingestion performance.

If the goal is to find a single optimal value, then a server-side setting is better because you can set it once and you don't have to modify all your ingestion tools to specify a new parameter. You can still benchmark with different values all you want with a server side setting.

  1. Different data may need to be ingested through different model, and these models could have different optimal performance with different batch_size.

If the optimal size depends on the model, then it seems like it should be configured on the server side when you provide whatever configuration is needed to wire up that model? Again, this avoids the need to modify all your ingestion tooling to provide the optimal parameter.

@chishui
Contributor Author

chishui commented Jun 6, 2024

A request parameter and a system setting don't conflict. If users want a system setting, I don't see a reason why we shouldn't add one.

@andrross
Member

andrross commented Jun 6, 2024

A request parameter or a system setting, they don't conflict.

What I don't understand is the need to do the sub-batching that happens prior to documents being fed into the ingest processors (this is what the request parameter controls). Why is this needed or even desirable? It adds complexity and makes for a bad user experience. Why not just pass the whole batch that came in the _bulk request to each ingest processor, and then let each ingest processor make the decision on how to use those batches? If an ingest processor makes a call to OpenAI and needs to break the batch into sub-batches it can do so (and it must do so even with the current design because there is nothing preventing a user from setting a batch_size value larger than the OpenAI limit).

To be clear, I'm advocating for deprecating and removing the batch_size parameter, simplifying the generic ingest service logic to just pass the entire batch that came in the original bulk request, and then implementing the batching inside the ingest processors (this can be a configuration setting or dynamic logic or anything else as appropriate for each ingest processor).

@chishui
Contributor Author

chishui commented Jun 7, 2024

batch_size also acts as a switch that users have to set explicitly to turn the feature on. Users can be entirely unaware of this parameter, in which case they get the experience they are used to. We don't want it to become a default behavior where users upgrade to a new version and see something different without knowing why.

We could have a cluster setting, a pipeline setting, or a processor setting, but the more fine-grained control we provide to users, the more settings they need to manage. And if a user wants to modify those settings, we don't even have a pipeline or processor update API.

It adds complexity and makes for a bad user experience

I don't think it's a bad user experience; different choices have their pros and cons. Most users won't even need to be aware of this parameter - it's optional. Ingest-latency-sensitive users who utilize remote ML models for inference may want to seek out ways to improve their latency, and they would also want to experiment with different values, maybe different values for different types of documents. These are my assumptions, but I know the parameter gives them the flexibility to experiment. A system setting may also work, but whenever they want to try a different batch_size, they'd need to make an additional request to update the setting - that's even more work than adding a parameter.

To be clear, I'm advocating for deprecating and removing the batch_size parameter

Still, users have all kinds of requirements; the parameter and the setting don't conflict, and one can be a good supplement to the other. It's not a one-way door.

@andrross
Member

andrross commented Jun 7, 2024

For ingest latency sensitive users who utilize remote ML models for inferencing, they may want to seek out ways to improve their latency.

The experience I would like to deliver to these users is that they upgrade to a newer version of OpenSearch and their performance improves because of the increased efficiency offered by allowing the inferencing processor to make batched calls to a remote service. In my proposal this is possible because the batch would be given to the processor, which could have reasonable defaults that often result in near-optimal performance. The solution as implemented, with a batch_size parameter on the _bulk request that defaults to 1, precludes that, because the ingest service will never even give a batch to the processor unless the user changes all their ingestion tooling. That's the bad experience. Am I wrong about that?

@reta
Collaborator

reta commented Jun 7, 2024

@andrross I am sorry for being late to respond to your concerns

Why did we push this all the way up for the end user to supply in every single request? Is this something that is expected to vary from one request to the next, and do we think the end user is in the best position to configure this value?

I think there are 2 sides to it: it is difficult to come up with an optimal batch size because it depends on the cluster state (#12457 (comment)); more specifically, in a busy cluster it could trigger the circuit breaker due to heap usage. In that regard, having an option to set a per-request batch size could help.

The solution as implemented, with a batch_size parameter on the _bulk request that defaults to 1, precludes that, because the ingest service will never even give a batch to the processor unless the user changes all their ingestion tooling. That's the bad experience. Am I wrong about that?

The other side of it is the safe default (which is not necessarily optimal), and I agree that having this option (one that works most, if not all, of the time) looks quite beneficial.

@reta
Collaborator

reta commented Jun 7, 2024

@chishui so we finally looked into the code with @andrross, and there is certainly an issue with this batch_size: it has no relation to _bulk whatsoever, only to the ingest pipelines. It should not be there at all; the implementation change as per #12457 (comment) would apply batching in the right place (ingest pipelines) without polluting the _bulk API.

@andrross
Member

andrross commented Jun 13, 2024

I'm going to close this issue, since the initial feature has been implemented and released. I created a follow up issue for the improvements I'm advocating for, and we can continue the discussion there: #14283

If anyone thinks this needs to stay open please let me know.
