SIGSEGV on bulk upsert (10 parallel requests of 10'000 documents each) #61667
Any help is appreciated. If the error is related to the JVM, I'll be happy to report the issue there.
I tried other versions. The result was the same with one and a bit different with the other.
I also tried something else. It produces a warning. So, good news: I upserted more documents, but then got:

```json
{
  "error": {
    "root_cause": [
      {
        "type": "circuit_breaking_exception",
        "reason": "[parent] Data too large, data for [<http_request>] would be [998525888/952.2mb], which is larger than the limit of [986061209/940.3mb], real usage: [990325888/944.4mb], new bytes reserved: [8200000/7.8mb], usages [request=1556536/1.4mb, fielddata=0/0b, in_flight_requests=57400000/54.7mb, accounting=1522440/1.4mb]",
        "bytes_wanted": 998525888,
        "bytes_limit": 986061209,
        "durability": "TRANSIENT"
      }
    ],
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [998525888/952.2mb], which is larger than the limit of [986061209/940.3mb], real usage: [990325888/944.4mb], new bytes reserved: [8200000/7.8mb], usages [request=1556536/1.4mb, fielddata=0/0b, in_flight_requests=57400000/54.7mb, accounting=1522440/1.4mb]",
    "bytes_wanted": 998525888,
    "bytes_limit": 986061209,
    "durability": "TRANSIENT"
  },
  "status": 429
}
```

I don't really understand where this error originates. Am I misusing Elasticsearch somehow? What is the legitimate way to upsert 10 million documents?
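The `circuit_breaking_exception` above is marked `"durability": "TRANSIENT"` and returned as HTTP 429, which means the request can simply be retried once memory pressure subsides. A minimal sketch of retrying bulk requests with full-jitter exponential backoff; `send_bulk` is a hypothetical helper (not from this issue) that sends one bulk request and returns the HTTP status code:

```python
import random
import time

def backoff_delay(attempt, base=1.0, cap=60.0):
    # Full-jitter exponential backoff: uniform in [0, min(cap, base * 2**attempt)].
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def send_with_retry(send_bulk, payload, max_attempts=5):
    # Retry while the cluster answers 429 (parent circuit breaker tripped).
    # The breaker error is TRANSIENT, so backing off and retrying is safe.
    status = None
    for attempt in range(max_attempts):
        status = send_bulk(payload)
        if status != 429:
            return status
        time.sleep(backoff_delay(attempt))
    return status
```

Smaller chunks plus this kind of backoff keeps the in-flight request memory (the `in_flight_requests` usage in the error) below the parent breaker limit.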
hi @neseleznev The SIGSEGV does not necessarily look like a JVM bug, but rather like an issue with your system (faulty RAM looks like the most likely culprit here). I don't think there's anything we can do here, and diagnosing this and/or helping with correctly configuring/sizing the circuit breaker is more of a user question, I'm afraid. There's an active community in the forum that should be able to help get an answer to your question. As such, I hope you don't mind that I close this.
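On the sizing point: the parent breaker limit in the error (940.3mb) is 95% of roughly a 1 GB heap, which matches the 7.x default `indices.breaker.total.limit` of 95% when real-memory circuit breaking is enabled, so the container was most likely running with about 1 GB of heap. A sketch (not from this issue) of running the same image with a larger, fixed heap; the breaker ceiling scales with `-Xmx`:

```shell
# Sketch: the 4 GB heap and single-node discovery are assumptions, not the
# reporter's setup. The parent breaker limit (95% of heap by default in 7.x)
# rises with -Xmx, lifting the 940.3mb ceiling seen in the error above.
docker run \
  -e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
  -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.9.0
```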
**Elasticsearch version** (`bin/elasticsearch --version`): 7.9.0 and 7.8.1 (docker images `docker.elastic.co/elasticsearch/elasticsearch:7.9.0` and `...:7.8.1` respectively)

**Plugins installed**: []

**JVM version** (`java -version`): 14.0.1+7, provided with both docker images

**OS version** (`uname -a` if on a Unix-like system): Linux ... 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

**Description of the problem including expected versus actual behavior**:
While sending bulk upsert requests, a fatal error occurs and the container dies.
Steps to reproduce:
I was uploading 10_000_000 documents in chunks of 10_000, in 10 parallel threads (1000 chunks overall).

CPU load was around 800% the whole time, which is expected, since 10 threads would ideally consume up to 1000% of CPU.
Suddenly, after ~6 million documents were inserted, I faced JVM errors and the container stopped.
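The upload pattern described above can be sketched with the Elasticsearch `_bulk` API, using `update` actions with `doc_as_upsert` (a documented bulk-upsert form); the index name, ID scheme, and chunk size here are illustrative assumptions, not taken from the report:

```python
import json

def bulk_upsert_body(index, docs):
    # NDJSON body for the _bulk API: one `update` action line plus one
    # source line with `doc_as_upsert` per (doc_id, doc) pair.
    lines = []
    for doc_id, doc in docs:
        lines.append(json.dumps({"update": {"_index": index, "_id": doc_id}}))
        lines.append(json.dumps({"doc": doc, "doc_as_upsert": True}))
    return "\n".join(lines) + "\n"  # a _bulk body must end with a newline

def chunked(items, size=10_000):
    # Split the document list into bulk-sized chunks (1000 chunks for 10M docs).
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Each chunk's body would then be POSTed to `/_bulk` with `Content-Type: application/x-ndjson`, with 10 worker threads pulling chunks from a shared queue.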

Logs:

With 7.8.1 I faced:
With 7.9.0 the output was a bit different. First there were logs about GC degradation, I suppose:
but then it failed:

Same in text:
After a couple of attempts, I successfully inserted all 10_000_000 documents.