OptoutQueryCache always stacks memory #49668

Closed

dnangelxueaoran opened this issue Nov 28, 2019 · 8 comments

@dnangelxueaoran

Hello, the version we use is 7.4.2.

During use, the memory held by the collection inside the master node's OptOutQueryCache keeps growing until the heap is full, and it is never released. The node does not go offline either.

This has been troubling us for a long time: no matter how we adjust the memory- and query-cache-related settings, we cannot solve the problem.

This was not a problem with version 6.2.1.
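
We watch the growth through the nodes stats API; the endpoint below is the standard one in 7.x, and the numbers are only an illustration of the pattern, not exact values from our cluster:

```
GET _nodes/stats/indices/query_cache

# Relevant part of the response (illustrative values):
# "query_cache": {
#   "memory_size_in_bytes": 1073741824,   <-- keeps climbing on the master node
#   "cache_count": 52310,
#   "evictions": 0                        <-- nothing is ever released
# }
```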

@matriv matriv self-assigned this Nov 28, 2019
@albertzaharovits albertzaharovits added the :Security/Authorization Roles, Privileges, DLS/FLS, RBAC/ABAC label Nov 28, 2019
@elasticmachine (Collaborator)

Pinging @elastic/es-security (:Security/Authorization)

@jimczi jimczi added :Distributed Indexing/CRUD A catch all label for issues around indexing, updating and getting a doc by id. Not search. and removed :Security/Authorization Roles, Privileges, DLS/FLS, RBAC/ABAC labels Nov 28, 2019
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (:Distributed/CRUD)

@matriv matriv removed their assignment Nov 28, 2019
@original-brownbear (Member)

@dnangelxueaoran are you seeing errors in the logs related to creating indices (or other errors)?

We fixed a related problem in #48230, where errors during index creation would leak the OptOutQueryCache, but maybe we missed a spot.

@dnangelxueaoran (Author) commented Nov 29, 2019

@original-brownbear Thank you very much! I am pasting the debug logs for the errors below; please check whether it is the same problem. We see two different error messages, one of which is the same as in #48230.

```
[2019-11-29T09:37:03,594][DEBUG][o.e.a.a.i.t.p.TransportPutIndexTemplateAction] [node-3] failed to put template [cloud-sleuth:span_template]
java.lang.IllegalArgumentException: Setting index.mapper.dynamic was removed after version 6.0.0
	at org.elasticsearch.index.mapper.MapperService.<init>(MapperService.java:165) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.index.IndexService.<init>(IndexService.java:180) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.index.IndexModule.newIndexService(IndexModule.java:411) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.indices.IndicesService.createIndexService(IndicesService.java:550) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:499) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.validateAndAddTemplate(MetaDataIndexTemplateService.java:235) ~[elasticsearch-7.4.2.jar:7.4.2]
```
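
For context on this first error: the log says index.mapper.dynamic was removed after version 6.0.0, so a template carried over from a 6.x cluster that still contains it fails validation when the master tries to apply it. A minimal sketch of the failing call and the fix; the template body here is hypothetical, only the template name and the setting come from the log above:

```
# Fails on 7.x with "Setting index.mapper.dynamic was removed after version 6.0.0"
PUT _template/cloud-sleuth:span_template
{
  "index_patterns": ["span-*"],
  "settings": {
    "index.mapper.dynamic": false
  }
}

# Works once the removed 6.x setting is dropped from the template
PUT _template/cloud-sleuth:span_template
{
  "index_patterns": ["span-*"],
  "settings": {}
}
```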


```
[2019-11-29T09:30:53,680][DEBUG][o.e.a.b.TransportShardBulkAction] [node-4] [metrics-2019-11][1] failed to execute bulk item (index) index {[metrics-2019-11][doc][NWvItG4B2lkBRSrDQXSQ], source[{"@timestamp":"2019-11-29T01:30:53.196Z","name":"http_server_requests","type":"timer","exception":"None","method":"GET","outcome":"SUCCESS","status":"200","uri":"/v1/ns/catalog/instances","count":0,"sum":0.0,"mean":0.0,"max":0.0}]}
java.lang.IllegalArgumentException: Rejecting mapping update to [metrics-2019-11] as the final mapping would have more than 1 type: [_doc, doc]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:272) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:238) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.4.2.jar:7.4.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.4.2.jar:7.4.2]
```
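
And for the second error: the index mapping already has the 7.x type _doc, while the bulk client still targets type doc, and 7.x permits only a single type per index. A hedged sketch of the mismatch and the typeless fix; the document body is shortened from the log, and the index setup is assumed:

```
# Fails: metrics-2019-11 already maps _doc, and this targets a second type, doc
POST metrics-2019-11/doc
{"@timestamp": "2019-11-29T01:30:53.196Z", "name": "http_server_requests", "status": "200"}

# Works on 7.x: the typeless endpoint indexes into _doc
POST metrics-2019-11/_doc
{"@timestamp": "2019-11-29T01:30:53.196Z", "name": "http_server_requests", "status": "200"}
```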

@original-brownbear original-brownbear self-assigned this Nov 29, 2019
@original-brownbear (Member)

It looks like this is the same issue fixed by #48230, but, despite what the labelling on that PR suggests, #48230 wasn't actually backported, which is why this is still failing. Maybe @DaveCTurner can double-check to confirm and perhaps backport (in case there is a reason it wasn't eventually backported to 7.4).

@dnangelxueaoran (Author)

@original-brownbear
@DaveCTurner
Thank you very much! Yes, it's similar to #48230. We first moved to the 7.x line via 7.3.0 and then upgraded to 7.4.2 two weeks ago, and the memory leak is indeed still present.
Please check it. Do we need to wait for the next release?

@dnangelxueaoran (Author)

@original-brownbear Will this be fixed in the next version?

@DaveCTurner (Contributor)

Apologies, you're right, the fix was not backported to the 7.4 branch. I do not expect any more releases from that branch so I think all we can do here is remove the v7.4.2 label. It is addressed in 7.5.0.
