Currently, the job results processor does not do any retrying on bulk indexing failures. On certain classes of failures, we should attempt to retry after some random exponential back-off. This, of course, creates back pressure on the overall results processing and should be considered in the implementation.
Details of how results are processed and indexed can be seen in org.elasticsearch.xpack.ml.job.process.autodetect.output.AutodetectResultProcessor.
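As a rough illustration of the kind of retry loop this would need, here is a minimal sketch of randomised exponential back-off around a bulk attempt. The class name, constants, and the Supplier-based callback are all hypothetical and not the actual AutodetectResultProcessor code; the real change would also need to distinguish retryable failures (e.g. rejected executions) from permanent ones.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

/**
 * Minimal sketch of a retry loop with randomised exponential back-off.
 * Names and constants here are illustrative, not the actual
 * AutodetectResultProcessor implementation.
 */
public final class BulkRetrySketch {

    private static final int MAX_RETRIES = 5;          // hypothetical retry cap
    private static final long BASE_DELAY_MS = 100;     // hypothetical base delay
    private static final long MAX_DELAY_MS = 30_000;   // hypothetical delay ceiling

    /**
     * Runs {@code bulkAttempt} until it reports success or retries are exhausted.
     * The supplier should return true when the bulk request indexed all documents,
     * and false when the failure is considered retryable.
     */
    public static boolean executeWithBackoff(Supplier<Boolean> bulkAttempt) throws InterruptedException {
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            if (bulkAttempt.get()) {
                return true;
            }
            if (attempt == MAX_RETRIES) {
                break;
            }
            // Exponential back-off with full jitter: sleep a random amount between
            // 0 and min(MAX_DELAY_MS, BASE_DELAY_MS * 2^attempt). The sleep blocks
            // the results processing thread, which is what creates the back pressure
            // mentioned above.
            long cap = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << attempt);
            Thread.sleep(ThreadLocalRandom.current().nextLong(cap + 1));
        }
        return false;
    }
}
```

Note that the sleep happens on the thread draining the autodetect results, which is exactly the back pressure mentioned above: while the processor is backing off, no further results are consumed.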
We should also retry indexing of state documents that fail to index the first time due to an overloaded cluster. Failing to index a state document ruins the model snapshot it relates to and will prevent the job from restarting from that particular model snapshot, so in many ways this is even worse than losing a results document. (The method that needs changing is AutodetectStateProcessor.persist.)
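Purely as a hedged sketch of what retrying state persistence might look like (the real AutodetectStateProcessor.persist has a different signature; StateWriter and persistWithRetry are invented names for illustration):

```java
import java.util.concurrent.ThreadLocalRandom;

/**
 * Hypothetical sketch of retrying a single state-document write; not the
 * actual AutodetectStateProcessor API.
 */
public final class StatePersistRetrySketch {

    interface StateWriter {
        /** Attempts to index one state document chunk; returns true on success. */
        boolean writeChunk(byte[] chunk);
    }

    static boolean persistWithRetry(StateWriter writer, byte[] chunk) throws InterruptedException {
        long delayMs = 50;                       // hypothetical starting delay
        for (int attempt = 0; attempt < 6; attempt++) {
            if (writer.writeChunk(chunk)) {
                return true;
            }
            // Random exponential back-off before the next attempt. Giving up here
            // means the model snapshot this chunk belongs to cannot be restored from.
            Thread.sleep(ThreadLocalRandom.current().nextLong(delayMs + 1));
            delayMs = Math.min(delayMs * 2, 10_000);
        }
        return false;
    }
}
```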