ProvisionedThroughputExceededException in kinesis client library #4
Comments
Hi Rantav, thanks for reporting this. As you said, if you see this exception occasionally and your application (which uses the KCL to process the Kinesis stream) doesn't fall behind in processing your Kinesis stream data, it can be considered a benign exception and ignored. We recently updated our documentation to reflect this (for details, see the section titled "Read Throttling" at http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-additional-considerations.html). Let me know if you have further questions. Thanks.
Thank you @kumarumesh. In this case, I'd appreciate it if you could lower the log level of this message.
Hi Rantav, I got this error as well. This is very important and can cause you a lot of problems! For example, if you read less than 1 MB/sec from each shard (the max is 2 MB/sec) but you do it in 7 GetRecords requests/sec, you will get ProvisionedThroughputExceededException and you probably won't know why. It may cause your application to fall behind on the stream. To solve this you need to do some fine tuning of your KinesisClientLibConfiguration: you can control the maxRecords in each read and the idleTimeBetweenReadsInMillis. Hope this helps.
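For reference, here is a minimal KCL 1.x sketch of the kind of tuning described in the comment above. The application, stream, and worker names and the numeric values are placeholders, not recommendations; the idea is simply to fetch more records per call and wait longer between calls so each shard sees fewer than roughly 5 GetRecords calls per second.

```java
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;

public class ThrottlingTuningSketch {

    public static KinesisClientLibConfiguration buildConfig() {
        // Placeholder application/stream/worker names.
        KinesisClientLibConfiguration config = new KinesisClientLibConfiguration(
                "my-consumer-app",                        // also names the KCL lease table in DynamoDB
                "my-stream",
                new DefaultAWSCredentialsProviderChain(),
                "worker-1");

        // Fetch more records per GetRecords call and pause longer between calls,
        // so the worker issues fewer calls per shard per second.
        // The values below are illustrative only.
        return config
                .withMaxRecords(5000)
                .withIdleTimeBetweenReadsInMillis(2000);
    }
}
```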
I also support lowering this to the warning level.
+1 |
I agree this should be lowered to warn level.
+1 For a Warn |
+1 |
Thanks for reporting this. We'll look at handling the throttling exception and reporting it at a lower logging level.
* Fixed an issue building JavaDoc for Java 8.
  * [Issue awslabs#18](awslabs#18)
  * [PR awslabs#141](awslabs#141)
* Reduce throttling messages to WARN, unless throttling occurs 6 times consecutively.
  * [Issue awslabs#4](awslabs#4)
  * [PR awslabs#140](awslabs#140)
* Fixed two bugs occurring in requestShutdown.
  * Fixed a bug that prevented the worker from shutting down, via requestShutdown, when no leases were held.
    * [Issue awslabs#128](awslabs#128)
  * Fixed a bug that could trigger a NullPointerException if leases changed during requestShutdown.
    * [Issue awslabs#129](awslabs#129)
  * [PR awslabs#139](awslabs#139)
* Upgraded the AWS SDK version to 1.11.91.
  * [PR awslabs#138](awslabs#138)
* Use an executor returned from `ExecutorService.newFixedThreadPool` instead of constructing it by hand.
  * [PR awslabs#135](awslabs#135)
* Correctly initialize the DynamoDB client when the endpoint is explicitly set.
  * [PR awslabs#142](awslabs#142)
This has been fixed in the latest release. It will now be a warning unless it's throttled 6 times consecutively. Additionally, the message is now reported from the ThrottleReporter if you want to filter it out completely. Feel free to reopen if you have any other questions or concerns.
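If you do want to filter the throttle warnings out completely, as mentioned above, something along these lines should work with a log4j 1.x backend. The fully qualified name of ThrottleReporter below is an assumption based on the KCL 1.x package layout, so verify it against the version you are running.

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class SilenceThrottleWarnings {

    public static void configure() {
        // Raise the ThrottleReporter logger to ERROR so its WARN-level throttle
        // messages are suppressed. The package below is assumed from the KCL 1.x
        // source layout -- check the class's actual package in your KCL version.
        Logger.getLogger(
                "com.amazonaws.services.kinesis.clientlibrary.lib.worker.ThrottleReporter")
              .setLevel(Level.ERROR);
    }
}
```

The same effect can be achieved declaratively by setting that logger's level in your log4j.properties or logback configuration.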
Hey, is it possible to migrate the implemented functionality to the 2.x version as well?
Sometimes I see these errors in the logs.
They don't happen a lot, but they do happen.
I suppose they mean that the Kinesis client reads data too fast.
I was under the impression that the client library is supposed to take care of reading the data at the right pace; am I wrong?
Please advise.