Use an explicit lock for shutdown instead of the general lock #501

Merged
merged 1 commit into awslabs:master on Feb 15, 2019

Conversation


@pfifer pfifer commented Feb 15, 2019

If the Scheduler loses its lease for a shard it will attempt to
shut down the ShardConsumer processing that shard. When shutting down,
the ShardConsumer acquires a lock on `this` and makes the necessary
state changes.

This becomes an issue if the ShardConsumer is currently processing a
batch of records, since record processing is also done under the
general `this` lock.

When these two things combine, the Scheduler can become stuck waiting
for record processing to complete.

To fix this, the ShardConsumer now uses a dedicated lock for shutdown
state changes, so the Scheduler can no longer be blocked.
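
Roughly, the before/after looks like this. This is a minimal sketch of the locking change only; `shutdownLock`, `handleInput`, and `gracefulShutdown` are illustrative names, not the actual KCL code:

```java
import java.util.concurrent.locks.ReentrantLock;

class ShardConsumerSketch {
    // Hypothetical dedicated lock guarding only shutdown state changes.
    private final ReentrantLock shutdownLock = new ReentrantLock();
    private volatile boolean shutdownRequested = false;

    void handleInput(Object records) {
        synchronized (this) {
            // Long-running record processing still holds the general `this` lock...
            process(records);
        }
    }

    void gracefulShutdown() {
        // ...but the Scheduler no longer needs the general lock to request
        // shutdown; it only takes the narrower shutdown lock, so it cannot
        // be blocked behind processRecords().
        shutdownLock.lock();
        try {
            shutdownRequested = true;
        } finally {
            shutdownLock.unlock();
        }
    }

    private void process(Object records) { /* record processor callback */ }
}
```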

Allow the shutdown state change future to acquire the lock

When the ShardConsumer is being shut down we create a future for the
state change. Originally, the caller needed to acquire the lock before
the future task could even be created. This changes it to acquire the
lock while running on another thread, and to complete the shutdown there.
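
In other words, something like the following sketch, where the lock acquisition moves inside the asynchronously executed task so the caller returns immediately. The method and helper names here are assumptions for illustration, not the library's API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

class ShutdownFutureSketch {
    private final ReentrantLock shutdownLock = new ReentrantLock(); // illustrative
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    CompletableFuture<Boolean> startGracefulShutdown() {
        // The caller does not block on the lock: acquisition happens
        // on the executor thread, inside the future itself.
        return CompletableFuture.supplyAsync(() -> {
            shutdownLock.lock();
            try {
                return completeShutdown(); // hypothetical state transition
            } finally {
                shutdownLock.unlock();
            }
        }, executor);
    }

    private boolean completeShutdown() { return true; }
}
```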

Issue #, if available:

Description of changes:

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@pfifer pfifer added this to the 2.1.2 milestone Feb 15, 2019
@sahilpalvia sahilpalvia merged commit c053789 into awslabs:master Feb 15, 2019
pfifer added a commit to pfifer/amazon-kinesis-client that referenced this pull request Feb 18, 2019
https://github.com/awslabs/amazon-kinesis-client/milestone/29
* Fixed handling of the progress detection in the `ShardConsumer` to restart from the last accepted record, instead of the last queued record.
  * awslabs#492
* Fixed handling of exceptions when using polling so that `SdkException`s are no longer treated as unexpected exceptions.
  * awslabs#497
  * awslabs#502
* Fixed a case where lease loss would block the `Scheduler` while waiting for a record processor's `processRecords` method to complete.
  * awslabs#501
sahilpalvia pushed a commit that referenced this pull request Feb 18, 2019