
Conversation

@huanliwang-db (Contributor)

The current behavior of the compute method in both StateStoreRDD and ReadStateStoreRDD is to first get the state store instance and then get the input iterator from the inputRDD.

For the RocksDB state store, the running task acquires and holds the lock for the store instance. A retried or speculative task will then fail to acquire that lock and can eventually abort the job when there are network issues. For example, when we shrink the executors, a surviving executor will try to fetch data from the killed ones, because it does not know that the target location (prefetched from the driver) is dead until it actually attempts the fetch. The query can hang for a long time: the executor retries spark.shuffle.io.maxRetries=3 times and waits spark.shuffle.io.connectionTimeout (default 120s) on each retry before timing out, so the task can hang for about 6 minutes in total, and retried or speculative tasks cannot acquire the lock during that whole period.
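As a rough, worked illustration of that timing, here is a sketch using only the default values quoted above (both settings are configurable; the numbers are not measurements from a specific run):

```scala
// Worst-case hang for a single blocked shuffle fetch, using the defaults quoted above.
val maxRetries = 3              // spark.shuffle.io.maxRetries (default)
val connectionTimeoutSec = 120  // spark.shuffle.io.connectionTimeout (default, seconds)
val worstCaseHangSec = maxRetries * connectionTimeoutSec
// worstCaseHangSec == 360, i.e. roughly 6 minutes during which the original
// task still holds the RocksDB instance lock.
```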

Acquiring the lock only after the input iterator has been retrieved avoids this situation.
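Below is a minimal sketch of the reordering. The names are illustrative assumptions, not the actual StateStoreRDD API: acquireStateStore stands in for the call that takes the RocksDB instance lock, and inputIterator for retrieving the iterator from the input RDD.

```scala
// Simplified sketch of the ordering change; names are illustrative, not the real Spark classes.

// Before this PR: lock first, then input iterator. If retrieving the input blocks
// (e.g. on shuffle fetch retries), the lock is held for the whole wait.
def computeBefore[T, U](acquireStateStore: () => AnyRef,
                        inputIterator: () => Iterator[T],
                        update: (AnyRef, Iterator[T]) => Iterator[U]): Iterator[U] = {
  val store = acquireStateStore()   // holds the instance lock from here on
  val input = inputIterator()       // may block for minutes on fetch retries
  update(store, input)
}

// After this PR: input iterator first, lock second, so a retried or speculative
// task is not blocked behind a task that is stuck fetching its input.
def computeAfter[T, U](acquireStateStore: () => AnyRef,
                       inputIterator: () => Iterator[T],
                       update: (AnyRef, Iterator[T]) => Iterator[U]): Iterator[U] = {
  val input = inputIterator()       // potentially slow fetch happens without the lock
  val store = acquireStateStore()   // lock is taken only once the input is available
  update(store, input)
}
```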

What changes were proposed in this pull request?

Acquire the state store lock after retrieving the input iterator, instead of before.

Why are the changes needed?

This avoids failures like the following when there is a network issue:

java.lang.IllegalStateException: StateStoreId(opId=0,partId=3,name=default): RocksDB instance could not be acquired by [ThreadId: Some(47), task: 3.1 in stage 57, TID 793] as it was not released by [ThreadId: Some(51), task: 3.1 in stage 57, TID 342] after 60003 ms.
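For context, this error comes from a lock-with-timeout pattern: the retried task waits a bounded time for the instance lock still held by the original task and then fails. A simplified, generic sketch of that pattern (not the actual Spark RocksDB implementation) is:

```scala
import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.ReentrantLock

// Generic illustration only: acquire a per-instance lock with a timeout and fail
// loudly if another task still holds it, similar in spirit to the error above.
class InstanceLock(timeoutMs: Long) {
  private val lock = new ReentrantLock()

  def acquire(taskInfo: String): Unit = {
    if (!lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
      throw new IllegalStateException(
        s"instance could not be acquired by [$taskInfo] within $timeoutMs ms")
    }
  }

  def release(): Unit = lock.unlock()
}
```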

Does this PR introduce any user-facing change?

No

How was this patch tested?

Existing unit tests should be good enough.

@HeartSaVioR (Contributor) left a comment


+1
Sounds like a nice improvement to me, specifically for the RocksDB state store provider. I can't think of any side effects of this change, so I'll consider it a safe change. Someone can correct me afterwards.

@HeartSaVioR (Contributor)

Thanks! Merging to master.

