After shard splitting, our log is flooded with warning messages "Cannot find the shard given shardId" #55
I have the same warning. Does anyone know how to fix it?
We are facing a similar issue. Any advice on a solution would be appreciated. @xujiaxj, just curious if you've thought of anything since filing the bug?
@amanduggal we modified our logback configuration to suppress the warning message: `<logger name="com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy" level="ERROR"/>`
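For reference, a minimal logback fragment applying that suppression. The logger name and level come from the comment above; the surrounding `<configuration>` element is assumed and would normally contain your existing appenders:

```xml
<configuration>
  <!-- Raise the threshold for KinesisProxy so the repeated
       "Cannot find the shard given the shardId" WARN messages
       are no longer emitted; ERROR and above still get through. -->
  <logger name="com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy"
          level="ERROR"/>
</configuration>
```

Note this hides the symptom only; the underlying stale shard cache behavior is unchanged.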
This is especially annoying when using the KCL to read a DynamoDB stream, which splits its shards every 4 hours according to this blog post by one of the DynamoDB engineers at AWS:
Just letting people know that we are aware of this. We're looking into fixing this, but I don't have an ETA at this time.
I ran into this using DynamoDB Streams without any explicit shard splitting (just the usual DynamoDB cycling of shards that @matthewbogner described). FWIW, here is the sequence we encountered that triggered the warnings. With DynamoDB Streams this occurs pretty often; at any given point in time there's almost always at least one of our servers in this state, logging these warnings every 2 seconds. We've had to turn off WARN for KinesisProxy and ProcessTask. Assume a DynamoDB stream with shard S1 and two stream workers, A and B, using the KCL (we aren't using the KPL):
I've been testing the stack and watching the sharding, and I keep noticing these errors, although everything appears to keep working. Forgive my newness to the technology, but is this something we should be concerned about?
Unsure why this is labelled an enhancement?
Any updates on this? If I understood @shawnsmith's analysis correctly, the solution is to refresh the cached shard list on lease steal?
From 2016:
Is this still the case?
Just copying this over from the linked issue. @pfifer, do you have any updates or insight here?
And I found an AWS Dev forum thread related to this issue: https://forums.aws.amazon.com/thread.jspa?messageID=913872
@pfifer any ideas? any updates? any anything?
@pfifer any updates on this?
chgenvulgfjlejltgvglhecbucrihrcbbclfj
@joshua-kim was that a Yubikey press? :-P Otherwise, can you please elaborate on why the issue is being closed and how to solve/prevent it?
@igracia Sorry, yes that was a Yubikey press. I was referencing this issue while looking into another cached shard map issue in a fork of 1.6. I'm curious, though: are you still seeing this on the latest 2.x/1.x releases? The latest releases no longer use ListShards in most cases, so I'd like to know whether this bug is still present.
Thanks @joshua-kim! We have several consumers using the DynamoDB Streams Kinesis adapter on a single shard, and we're still seeing this with the following versions
Bumping those versions makes everything stop working, so we're stuck with them for the time being. Also, as per this issue in dynamodb-streams-kinesis-adapter, we can't use v2. Any suggestions would be appreciated!
Same problem 6 years later 😞 I'm using amazon-kinesis-client 1.13.3 with dynamodb-streams-kinesis-adapter 1.5.3 |
The KCL dev flow has been quite stable in the many years I've been using it.
ShardSyncTask runs either on worker initialization or when the worker detects that one of its assigned shards has completed. In the event of a shard split, however, if the child shard lands on a worker that was not previously processing the parent shard, that worker will not run the ShardSyncTask, because none of its previously assigned shards have completed.
Meanwhile, the lease coordinator has timer tasks that sync with the DynamoDB lease table and assign the worker shards to process.
So we end up with the worker starting to process the child shard while, at the same time, repeatedly logging a warning from line 208 of KinesisProxy:

```java
LOG.warn("Cannot find the shard given the shardId " + shardId);
```
As far as I understand, the shard info is needed only for de-aggregation, to discard user records that should be re-routed to other shards during resharding. So we are not experiencing dropped records or anything severe; it's just flooding our log, and maybe causing some duplicates, since we use KPL aggregation on the producer side.
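To make the failure mode concrete, here is a minimal, self-contained sketch of the race described above. All class and method names are hypothetical stand-ins, not actual KCL classes: the lease coordinator hands the worker a lease for a child shard that is absent from the worker's cached shard map (populated only at init or when one of its own shards completes), so every processing pass misses the cache and emits the warning until a shard sync would refresh it:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CachedShardMapRace {
    // Worker's cached shard map, filled only at initialization or when
    // one of its own shards completes (stand-in for KinesisProxy's cache).
    static final Map<String, String> cachedShards = new HashMap<>();
    // Leases handed to this worker by the lease coordinator's timer task.
    static final Set<String> assignedLeases = new HashSet<>();

    static void processRecords(String shardId) {
        if (!cachedShards.containsKey(shardId)) {
            // The condition that floods the log: the lease exists,
            // but the shard is unknown to the stale cache.
            System.out.println(
                "WARN Cannot find the shard given the shardId " + shardId);
        }
    }

    public static void main(String[] args) {
        cachedShards.put("shardId-000000000001", "parent"); // known at init
        // The parent splits on some other worker; this worker never ran a
        // shard sync, yet the lease coordinator assigns it the child lease.
        assignedLeases.add("shardId-000000000002");
        for (String shardId : assignedLeases) {
            processRecords(shardId); // warns on every poll until a sync runs
        }
    }
}
```

Records on the child shard are still processed; the cache miss only affects the shard lookup, which matches the observation that nothing is actually dropped.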