fix(stream): make consumer decrement pending number when message is acknowledged. #2352

Merged: 5 commits, Jun 3, 2024
22 changes: 22 additions & 0 deletions src/types/redis_stream.cc
@@ -339,20 +339,42 @@ rocksdb::Status Stream::DeletePelEntries(const Slice &stream_name, const std::st
WriteBatchLogData log_data(kRedisStream);
batch->PutLogData(log_data.Encode());

std::map<std::string, uint64_t> consumer_acknowledges;
for (const auto &id : entry_ids) {
std::string entry_key = internalPelKeyFromGroupAndEntryId(ns_key, metadata, group_name, id);
std::string value;
s = storage_->Get(rocksdb::ReadOptions(), stream_cf_handle_, entry_key, &value);
if (!s.ok() && !s.IsNotFound()) {
return s;
}
if (s.ok()) {
*acknowledged += 1;
batch->Delete(stream_cf_handle_, entry_key);

// increment ack for each related consumer
auto pel_entry = decodeStreamPelEntryValue(value);
consumer_acknowledges[pel_entry.consumer_name]++;
}
}
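// Decrement the group's pending counter by the number of acknowledged entries.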
if (*acknowledged > 0) {
StreamConsumerGroupMetadata group_metadata = decodeStreamConsumerGroupMetadataValue(get_group_value);
group_metadata.pending_number -= *acknowledged;
std::string group_value = encodeStreamConsumerGroupMetadataValue(group_metadata);
batch->Put(stream_cf_handle_, group_key, group_value);

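// Apply each consumer's share of the acknowledgements to its own pending counter.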
for (const auto &[consumer_name, ack_count] : consumer_acknowledges) {
auto consumer_meta_key = internalKeyFromConsumerName(ns_key, metadata, group_name, consumer_name);
std::string consumer_meta_original;
s = storage_->Get(rocksdb::ReadOptions(), stream_cf_handle_, consumer_meta_key, &consumer_meta_original);
if (!s.ok() && !s.IsNotFound()) {
return s;
}
if (s.ok()) {
Member:

what if !s.ok() && !s.IsNotFound()?

LindaSummer (Contributor, Author), Jun 2, 2024:

Thanks very much for the correction.

It should be an internal error, and I think I need to return it. This function's return type is rocksdb::Status, so I don't need to wrap it in another format.

Do I need to add a Go integration test for this error-handling case? It may be hard to produce a case for it.

Member:

> Do I need to add a Go integration test for this error-handling case?

Currently this is a bit hard to trigger, especially in integration tests. We can just handle the error without a test.

LindaSummer (Contributor, Author):

Got it! I have corrected my wording above.

This function returns rocksdb::Status, so I think I should return the status immediately when !s.ok() && !s.IsNotFound().

Member:

There are two Gets here, right? The if (s.ok()) in the loop should also add this check (though such errors rarely happen).

LindaSummer (Contributor, Author):

Thanks for the correction!

Sorry for missing the other error-handling path. I have added it in a new commit.

auto consumer_metadata = decodeStreamConsumerMetadataValue(consumer_meta_original);
consumer_metadata.pending_number -= ack_count;
batch->Put(stream_cf_handle_, consumer_meta_key, encodeStreamConsumerMetadataValue(consumer_metadata));
}
}
}
return storage_->Write(storage_->DefaultWriteOptions(), batch->GetWriteBatch());
}
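
For reference, the error-handling pattern the review converged on, as a minimal sketch; cf_handle, entry_key, and value here are placeholder names, not the exact identifiers used in the function:

rocksdb::Status s = storage_->Get(rocksdb::ReadOptions(), cf_handle, entry_key, &value);
if (!s.ok() && !s.IsNotFound()) {
  return s;  // internal storage error: propagate the rocksdb::Status unchanged
}
if (s.ok()) {
  // the key exists: decode the stored value and apply the update
}
// otherwise s.IsNotFound(): an expected miss, nothing to update for this key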
35 changes: 35 additions & 0 deletions tests/gocase/unit/type/stream/stream_test.go
@@ -1064,6 +1064,41 @@ func TestStreamOffset(t *testing.T) {
require.Equal(t, consumer3, r1[0].Name)
})

t.Run("XINFO after delete pending message and related consumer, for issue #2350", func(t *testing.T) {
streamName := "test-stream-2350"
groupName := "test-group-2350"
consumerName := "test-consumer-2350"
require.NoError(t, rdb.XGroupCreateMkStream(ctx, streamName, groupName, "$").Err())
require.NoError(t, rdb.XAdd(ctx, &redis.XAddArgs{
Stream: streamName,
ID: "*",
Values: []string{"testing", "overflow"},
}).Err())
readRsp := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
Group: groupName,
Consumer: consumerName,
Streams: []string{streamName, ">"},
Count: 1,
NoAck: false,
})
require.NoError(t, readRsp.Err())
require.Len(t, readRsp.Val(), 1)
streamRsp := readRsp.Val()[0]
require.Len(t, streamRsp.Messages, 1)
msgID := streamRsp.Messages[0]
require.NoError(t, rdb.XAck(ctx, streamName, groupName, msgID.ID).Err())
require.NoError(t, rdb.XGroupDelConsumer(ctx, streamName, groupName, consumerName).Err())
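// After the ack and the consumer deletion, the group should report zero consumers and zero pending entries.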
infoRsp := rdb.XInfoGroups(ctx, streamName)
require.NoError(t, infoRsp.Err())
infoGroups := infoRsp.Val()
require.Len(t, infoGroups, 1)
infoGroup := infoGroups[0]
require.Equal(t, groupName, infoGroup.Name)
require.Equal(t, int64(0), infoGroup.Consumers)
require.Equal(t, int64(0), infoGroup.Pending)
require.Equal(t, msgID.ID, infoGroup.LastDeliveredID)
})

t.Run("XREAD After XGroupCreate and XGroupCreateConsumer, for issue #2109", func(t *testing.T) {
streamName := "test-stream"
group := "group"