This issue is more of a feature request derived from this confluent-kafka-go issue: confluentinc/confluent-kafka-go#16 (comment)
Raising here for awareness; I'll probably have a go at a PR to add this.
Please let me know if this needs to be in a specific format!
Description
Sarama uses the FNV-1a hashing algorithm as its default partitioner. This means that anyone migrating from Sarama to confluent-kafka-go will either have to implement the partitioner themselves (making an extra metadata round-trip call to get the number of partitions) or switch to one of the librdkafka-supported algorithms (potentially causing short-term out-of-order processing by downstream consumers).
I propose adding the FNV-1a hash as a supported partitioner in librdkafka, similar to how Murmur2 was added for Java compatibility.
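For reference, here is a minimal sketch (in Go) of what users currently have to implement themselves on the confluent-kafka-go side to stay key-compatible with Sarama: a 32-bit FNV-1a hash of the key, treated as a signed int32 and taken modulo the partition count, with negative results flipped positive. The partition count would have to come from the extra metadata round trip mentioned above.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// saramaCompatiblePartition mirrors Sarama's default hash partitioner:
// FNV-1a (32-bit) over the key, interpreted as a signed int32, modulo
// the partition count, flipping the sign if the result is negative.
// Sketch only: numPartitions would normally be fetched via a metadata
// request, which is the extra round trip this issue wants to avoid.
func saramaCompatiblePartition(key []byte, numPartitions int32) int32 {
	h := fnv.New32a()
	h.Write(key)
	partition := int32(h.Sum32()) % numPartitions
	if partition < 0 {
		partition = -partition
	}
	return partition
}

func main() {
	fmt.Println(saramaCompatiblePartition([]byte("example-key"), 12))
}
```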
Adds a new partitioner using the FNV-1a hashing algorithm, with some
tweaks to match Sarama's default hashing partitioner behaviour.
The main use case is users switching from Sarama to librdkafka
(or confluent-kafka-go) who want to maintain ordering guarantees.
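For illustration, a producer could then opt into the new partitioner from confluent-kafka-go through librdkafka's existing `partitioner` configuration property. The value `fnv1a` below is an assumed name for the proposed partitioner, not something confirmed by this PR.

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// Assumption: the new partitioner is selectable via librdkafka's
	// existing "partitioner" property under a value such as "fnv1a".
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"partitioner":       "fnv1a",
	})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	fmt.Printf("Created producer %v with the FNV-1a partitioner\n", p)
}
```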