
Better consumer offset handling #378

Closed
wvanbergen wants to merge 0 commits from the better_offset_range_handling branch

Conversation

wvanbergen
Contributor

  • Add Client.GetOffsetRange to get the available range of offsets for a partition in one request. I had to make some changes to OffsetRequest and OffsetResponse to make it possible to ask for more than one offset per partition, but no changes to the public API were necessary.
  • When calling ConsumePartition, always check whether the offset is within the offset range. This means we now have to do an OffsetRequest for every ConsumePartition call, even if the offset is provided. The good news is that the method will immediately return an error and never start a goroutine, instead of starting the goroutine and returning an error on the Errors() channel, which you can easily ignore (see the sketch after this list).
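
A minimal sketch of the resulting flow, assuming an already-constructed sarama Client and Consumer. Only GetOffsetRange (signature as in the diff below) and the range check inside ConsumePartition come from this PR; the wrapper function and all other names are illustrative:

```go
package example

import "github.com/Shopify/sarama"

// consumeChecked is a hypothetical wrapper that mirrors what ConsumePartition
// now does internally: fetch the available offset range (one OffsetRequest)
// and fail fast if the requested offset falls outside it, instead of emitting
// the error later on the Errors() channel.
func consumeChecked(client sarama.Client, consumer sarama.Consumer,
	topic string, partition int32, offset int64) (sarama.PartitionConsumer, error) {

	oldest, newest, err := client.GetOffsetRange(topic, partition)
	if err != nil {
		return nil, err
	}

	// newest is the "log end offset": the offset the next produced message
	// will get, so starting exactly at newest is still valid.
	if offset < oldest || offset > newest {
		return nil, sarama.ErrOffsetOutOfRange
	}

	return consumer.ConsumePartition(topic, partition, offset)
}
```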

@Shopify/kafka

@wvanbergen wvanbergen force-pushed the better_offset_range_handling branch 3 times, most recently from 28bd4ba to fc1a1ee on March 21, 2015, 19:02
@eapache
Contributor

eapache commented Mar 21, 2015

> make it possible to ask for more than one offset per partition

Is that even possible? It's never explicitly stated, but I'd always assumed based on the layout of the protocol that it wasn't really supported to include multiple blocks for a single partition.

@wvanbergen
Contributor Author

Apparently it works in Kafka 0.8.1 and 0.8.2. The OffsetRequest documentation is especially unclear, but this passage led me to believe that it was supported:

> The response contains the starting offset of each segment for the requested partition as well as the "log end offset" i.e. the offset of the next message that would be appended to the given partition.

You can even do more than two, as long as you request the timestamps in descending order.
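
Concretely, this allows a single OffsetRequest to carry two blocks for the same partition. A hedged sketch, assuming sarama's OffsetRequest.AddBlock helper and the OffsetNewest (-1) / OffsetOldest (-2) sentinel times; note the descending order:

```go
// Ask for both ends of a partition's log in one round trip. Kafka 0.8.1/0.8.2
// appear to accept multiple blocks for the same partition as long as the
// requested times are in descending order.
req := &sarama.OffsetRequest{}
req.AddBlock("my-topic", 0, sarama.OffsetNewest, 1) // time -1: the log end offset
req.AddBlock("my-topic", 0, sarama.OffsetOldest, 1) // time -2: the oldest available offset
// Sending req via Broker.GetAvailableOffsets yields both offsets in a single OffsetResponse.
```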

@wvanbergen wvanbergen force-pushed the better_offset_range_handling branch from fc1a1ee to f2e95e4 on March 21, 2015, 20:34
@@ -292,6 +297,42 @@ func (client *client) RefreshMetadata(topics ...string) error {
return client.tryRefreshMetadata(topics, client.conf.Metadata.Retry.Max)
}

func (client *client) GetOffsetRange(topic string, partitionID int32) (int64, int64, error) {
eapache (Contributor)

Why do all this work when you could have just made two GetOffset calls?

wvanbergen (Contributor Author)

1 request vs. 2 requests.

eapache (Contributor)

it only happens once on startup, doesn't seem like a big deal (premature optimization and all that)

wvanbergen (Contributor Author)

I find this generally useful. E.g., for monitoring purposes, you'd also want to know the available range. Of course, we could do two requests for that purpose as well.
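
For example, a monitoring check could derive partition depth and consumer lag from the two ends of the range. A hypothetical helper, assuming the same sarama import as the earlier sketch; committedOffset would come from wherever the consumer stores its progress:

```go
// partitionStats is illustrative only and not part of this PR.
func partitionStats(client sarama.Client, topic string, partition int32,
	committedOffset int64) (depth, lag int64, err error) {

	oldest, newest, err := client.GetOffsetRange(topic, partition)
	if err != nil {
		return 0, 0, err
	}

	// depth: messages currently retained in the partition;
	// lag: how far the consumer's committed offset trails the log end.
	return newest - oldest, newest - committedOffset, nil
}
```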

@wvanbergen wvanbergen closed this Apr 11, 2015
@wvanbergen wvanbergen force-pushed the better_offset_range_handling branch from f2e95e4 to d779673 on April 11, 2015, 18:17
@wvanbergen
Contributor Author

WTF :/ Github marked this as merged after I rebased.

@wvanbergen wvanbergen deleted the better_offset_range_handling branch April 11, 2015 18:18
@wvanbergen wvanbergen restored the better_offset_range_handling branch April 11, 2015 18:18
@wvanbergen
Contributor Author

Closing in favor of #418
