consumers not keeping up with partition data #3

Open
davidbirdsong opened this issue Oct 25, 2012 · 3 comments

@davidbirdsong

me again.

i'm trying to get to the bottom of a consumer group that hasn't been able to catch up with what's in the broker, no matter how many consumers i add. i'd like to compare each broker partition's current offset against the offset the consumer is currently working on, to prove that the consumers aren't catching up.

could you suggest how to expose those values? i'm combing through the code right now, but any help would be appreciated.

@jrydberg
Owner

the offsets are written to zookeeper in this function:

https://github.com/jrydberg/gevent-kafka/blob/master/gevent_kafka/consumer.py#L159

it never catches up? even if you stop producing?
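
For reference, a minimal sketch of reading those committed offsets back out of ZooKeeper, assuming the Kafka 0.7 znode layout that gevent-kafka writes to (/consumers/&lt;group&gt;/offsets/&lt;topic&gt;/&lt;broker_id&gt;-&lt;partition&gt;); the kazoo client and the group/topic names below are illustrative placeholders, not part of gevent-kafka:

```python
# Sketch: dump the offsets a consumer group has committed to ZooKeeper.
# Assumes the Kafka 0.7 znode layout:
#   /consumers/<group>/offsets/<topic>/<broker_id>-<partition>
# kazoo is used here only for illustration; gevent-kafka has its own ZK client.
from kazoo.client import KazooClient


def consumer_offsets(zk_hosts, group, topic):
    zk = KazooClient(hosts=zk_hosts)
    zk.start()
    try:
        base = "/consumers/%s/offsets/%s" % (group, topic)
        offsets = {}
        for node in zk.get_children(base):        # e.g. "0-0", "0-1", ...
            data, _stat = zk.get("%s/%s" % (base, node))
            offsets[node] = int(data)             # offset is stored as ASCII digits
        return offsets
    finally:
        zk.stop()


if __name__ == "__main__":
    # placeholder connection string, group, and topic
    print(consumer_offsets("localhost:2181", "my-group", "my-topic"))
```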

@davidbirdsong
Author

i can't turn off the producer. my consumer is in a separate consumer group from an existing consumer (hadoop based).

i'll play around with comparing the offsets that record the consumer state, but is there an obvious way to get the tip of the broker's offset to compare the consumer against?


@jrydberg
Owner

i'm not sure really, but i would do some simple calculations based on the filenames of the datafiles on the broker, and their size. the files are named after their offset, so the name plus the size should give you the tip.
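
A minimal sketch of that filename-plus-size calculation, run on the broker host. The directory path, the .kafka extension, and the assumption that segment files are named after their starting byte offset follow the Kafka 0.7 on-disk layout; they are assumptions here, not anything gevent-kafka exposes:

```python
# Sketch: estimate a partition's tip offset from the broker's log directory.
# Assumes the Kafka 0.7 layout <log.dir>/<topic>-<partition>/<start_offset>.kafka,
# where offsets are byte offsets, so base offset + file size = current tip.
import os


def partition_tip(partition_dir):
    segments = [f for f in os.listdir(partition_dir) if f.endswith(".kafka")]
    if not segments:
        return 0
    latest = max(segments, key=lambda f: int(os.path.splitext(f)[0]))
    base_offset = int(os.path.splitext(latest)[0])
    return base_offset + os.path.getsize(os.path.join(partition_dir, latest))


if __name__ == "__main__":
    # placeholder path to one topic-partition directory
    print(partition_tip("/var/kafka/logs/my-topic-0"))
```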

also, check this:

https://github.com/kafka-dev/kafka/blob/master/core/src/main/scala/kafka/tools/GetOffsetShell.scala
