Performance and testing pass #42
Conversation
1. Producer now has a closer mapping to the underlying RdKafka produce method; it no longer enumerates an options object (see the sketch after this list).
2. Consumer now copies memory instead of keeping a reference to the message in the underlying buffer, which should allow it to be freed cleanly.
3. Added version checking to support flush and additional constants.
4. Constants are exported in C++ land and the object is enumerated on init.
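A minimal sketch of what the lower-level synchronous produce call could look like under item 1, assuming a positional signature that mirrors librdkafka's produce(topic, partition, payload, key). The exact argument order, whether the topic argument is a string or a topic handle, and the config keys are assumptions here, not the library's confirmed API.

```js
var Kafka = require('node-rdkafka');

var producer = new Kafka.Producer({
  'metadata.broker.list': 'localhost:9092'
});

producer.connect();

producer.on('ready', function () {
  // Synchronous call: returns a librdkafka error code instead of
  // enumerating an options object.
  var errCode = producer.produce(
    'test-topic',          // topic (string vs. topic handle is an assumption)
    -1,                    // partition; -1 lets librdkafka choose
    new Buffer('hello'),   // payload as a Buffer, copied rather than referenced
    'key'                  // optional message key
  );

  if (errCode !== 0) {
    console.error('produce failed with librdkafka code', errCode);
  }
});
```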
The intermediary class was doing little more than wrapping the object so it could be checked for fatal errors. Instead, consume can encapsulate deleting the message when it is an error and return a baton; the baton holds a valid message when there is no error. Then we can leverage a converter that copies the data over so the underlying message can be safely deleted. This should allow for graceful disconnects.
force-pushed from cbda47d to b44332d
1. Got rid of the original produce method and other deprecated messages that no one was using.
2. RdKafka::err2str now has an exported library function, so the internal error class can use it to turn any old integer into an error string. This is currently implemented for the synchronous produce method, which now returns the same int that librdkafka returned (see the sketch below).
3. Updated the benchmark; it is showing around a 50% improvement over the old async + object enumeration implementation.
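A rough sketch of how the exported err2str binding might be used from JavaScript to translate the int returned by the synchronous produce. The JavaScript-side name `Kafka.err2str` is a placeholder assumed for illustration, not a confirmed export.

```js
var Kafka = require('node-rdkafka');

var producer = new Kafka.Producer({ 'metadata.broker.list': 'localhost:9092' });
producer.connect();

producer.on('ready', function () {
  var errCode = producer.produce('test-topic', -1, new Buffer('hello'));
  if (errCode !== 0) {
    // Turn the raw librdkafka int into a readable error string.
    console.error('produce error:', Kafka.err2str(errCode));
  }
});
```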
Updated TopicWritable to use the new produce method.

The way that stream backpressure works now that the method is synchronous: You will be able to increase the amount the queue takes through the

Other change of note: consumed message data is no longer

Also, at this point, I converted the end-to-end tests to use mocha, as the big thing stopping that was the graceful disconnect issue, which appears to be as gone as it will be.

NOTE: sometimes shutting down can take up to 10-15 seconds. I believe these are outgoing sockets still timing out in the background. It generally hangs that long on errored requests, but it should still exit eventually.
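Since the comment above describes stream backpressure now that produce is synchronous, here is a minimal sketch of driving the writable topic stream and yielding when its internal queue is full. The `getWriteStream` helper, its arguments, and the producer config are assumptions for illustration; the write/'drain' pattern itself is standard Node writable-stream semantics.

```js
var Kafka = require('node-rdkafka');

var producer = new Kafka.Producer({ 'metadata.broker.list': 'localhost:9092' });
var stream = producer.getWriteStream('test-topic');   // hypothetical helper

function writeBatch(messages) {
  var i = 0;
  (function writeNext() {
    while (i < messages.length) {
      // write() returns false when the internal queue is full; stop and
      // resume on 'drain' instead of pushing more messages.
      if (!stream.write(new Buffer(messages[i++]))) {
        stream.once('drain', writeNext);
        return;
      }
    }
  })();
}

writeBatch(['a', 'b', 'c']);
```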
force-pushed from 05494e5 to 8228a83
force-pushed from 8228a83 to 3d7f1bb
You might be interested in the new .._F_MSG_BLOCK flag:

Re message.message: standard Kafka semantics is actually message.value, so you might want to go with that rather than .payload.

Re slow shutdown:
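To illustrate the naming suggestion above, a brief sketch of what reading the consumed payload under `message.value` could look like. The consumer class name, config keys, and the 'data' event are assumptions here; the point is only the field naming.

```js
var Kafka = require('node-rdkafka');

var consumer = new Kafka.KafkaConsumer({
  'metadata.broker.list': 'localhost:9092',
  'group.id': 'example-group'
});

consumer.connect();

consumer.on('data', function (message) {
  // Payload exposed under .value, per standard Kafka field naming,
  // rather than .payload or .message.
  console.log(message.value.toString());
  console.log(message.key, message.topic, message.partition, message.offset);
});
```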
@edenhill I did take note of that. Re slow shutdown: I'll take a look today with the debug config on and see what I can see. Thanks!
force-pushed from 3a34a67 to 5439073
New Changes
@edenhill From what I can gather it's another GC-related issue. If a created topic object is not cleaned up when
New Changes
New Changes
force-pushed from 35678bd to 2a6b7e8
1. Added missing consumer method `subscription`
2. Added missing consumer method `position`
3. Added missing consumer method `committed`
4. Began refactor of topic readable to make it a first-class citizen
5. `subscribe` is now sync and throws
6. `unsubscribe` is now sync and throws (see the sketch after this list)
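A sketch of the consumer surface described above. The method names (`subscribe`, `unsubscribe`, `subscription`, `position`, `committed`) come from the list; the constructor name, config keys, argument lists, and return shapes are assumptions, not confirmed signatures.

```js
var Kafka = require('node-rdkafka');

var consumer = new Kafka.KafkaConsumer({
  'metadata.broker.list': 'localhost:9092',
  'group.id': 'example-group'
});

consumer.connect();

consumer.on('ready', function () {
  try {
    // subscribe is now synchronous and throws on bad input
    consumer.subscribe(['test-topic']);
  } catch (err) {
    console.error('subscribe failed:', err);
    return;
  }

  console.log('subscription:', consumer.subscription());
  console.log('position:', consumer.position());

  // committed's exact signature is an assumption here (timeout + callback)
  consumer.committed(5000, function (err, toppars) {
    if (!err) console.log('committed:', toppars);
  });

  // unsubscribe is synchronous as well and also throws on failure
  consumer.unsubscribe();
});
```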
force-pushed from 2a6b7e8 to a9c7fdc
Lots of changes here and I'm not done. Just publishing this initially as a PR so it can get some visibility.