
Send fix message before logging it #36

Open — wants to merge 3 commits into master
Conversation

charlesbr1
Contributor

Only updated the "boolean send(String messageString)" method.
Do not wait for the logger to perform its task before sending a message.

-> As a side effect, the logged FIX message may now appear after other messages from the network stack.

Avoid printing the FIX message to send twice in case of error (no responder).
Removed the synchronization as it is contention-prone and not really needed: it does not really prevent synchronization issues with either the setResponder(...) or disconnect(...) methods.
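The pattern this change describes can be sketched as follows. This is a simplified illustration, not the actual QuickFIX/J source: the class, `Responder`, and `Log` types here are stand-ins for whatever the real session class uses.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the change described above: a volatile field read once into a
// local variable, no synchronized block, and the message logged exactly once.
public class SessionSketch {
    interface Responder { boolean send(String message); }
    interface Log { void onOutgoing(String message); }

    // volatile: writes by setResponder(...)/disconnect(...) become visible
    // to send(...) without holding a lock
    private volatile Responder responder;
    private final Log log;

    SessionSketch(Log log) { this.log = log; }

    void setResponder(Responder r) { this.responder = r; }

    boolean send(String messageString) {
        // snapshot the field so the null check and the call see the same value
        final Responder responder = this.responder;
        try {
            return responder != null && responder.send(messageString);
        } finally {
            // log after initiating the send; as noted above, the entry may now
            // appear after lower-level network log entries
            log.onOutgoing(messageString);
        }
    }

    public static void main(String[] args) {
        List<String> logged = new ArrayList<>();
        SessionSketch session = new SessionSketch(logged::add);
        session.setResponder(m -> true);
        System.out.println(session.send("8=FIX.4.2"));   // sends and logs once
        session.setResponder(null);
        System.out.println(session.send("8=FIX.4.2"));   // logs but reports failure
    }
}
```

The local snapshot matters: reading the volatile field twice (once for the null check, once for the call) could observe two different values if disconnect(...) runs in between.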

@charlesbr1
Contributor Author

I added the volatile keyword.
Just to be clear, none of these pull requests will directly make things faster, and this one in particular won't. The only one that directly improves speed is "Message generation length & checksum optimization #39".
Latency will improve only because fewer minor GCs will happen. Hence it is the worst-case latency measurements that improve.

@guidomedina

A lock has the potential of causing a context switch while a volatile doesn't, and a lock is at least twice as expensive as its volatile counterpart, especially for only reading the value.

Edit: but yeah, for most systems it will be a micro-optimization, TBH.

@guidomedina

I know that the value can change between reads; I thought you would figure out that I meant the whole if-then-else section:

final Responder responder = this.responder;
try {
  return responder != null ? responder.send(message) : false;
} finally {
  getLog().onOutgoing(messageString);
}

And don't worry about the try; the compiler will optimize that out. Also, the `!= null` check comes first because the more probable branch, or the one that matters, is faster when it is treated first. Have you measured performance with JMH? If so, try my theory and you will see. Don't guess: measure, measure.
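JMH is the right tool for the measurement suggested above; as a rough illustration of the volatile-read-vs-lock comparison being argued about, here is a minimal hand-rolled sketch. It is subject to JIT and scheduler noise and is not a substitute for a proper JMH benchmark; the class and method names are mine, not from this project.

```java
// Rough timing sketch: a bare volatile read per iteration vs. the same read
// done under a synchronized block. Illustrative only; use JMH for real numbers.
public class VolatileReadTiming {
    private volatile Object ref = new Object();
    private final Object lock = new Object();

    long readVolatile(int iterations) {
        long hits = 0;
        for (int i = 0; i < iterations; i++) {
            if (ref != null) hits++;          // single volatile read per pass
        }
        return hits;
    }

    long readSynchronized(int iterations) {
        long hits = 0;
        for (int i = 0; i < iterations; i++) {
            synchronized (lock) {             // lock acquire/release per pass
                if (ref != null) hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        VolatileReadTiming t = new VolatileReadTiming();
        int n = 10_000_000;
        t.readVolatile(n);                    // warm-up so the JIT compiles both
        t.readSynchronized(n);
        long s1 = System.nanoTime(); t.readVolatile(n);
        long volatileNs = System.nanoTime() - s1;
        long s2 = System.nanoTime(); t.readSynchronized(n);
        long lockedNs = System.nanoTime() - s2;
        System.out.println("volatile ns=" + volatileNs + ", synchronized ns=" + lockedNs);
    }
}
```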

@charlesbr1
Contributor Author

A context switch may happen at any time because of interrupts and other potential JVM GC stop-the-world barriers. And even without these events, the value may be updated between two reads.

On 22 Oct 2015 at 11:16, Guido Medina notifications@github.com wrote:

You are wrong, a lock has the potential of causing a context switch while a volatile doesn't; a lock is at least twice as expensive as its volatile counterpart, especially for only reading the value.

@charlesbr1
Contributor Author

Oh yes, I agree


@guidomedina

A context switch might happen for many reasons, especially those caused by the CPU scheduler, but why should we add to them?

If we can avoid one by coding properly, we should. I don't mean premature optimization, but sometimes choosing atomic modifiers over synchronization is obvious and trivial, and in this particular case it is, except for the part about caching the value in every method that uses it (final Responder responder = this.responder).

@guidomedina

Try -XX:+UseG1GC to get a very consistent GC; you will see a big difference and won't need a page-long list of CMS parameters.

@charlesbr1
Contributor Author

G1 and CMS are bad for low latency. The best we can have on HotSpot is the parallel GC, with only minor collections.


@guidomedina

Do you know that for a fact? G1GC is both parallel and concurrent. It is an active GC with the lowest time spent in stop-the-world pauses. I have used it for a few years now, with Java 7 and up, and I think it is the best at avoiding pauses. Consistency > low latency, even for low-latency applications: if you have a chance of pausing for 1 or 2 seconds, that is worse than possibly adding 1 ms from time to time.

@guidomedina

By consistency I'm also referring to predictability. Other GC algorithms are unpredictable, making them bad for low latency, where your highest time has to be less than X ms; that's where G1GC excels.

@nitsanw

nitsanw commented Oct 22, 2015

@charlesbr1 @guidomedina G1/CMS/etc. all have stop-the-world young generation collections. The only collector with a concurrent young generation is C4, which is part of the Zing VM. It is in wide use in finance and other industries where predictable and low latencies are required.

@charlesbr1
Contributor Author

When you only have minor GCs, they may be very predictable depending on your application, something like one pause every n minutes.
The parallel GC is faster on minor collections; yes, this is a fact.
We have less than one second of pause per day. Anyway, the only way to solve big pause issues is not by tuning the GC but by writing garbage-free code.
Tuning the GC helps once the code has been cleaned up first.
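"Garbage-free code" in this context typically means reusing buffers instead of allocating per message, so that steady-state sends create nothing for the young generation to collect. A simplified illustration follows; it is not QuickFIX/J code, and the class and method names are invented for the sketch.

```java
// Allocation-free message assembly: a StringBuilder owned by the
// (single-threaded) session is reset and reused rather than replaced,
// so the backing char[] stays allocated across messages.
public class ReusableMessageBuffer {
    private final StringBuilder buf = new StringBuilder(512);

    // Append a FIX-style tag=value pair followed by the SOH delimiter.
    ReusableMessageBuffer field(int tag, String value) {
        buf.append(tag).append('=').append(value).append('\u0001');
        return this;
    }

    int length() { return buf.length(); }

    // Caller copies the chars out (e.g. into a pooled byte[]) and then resets.
    void reset() { buf.setLength(0); }    // keeps the backing array allocated

    @Override public String toString() { return buf.toString(); }
}
```

The point is that `reset()` only rewinds the length; no new buffer is created per message, which is exactly the kind of allocation discipline that keeps minor GCs rare and predictable.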


@charlesbr1
Contributor Author

@nitsanw: Sure, I said I was talking about HotSpot. By the way, I love your work ;)


@charlesbr1
Contributor Author

@guidomedina: I didn't have the code in front of me, but you're changing the original external behaviour this way: your logged message will be different when the responder is null.
The change already alters behaviour in that this log entry will appear after potential sub-layer log entries, if any. That may be confusing, but I think it's acceptable. Nevertheless, I usually avoid modifying external behaviour as much as possible, so it's preferable to keep the information that the responder was null, as before.

better testing != null
@charlesbr1
Contributor Author

@guidomedina I wasn't clear on my cellphone; checking != null is better, I agree. I updated.
