Note about performance vs scalability
Avoid the word "performance", because non-blocking I/O is about "scalability". Those terms are often used interchangeably, but they shouldn't be.
This makes sense. But doesn't your application perform poorly when it blocks unintentionally? When I don't know about non-blocking I/O, performance is the first place I look.
I can see that ultimately scalability is the right word, but until I know better I don't realize that I need to be considering scalability when I'm just trying to get a second web request to respond in a reasonable amount of time, and I'll glaze right over anything talking about scalability. In general, people don't consider that they need to start scaling their program after 1 test case, or even after 10 or 100. I generally assume that scalability is a concept reserved for the enterprise, and that my development or production run of even 10 to 20 users doesn't need to scale yet.
I recall you mentioning something similar to me a year ago when my production app for 10-20 users failed and you talked about scalability. It makes perfect sense now, but at the time I didn't know how to deal with it. You even gave me the straight-up answer (use -w large -c 1), but I didn't understand it, and I must have read that message about 100 times. When you said to set concurrency to 1, I kept thinking that I needed my app to support concurrency beyond 1, because I need multiple people to access it at the same time. I kind of understood that this is what the extra workers were for, but -- more on this later -- more processes is not generally what one wants to see. :) Now I realize that setting concurrency to 1 means each process handles only one concurrent connection, and subsequent connections get picked up by other available processes.
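For what it's worth, here is a minimal sketch of what that advice boils down to in practice, assuming the hypnotoad/prefork servers; the port and worker count are purely illustrative, not a recommendation:

```perl
# Sketch of the "-w large -c 1" idea as a hypnotoad config
# (numbers below are made up for illustration).
use Mojolicious::Lite;

app->config(hypnotoad => {
  listen  => ['http://*:8080'],
  workers => 10,  # "large": plenty of worker processes
  clients => 1,   # each worker accepts only one concurrent connection
});

get '/' => sub { shift->render(text => 'Hello!') };

# Roughly the same thing on the command line with the prefork server:
#   ./myapp.pl prefork -w 10 -c 1
app->start;
```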
The only thing I had to compare it to was Apache, and behind the scenes Apache would crank out additional processes if needed. I hardly knew -- if at all -- that it was doing so, let alone why.
As a sysadmin, with most network services that I support, I'm used to the documentation talking about scalability in terms of 100s or 1000s of users, not just 10. 1 ntpd process, 1 dhcpd process, 1 cupsd process... one of each has been serving 100s of users well for me for years. I generally freak out if I see a dozen or more copies of the same process running (the only exception I regularly encounter is samba) and assume something must be wrong, so it went against every good sense I had to intentionally crank up the number of processes. And of course, if my mojo app didn't block, it too would serve 100s of users well with just a single process.
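To make that blocking vs. non-blocking difference concrete, here is a rough sketch; the routes and upstream URL are made up, and the promise-based get_p assumes a reasonably recent Mojolicious:

```perl
# Contrast between a blocking and a non-blocking handler in a
# Mojolicious::Lite app.
use Mojolicious::Lite;

# Blocking: the worker can serve nobody else while it waits upstream.
get '/blocking' => sub {
  my $c   = shift;
  my $res = $c->ua->get('https://example.com')->result;
  $c->render(text => 'Upstream said: ' . $res->code);
};

# Non-blocking: the handler returns to the event loop right away, so a
# single process keeps serving other connections while it waits.
get '/non-blocking' => sub {
  my $c = shift;
  $c->render_later;
  $c->ua->get_p('https://example.com')->then(sub {
    my $tx = shift;
    $c->render(text => 'Upstream said: ' . $tx->result->code);
  })->catch(sub {
    my $err = shift;
    $c->render(text => "Something went wrong: $err", status => 500);
  });
};

app->start;
```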
In the end, all I could see was that my app wasn't performing well. 10 people in a room were getting timeouts. I'm not Google over here; scalability is a word reserved for folks like them, or so I thought at the time...
Just my two cents on perspective, and on taking into consideration who will be looking for this kind of information: people who don't yet have the right information. :)