[Bug] Inaccurate response time measurement #668
I'm currently using Insomnia to debug a performance issue in my API, and noticed that the response time measurement is quite inaccurate. In particular, it appears that some kind of rounding on the order of ~15 ms is performed, and additionally that the response time is sometimes missing altogether (which may or may not be related).

My observations are as follows:

- When pinging the server, the roundtrip to the server is stable at 11-12 ms, with the occasional arbitrarily slower result. This seems reasonable.
- When repeatedly calling the endpoint through Chrome, the response time is stable around 16-17 ms, with the occasional arbitrarily slower result. This also seems reasonable, considering the additional overhead of HTTPS, the webserver, the API code, ...
- When repeatedly calling the endpoint through Insomnia, the response time is stable around 15-16 ms, but is occasionally either 0 ms or 31 ms. It is never 1-14 ms, and never 17-30 ms. This does not seem reasonable: 0 ms should be impossible given the 11-12 ms roundtrip to the server, and considering the 31 ms results as well, it seems like there is some kind of rounding on the order of ~15 ms going on.

Comments
Hmm, that's quite strange. Insomnia does do some basic rounding, but I can't seem to reproduce any issues on my end. I just tried with a simple Node.js server and can consistently see results between 1 and 5 milliseconds. Also, you can see the unrounded number by mousing over the response time. To get the time, Insomnia queries libcurl for the total time of the request.
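For reference, a reproduction server of the kind described might look like the following minimal sketch (the port and response body here are arbitrary choices, not taken from the thread):

```js
// Minimal Node.js server for timing experiments: it responds immediately,
// so measured response times are dominated by transport overhead rather
// than handler work.
const http = require("http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok");
});

// Port 3000 is an arbitrary choice for this sketch.
server.listen(3000, () => {
  console.log("listening on http://localhost:3000");
});
```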
@gschier Interesting. Upon hovering, I see the values 0.000 ms, 15.000 ms, 16.000 ms and 31.000 ms, respectively. This seems to further indicate some kind of rounding/imprecision going on. Regarding curl, this is what the Timeline tab says: [output not preserved]. The machine is running Windows 7 x64.
@gschier I think I found the culprit. Curl appears to use `GetTickCount` for its timestamps on Windows, and `GetTickCount` is limited to the resolution of the system timer, typically around 10-16 ms.

To obtain more precise timestamps, one is supposed to use `QueryPerformanceCounter` instead.

So this might be a limitation in curl. It seems like there was a discussion about this in curl-land at some point, but no change to the timing method in the end: http://curl-library.cool.haxx.narkive.com/J3gzPyvE/gettickcount-vs-queryperformancecounter
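The quantization can be made visible from JS-land with a small probe along these lines (a sketch, not from the thread; `GetTickCount` itself is a Win32 call, so this only shows whether a given machine's JS clocks are similarly coarse):

```js
// Probe the effective resolution of a clock by sampling it in a tight loop
// and recording the size of the jumps between distinct readings. On a system
// whose clock behaves like GetTickCount, the jumps cluster around ~15-16 ms;
// a high-resolution source shows sub-millisecond jumps.
function probeResolution(now, samples = 20) {
  const jumps = [];
  let last = now();
  while (jumps.length < samples) {
    const t = now();
    if (t !== last) {
      jumps.push(t - last);
      last = t;
    }
  }
  return jumps;
}

const { performance } = require("perf_hooks");
console.log("Date.now() jumps (ms):", probeResolution(() => Date.now()));
console.log("performance.now() jumps (ms):", probeResolution(() => performance.now()));
```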
Nice find! Also worth noting that … [rest of the comment not preserved]
Going to close this for now, as it seems like there is nothing we can do, unfortunately.
@gschier :( What about measuring the time in Insomnia instead of using the timing information from curl? Unless the overhead of executing curl is in the same ballpark as the imprecision of `GetTickCount`?
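For what it's worth, measuring in JS-land might look like the following sketch, using Node's monotonic high-resolution timer around a plain HTTP request (the URL is a placeholder, and this is not how Insomnia actually issues requests):

```js
// Hypothetical JS-land timing: wrap a request with process.hrtime.bigint(),
// which is monotonic and nanosecond-based, so it is not subject to the
// ~16 ms system-timer ticks discussed above.
const http = require("http");

function timedGet(url) {
  const start = process.hrtime.bigint();
  return new Promise((resolve, reject) => {
    http.get(url, (res) => {
      res.resume(); // drain the body; only the timing matters here
      res.on("end", () => {
        const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
        resolve({ status: res.statusCode, elapsedMs });
      });
    }).on("error", reject);
  });
}

timedGet("http://localhost:3000").then(({ status, elapsedMs }) => {
  console.log(`HTTP ${status} in ${elapsedMs.toFixed(3)} ms`);
});
```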
Looks like JS timing suffers from the same pitfalls on Windows: nodejs/node-v0.x-archive#5327. I also did a quick test with JS-land timing (on Mac) and found that it could be up to five milliseconds different from the number curl gave. In my opinion, that is not accurate enough to justify the switch.
@gschier Thanks for looking into this. I checked both issues.

I can't reproduce the first issue you linked. It's about Windows XP anyway, and Insomnia only supports Windows 7+, right? A lot has happened in the Windows world between XP and 7, so I personally wouldn't be concerned about it.

Regarding the second issue: it seems to be about an inaccurate system clock in a virtual machine, or something like that. I'm not sure whether it is actually relevant here.

I guess the 0-5 ms difference you are seeing compared to curl might be the overhead of executing curl itself? I would personally prefer that this overhead be included in the timing, rather than the timing being rounded to ~15 ms, and sometimes even rounded down to the impossible value of 0 ms.
It was unfair of me to close this issue so early, so I've reopened it to continue the discussion. Here are my thoughts on the issue at this time:

I just tested the accuracy of curl's timing on a Windows install and can confirm results of either 0 ms or 15 ms. This means that the response time is inaccurate by an average of 7.5 milliseconds.

Now, if we move timing to JS-land, the value may be more precise but not necessarily more accurate. From my earlier brief experiment, I saw up to 5 ms difference between curl's value and the JS-land one. Note that this simple test was only performed on my machine for a few dozen requests; there's no way of knowing whether this would be more or less accurate on different hardware/OS configurations. In addition, since this is a Windows-only problem, maintaining a Windows-only JS-land timer adds technical debt.

I understand that this is a problem for you and other Windows users, and I wish there were a good solution. However, it just doesn't seem like the pros outweigh the cons.
@gschier Thanks for letting us have a discussion about this! Good point about the 7.5 ms, but the worst case of ~15 ms is worth keeping in mind too, I think.

I can't speak much about your experiment, but I can provide a few more thoughts on why I think this is not an insignificant issue:

With the prominent display of the response time in the GUI, in the ms unit, I think it is reasonable to assume that some Insomnia users (besides me) will use these timings for performance measurements of their API/webserver/... If a response takes, for example, 500 ms, then I don't particularly care about Insomnia measuring up to ~15 ms / ~3% off. But if a response takes, for example, 30 ms, then I do care that Insomnia is ~15 ms / ~50% off. And response times on the web are generally pretty fast these days...

Then, of course, there is the corner case of 0 ms, which by itself seems quite buggy. It had me do a "WTF" when I first saw it, at least. So a fix would definitely be appreciated.

Regarding your concern about fixing this with Windows-specific code: switching to Node timing for all platforms would be one option, and it would not require maintaining Windows-specific code. As you mentioned yourself, it only seems to change the general measurement accuracy by a few ms. Even if you decide to only do it for Windows, I think it is quite reasonable to maintain a few lines of Node timing code in order to fix this issue. But Insomnia is not my project, of course, so that is up to you :).
Just adding another thought: I personally spent quite a bit of time debugging my API code to track down the perceived issue of the response time sometimes doubling from ~15 ms to ~31 ms. I thought there was some kind of bug with a timer/queue/lock that caused responses to only be answered every 15 ms. I would never have suspected my measurement tool (Insomnia) to be at fault had I not run into the 0 ms response time.

If you decide not to fix this issue, perhaps you can at least add a disclaimer to the timer in the GUI indicating that the measurement is not done with 1 ms accuracy, as one could easily expect when the time is given in ms.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Errmmm. That was quite annoying, bot. @gschier I can't reopen this :/. Can you? |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Still no thanks, bot. (@gschier FYI, this is the last time I will bother trying to keep this bot from closing the issues I have submitted, and the last time I intend to submit new ones, for that matter. I don't want to waste any more of my time babysitting them because of this bot...)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Just to chime in, my personal use case is measuring performance of a localhost web server. Having a timer resolution in chunks of 16ms is not sufficient for those purposes, and makes it difficult to tell what is and isn't an improvement. The delta is really what I'm looking for. |
Of all things, this has got to be something that we get as right as possible if we're going to show it. I'd consider this a core competency of the tool.

I spent some time in this area of the codebase recently, and I'd like to reopen discussion on what the problem is and what possible solutions might look like. We can use this issue for that. I'd also like to hear any ideas on how we could test whatever solution we come up with (using jest).

Maybe after some "on-the-same-page"-getting, we can figure out when/where/how to prioritize this issue.
@dimitropoulos Thanks a lot for reviving this. As I see it, regardless of whether the timing ends up coming from curl or from JS-land, the important thing is that the ~16 ms rounding, and with it the impossible 0 ms readings, goes away.

Not sure how your testing is done and such, but you would probably have to test on a Windows box with the OS timer resolution set to something "high" (like the problematic ~16 ms mentioned above). Testing a super-fast call should then report 1-10 ms or something like that, and not 0 ms or > 10 ms. But maybe this is overkill, and you could just implement the fix and test it manually once :).
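A jest sketch along the lines suggested above might look like this; `measureOnce` is a local stand-in for whatever timing code path ends up being tested, not an actual Insomnia export:

```js
// Hypothetical jest test: spin up a server that responds instantly, measure
// a batch of requests, and assert the results are neither 0 ms nor quantized
// to ~15/16 ms steps. A real test would exercise Insomnia's own code path.
const http = require("http");

function measureOnce(url) {
  const start = process.hrtime.bigint();
  return new Promise((resolve, reject) => {
    http.get(url, (res) => {
      res.resume();
      res.on("end", () =>
        resolve(Number(process.hrtime.bigint() - start) / 1e6));
    }).on("error", reject);
  });
}

describe("response time measurement", () => {
  let server;

  beforeAll((done) => {
    server = http
      .createServer((req, res) => res.end("ok"))
      .listen(3000, done);
  });

  afterAll((done) => {
    server.close(done);
  });

  it("never reports 0 ms and is not stuck on ~16 ms steps", async () => {
    const samples = [];
    for (let i = 0; i < 20; i++) {
      samples.push(await measureOnce("http://localhost:3000"));
    }
    expect(samples.every((t) => t > 0)).toBe(true); // 0 ms is impossible
    expect(samples.some((t) => t < 14)).toBe(true); // not rounded up to ~15 ms
  });
});
```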