Feature Request: Ping measurements also during load #142

Open
zR-JB opened this issue Jul 26, 2024 · 6 comments

@zR-JB commented Jul 26, 2024

Maybe never stop the ping measurement and let it run continuously during the upload and download phases as well. Then show a simple graph, or just the average latency for each load scenario. With this, one could investigate bufferbloat issues very easily!

Even Ookla's speedtest.net has had this feature for quite some time now:
(screenshot: speedtest.net showing latency measured during download and upload)
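A minimal sketch of what this could look like on the client side, assuming a hypothetical WebSocket echo endpoint at /ping (not part of the current OpenSpeedTest code): keep one ping loop running across all phases and tag each sample with the phase that was active when it was sent.

```js
// Sketch only: continuous latency sampling while the speed test runs.
// Assumes a WebSocket echo endpoint at /ping that returns each message unchanged.
const samples = { idle: [], download: [], upload: [] };
let phase = 'idle'; // the test runner would set this to 'download' / 'upload'

const ws = new WebSocket(`wss://${location.host}/ping`);

ws.onopen = () => {
  setInterval(() => {
    // Embed the send timestamp and the current phase in the message.
    ws.send(JSON.stringify({ t: performance.now(), phase }));
  }, 100); // keep sampling every 100 ms, also during the transfer phases
};

ws.onmessage = (event) => {
  const { t, phase: p } = JSON.parse(event.data);
  samples[p].push(performance.now() - t); // round-trip time in ms
};

// Called after the test: average latency per load scenario for the result screen.
function averages() {
  const avg = (list) => list.reduce((a, b) => a + b, 0) / (list.length || 1);
  return { idle: avg(samples.idle), download: avg(samples.download), upload: avg(samples.upload) };
}
```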

@openspeedtest (Owner)

I'll address this in a future version, but there are known limitations, as outlined in #33. Otherwise, we'd need to overhaul the entire setup and potentially use something like WebTransport. I'll seriously consider this for the next major rewrite, which will support multiple protocols.
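For reference, a datagram-based ping over WebTransport could look roughly like the sketch below. The endpoint URL is a placeholder, it needs an HTTP/3 server that echoes datagrams (nothing like that exists in the current codebase), and browser support is currently limited to Chromium-based browsers.

```js
// Sketch only: RTT over WebTransport datagrams (Chromium-only at the moment).
async function webTransportPing(url /* e.g. 'https://host:4433/echo', placeholder */) {
  const transport = new WebTransport(url);
  await transport.ready;

  const writer = transport.datagrams.writable.getWriter();
  const reader = transport.datagrams.readable.getReader();

  const start = performance.now();
  await writer.write(new TextEncoder().encode('ping'));

  // Datagrams are unreliable; a real implementation would add sequence
  // numbers and a timeout instead of waiting for a single echo.
  await reader.read();
  const rtt = performance.now() - start;

  transport.close();
  return rtt; // milliseconds
}
```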

@zR-JB (Author) commented Jul 26, 2024

Looking at what is transferred during a speedtest.net test, I noticed that the downloads and uploads also use XHR, while the ping measurements are done via a WebSocket, so I think you are right.

(screenshot: network traffic captured during a speedtest.net run)

@zR-JB (Author) commented Jul 26, 2024

So I did a bit more testing and made a small POC with a simple node.js echo WebSocket server and performance.now() on the client side. It turns out Firefox by default also only provides millisecond resolution, even with performance.now(); Chromium-based browsers at least offer 100 μs. I think there is still a bit too much overhead for extremely accurate results.

websocket_latency_test.zip
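For anyone who doesn't want to open the zip, the POC boils down to something like the following sketch (assuming the widely used ws npm package; the files in the archive may differ in detail).

```js
// server.js -- minimal WebSocket echo server (npm install ws)
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', (socket) => {
  // Echo every message straight back, preserving text/binary framing.
  socket.on('message', (data, isBinary) => socket.send(data, { binary: isBinary }));
});
```

```js
// client side (browser): round-trip time via performance.now()
const ws = new WebSocket('ws://localhost:8080');
ws.onopen = () => ws.send(performance.now().toString());
ws.onmessage = (event) => {
  const rtt = performance.now() - parseFloat(event.data);
  // Firefox rounds performance.now() to 1 ms by default,
  // Chromium-based browsers to roughly 0.1 ms.
  console.log('RTT:', rtt.toFixed(3), 'ms');
};
```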

@zR-JB (Author) commented Jul 26, 2024

So here is what I found:

  1. The browser WebSocket implementation adds around 0.1 ms of latency compared to a native Rust WebSocket client when running on the same machine.
  2. Using a Rust WebSocket server instead of a node.js server only saves about 0.02 ms.
  3. Between two machines on the same network, with native Rust <-> Rust WebSockets, I get around 0.3 ms (between 0.24 and 0.4 ms).
  4. With the same setup, but the browser WebSocket client talking to the Rust WebSocket server, I get around 0.9 ms.
  5. The real latency via ICMP in this scenario is about 0.2 ms.
  6. So this is not ideal, but I am not sure any other protocol can do much better; one way to make the per-sample overhead less visible in the results is sketched after this list.
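This is not from the POC, just a common mitigation: take many samples and report the minimum (or a low percentile) as the idle latency and a median/p95 while under load, so the constant browser-side overhead and the occasional outlier don't dominate the reported numbers.

```js
// Sketch only: summary statistics over collected RTT samples (milliseconds).
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const percentile = (p) =>
    sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  return {
    min: sorted[0],          // closest to the "real" idle latency
    median: percentile(0.5), // robust against outliers under load
    p95: percentile(0.95),   // captures bufferbloat spikes
  };
}

// Example: summarize([0.9, 1.1, 0.8, 5.2, 0.9])
//   -> { min: 0.8, median: 0.9, p95: 5.2 }
```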

@openspeedtest (Owner)

When you add browser extensions, web filters, antivirus software, and other busy tabs to the mix, you'll see a much higher RTT.

@zR-JB (Author) commented Jul 26, 2024

Yes, I think a WebSocket approach to measuring latency, similar to the one shown below, is probably still the best compromise:

(image)
