
fix(enginenetx): refine the happy-eyeballs algorithm #1296

Merged 2 commits on Sep 22, 2023

Conversation

bassosimone (Contributor) commented Sep 22, 2023

We want to pack attempts in parallel, which we also did before when the interval between attempts was linear.

We need to take into account possible congestion, so we should push back exponentially, even though the common case for us is probably censorship (but it is better to do the right thing anyway).

So, let's scale exponentially until we reach 30s. After that, it's fine to keep attempts evenly spaced, because 30s is definitely a huge interval if we're reasoning in internet time.

Also, change the base value used for TLS handshaking to be 900ms rather than 300ms, because a TLS handshake is ~3 round trips.

Part of ooni/probe#2531

@bassosimone bassosimone merged commit 7b5806f into master Sep 22, 2023
@bassosimone bassosimone deleted the issue/2531-small branch September 22, 2023 12:16
Murphy-OrangeMud pushed a commit to Murphy-OrangeMud/probe-cli that referenced this pull request Feb 13, 2024