
RethinkDNS Plus quite often switches back to RethinkDNS Basic (default) #170

Closed
mhusar opened this issue Dec 20, 2020 · 3 comments

Comments

@mhusar

mhusar commented Dec 20, 2020

This is quite similar to issue #140, but somewhat different. I became a RethinkDNS user today and configured RethinkDNS Plus with the Privacy Lite and Aggressive presets (19 block lists). It works for a while, but after at most an hour the app switches back to RethinkDNS Basic.

I’m always connected to one of your Cloudflare servers in Frankfurt, Germany (162.158.83.186). With both configurations I see both low latencies and very high latencies (up to 735 ms), which could be high server load or something else. I don’t have any issues with my internet connection or high latencies on any website; everything works just fine.

A possible solution for me as a user would be the ability to set RethinkDNS Plus as the default DNS configuration. This would prevent RethinkDNS Basic from being used as a fallback.

Apart from this problem, RethinkDNS works really well. A great piece of software. Just one last question: which block lists does RethinkDNS Basic use? I’m not able to see which ones are in use on the configure page at bravedns.com.

Thanks,
Marcus

@ignoramous
Collaborator

ignoramous commented Dec 20, 2020

I haven't personally seen this (a switch to the default DoH from another DoH), but I'll give the responsible code a cursory look to see if we did something stupid there.

Re: user-settable default server: yes, we will add this at some point.

Re: occasional latency jumps: multiple things are at play.

  1. Long timeouts. Today, the timeout for a DNS request is set to 30s. That is too high, and it means we wait much longer than necessary for answers (a minimal sketch of a tighter timeout follows after this list).

  2. There is some "setup" work (mostly around blocklists) that needs to happen on the server. Our servers don't have disks, only ephemeral storage, so we may have to download blocklists into the "sandbox" if they aren't already in RAM. Unfortunately, that setup is expensive (100ms to 500ms) and can cause absurdly high latencies if a request hits the server at the most inopportune time. This isn't an edge case; it shouldn't be too frequent, but it could very well happen with regular cadence (see the second sketch after this list). We are looking for ways to minimize this impact and are continually improving. The server-side resolver code, which is super messy, will be open-sourced soon.

  3. Per our tests, uncached domains (domains the resolver hasn't seen in the recent past) do take 200ms to 700ms end to end to resolve, so this wouldn't be surprising. For example, when running a test on dnsleaktest.com, you'd observe that the queries take longer, because dnsleaktest generates cache-busting DNS queries that have to be recursively resolved, which takes more time than usual.
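
To make point 1 concrete, here's a minimal sketch of a DoH call with a much tighter timeout, assuming an OkHttp-based client; the endpoint URL and the timeout values are placeholders for illustration, not the app's actual configuration:

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import java.util.concurrent.TimeUnit

// Illustrative only: a DoH client whose per-call timeout is capped well below 30s.
val dohClient: OkHttpClient = OkHttpClient.Builder()
    .connectTimeout(3, TimeUnit.SECONDS) // budget for TCP + TLS setup
    .callTimeout(5, TimeUnit.SECONDS)    // hard cap for the entire request
    .build()

// Sends a wire-format DNS query (RFC 8484 POST) and returns the raw answer, or null on failure.
fun queryDoh(wireFormatQuery: ByteArray): ByteArray? {
    val request = Request.Builder()
        .url("https://doh.example/dns-query") // placeholder endpoint
        .post(wireFormatQuery.toRequestBody("application/dns-message".toMediaType()))
        .build()
    return dohClient.newCall(request).execute().use { resp ->
        if (resp.isSuccessful) resp.body?.bytes() else null
    }
}
```

With a cap like this, a slow upstream fails fast and the query can be retried or answered elsewhere instead of hanging for 30s.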
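
And for point 2, a rough sketch of the kind of lazy, in-memory blocklist setup described above. This is illustrative Kotlin rather than the (not yet open-sourced) server code, and the blocklist URL is a placeholder:

```kotlin
import java.net.URL

// Illustrative only: an in-memory blocklist that is downloaded on first use,
// mimicking a server with ephemeral storage and no local disk.
object BlocklistCache {
    @Volatile
    private var blocked: Set<String>? = null

    fun isBlocked(domain: String): Boolean = load().contains(domain.lowercase())

    private fun load(): Set<String> {
        // Double-checked locking: concurrent requests trigger at most one download.
        blocked?.let { return it }
        synchronized(this) {
            blocked?.let { return it }
            // The expensive cold-start step: pull the blocklist into RAM.
            val fetched = URL("https://blocklists.example/compiled.txt") // placeholder URL
                .readText()
                .lineSequence()
                .map { it.trim().lowercase() }
                .filter { it.isNotEmpty() }
                .toSet()
            blocked = fetched
            return fetched
        }
    }
}
```

Whichever request first triggers the load pays the one-time setup cost; every request after that hits the in-memory set.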

I hope that was clear enough.

Thanks for your bug report. Let me know if you have any more questions or suggestions! :)

@mhusar
Author

mhusar commented Dec 21, 2020

Yesterday I could reproduce this problem easily, but since late yesterday evening I haven’t seen a single occurrence. At first I thought disabling on-device logging might have helped, but after I reactivated logging a few hours ago I still couldn’t see any problem. Latency was low all day, with just a few exceptions. It could be case 2 or 3 or something else; nothing to worry about.

It’s possible that I had some minor packet loss I didn’t notice. Since Germany is in lockdown and most people were at home using the internet on a Sunday, this could be the case. Maybe I can reproduce the problem tonight. During Christmas or next weekend I could be “lucky”, too.

@ignoramous
Collaborator

The substantial refactor in #282 has likely fixed this.
