
Very slow and increasing "Average upstream response time" #6818

Closed

netizeni opened this issue Mar 12, 2024 · 132 comments · Fixed by AdguardTeam/urlfilter#46

Comments

@netizeni

Prerequisites

Platform (OS and CPU architecture)

Custom (please mention in the description)

Installation

Other (please mention in the description)

Setup

On one machine

AdGuard Home version

v0.107.45

Action

I used to use DoH with various DNS services and recently noticed it takes quite a while to load websites, so I decided to switch back to plain old DNS, hoping to speed things up, but that didn't happen. Once an upstream DNS server is added, its average response time starts increasing to more than 10x.

On the same machine where AGH is installed, running the dnsperftest script multiple times a day returns more or less consistent results:

                     test1   test2   test3   test4   test5   test6   test7   test8   test9   test10  Average
76.76.2.41           35 ms   35 ms   35 ms   39 ms   35 ms   39 ms   39 ms   47 ms   39 ms   35 ms     37.80  //from resolv.conf
9.9.9.9              3 ms    3 ms    47 ms   3 ms    11 ms   3 ms    7 ms    3 ms    3 ms    11 ms     9.40  //from resolv.conf
quad9                3 ms    3 ms    7 ms    3 ms    3 ms    3 ms    3 ms    3 ms    7 ms    11 ms     4.60
google               11 ms   11 ms   27 ms   11 ms   7 ms    27 ms   15 ms   59 ms   11 ms   27 ms     20.60
norton               31 ms   27 ms   31 ms   27 ms   27 ms   27 ms   27 ms   27 ms   27 ms   23 ms     27.40
neustar              31 ms   35 ms   31 ms   31 ms   35 ms   35 ms   35 ms   31 ms   31 ms   35 ms     33.00
level3               27 ms   31 ms   31 ms   63 ms   31 ms   51 ms   31 ms   31 ms   27 ms   31 ms     35.40
cleanbrowsing        35 ms   35 ms   39 ms   35 ms   35 ms   39 ms   39 ms   39 ms   35 ms   43 ms     37.40
nextdns              39 ms   39 ms   39 ms   35 ms   35 ms   39 ms   35 ms   39 ms   39 ms   39 ms     37.80
opendns              35 ms   35 ms   39 ms   39 ms   39 ms   35 ms   35 ms   51 ms   35 ms   35 ms     37.80
comodo               39 ms   35 ms   39 ms   35 ms   39 ms   35 ms   43 ms   39 ms   39 ms   39 ms     38.20
freenom              35 ms   31 ms   63 ms   31 ms   75 ms   27 ms   31 ms   83 ms   35 ms   83 ms     49.40
yandex               71 ms   67 ms   71 ms   67 ms   71 ms   71 ms   71 ms   71 ms   71 ms   67 ms     69.80
adguard              155 ms  127 ms  139 ms  175 ms  119 ms  139 ms  123 ms  155 ms  131 ms  135 ms    139.80
cloudflare           1000 ms 1000 ms 1000 ms 1000 ms 1000 ms 1000 ms 1000 ms 1000 ms 1000 ms 1000 ms   1000.00
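
For reference, the per-resolver timing above can be reproduced with a plain dig loop as well (a minimal sketch; example.com is just a placeholder test domain and the IPs are only examples):

for ip in 76.76.2.41 9.9.9.9 1.1.1.1; do
  echo -n "$ip: "
  dig @"$ip" example.com +tries=1 +timeout=2 | awk '/Query time/ {print $4 " ms"}'
done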

While the current "Average upstream response time" in AGH looks like this (and keeps increasing progressively):

(screenshot)

When I'm using a VPN and its DNS, websites load noticeably faster. Is there something to change in the AdGuard Home DNS settings shown below that would hopefully speed up the response time?

dns:
  bind_hosts:
    - 0.0.0.0
  port: 53
  anonymize_client_ip: false
  ratelimit: 150
  ratelimit_subnet_len_ipv4: 24
  ratelimit_subnet_len_ipv6: 56
  ratelimit_whitelist: []
  refuse_any: true
  upstream_dns:
    - 76.76.2.41
    - 76.76.2.32
    - 193.110.81.0
    - 9.9.9.9
  upstream_dns_file: ""
  bootstrap_dns:
    - 76.76.10.32
    - 76.76.10.41
  fallback_dns:
    - 9.9.9.9
  upstream_mode: load_balance
  fastest_timeout: 1s
  allowed_clients: []
  disallowed_clients: []
  blocked_hosts:
    - version.bind
    - id.server
    - hostname.bind
  trusted_proxies:
    - 127.0.0.0/8
    - ::1/128
  cache_size: 134217728
  cache_ttl_min: 0
  cache_ttl_max: 0
  cache_optimistic: true
  bogus_nxdomain: []
  aaaa_disabled: false
  enable_dnssec: false
  edns_client_subnet:
    custom_ip: ""
    enabled: false
    use_custom: false
  max_goroutines: 300
  handle_ddr: true
  ipset: []
  ipset_file: ""
  bootstrap_prefer_ipv6: false
  upstream_timeout: 10s
  private_networks: []
  use_private_ptr_resolvers: true
  local_ptr_upstreams: []
  use_dns64: false
  dns64_prefixes: []
  serve_http3: false
  use_http3_upstreams: false
  serve_plain_dns: true

Expected result

Lower "Average upstream response time" over time and faster responses.

Actual result

"Average upstream response time" getting increased over time. Websites take quite a while to load.

Additional information and/or screenshots

AdGuard Home is installed on an RPi 3B+ running DietPi (Debian-based).

@bobloadmire

bobloadmire commented Mar 16, 2024

I have this exact same issue, especially with Cloudflare. I had to remove 1.1.1.1 completely, but I'm still having issues using only Google and OpenDNS. Running on an RPi 4.

@whyisthisbroken

whyisthisbroken commented Mar 16, 2024

I use unbound and dnscrypt-proxy upstream to the Quad9 resolver,
but maybe you can test the following changes:

ratelimit: 0
refuse_any: false
upstream_mode: parallel
fastest_timeout: 1s
enable_dnssec: true
max_goroutines: 500
handle_ddr: true
upstream_timeout: 2s

Optional:
bootstrap_prefer_ipv6: true

I don't know ControlD and its service, but maybe you could test without it.
Your ControlD settings use OISD and Hagezi's blocklist - do you also have these lists in AGH?
The third DNS service is dns0 - the European NextDNS sibling, also with filter lists...

Everything is doubled up. That's not bad, but for testing we should use only one DNS service to isolate the problem.
Which filter lists do you use in AGH?

My results for the DNS Service you use:

root@HomeNetDNS:~# mtr -r -w -c4 193.110.81.0
Start: 2024-03-16T08:53:35+0100
HOST: HomeNetDNS                                Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- Fritzbox                                   0.0%     4    0.7   0.8   0.6   0.9   0.1
  2.|-- 100.124.1.76                               0.0%     4    8.8   8.7   6.4  12.0   2.4
  3.|-- 100.127.1.133                              0.0%     4    6.8   6.7   5.6   8.5   1.3
  4.|-- 100.127.1.132                              0.0%     4    6.2   6.1   4.4   7.7   1.3
  5.|-- 185.22.46.129                              0.0%     4    5.9  20.9   5.9  37.3  16.9
  6.|-- ae3-1337.bbr02.anx63.ams.nl.anexia-it.net  0.0%     4   18.4  17.4  15.8  19.4   1.8
  7.|-- ae1-10.bbr01.anx63.ams.nl.anexia-it.net    0.0%     4   16.3  16.1  15.0  17.7   1.2
  8.|-- dns0.eu                                    0.0%     4   14.4  15.6  14.4  17.3   1.3
root@HomeNetDNS:~# mtr -r -w -c4 76.76.2.41
Start: 2024-03-16T08:55:00+0100
HOST: HomeNetDNS    Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- Fritzbox       0.0%     4    0.7   0.8   0.7   0.9   0.1
  2.|-- 100.124.1.76   0.0%     4    6.8   7.2   5.3   8.5   1.5
  3.|-- 100.127.1.132  0.0%     4    6.4   7.0   5.5   8.4   1.3
  4.|-- 185.22.46.129  0.0%     4    8.3   8.1   6.4   9.2   1.2
  5.|-- ???           100.0     4    0.0   0.0   0.0   0.0   0.0
  6.|-- ???           100.0     4    0.0   0.0   0.0   0.0   0.0
  7.|-- 76.76.2.41     0.0%     4   12.1  12.5  11.1  13.6   1.1
root@HomeNetDNS:~# mtr -r -w -c4 76.76.2.32
Start: 2024-03-16T08:55:18+0100
HOST: HomeNetDNS    Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- Fritzbox       0.0%     4    0.9   0.9   0.8   1.2   0.2
  2.|-- 100.124.1.76   0.0%     4    7.6   8.5   7.0  10.3   1.5
  3.|-- 100.127.1.131  0.0%     4    5.5   6.7   4.8   9.4   2.0
  4.|-- 100.127.1.132  0.0%     4    4.6   6.8   4.6   9.2   2.0
  5.|-- 185.22.46.145  0.0%     4    8.2   7.9   6.3   9.7   1.4
  6.|-- ???           100.0     4    0.0   0.0   0.0   0.0   0.0
  7.|-- ???           100.0     4    0.0   0.0   0.0   0.0   0.0
  8.|-- 76.76.2.32     0.0%     4   11.6  10.9   9.1  12.7   1.6
root@HomeNetDNS:~# mtr -r -w -c4 9.9.9.9
Start: 2024-03-16T08:56:15+0100
HOST: HomeNetDNS                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- Fritzbox                      0.0%     4    1.1   0.9   0.6   1.1   0.3
  2.|-- 100.124.1.76                  0.0%     4    5.9   7.1   5.9   8.5   1.1
  3.|-- 100.127.1.133                 0.0%     4    5.4   5.8   4.3   7.1   1.2
  4.|-- 100.127.1.132                 0.0%     4    6.7   6.7   5.8   7.8   0.9
  5.|-- 185.22.46.145                 0.0%     4    8.4   7.2   5.7   8.4   1.1
  6.|-- as42.dusseldorf.megaport.com  0.0%     4    9.1  10.4   9.1  12.0   1.2
  7.|-- dns9.quad9.net                0.0%     4    9.5  11.0   9.5  12.4   1.2

@netizeni
Author

netizeni commented Mar 17, 2024

@whyisthisbroken I will check those settings, thanks. Do you use Quad9 DoH, and should upstream_mode: parallel speed things up a bit compared to load_balance?

Your ControlD settings use OISD and Hagezi's blacklist - do you also have these lists in AGH?
The third DNS service is dns0 – the nextdns eu brother, also with filter lists...

Yes, I do. Even though it might be redundant, I tested these upstream DNS servers with and without filter lists and the processing time was the same, so I decided to use the ones with filter lists anyway.

After setting blocking_mode: null_ip and leaving only ControlD and Quad9 for a couple of days (I removed dns0.eu), it seems the situation is a bit better.

Now, in General statistics, "Average processing time" is 8 ms, although "Average upstream response time" is still quite high:

76.76.2.41:53   220 ms
76.76.2.32:53   217 ms
9.9.9.9:53      196 ms

I assume this means most DNS queries are served from cache, hence the 8 ms, while querying the upstream, when necessary, gives the big numbers above?
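
A rough way to double-check that (a minimal sketch; the random label only forces a cache miss, and example.com / 9.9.9.9 are just placeholders):

dig @127.0.0.1 example.com | grep 'Query time'                  # answered by AGH, likely from cache
dig @127.0.0.1 "miss-$RANDOM.example.com" | grep 'Query time'   # forces AGH to ask the upstream
dig @9.9.9.9 example.com | grep 'Query time'                    # same upstream queried directly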

Also, one semi-related question. One client has an always-on VPN and all its traffic goes through it, yet AGH logs a query from that client to www.google.com (Type: A, Plain DNS) every minute or so. How is that possible?

@dMopp

dMopp commented Mar 18, 2024

Can confirm the issue. I have to use my local root resolver. 1.1.1.1 (and its IPv6 counterpart) is terribly slow from AdGuard.

@Morlince

I think I'm having the same issue. It's improved with some of the suggestions here, but something still seems off.

@dansseg

dansseg commented Mar 20, 2024

Same issue here. Restarting the Docker container helps for a couple of hours...
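
In case it saves someone a lookup, the restart itself is just the usual Docker command (assuming the container is named adguardhome):

docker restart adguardhome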

@netizeni
Author

It's definitely abnormal and constantly increasing. The latest results for "Average upstream response time":

76.76.2.41:53   1021 ms
76.76.2.32:53   1006 ms
9.9.9.9:53      750 ms

@Cebeerre

Cebeerre commented Mar 25, 2024

It's definitely abnormal and constantly increasing. The latest results for "Average upstream response time":

76.76.2.41:53   1021 ms
76.76.2.32:53   1006 ms
9.9.9.9:53      750 ms

Where are you located? Keep in mind that even though ControlD and Quad9 both use anycast, the peering used by your ISP can matter quite a lot...

Cebeerre added the "waiting for data" (Waiting for users to provide more data) label on Mar 25, 2024
@netizeni
Author

Where are you located ?

I'm in Europe and there are multiple Quad9 server locations near me, which the dnsperftest script confirms, as it shows 3 ms on average.

Unfortunately, with AGH that's not the case; since my last reply, the "Average upstream response time" has increased to:

76.76.2.32:53   5856 ms
9.9.9.9:53      5320 ms
76.76.2.41:53   4763 ms

@Cebeerre

You may have already done this, but could you run an extended test on https://www.dnsleaktest.com/ to check which locations are actually answering your requests?

@netizeni
Author

Sure.

176.58.88.155  |  lhr-h05.int.controld.com.  |  NetActuate
176.58.88.250  |  lhr-h04.int.controld.com.  |  NetActuate
66.185.117.242  |  res100.vie.rrdns.pch.net.  |  WoodyNet
66.185.117.243  |  res200.vie.rrdns.pch.net.  |  WoodyNet
66.185.117.244  |  res300.vie.rrdns.pch.net.  |  WoodyNet

Based on the domains of the last three, it seems they are in the same city. One more thing I noticed: in the past couple of days the average upstream response time decreased, but only slightly, as the numbers are still huge:

76.76.2.32:53   5145 ms
9.9.9.9:53      4596 ms
76.76.2.41:53   3775 ms

Meanwhile, "Average processing time" increased from 8-9 ms to 153 ms.

@dMopp

dMopp commented Mar 29, 2024

I have the issue with my ISP's DNS as well. Using my unbound instance (with the ISP DNS as its upstream) didn't have this issue, so it's an AdGuard thing introduced in some recent version.

@Akira46

Akira46 commented Apr 1, 2024

I also got the same problem

@netizeni
Author

netizeni commented Apr 2, 2024

@Cebeerre would you mind removing the "waiting for data" label and adding the "bug" label, please? All these replies besides mine imply it's definitely a bug.

@Cebeerre

Cebeerre commented Apr 2, 2024

Hi @netizeni

In order to tag an issue as a bug, reproducibility is key. The previous replies, apart from saying "me too", are not actually adding any additional information that might help.

This is my own AGH instance using the NS you provided after running for some hours:

(screenshot)

Have you tried clearing the statistics to see if maybe it was just a very bad connectivity period (AGH times out by default at 10 s)?

@dMopp

dMopp commented Apr 2, 2024

As mentioned, I can add the exact same DNS servers to my OPNsense instead of AGH and have no issue. It doesn't matter whether I use my ISP DNS or Cloudflare.

@Cebeerre

Cebeerre commented Apr 2, 2024

Have you tried to see whether there are specific queries that are making the average increase?

cat querylog.json | jq -r '(.QH + ":" + (.Elapsed | tostring))' | sort -t: -nrk2 | head -20
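
If it helps, Elapsed appears to be recorded in nanoseconds (an assumption based on the magnitudes seen in these logs), so a variant that prints milliseconds would be something like:

cat querylog.json | jq -r '"\(.QH): \(.Elapsed / 1000000) ms"' | sort -t: -k2 -nr | head -20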

@netizeni
Author

netizeni commented Apr 2, 2024

@Cebeerre I've done that a couple of times since opening this issue, and it starts with numbers like the ones you shared, but eventually increases to the ones I pasted.

@Cebeerre

Cebeerre commented Apr 2, 2024

@Cebeerre I've done that a couple of times since opening this issue, and it starts with numbers like the ones you shared, but eventually increases to the ones I pasted.

I've seen that you've set a cache size of 134 MB, which honestly looks like overkill... Could you please check how much RAM the AGH process is actually consuming right now? Do you have other stuff running on this RPi 3 besides AGH?
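
A quick way to check that (assuming the process is named AdGuardHome and procps is installed):

ps -o pid,rss,comm -C AdGuardHome   # RSS is the resident memory in kilobytes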

@dMopp

dMopp commented Apr 3, 2024

cat querylog.json | jq -r '(.QH + ":" + (.Elapsed | tostring))' | sort -t: -nrk2 | head -20

login.aliexpress.com:10027762671
a1931.dscgi3.akamai.net:10024301298
cdn.smoot.g.aaplimg.com:10024288682
a1931.dscgi3.akamai.net:10021771928
user.17track.net:840082388
35ne6z.tdum.alibaba.com:777572896
35ne6z.tdum.alibaba.com:566845144
www.17track.net:460536210
www.17track.net:456030279
res.17track.net:439784586
t.17track.net:423843345
res.17track.net:422288740
s.17track.net:417543098
t.17track.net:414760661
t.17track.net:407802627
s.17track.net:403705146
res.17track.net:401303256
video-cdn.aliexpress-media.com.queniuak.com:383318976
wvcfg.alicdn.com.danuoyi.tbcache.com:363343607
h5.m.taobao.com:294407903

But currently the numbers are OK with my ISP DNS... I will try 1.1.1.1.

@Cebeerre

Cebeerre commented Apr 4, 2024

login.aliexpress.com:10027762671
a1931.dscgi3.akamai.net:10024301298
cdn.smoot.g.aaplimg.com:10024288682
a1931.dscgi3.akamai.net:10021771928

Wow! All of these are over 10 seconds! Quite odd given that you're using a public resolver and these entries were probably in its cache already...

I use unbound as a recursive DNS upstream, which typically "takes more time", and this is what I get for login.aliexpress.com:
(screenshot)

@dMopp, what kind of hardware are you using? Am I right in assuming it's directly connected by Ethernet and you're not using Wi-Fi?

@dMopp

dMopp commented Apr 4, 2024

login.aliexpress.com:10027762671
a1931.dscgi3.akamai.net:10024301298
cdn.smoot.g.aaplimg.com:10024288682
a1931.dscgi3.akamai.net:10021771928

Wow! All of these are over 10 seconds! Quite odd given that you're using a public resolver and these entries were probably in its cache already...

I use unbound as a recursive DNS upstream, which typically "takes more time", and this is what I get for login.aliexpress.com: (screenshot)

@dMopp, what kind of hardware are you using? Am I right in assuming it's directly connected by Ethernet and you're not using Wi-Fi?

Hi.

Yes, this is extreme. As mentioned, using the same resolver in OPNsense (unbound) I don't see these spikes. I can even add my OPNsense as the DNS for AdGuard Home and it's fine.

The Hardware:
5700x + 128GB RAM + Intel NIC.

AdGuard Home and OPNsense are both running in Proxmox.

Ah and yes, all wired. ICMP requests are not spiking.

@Cebeerre

Cebeerre commented Apr 4, 2024

AdGuard Home and OPNsense are both running in Proxmox.

LXC or VM? Have you tried installing the AdGuardHome plugin in Proxmox itself to see if it makes any difference?

@dMopp

dMopp commented Apr 4, 2024

OPNsense is running as a VM with PCIe passthrough, AGH as an LXC Debian container. (And yes, in the past this was fine.) And which plugin do you mean?

@Cebeerre

Cebeerre commented Apr 4, 2024

And which plugin do you mean?

https://www.routerperformance.net/opnsense-repo/

@dMopp

dMopp commented Apr 4, 2024

Ah, you mean in OPNsense. No, because I don’t want the filtering in the firewall. This would cause some new issues.

@Cebeerre

Cebeerre commented Apr 4, 2024

Ah, you mean in OPNsense. No, because I don’t want the filtering in the firewall. This would cause some new issues.

I'm curious about your statement that using the unbound instance in OPNsense as the AGH upstream works fine. It shouldn't behave any differently from a public resolver, which made me wonder whether you have any traffic-shaping rules applied in OPNsense and the LXC container ended up in an upload or download pipe without enough bandwidth.

@dMopp

dMopp commented Apr 4, 2024

Indeed, I have traffic shaping, but just 2 pipes, and all traffic passes through them. UDP/53 traffic even has high priority in my network (independent of source/destination).

@Cebeerre

Cebeerre commented Apr 4, 2024

Indeed, I have traffic shaping, but just 2 pipes, and all traffic passes through them. UDP/53 traffic even has high priority in my network (independent of source/destination).

Great, could you please share your entire AdGuardHome.yaml?

@NotoriousNico

@ainar-g and @schzhn

Please re-open this ticket, as the issue doesn't seem to have been resolved.

@Gontier-Julien

Same here, seeing spikes up to 1000 ms or more.

@varyform

(screenshot)

This happened a few times in the past couple of hours, in both the Docker and macOS versions.

@netizeni
Author

@ainar-g and @schzhn

Can you reopen this issue, as it's obviously not solved? Just like the people above, I'm experiencing the same problem. Response times are constantly increasing.

I understand this is not a trivial issue, but keeping it closed and ignoring it won't magically make it go away. This is literally the most-commented issue on the whole repository.

@cpuks

cpuks commented Nov 25, 2024

In the end we'll have to create a new issue describing the same problem... it's been 2 weeks now since the community asked to reopen this issue.

schzhn reopened this Nov 25, 2024
@cigarsucker

cigarsucker commented Nov 25, 2024

Still experiencing the same increased resolve times here too, even with logging turned off and the stats period turned down to 6 hours.

The device then requires a restart to reduce the times; however, within 12-24 hours the issue returns.

The situation occurs on my primary device (Raspberry Pi 4B, 1 GB) with 40-50k DNS queries and on a second device (Raspberry Pi 3B+) with <5k DNS queries. It previously occurred on a LattePanda v1 with an Intel Atom Z8350 CPU.

Applying changes to the configuration file seems to accelerate the longer DNS resolution times.
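
For what it's worth, the workaround restart can at least be scripted in the meantime (assuming AGH was installed as the usual systemd unit):

sudo systemctl restart AdGuardHome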

adguard pushed a commit that referenced this issue Nov 26, 2024
Updates #6818.

Squashed commit of the following:

commit 3027fc4
Author: Stanislav Chzhen <s.chzhen@adguard.com>
Date:   Mon Nov 25 16:53:33 2024 +0300

    all: fixed goroutine leak
@schzhn
Member

schzhn commented Nov 26, 2024

In the edge release, we have fixed the bug that was causing slow upstream responses. Hopefully, it will be the last one.
Could you please check it?

You can also download the binary for your platform from the following link:
https://static.adtidy.org/adguardhome/edge/version.json
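
For example, assuming jq is installed (and keeping in mind the exact layout of that manifest may differ), the published download URLs can be listed with something like:

curl -s https://static.adtidy.org/adguardhome/edge/version.json | jq .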

@beneix

beneix commented Nov 27, 2024

@schzhn, for me it's working great with both the edge and beta, just not with v0.107.54.

@cpuks

cpuks commented Nov 30, 2024

In the edge release, we have fixed the bug that was causing slow upstream responses. Hopefully, it will be the last one. Could you please check it?

You can also download the binary for your platform from the following link https://static.adtidy.org/adguardhome/edge/version.json

I've been running v0.108.0-a.989+d578c713 for 3 days now on two machines and it's working great, so the changes made for this issue can probably be merged into the stable branch.

@nevaran

nevaran commented Nov 30, 2024

From my findings, the current latest does in fact add a bit of latency, but the biggest culprit seems to be the DNS settings.

This is with the "Fastest IP address" setting:
(screenshot)

Parallel requests with an added QUIC DNS upstream and Rate limit: 0 (as suggested by some comments):
(screenshot)

And parallel requests with the QUIC upstream and Rate limit: 0 on image v0.108.0-b.60:
(screenshot)

@schzhn
Member

schzhn commented Dec 4, 2024

We have released a beta version that includes the fix.

schzhn closed this as completed Dec 4, 2024
@gitlaman

gitlaman commented Dec 6, 2024

In beta 61 it is not fixed. I tried it for two days and the response times are increasing; in 59 I had no problem.

@beneix

beneix commented Dec 6, 2024

For me, running v0.108.0-a.995+3895cfb4, it is working fine.
(screenshot)

@gitlaman

gitlaman commented Dec 6, 2024

Where can I find that version? In releases I see only beta and stable releases.

@NotoriousNico

@gitlaman That version is probably an edge release.

@cpuks

cpuks commented Dec 8, 2024

In beta 61 it is not fixed. I tried it for two days and the response times are increasing; in 59 I had no problem.

Are you 100% sure? I've been running that .61 beta for 3 days now on two instances and there's no problem at all.

@gitlaman

gitlaman commented Dec 10, 2024

Like @nevaran, I did the same with QUIC and Rate limit: 0, and I have no problems with the 61 beta after over 90k queries.

(screenshot)

@BinaryFlux8210

BinaryFlux8210 commented Dec 10, 2024

Average upstream response time has been steadily increasing, resulting in noticeable slowdowns in DNS query resolution.

This occurs despite no significant changes to my network setup.

I've tried basic troubleshooting steps like updating to the latest version (v0.107.54) and restarting the service, but the issue persists. Below is a screenshot illustrating the increased response times and the current version of the software:

(screenshot)

@RandomGithubUsername

Anyone know when the fix will be out in a stable release? My upstream response times are now exceeding 900 ms. :O

@gitlaman

I take my word back; the 61 beta version is not fixed. I removed the QUIC server because it was at 10000 ms and cleared the statistics; the screenshot below, after 40k queries, shows an increase in response times.
(screenshot)

@beneix

beneix commented Dec 16, 2024

Running v0.107.55 and it seems to be working fine, no issues with either high or increasing response times.

@thenebu

thenebu commented Dec 16, 2024

Fresh after install and config:

Average upstream response time
in the last 24 hours
https://dns.cloudflare.com:443/dns-query
30 ms
https://dns10.quad9.net:443/dns-query
22 ms

A few hours later, without any change:

https://dns.cloudflare.com:443/dns-query
8007 ms
https://dns10.quad9.net:443/dns-query
3465 ms

Running v0.107.55 in Docker (Home Assistant).

@gitlaman

Fresh after install and config:

Average upstream response time
in the last 24 hours
https://dns.cloudflare.com:443/dns-query
30 ms
https://dns10.quad9.net:443/dns-query
22 ms

few hours later without any change:

https://dns.cloudflare.com:443/dns-query
8007 ms
https://dns10.quad9.net:443/dns-query
3465 ms

Running v0.107.55 in docker (Home Assistant)

Try the beta version v0.108.0-b.59. For me it is working OK; I tried all the others except the edge versions.

@RandomGithubUsername

This issue is back after a couple of days on the latest version. No offense, but I'm starting to doubt the competency of the AdGuard devs... this tool has one job, it's failing at it, and it's taking so long to fix. Moving back to Pi-hole.

@gitlaman

gitlaman commented Dec 16, 2024

Just use the beta that I posted in my last reply. It is easy to overwrite the old or new AdGuard executable; if you need help I can guide you.

@schzhn
Member

schzhn commented Dec 17, 2024

If you are using the latest version v0.107.55 and experience a slowdown in DNS resolution, please follow these steps:

  1. Stop AGH.
  2. Enable debug profiling by setting http.pprof.enabled to true in the configuration file.
  3. Start AGH.
  4. Clear stats.
  5. Go to http://127.0.0.1:6060/debug/pprof/ and make a note of the number of goroutines.

The next time you encounter a slowdown, check the number of goroutines again. If you notice that the current number of goroutines is significantly higher than it was initially, please post the content of the following page: http://127.0.0.1:6060/debug/pprof/goroutine?debug=1.
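
A quick way to record that number from the shell (assuming the pprof listener is on the default port 6060, as above):

curl -s 'http://127.0.0.1:6060/debug/pprof/goroutine?debug=1' | head -n 1   # prints something like: goroutine profile: total 57
curl -s 'http://127.0.0.1:6060/debug/pprof/goroutine?debug=1' > goroutines.txt   # full dump to attach here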
