
DnsResponseException: Trying to read truncated (invalid) response #52

Closed
ohadschn opened this issue Feb 9, 2020 · 28 comments · Fixed by #62

ohadschn commented Feb 9, 2020

I'm trying to query a nameserver (40.90.4.1:53 (Udp: 512)) for a TXT record and getting the following:

DnsClient.DnsResponseException: Unhandled exception ---> System.IndexOutOfRangeException: Cannot read that many bytes: '43'.
   at DnsClient.DnsDatagramReader.ReadBytes(Int32 length)
   at DnsClient.DnsRecordFactory.ResolveTXTRecord(ResourceRecordInfo info)
   at DnsClient.DnsRecordFactory.GetRecord(ResourceRecordInfo info)
   at DnsClient.DnsMessageHandler.GetResponseMessage(ArraySegment`1 responseData)
   at DnsClient.DnsUdpMessageHandler.Query(IPEndPoint server, DnsRequestMessage request, TimeSpan timeout)
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyCollection`1 servers, DnsMessageHandler handler, DnsRequestMessage request, Boolean useCache, LookupClientAudit continueAudit)
   --- End of inner exception stack trace ---
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyCollection`1 servers, DnsMessageHandler handler, DnsRequestMessage request, Boolean useCache, LookupClientAudit continueAudit)
   at DnsClient.LookupClient.QueryInternal(IReadOnlyCollection`1 servers, DnsQuestion question, Boolean useCache)
   at DnsClient.LookupClient.QueryServer(IReadOnlyCollection`1 servers, String query, QueryType queryType, QueryClass queryClass)
   at DnsClient.LookupClient.QueryServer(IReadOnlyCollection`1 servers, String query, QueryType queryType, QueryClass queryClass)

It happens after a few retries, so I think it might be rate limiting / throttling. In issue #22 you mentioned that it's a sign of an invalid/incomplete message, so it would be great if you could print the raw response in such cases. That way we'd have more debugging information to work with than just "bad message".


ohadschn commented Feb 9, 2020

Using DnsQuerySniffer I see the following, maybe the zero record count is the issue?

[screenshot: DnsQuerySniffer capture of the query, showing a record count of zero]


ohadschn commented Feb 9, 2020

Looking at the actual DNS entry, I see there are 15 values in that TXT record (not zero) - so maybe this is an Azure DNS bug?

(the TTL is 1 minute if that makes any difference)


ohadschn commented Feb 9, 2020

OK so it looks like Azure is not at fault, executing the query directly works: nslookup -type=TXT _acme-challenge.azuredns.site 40.90.4.1


ohadschn commented Feb 9, 2020

Just verified, and the same query using DnsClient fails: new LookupClient().QueryServer(new[] { IPAddress.Parse("40.90.4.1") }, $"_acme-challenge.azuredns.site", QueryType.TXT);

Note that the nslookup query looks different in DnsQuerySniffer:
[screenshot: DnsQuerySniffer capture of the nslookup query]

Not sure what's going on here; the actual record has about 20 values, all shown correctly by nslookup, but DnsQuerySniffer only shows 8 of them...


MichaCo commented Feb 9, 2020

I actually cannot reproduce the error. I'm running

var client = new LookupClient();
client.EnableAuditTrail = true;
client.UseCache = false;

while (true)
{
    try
    {
        var r = client.QueryServer(new[] { IPAddress.Parse("40.90.4.1") }, $"_acme-challenge.azuredns.site", QueryType.TXT);
        Console.WriteLine(r.AuditTrail);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
        throw;
    }
}

For >10 minutes now, and no error...

The error you got means that the response wasn't complete: bytes are missing and the parser bailed out, which is usually the right thing to do...
Do you run that in a specific setup locally? Which framework/version, Windows or Linux?

Regarding those kinds of errors, I was hesitant to treat them as transient and have the client retry the request. But I think it's the only thing which makes sense: run the same query again and hope the network doesn't screw it up again ^^

In the meantime, to make your app more stable, I guess you could do the retry yourself with a bit of delay between attempts, in case there is some throttling?
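
For illustration, here is a minimal sketch of that workaround. The retry loop and back-off policy are purely illustrative; the server, record name, and DnsClient calls are the ones already used in this thread, assuming the response exposes its records via Answers.

using System;
using System.Net;
using System.Threading;
using DnsClient;

class Program
{
    static void Main()
    {
        var client = new LookupClient();
        const int maxAttempts = 3;

        for (var attempt = 1; ; attempt++)
        {
            try
            {
                var result = client.QueryServer(
                    new[] { IPAddress.Parse("40.90.4.1") },
                    "_acme-challenge.azuredns.site",
                    QueryType.TXT);

                Console.WriteLine($"Got {result.Answers.Count} answers.");
                break;
            }
            catch (DnsResponseException) when (attempt < maxAttempts)
            {
                // Back off a bit before the next attempt, in case the server throttles.
                Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
            }
        }
    }
}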


ohadschn commented Feb 9, 2020

No point in retrying; it is very consistent. The reason your requests have been successful is that I've since modified that record and reduced the value count to unblock myself.

My setup is very mainstream (Windows 10, .NET 4.7.2) and reproducing the issue is easy - apparently you just need to query some TXT record with ~20 values. I created the following record for your debugging convenience, and I'll leave it there untouched as long as this bug is open: new LookupClient().QueryServer(new[] { IPAddress.Parse("40.90.4.1") }, $"_acme-challenge-test.azuredns.site", QueryType.TXT);

I get what you're saying about the incomplete message but:

  1. nslookup does get it right.
  2. Bailing out is fine, but ideally you would print out the incomplete message you were able to parse, the raw bytes, anything to help debug the issue.


MichaCo commented Feb 10, 2020

@ohadschn tldr: I still cannot reproduce it.

I ran a couple of long-running tests against 40.90.4.1 with different frameworks, using TCP instead of UDP, and other settings. Never got any error...

When I test the library locally, I have a server with domains with way more records, too.
Just for this, I added a subdomain with 40, 60, and 100 TXT records just to see if anything fails or if the TCP fallback is a problem. But again nothing, no errors...

So I don't really know what else to try to reproduce what you're experiencing ~~


ohadschn commented Feb 10, 2020

> @ohadschn tldr: I still cannot reproduce it. […]

Just to clarify, you used the new TXT record I created _acme-challenge-test.azuredns.site? (note the -test suffix)

Also, how about printing out the parts of the message you were able to decode and the raw bytes for debugging?


MichaCo commented Feb 10, 2020

Yeah, I used _acme-challenge-test, which returned 18 TXT records consistently in 1000s of requests.

Regarding printing the error details: that sounds like a great idea, but it isn't that trivial, I think. I'll have to take a look.


MichaCo commented Mar 5, 2020

Hi @ohadschn,
I'm trying to a) improve the ability to get trace information from this library in production (see #60), and b) I added another retry mechanism for those kinds of parser errors in case the response has invalid data.

Could you maybe give the latest beta version from myget a try, maybe also attach to the log output, and let me know how that works? ;)

Changes are currently in this PR #58


MichaCo commented Mar 5, 2020

Also closing this one as a duplicate of #51.

MichaCo closed this as completed Mar 5, 2020

ohadschn commented Mar 6, 2020

> Could you maybe give the latest beta version from myget a try, maybe also attach to the log output, and let me know how that works? ;)

Thanks! Retries didn't help, as expected (like I said, this is consistent for me), but the error is a bit more detailed now:

Unhandled Exception: DnsClient.DnsResponseException: Unhandled exception ---> DnsClient.DnsResponseParseException: Response parser error, 512 bytes available, tried to read 43 bytes at index 504. Cannot read bytes.
   at DnsClient.DnsDatagramReader.ReadBytes(Int32 length)
   at DnsClient.DnsRecordFactory.ResolveTXTRecord(ResourceRecordInfo info)
   at DnsClient.DnsRecordFactory.GetRecord(ResourceRecordInfo info)
   at DnsClient.DnsMessageHandler.GetResponseMessage(ArraySegment`1 responseData)
   at DnsClient.DnsUdpMessageHandler.Query(IPEndPoint server, DnsRequestMessage request, TimeSpan timeout)
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyCollection`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit continueAudit)
   --- End of inner exception stack trace ---
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyCollection`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit continueAudit)
   at DnsClient.LookupClient.QueryInternal(DnsQuestion question, DnsQuerySettings settings, IReadOnlyCollection`1 useServers)
   at DnsClient.LookupClient.QueryServer(IReadOnlyCollection`1 servers, String query, QueryType queryType, QueryClass queryClass, DnsQueryOptions queryOptions)
   at DnsClient.LookupClient.QueryServer(IReadOnlyCollection`1 servers, String query, QueryType queryType, QueryClass queryClass)
   at TestDns.Program.Main(String[] args) 

  1. Dumping the bytes would be great as we don't really know what index 504 contains.
  2. I'm using OpenDNS; do you think that may have anything to do with this?
  3. Why not publish your beta versions to nuget.org as well (you can flag them as pre-release)? myget is kind of a pain to use: "The 'Source' parameter is not respected for the transitive package management based project(s)" NuGet/Home#7189


MichaCo commented Mar 6, 2020

@ohadschn The bytes are available as a property ResponseData on the exception.
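
For anyone hitting this later, here is a rough sketch of getting at those bytes. It assumes, based on the stack traces above, that the parse error is wrapped as the inner exception and that ResponseData is exposed there as a byte sequence; treat the property location as an assumption, not documentation.

using System;
using System.Net;
using DnsClient;

class Program
{
    static void Main()
    {
        var client = new LookupClient();
        try
        {
            client.QueryServer(
                new[] { IPAddress.Parse("40.90.4.1") },
                "_acme-challenge-test.azuredns.site",
                QueryType.TXT);
        }
        catch (DnsResponseException ex)
        {
            // Assumption: the raw response bytes live on the inner parse exception.
            if (ex.InnerException is DnsResponseParseException parseEx)
            {
                Console.WriteLine(string.Join(",", parseEx.ResponseData));
            }
        }
    }
}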


ohadschn commented Mar 6, 2020

Cool, here are the bytes for your inspection:


31,169,129,128,0,1,0,18,0,0,0,1,20,95,97,99,109,101,45,99,104,97,108,108,101,110,103,101,45,116,101,115,116,8,97,122,117,114,101,100,110,115,4,115,105,116,101,0,0,16,0,1,192,12,0,16,0,1,0,0,0,60,0,42,41,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,51,192,12,0,16,0,1,0,0,0,60,0,42,41,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,52,192,12,0,16,0,1,0,0,0,60,0,43,42,87,84,98,112,74,121,122,81,87,112,85,100,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,118,66,73,57,192,12,0,16,0,1,0,0,0,60,0,43,42,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,118,97,192,12,0,16,0,1,0,0,0,60,0,43,42,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,118,115,192,12,0,16,0,1,0,0,0,60,0,43,42,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,118,119,192,12,0,16,0,1,0,0,0,60,0,43,42,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,107,55,86,85,76,82,119,101,192,12,0,16,0,1,0,0,0,60,0,44,43,87,84,98,112,74,121,122,81,87,112,85,112,82,97,45,108,73,95,118,73,95,121,51,95,68,101,75,80,48,77,114,109,121,87,52,55,86,85,76,82,118,66,73,192,12,0,16,0,1,0,0,0,60,0,44,43,87,84,98,112,74,121,122,81

IMHO this printout should be added to the exception's ToString...
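
As an aside, a dump like the one above can be turned back into bytes with plain BCL calls (nothing DnsClient-specific), and base64 is one compact form such a printout could take:

using System;
using System.Linq;

class DumpHelper
{
    static void Main()
    {
        // First few values from the dump above; in practice, paste the full list.
        const string dump = "31,169,129,128,0,1,0,18,0,0,0,1";

        byte[] bytes = dump.Split(',').Select(byte.Parse).ToArray();

        // A compact representation that could, for example, go into ToString().
        Console.WriteLine(Convert.ToBase64String(bytes));
    }
}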


MichaCo commented Mar 7, 2020

Well, obviously this is incomplete data: the header states there are 18 answers.

The data contains 8 of those answers. Reading the 9th answer then fails.

[screenshot: parsed view of the dumped response, showing the 8 complete answers]

The suspicious thing here is that the array is exactly 512 bytes, the magic number that defines the limit of a DNS UDP payload in the classic DNS spec (before EDNS0).

There are still a lot of proxies and routers and firewalls artificially truncating the payload.

Per spec, the expectation here is that the server sets a header flag indicating the result is truncated.
This library respects that flag and then falls back to TCP to run the same request again...

In your case though, that flag is not set, which makes this an invalid response.
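
To make that concrete, here is a small sketch (plain .NET, independent of this library) that reads the 12-byte header at the start of the dump posted above; per RFC 1035 the TC flag sits in the flags field and ANCOUNT announces the number of answers.

using System;

class HeaderCheck
{
    static void Main()
    {
        // The first 12 bytes of the dump posted above (the DNS message header).
        byte[] header = { 31, 169, 129, 128, 0, 1, 0, 18, 0, 0, 0, 1 };

        // RFC 1035, section 4.1.1: byte 2 holds QR, Opcode, AA, TC and RD.
        bool tcSet = (header[2] & 0x02) != 0;            // TC (truncation) flag
        int answerCount = (header[6] << 8) | header[7];  // ANCOUNT

        Console.WriteLine($"TC set: {tcSet}, answers announced: {answerCount}");
        // Prints "TC set: False, answers announced: 18" for this response:
        // 18 answers announced, yet no truncation flag, exactly as described above.
    }
}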

My assumption is that you are using some DNS proxy, router, or firewall which alters the UDP payload between your machine and the server in Azure; something just truncates the response from the server. I cannot reproduce it from my network.
If I call your server, it sends the complete result (1071 bytes) over UDP. The EDNS0 options even indicate that the server supports up to 4000 bytes (4000 is pretty unusual though; the typical maximum is 4096).

; EDNS: version: 0, flags:; udp: 4000

If the "middlebox" in your network would just remove the OPT record, the server would probably return a valid truncated result, but this doesn't seem to be the case.

You could try to set the LookupClient's settings to use TCP only, which would not even try the UDP path. At least to see if it works.

I will re-open this issue as it turns out it is in fact something different.
My plan is to add an option to disable EDNS0 and maybe add some fallback to TCP even though the response is invalid. Not sure about the last part yet; I couldn't find any good documentation of what exactly a client should do in that scenario.

MichaCo reopened this Mar 7, 2020
MichaCo changed the title from "DnsClient.DnsResponseException: Unhandled exception ---> System.IndexOutOfRangeException: Cannot read that many bytes: '43'" to "DnsResponseException: Trying to read truncated (invalid) response" Mar 7, 2020
MichaCo self-assigned this Mar 7, 2020

ohadschn commented Mar 7, 2020

Thank you for looking into this!

> You could try to set the LookupClient's settings to use TCP only, which would not even try the UDP path. At least to see if it works.

new LookupClient(new LookupClientOptions {UseTcpOnly = true}) works :)
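
For completeness, the full workaround as a console snippet, simply combining the pieces already shown in this thread (the server and record name are the test values from above, and the answers are assumed to be enumerable via Answers):

using System;
using System.Net;
using DnsClient;

class Program
{
    static void Main()
    {
        // Skip UDP entirely so the middlebox cannot truncate the payload.
        var client = new LookupClient(new LookupClientOptions { UseTcpOnly = true });

        var result = client.QueryServer(
            new[] { IPAddress.Parse("40.90.4.1") },
            "_acme-challenge-test.azuredns.site",
            QueryType.TXT);

        foreach (var record in result.Answers)
        {
            Console.WriteLine(record);
        }
    }
}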

> My assumption is that you are using some DNS proxy, router, or firewall which alters the UDP payload between your machine and the server in Azure; something just truncates the response from the server.

Right again, my router (PepWave Surf SOHO Mk III) is indeed configured to "Forward Outgoing DNS Requests to Local DNS Proxy" (this is a security measure). When I turn it off, the issue disappears. Do you think this is the issue? https://forum.peplink.com/t/edns0-support-in-peplink/130

> My plan is to add an option to disable EDNS0 and maybe add some fallback to TCP even though the response is invalid. Not sure about the last part yet; I couldn't find any good documentation of what exactly a client should do in that scenario.

Apparently this is exactly what nslookup does (I didn't notice this before because I was running the less verbose Windows version; the following is the Linux output):

nslookup -type=TXT _acme-challenge-test.azuredns.site 40.90.4.1
;; Truncated, retrying in TCP mode.  <<<<-------------
Server:         40.90.4.1
Address:        40.90.4.1#53

_acme-challenge-test.azuredns.site      text = "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI"
_acme-challenge-test.azuredns.site      text = "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvB1"
...


MichaCo commented Mar 7, 2020

Regarding your router, I'm not really sure; there are a couple of issues, e.g. https://forum.peplink.com/t/peplink-dns-dnssec-edns/19085

And it seems they did provide firmware updates to comply with the recommendations from dnsflagday in 2019.


ohadschn commented Mar 7, 2020

I have the most up-to-date firmware, released about 3 months ago and in any case newer than the DNS flag day firmware. So it looks like a bug.

I want to report it, so let me know if I got it right:

  1. The router's DNS forwarding needlessly truncates UDP messages to 512 bytes
  2. Worse, said truncation is not flagged in the response (if you could specify the exact name of the flag you expect, that would be great)


MichaCo commented Mar 7, 2020

It's the TC flag in the header: https://tools.ietf.org/html/rfc1035#section-4.1.1


ohadschn commented Mar 7, 2020

I guess on your end you could implement the auto retry (as nslookup apparently does on both Windows and Linux).

Also, I believe the ResponseData bytes should be printed in the exception's ToString() regardless...

MichaCo added a commit that referenced this issue Mar 10, 2020
…g. see #60.

+ unit tests for all the good and bad truncation handling added see #52

MichaCo commented Mar 11, 2020

@ohadschn a beta version with the changes from PR #62 is available on nuget.org if you want to give it a try.

ohadschn commented

Thanks! I can confirm it works as expected:

_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvB1"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI2"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI4"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI5"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI6"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI7"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI8"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI9"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUd-lI_vI_y3_DeKP0MrmyWk7VULRvBI9"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRadlI_vI_y3_DeKP0MrmyWk7VULRvBI10"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyW47VULRvBI"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvs"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRva"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULR4"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvw"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULR3"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRwe"
. 0 4000 OPT OPT 4000.

Is there some verbosity flag you want me to switch on to verify in the logs that things are working as you expect, or is the above result proof enough?


MichaCo commented Mar 11, 2020

Cool! That's great, thanks.

If you run your tests as a console app, you can use this:

using System.Diagnostics;

DnsClient.Tracing.Source.Switch.Level = SourceLevels.Information;
DnsClient.Tracing.Source.Listeners.Add(new ConsoleTraceListener());

You can also set Switch.Level to SourceLevels.All to get verbose logs.

There should be some message about the TCP fallback and the bad truncation.

ohadschn commented

Looks great!

DnsClient Verbose: 0 : [NameServer] Starting to resolve NameServers, skipIPv6SiteLocal:True.
DnsClient Verbose: 0 : [NameServer] Resolved 2 name servers: [208.67.222.222:53 (Udp: 512),208.67.220.220:53 (Udp: 512)].
DnsClient Verbose: 1 : [LookupClient] Begin query 14499 - Qs: 1 Recursion: True OpCode: Query => _acme-challenge-test.azuredns.site IN TXT on [40.90.4.1:53 (Udp: 512)]
DnsClient Verbose: 2 : [LookupClient] TryResolve 14499 => _acme-challenge-test.azuredns.site IN TXT on 40.90.4.1:53 (Udp: 512), try 1/6.
DnsClient Error: 91 : [LookupClient] Query 14499 => _acme-challenge-test.azuredns.site IN TXT error parsing the response. The response seems to be truncated without TC flag set! Re-trying via TCP anyways.
DnsClient.DnsResponseParseException: Response parser error, 512 bytes available, tried to read 43 bytes at index 504.
Cannot read bytes.
[OKOBgAABABIAAAABFF9hY21lLWNoYWxsZW5nZS10ZXN0CGF6dXJlZG5zBHNpdGUAABAAAcAMABAAAQAAADwAKilXVGJwSnl6UVdwVXBSYS1sSV92SV95M19EZUtQME1ybXlXazdWVUxSM8AMABAAAQAAADwAKilXVGJwSnl6UVdwVXBSYS1sSV92SV95M19EZUtQME1ybXlXazdWVUxSNMAMABAAAQAAADwAKypXVGJwSnl6UVdwVWQtbElfdklfeTNfRGVLUDBNcm15V2s3VlVMUnZCSTnADAAQAAEAAAA8ACsqV1RicEp5elFXcFVwUmEtbElfdklfeTNfRGVLUDBNcm15V2s3VlVMUnZhwAwAEAABAAAAPAArKldUYnBKeXpRV3BVcFJhLWxJX3ZJX3kzX0RlS1AwTXJteVdrN1ZVTFJ2c8AMABAAAQAAADwAKypXVGJwSnl6UVdwVXBSYS1sSV92SV95M19EZUtQME1ybXlXazdWVUxSdnfADAAQAAEAAAA8ACsqV1RicEp5elFXcFVwUmEtbElfdklfeTNfRGVLUDBNcm15V2s3VlVMUndlwAwAEAABAAAAPAAsK1dUYnBKeXpRV3BVcFJhLWxJX3ZJX3kzX0RlS1AwTXJteVc0N1ZVTFJ2QknADAAQAAEAAAA8ACwrV1RicEp5elE=].
   at DnsClient.DnsDatagramReader.ReadBytes(Int32 length)
   at DnsClient.DnsRecordFactory.ResolveTXTRecord(ResourceRecordInfo info)
   at DnsClient.DnsRecordFactory.GetRecord(ResourceRecordInfo info)
   at DnsClient.DnsMessageHandler.GetResponseMessage(ArraySegment`1 responseData)
   at DnsClient.DnsUdpMessageHandler.Query(IPEndPoint server, DnsRequestMessage request, TimeSpan timeout)
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit)
DnsClient Verbose: 2 : [LookupClient] TryResolve 21318 => _acme-challenge-test.azuredns.site IN TXT on 40.90.4.1:53 (Udp: 512), try 1/6.
DnsClient Verbose: 10 : [LookupClient] Query 21318 => _acme-challenge-test.azuredns.site IN TXT on 40.90.4.1:53 (Udp: 512) received result with 18 answers.
DnsClient Verbose: 31 : [LookupClient] Response 21318 => _acme-challenge-test.azuredns.site. IN TXT opt record sets buffer of 40.90.4.1:53 (Udp: 4000) to 4000.
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvB1"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI2"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI4"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI5"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI6"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI7"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI8"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvBI9"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUd-lI_vI_y3_DeKP0MrmyWk7VULRvBI9"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRadlI_vI_y3_DeKP0MrmyWk7VULRvBI10"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyW47VULRvBI"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvs"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRva"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULR4"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRvw"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULR3"
_acme-challenge-test.azuredns.site. 60 IN TXT "WTbpJyzQWpUpRa-lI_vI_y3_DeKP0MrmyWk7VULRwe"

One thing that looks strange is that the second resolution attempt still says UDP?

DnsClient Verbose: 2 : [LookupClient] TryResolve 21318 => _acme-challenge-test.azuredns.site IN TXT on 40.90.4.1:53 (Udp: 512), try 1/6.
DnsClient Verbose: 10 : [LookupClient] Query 21318 => _acme-challenge-test.azuredns.site IN TXT on 40.90.4.1:53 (Udp: 512) received result with 18 answers.
DnsClient Verbose: 31 : [LookupClient] Response 21318 => _acme-challenge-test.azuredns.site. IN TXT opt record sets buffer of 40.90.4.1:53 (Udp: 4000) to 4000.


MichaCo commented Mar 12, 2020

Yeah, the logging just prints the NameServer.ToString() which contains the server's known buffer size.
I'm fine with that ^^

ohadschn commented

Sure, but perhaps it would be nice to mention somewhere in the query log itself the protocol it's actually using, something like: Query 21318 (TCP) => _acme-challenge-test.azuredns.site

MichaCo added a commit that referenced this issue Mar 14, 2020
* changed how opt records are created and used. Added configuration to disable EDNS and to set the requested buffer size and DnsSec
* Changes the behavior in case of bad responses which were truncated by some middleman proxy or router - fixes #52
* Changing default unknown record handling to preserve the original data so that users can work with those records.
* Reworking error handling see #60
* Adding new setting ContinueOnEmptyResponse #64
MichaCo added this to the 1.3.0 milestone Mar 15, 2020

MichaCo commented Mar 17, 2020

The changes are now released on NuGet: https://www.nuget.org/packages/DnsClient/1.3.0
