DNSSEC and TCP requests: large response potentially dropped? #314
Comments
hmmh - currently pDNSf does not support TCP requests. Maybe it is related to this fact. But then it also should not log the request and response...
I have a better understanding now. When pDNSf is not active,
When pDNSf is active,
What I believe is happening: since pDNSf is the actor performing the TCP fallback, when it replies to the app, the app is still in "UDP mode." Its internal 1024-byte buffer may have been adequate until now, so a 1139-byte UDP payload is missed, ignored, or improperly parsed by the app.

If pDNSf supported TCP for inbound DNS requests from apps, it would be obliged to return a truncated (TC) response to the app in the above scenario, so the app could switch to TCP and use appropriately sized buffers. Given that pDNSf does not support inbound TCP requests, its current handling is likely fine. That is: pDNSf can potentially return data larger than the app's indicated UDP buffer size, so the app must either use larger buffers by default or handle the case where the response is too large. Neither is ideal nor foolproof.

In this scenario, I'm going to recommend that the app in question increase its default buffer size to the recommended value[1] of 1232 bytes, since the DNSKEY record set for <Root> is requested unconditionally for all users of the app, regardless of the specific domain being DNSSEC-validated. At present, this is a 1139-byte payload; the app currently defaults to 1024.

An ideal solution would be for pDNSf to support inbound TCP, though that's a feature request, not a bug.

[1]: https://www.dnsflagday.net/2020/#action-dns-software-vendors which reads:
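The TC-bit behavior described above can be sketched in a few lines. This is purely illustrative code under my own naming (not taken from pDNSf): a forwarder that must answer over UDP cuts an oversized reply back to the 12-byte header and sets the TC flag, which tells the client to retry the same query over TCP.

```java
import java.util.Arrays;

public class TruncationSketch {

    // In DNS header flags byte 2 (0-indexed), bit 0x02 is TC (truncated).
    static final int TC_BIT = 0x02;

    /**
     * Returns a reply that fits within maxUdpPayload. If the full response
     * is too large, it is cut back to the 12-byte header with TC set and
     * the section counts zeroed, prompting the client to retry over TCP.
     */
    public static byte[] fitForUdp(byte[] response, int maxUdpPayload) {
        if (response.length <= maxUdpPayload) {
            return response;
        }
        byte[] truncated = Arrays.copyOf(response, 12);
        truncated[2] |= TC_BIT;                      // raise the TC flag
        for (int i = 4; i < 12; i++) truncated[i] = 0; // zero QD/AN/NS/AR counts
        return truncated;
    }
}
```

With dig, this is the behavior you see as `;; Truncated, retrying in TCP mode.` when a response exceeds the advertised buffer.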
Great analysis - I also think it is as you described.
After making the change in the outside app to use a buffer size of 1232 instead of 1024, I think there is a bug in pDNSf that's related to this issue:
From my limited debugging: in DNSServer.java,

response.setData(new byte[bufSize], response.getOffset(), bufSize - response.getOffset());

This appears to clear the response data entirely, and there isn't code that retains the original response data. Therefore, it appears that any UDP DNS response given to the app after a buffer resize will always carry invalid data: an empty buffer. My suggestions:
I've tested on Android with a combination of the Termux app and dig. In Termux: dig @127.0.0.1 -p 5300 +dnssec dnskey .
I am wondering if
personalDNSFilter 1.50.55.3 on Android 14.
I am testing a free and open source XMPP client, Cheogram.
Cheogram performs DNSSEC validation when it connects to the user's chosen XMPP server. Cheogram includes an implementation of minidns to fulfill its needs. It is my understanding that Cheogram obtains the DNS servers from the system and communicates with those servers directly.
To perform validation, it requests the relevant DNSKEY and DS records of the XMPP server's domain components. In my environment, the records for the domain and the TLD fit within a UDP DNS packet, with a buffer size of 1024.
However, a DNSKEY response for <Root> will not fit in a UDP reply with a buffer size of 1024. This leads the DNS client to retry over TCP:
$ dig +dnssec +bufsize=1024 DNSKEY .
This leads to a TCP request with a 1141-byte TCP payload carrying a 1139-byte DNS response.
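The two-byte difference between the TCP payload (1141) and the DNS message (1139) is the length prefix that RFC 1035 requires for DNS over TCP. A small sketch of that framing, with illustrative names of my own (not from pDNSf or minidns):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class TcpFraming {

    /** Prefixes msg with its 16-bit big-endian length, as sent on a TCP DNS stream. */
    public static byte[] frame(byte[] msg) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeShort(msg.length); // two-byte length prefix
        out.write(msg);             // the DNS message itself
        return bos.toByteArray();
    }

    /** Reads one length-prefixed DNS message from a TCP stream. */
    public static byte[] unframe(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int len = din.readUnsignedShort();
        byte[] msg = new byte[len];
        din.readFully(msg); // a short read here would lose part of the reply
        return msg;
    }
}
```

A proxy relaying between UDP and TCP has to add or strip this prefix; mishandling it (or a partial read of a large message) is one way a big TCP response could get dropped.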
For some unknown reason, when this request and response are routed through pDNSFilter (in VPN mode), the DNSKEY response for <Root> does not appear to arrive at the client. With pDNSf disabled, the exact same request and response (byte for byte, sans checksums) reach the client.
Working:
Not Working:
pDNSf properly sees the request from the client, displays it in the window log, requests the data from the LAN DNS servers, and receives it from them. When logging is enabled, pDNSf records both the query and the response in the traffic log.
Is there any peculiarity with pDNSf that may cause large TCP DNS responses to get dropped?