
Critical Bug: After FTL Queries First Per-Domain Upstream, FTL Sticks to It for All Client Queries, Ignoring Default DNS #2169

Closed
Cansybars opened this issue Jan 29, 2025 · 13 comments


@Cansybars

Cansybars commented Jan 29, 2025

Versions

  • Pi-hole version: v5.18.3 (Latest: v5.18.4)
  • Web version: v5.21 (Latest: v5.21)
  • FTL version: v5.25.2 (Latest: v5.25.2)

Platform

  • OS: Ubuntu 22.04.4 (Firewalla Gold)
  • Platform: Docker

Expected Behavior

  • Default upstreams (8.8.8.8, 8.8.4.4) should be used for standard matching queries.
  • Per-domain servers/forwarders (server=/example.com/CustomUpstream) should only apply to matching queries.
  • Each per-domain upstream should work independently and not affect queries for unrelated domains or devices.
  • FTL should recover from syntax errors in configuration files after corrections and restarts, without requiring manual deletion of files.

Steps to Reproduce

1. Sticking Conditional Forwarder Issue (Fatal - Breaks System When Using Custom Domain Server Settings)

  1. Configure two default upstreams (8.8.8.8, 8.8.4.4).
  2. Add multiple domains, each with a different per-domain upstream (server=/domain.com/CustomUpstream); see the config sketch after these steps.
  3. Query a non-conditional domain → Resolves via default upstream.
  4. Query a conditional domain (e.g., amazon.com → 1.1.1.1) → Resolves correctly.
  5. However, query another non-conditional domain → Incorrectly forwarded to 1.1.1.1 instead of 8.8.8.8 indefinitely.
  6. Switching devices resets behavior so queries resolve via the default DNS correctly, until another per-domain rule is hit, at which point the incorrect behavior repeats.
  7. Issue may not occur when using very few custom forwarded domains.
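
For concreteness, a minimal dnsmasq-level sketch of the setup in steps 1-2 (upstream pairings taken from the table further below; the exact file layout is an assumption):

# default upstreams, used for anything not matched below
server=8.8.8.8
server=8.8.4.4
# per-domain (conditional) upstreams
server=/amazon.com/1.1.1.1
server=/bbc.com/9.9.9.9
server=/apple.co.uk/8.8.8.8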

2. FTL Failure on Syntax Error or Multiple Domains in a Single Forward

  • FTL crashes when incorrect syntax or multiple domains under a single server=/ entry are present.
    (dnsmasq supports both formats, but Pi-hole does not seem to handle them correctly; see the sketch after these steps.)
  1. Introduce a small syntax error in any .conf file under /etc/dnsmasq.d/ OR aggregate multiple domains under a single server=/.../ entry.
  2. Restart FTL (pihole restartdns) or reboot the system.
  3. Observe total FTL failure (blocked on port 4711).
  4. Correct the syntax and restart FTL → Still fails.
  5. Only deleting or renaming the file allows FTL to start again, even when re-adding the exact same config.
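
As a sketch, the two trigger variants from step 1 could look like this in a file under /etc/dnsmasq.d/ (the file name and domains are hypothetical); note that the multi-domain form is valid dnsmasq syntax:

# /etc/dnsmasq.d/99-custom.conf (hypothetical name)
server=/example.com/example.net/example.org/1.1.1.1   # several domains in one entry (valid for dnsmasq)
sever=/example.com/1.1.1.1                            # deliberate typo: unknown option

pihole restartdns                                             # FTL reportedly stays down (port 4711 blocked)
mv /etc/dnsmasq.d/99-custom.conf /tmp/ && pihole restartdns   # per step 5, only removing the file recovers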

Impact

  1. Incorrect DNS routing per client: After a "grace period," the first conditional upstream sticks for all future queries for that device.
  2. Switching devices resets behavior, meaning the issue is per-device, not global.
  3. FTL becomes unusable when encountering a syntax issue, requiring manual intervention to recover.

Proposed Severity

  • CRITICAL Bug:

    • Renders Pi-hole useless when handling large groups of domains via different upstreams (e.g., VPN tunnels).
    • Prevents effective use of per-domain routing, leading to unintended DNS leaks.
  • Bug:

    • FTL crashes on syntax errors and does not recover without manual file deletion.
    • Leads to permanent DNS outage unless manually fixed.
    • Not as critical as the first, but possibly related; it can be bypassed.

Suggested Fixes

  1. Fix Per-Domain Upstream Sticking:

    • Ensure per-domain upstreams do not persist per-device beyond the matching query.
    • Identify why the first conditional match becomes "sticky" and overrides all future queries.
    • Investigate why switching devices resets behavior, but another per-domain match re-triggers the issue.
    • Investigate why the second conditional group (9.9.9.9) did not trigger the pattern.
  2. Improve FTL Handling of Syntax Errors:

    • Allow graceful failure when encountering invalid server=/ entries instead of breaking FTL, or skip invalid config files.
    • Investigate a possible (unlikely?) connection to the persisting wrong per-domain DNS resolutions.
  3. Investigate Query Behavior in Logs (Google.tv Example)

    • The last query example (a google.tv lookup on a new device) produces about five times as many processing lines in the logs.
    • This may hint at the mechanism causing the sticky conditional upstreams.

Query Log & Incorrect Routing Table

Query                    In Conditional Forwarder?   Expected Upstream   Actual Upstream
net.net                  ❌ No                        8.8.8.8 (default)   ✅ Correct (8.8.8.8)
cats.net                 ❌ No                        8.8.8.8 (default)   ✅ Correct (8.8.8.8)
apple.co.uk              ✅ Yes (Google)              8.8.8.8             ✅ Correct (8.8.8.8)
bbc.com                  ✅ Yes (UK)                  9.9.9.9             ✅ Correct (9.9.9.9)
amazon.com               ✅ Yes (US)                  1.1.1.1             ✅ Correct (1.1.1.1) (BUG TRIGGERED)
heroes.com               ❌ No                        8.8.8.8 (default)   ❌ Incorrect (1.1.1.1 - stuck)
get.com                  ❌ No                        8.8.8.8 (default)   ❌ Incorrect (1.1.1.1 - stuck)
apple.com                ✅ Yes (Google)              8.8.8.8             ✅ Resets behavior
google.tv (new device)   ❌ No                        8.8.8.8             ✅ Correct (8.8.8.8)

Explanation of query workflow (run in order)

  1. net.net: Not on any conditional forwarder → Resolves correctly to 8.8.8.8.
  2. cats.net: Not on any conditional forwarder → Resolves correctly to 8.8.8.8.
  3. apple.co.uk: On Google's conditional forwarder → Resolves correctly to 8.8.8.8 (since Google is also a default).
  4. bbc.com: On UK conditional forwarder (9.9.9.9) → Resolves correctly.
    No issue yet.
  5. amazon.com: On US conditional forwarder (1.1.1.1) → Resolves correctly.
    However, all subsequent queries are now forced to 1.1.1.1.
  6. heroes.com: Not on any list but still resolves via 1.1.1.1 instead of default (Bug triggered).
  7. get.com: Also not on any list, yet still resolves via 1.1.1.1 instead of default.
  8. apple.com: On Google conditional forwarder → Resolves correctly, appears to reset the bug.
  9. google.tv: Queried from a different device, and resolves correctly via 8.8.8.8.
    • Indicates issue is per-device and resets when switching clients.


Attachments

  • Debug token: https://tricorder.pi-hole.net/MP2GX8XA/
  • Screenshots of query logs showing incorrect behavior.
  • FTL log Snapshots matching FTL processing for logged queries
  • FTL logs showing broader mismatched resolution patterns.


extended query log.txt

@yubiuser yubiuser transferred this issue from pi-hole/pi-hole Jan 29, 2025
@Cansybars Cansybars changed the title Pi-hole FTL persists first per-domain upstream for all queries per client, bypassing defaults; unrecoverable crash on config syntax errors. Critical Bug: After FTL Queries First Per-Domain Upstream , FTL Sticks to it for All Client Queries, Ignoring Default DNS Jan 29, 2025
@DL6ER
Member

DL6ER commented Jan 29, 2025

Thank you for this very detailed report. We are in the last phase of preparing the v6 release of Pi-hole, which brings large changes in all the areas this issue covers. Before I try to reproduce all this on v6, could I ask you to try to reproduce the same on a :nightly docker container? As the reporter, you know best how to reproduce this exactly.

@Cansybars
Author

Cansybars commented Jan 30, 2025

Thanks for the swift response.

I set up the new Docker container and at some point realized that the config structure has changed and that /etc/dnsmasq.d is ignored by default, but after fixing that part I was able to test again. Regarding syntax errors in the files: they can now be corrected without removing the custom files, but the new UI makes it very unclear whether there were any problems loading custom data, or whether it was loaded at all. The FTL system errors and diagnostics warnings no longer appear, and when you reboot or reload the resolver from the system settings, you have to check all the way down in the Pi-hole logs to see that everything did in fact load.

Re the wrong-DNS issue: it's actually worse, more confusing, and very inconsistent and random, but the issues are serious.

The prior behavior (the 1.1.1.1 conditional "gluing" itself to all subsequent queries) has changed, but now domains that are not listed as conditionals and should take the default route may be routed through the conditional DNS servers, conditionals may route via the default, one condition may take the route of the other, or even both. Also, the query log often does not match what the Pi-hole log shows: either showing correct routing when it was not, or the opposite, or often falsely showing a result as resolved from cache. In one example, a dig command and the Pi-hole log show a domain resolving as nonexistent through the same route for which a manual check produces a valid IP. Of course there are also correct and consistent resolutions, but it's binary: either it works and can be trusted, or it doesn't.

Edit: Eventually, after a few hours without any use of the system, it appears all queries 'resort' to using 1.1.1.1 consistently, on any domain... :(

Some examples:

  1. abcnews.com is configured to route to 1.1.1.1, yet the query log shows it routing to the "opposite" conditional forwarder, 9.9.9.9:
abcnews.com
Query received on:  2025-01-30 03:37:43.534
Client:  iphone.lan (192.168.3.51)
Query Status:  Forwarded to 9.9.9.9#53
Reply:  IP
Database ID:  85

While the Pi-hole log shows it was actually routed to both!

2025-01-30 03:37:43.534 query[A] abcnews.com from 192.168.3.51
2025-01-30 03:37:43.536 forwarded abcnews.com to 1.1.1.1
2025-01-30 03:37:43.536 forwarded abcnews.com to 9.9.9.9
2025-01-30 03:37:43.599 reply abcnews.com is 34.110.155.89
  2. Domains in the 1.1.1.1 list, queried sequentially: atttvnow.com and akamaihd.net.

atttvnow.com shows in the Pi-hole log as incorrectly sent to 8.8.8.8, while in the query log it shows up as cached with a no-data response (dig @172.16.0.2, the Pi-hole). Meanwhile, dig @8.8.8.8 (the server actually queried, though the wrong one) returns a complete response with resolved IPs.

akamaihd.net shows in the Pi-hole log as sent to 1.1.1.1, correctly this time, with result NODATA (so far so good), while in the query log it shows as served from cache (though it had never been queried before).

From the Pi-hole log:

atttvnow.com shows up in the Pi-hole log as wrongly sent to 8.8.8.8 and receiving an NXDOMAIN response:
2025-01-30 01:54:12.627 query[A] atttvnow.com/ from 192.168.3.62
2025-01-30 01:54:12.630 forwarded atttvnow.com/ to 8.8.8.8
2025-01-30 01:54:12.634 forwarded atttvnow.com/ to 8.8.4.4
2025-01-30 01:54:12.635 forwarded atttvnow.com/ to 8.8.8.8
2025-01-30 01:54:12.635 forwarded atttvnow.com/ to 8.8.4.4
2025-01-30 01:54:12.681 reply atttvnow.com/ is NXDOMAIN
2025-01-30 01:54:40.821 query[A] pi.hole from 127.0.0.1
2025-01-30 01:54:40.821 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-30 01:55:11.010 query[A] pi.hole from 127.0.0.1
2025-01-30 01:55:11.011 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-30 01:55:40.583 query[A] akamaihd.net from 192.168.3.62
2025-01-30 01:55:40.584 forwarded akamaihd.net to 1.1.1.1
2025-01-30 01:55:40.590 reply akamaihd.net is NODATA-IPv4

From Query log in contrast:

Akamaihd.net
Query received on:  2025-01-30 01:55:40.583
Client:  mactemp.lan (192.168.3.62)
Query Status:  Served from cache
Reply:  NODATA
Database ID:  65
atttvnow.com
Query received on:  2025-01-30 01:54:12.628
Client:  mactemp.lan (192.168.3.62)
Query Status:  Served from cache
Reply:  NXDOMAIN
Database ID:  64

Response: dig atttvnow.com @172.16.0.2 (pihole) - NXDOMAIN

; <<>> DiG 9.10.6 <<>> @172.16.0.2 atttvnow.com/
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 10902
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

vs. dig atttvnow.com @8.8.8.8, with IPv4 results:

; <<>> DiG 9.10.6 <<>> @8.8.8.8 atttvnow.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60908
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;atttvnow.com.			IN	A

;; ANSWER SECTION:
atttvnow.com.		20	IN	A	82.102.152.155
atttvnow.com.		20	IN	A	82.102.152.193
  3. Another example: net2.net - a non-conditional domain, routed to 1.1.1.1 instead of the default Google DNS:
net2.net
Query received on:  2025-01-30 01:33:26.121
Client:  mactemp.lan (192.168.3.62)
Query Status:  Forwarded to 1.1.1.1#53
Reply:  IP
Database ID:  59
Jan 30 01:33:26 dnsmasq[179]: query[A] net2.com from 192.168.3.62
Jan 30 01:33:26 dnsmasq[179]: forwarded net2.com to 1.1.1.1
Jan 30 01:33:26 dnsmasq[179]: reply net2.com is 52.57.221.121
  4. abcnews.com - listed to route through 1.1.1.1 - shows in the query log as routed through the opposite condition (9.9.9.9). But in fact, the Pi-hole log shows it was sent to BOTH opposite conditions - the worst possible outcome:
abcnews.com
Query received on:  2025-01-30 03:37:43.534
Client:  iphone.lan (192.168.3.51)
Query Status:  Forwarded to 9.9.9.9#53
Reply:  IP
Database ID:  85
Jan 30 03:37:43 dnsmasq[179]: query[A] abcnews.com from 192.168.3.51
Jan 30 03:37:43 dnsmasq[179]: forwarded abcnews.com to 1.1.1.1
Jan 30 03:37:43 dnsmasq[179]: forwarded abcnews.com to 9.9.9.9
Jan 30 03:37:43 dnsmasq[179]: reply abcnews.com is 34.110.155.89

  5. jungle.com - a non-conditional default domain - instead of resolving via Google DNS, routes to the 1.1.1.1 conditional:
jungle.com
Query received on:  2025-01-30 01:02:55.091
Client:  mactemp.lan (192.168.3.62)
Query Status:  Forwarded to 1.1.1.1#53
Reply:  IP
Database ID:  32
Jan 30 01:02:55 dnsmasq[179]: query[A] jungle.com from 192.168.3.62
Jan 30 01:02:55 dnsmasq[179]: forwarded jungle.com to 1.1.1.1
Jan 30 01:02:55 dnsmasq[179]: reply jungle.com is 76.223.105.230
Jan 30 01:02:55 dnsmasq[179]: reply jungle.com is 13.248.243.5

Query received on:  2025-01-30 01:37:52.933
Client:  mactemp.lan (192.168.3.62)
Query Status:  Served from cache
Reply:  NXDOMAIN
Database ID:  61
  6. friends.com, not on the forwarder list, also resolves through 1.1.1.1, this time showing in both the query log and the Pi-hole log:
friends.com
Query received on:  2025-01-30 01:02:21.548
Client:  mactemp.lan (192.168.3.62)
Query Status:  Forwarded to 1.1.1.1#53
Reply:  IP
Database ID:  28
Jan 30 01:02:21 dnsmasq[179]: query[A] friends.com from 192.168.3.62
Jan 30 01:02:21 dnsmasq[179]: forwarded friends.com to 1.1.1.1
Jan 30 01:02:21 dnsmasq[179]: reply friends.com is 34.234.119.83
Jan 30 01:02:21 dnsmasq[179]: reply friends.com is 18.214.233.15

  7. A total gibberish domain, obviously not on any condition, resolves through the 1.1.1.1 conditional, but worse: in the query log, the unsuspecting viewer sees it as a legitimate cache hit:
 blasdfiwpalsdfalskdjhf.com
Query received on:  2025-01-30 01:37:52.933
Client:  mactemp.lan (192.168.3.62)
Query Status:  Served from cache
Reply:  NXDOMAIN
Database ID:  61 
Jan 30 01:02:21 dnsmasq[179]: query[A] friends.com from 192.168.3.62
Jan 30 01:02:21 dnsmasq[179]: forwarded friends.com to 1.1.1.1
Jan 30 01:02:21 dnsmasq[179]: reply friends.com is 34.234.119.83
Jan 30 01:02:21 dnsmasq[179]: reply friends.com is 18.214.233.15

Edit: And a few hours later:

2025-01-30 08:11:10.570 query[A] pi.hole from 127.0.0.1
2025-01-30 08:11:10.591 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-30 08:11:16.605 query[A] att.com from 192.168.3.62
2025-01-30 08:11:16.608 forwarded att.com to 1.1.1.1
2025-01-30 08:11:16.612 reply att.com is 144.160.155.43
2025-01-30 08:11:16.612 reply att.com is 144.160.36.42
2025-01-30 08:11:25.103 query[A] att.com from 192.168.3.62
2025-01-30 08:11:25.103 cached att.com is 144.160.155.43
2025-01-30 08:11:25.103 cached att.com is 144.160.36.42
2025-01-30 08:11:32.757 query[A] abba.com from 192.168.3.62
2025-01-30 08:11:32.758 forwarded abba.com to 1.1.1.1
2025-01-30 08:11:32.964 reply abba.com is 63.251.38.201
2025-01-30 08:11:40.779 query[A] pi.hole from 127.0.0.1
2025-01-30 08:11:40.779 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-30 08:11:46.636 query[A] baby.com from 192.168.3.62
2025-01-30 08:11:46.639 forwarded baby.com to 1.1.1.1
2025-01-30 08:11:46.839 reply baby.com is 54.149.177.219
2025-01-30 08:11:46.839 reply baby.com is 34.212.217.50
2025-01-30 08:12:10.965 query[A] pi.hole from 127.0.0.1
2025-01-30 08:12:10.965 Pi-hole hostname pi.hole is 127.0.0.1

I hope this is enough for you to go on for now. Let me know if I can do anything else to help; this is really critical and I would like to do anything I can.
Here is a new token I generated after running the specific examples 1-7: https://tricorder.pi-hole.net/5MWpjIP8/
Edit: and another, generated while writing the 'Edit': https://tricorder.pi-hole.net/jJuKLzpy/

@DL6ER
Member

DL6ER commented Jan 30, 2025

Okay, thank you for your additional work. Let me walk through your comments here one by one for clarity:

1. abcnews.com configured to route to 1.1.1.1

While the Pi-hole log shows it was actually routed to both!

This is actually expected and by design. Whenever you have defined more than one possible upstream server, dnsmasq every once in a while tries all configured DNS servers to see which one responds fastest. dnsmasq then only processes the first reply, which is the server Pi-hole shows as the one "forwarded to". I will think about improving the text shown on the web interface to something like "reply received from ...". I don't really think there is much value in showing that we forwarded to multiple servers when we only use the first reply.

  • You have configured 8.8.8.8 and 8.8.4.4 to be used "in general". These are not used for abcnews.com
  • You have configured 1.1.1.1 and 9.9.9.9 to be used for abcnews.com in particular (two server=/abcnews.com/... lines exist in your custom config) ✔

While you configured abcnews.com -> 1.1.1.1 pretty high up in your file, the other config line is further down, just before the one for popcornflix.com.
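
In config terms, the situation described amounts to something like this (positions approximate):

server=/abcnews.com/1.1.1.1    # high up in the file
# ... many unrelated server=/.../ lines ...
server=/abcnews.com/9.9.9.9    # further down, just before the popcornflix.com entry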

Conclusion: Everything is correct here.

2. Domains in the 1.1.1.1 list, queried sequentially: atttvnow.com and akamaihd.net.

atttvnow.com/

This can be answered very quickly because you simply have a typo in your query:

query[A] atttvnow.com/ from 192.168.3.62

Note the extra / at the end. This domain is invalid, and forwarding to 8.8.8.8 and 8.8.4.4 is correct here, as no special rules exist for this incorrect domain, and both servers are what you defined to be used for anything not covered by server=/.../ lines.

Conclusion: Everything is correct here.

akamaihd.net

2025-01-30 01:55:40.583 query[A] akamaihd.net from 192.168.3.62
2025-01-30 01:55:40.584 forwarded akamaihd.net to 1.1.1.1
2025-01-30 01:55:40.590 reply akamaihd.net is NODATA-IPv4

I can confirm locally that NODATA-IPv4 is indeed the correct answer. Now to the question of why the web interface showed "served from cache", and the answer is quickly found: this is a regression from the recent large rewrite in dnsmasq and an excellent catch. It only shows up for empty replies from upstream; I will fix this.

Conclusion: Bug in FTL, will be fixed.

3. Another example: net2.net - non conditional domain, routed to 1.1.1.1 instead of default google DNS

I have no good explanation for why this is happening, and we need to find out. You quoted:

Jan 30 01:33:26 dnsmasq[179]: query[A] net2.com from 192.168.3.62
Jan 30 01:33:26 dnsmasq[179]: forwarded net2.com to 1.1.1.1
Jan 30 01:33:26 dnsmasq[179]: reply net2.com is 52.57.221.121

Could you provide some more lines around this one? In particular, to see if something out of the ordinary happens before it, and/or whether the immediately preceding query was one that should have been sent to 1.1.1.1 and the reset to the default simply did not work. The following queries would also be interesting, to see if they are affected by the same thing.

4. abcnews.com - listed to route through 1.1.1.1, shows in the query log as being routed through the opposite condition (9.9.9.9). but in fact, the pihole log shows it was sent to BOTH opposite conditions - worst possible outcome

This is the same as no. 1 above; it is correct behavior. You defined the domain to be sent to either 1.1.1.1 or 9.9.9.9 and, occasionally, it is sent to both to probe which replies fastest. If you don't want this, don't specify two servers for the same domain.

5. jungle.com - non conditional default domain, instead of resolving via google DNS, routes to the 1.1.1.1 conditional

A combination of no. 2 (wrong cached status) and 3 above (sent to wrong upstream).

6. Friends.com not on forwarder list, also resolves through 1.1.1.1, this time showing both on the query log and pihole log.

Same as no. 3 above.

7. Total jibrish domain that is obviously not on condition, resolves through the 1.1.1.1 condition but the worse thing is that on the query log, the unsuspecting viewer, sees it as legitimate cache:

A combination of no. 2 (wrong cached status) and 3 above (sent to wrong upstream). The wrong display is caused by the misinterpreted negative reply (NXDOMAIN) coming from upstream - will be fixed.

The bug we've already identified here (the wrong cache status) will be fixed in a separate PR, for the other bug (forwarded to the wrong servers), I am in contact with the dnsmasq maintainers. Yet, I'd still like to see some more input on this.

@Cansybars
Author

  1. You're correct about abcnews.com appearing twice under conflicting conditions. It was previously all but impossible to parse more than a few domains in one entry, and there was an error in the file, as with the atttvnow.com typo. It took some time to put the new container together and to realize that the whole structure had changed and that by default dnsmasq.d isn't even read (not exactly a standard 'reproduction'), so given the hour (past 3:30 am) at which I ran the queries and put this report together, I hope to be forgiven for that. However, I really did pick just a few examples, and except for the issue of "cross-conditional resolution", it was all there, repeatedly.

And yes, it still leaves the fact that the incorrect domain was queried and appears in the Pi-hole log, but in the query report shows up as cached, as other successfully (and sometimes incorrectly) resolved domains often do too.

  2. net2.net - I'm attaching more lines. There really doesn't seem to be a 'trigger event' as last time, except that I tested intermittently, and the longer time went by (even without activity on my end), the more the 1.1.1.1 "stickiness" (much less so for 9.9.9.9) increased, as attached at the end. I was also disappointed, because I really tried to get you a 'root cause' theory, and I'd like to do my best to help this run smoothly, but also because I found myself at really odd hours in pursuit of the culprit ;). The net2.net query was just before the gibberish one, which also pulled 1.1.1.1, so here they are combined (with not much useful information in between). The only thing I can tell you is that the list I fed begins with the 1.1.1.1 values, and that there are about 50% more of those than the 9.9.9.9 ones (for what it's worth...). Both are included in the extended output:
Jan 30 01:31:01 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:31:31 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:31:31 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:32:01 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:32:01 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:32:32 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:32:32 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:33:02 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:33:02 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:33:26 dnsmasq[179]: query[A] net2.com from 192.168.3.62
Jan 30 01:33:26 dnsmasq[179]: forwarded net2.com to 1.1.1.1
Jan 30 01:33:26 dnsmasq[179]: reply net2.com is 52.57.221.121
Jan 30 01:33:32 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:33:32 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:34:02 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:34:02 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:34:32 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:34:32 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:35:03 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:35:03 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:35:18 dnsmasq[179]: query[A] dontforward.com from 192.168.3.62
Jan 30 01:35:18 dnsmasq[179]: forwarded dontforward.com to 1.1.1.1
Jan 30 01:35:18 dnsmasq[179]: reply dontforward.com is <CNAME>
Jan 30 01:35:18 dnsmasq[179]: reply traff-5.hugedomains.com is <CNAME>
Jan 30 01:35:18 dnsmasq[179]: reply hdr-nlb7-aebd5d615260636b.elb.us-east-1.amazonaws.com is 34.205.242.146
Jan 30 01:35:18 dnsmasq[179]: reply hdr-nlb7-aebd5d615260636b.elb.us-east-1.amazonaws.com is 54.161.222.85
Jan 30 01:35:33 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:35:33 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:36:03 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:36:03 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:36:33 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:36:33 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:37:03 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:37:03 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:37:34 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:37:34 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:37:52 dnsmasq[179]: query[A] blasdfiwpalsdfalskdjhf.com from 192.168.3.62
Jan 30 01:37:52 dnsmasq[179]: forwarded blasdfiwpalsdfalskdjhf.com to 1.1.1.1
Jan 30 01:37:52 dnsmasq[179]: reply blasdfiwpalsdfalskdjhf.com is NXDOMAIN
Jan 30 01:38:04 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:38:04 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:38:34 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:38:34 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:39:04 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 30 01:39:04 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 30 01:39:34 dnsmasq[179]: query[A] pi.hole from 127.0.0.1

Re more output: I hope you noticed the output at the end of my report, where query after query, none of which are on the list, were incorrectly routed to 1.1.1.1. Before, this was an absolute trigger that RESET itself when you switched devices; now, while not absolute, the mere passage of time eventually ends in the same result. So you can add those if you missed them: att.com, abba.com, baby.com, and max.com (correct, but on the 1.1.1.1 list).

2025-01-30 08:11:10.570 query[A] pi.hole from 127.0.0.1
2025-01-30 08:11:10.591 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-30 08:11:16.605 query[A] att.com from 192.168.3.62
2025-01-30 08:11:16.608 forwarded att.com to 1.1.1.1
2025-01-30 08:11:16.612 reply att.com is 144.160.155.43
2025-01-30 08:11:16.612 reply att.com is 144.160.36.42
2025-01-30 08:11:25.103 query[A] att.com from 192.168.3.62
2025-01-30 08:11:25.103 cached att.com is 144.160.155.43
2025-01-30 08:11:25.103 cached att.com is 144.160.36.42
2025-01-30 08:11:32.757 query[A] abba.com from 192.168.3.62
2025-01-30 08:11:32.758 forwarded abba.com to 1.1.1.1
2025-01-30 08:11:32.964 reply abba.com is 63.251.38.201
2025-01-30 08:11:40.779 query[A] pi.hole from 127.0.0.1
2025-01-30 08:11:40.779 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-30 08:11:46.636 query[A] baby.com from 192.168.3.62
2025-01-30 08:11:46.639 forwarded baby.com to 1.1.1.1
2025-01-30 08:11:46.839 reply baby.com is 54.149.177.219
2025-01-30 08:11:46.839 reply baby.com is 34.212.217.50
2025-01-30 08:12:10.965 query[A] pi.hole from 127.0.0.1
2025-01-30 08:12:10.965 Pi-hole hostname pi.hole is 127.0.0.1

Anyway, you asked, so I ran a few more before hitting the comment button (24 hours later). Not on any lists, all but one routed incorrectly again to 1.1.1.1: elephant.com, monkey.com, wave.com, wind.com, dog.com, player.com (vs. bbciplayer.com, which is on the 9.9.9.9 list and, as before with 9.9.9.9, was for the most part correct). e3494.e2.akamaiedge.net, also 9.9.9.9, was correct, and there was a RARE correct default 8.8.8.8 resolution: telephony.goog. However, brown.com and black.com (not on any list) went to 1.1.1.1 again after that (see below; I may have missed a couple). If you'd like entire logs, don't hesitate to ask. Here is a new list, and a few more suggestions:

2025-01-31 03:24:55.277 query[A] elephant.com from 192.168.3.62
2025-01-31 03:24:55.308 forwarded elephant.com to 1.1.1.1
2025-01-31 03:24:55.439 reply elephant.com is 141.193.213.10
2025-01-31 03:24:55.439 reply elephant.com is 141.193.213.11
2025-01-31 03:25:00.125 query[A] pi.hole from 127.0.0.1
2025-01-31 03:25:00.125 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:25:12.740 query[A] monkey.com from 192.168.3.62
2025-01-31 03:25:12.742 forwarded monkey.com to 1.1.1.1
2025-01-31 03:25:15.007 reply monkey.com is <CNAME>
2025-01-31 03:25:15.008 reply ec5f7986.monkey.com.cname.byteshieldcdn.com is NXDOMAIN
2025-01-31 03:25:30.318 query[A] pi.hole from 127.0.0.1
2025-01-31 03:25:30.318 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:25:32.894 query[A] wave.com from 192.168.3.62
2025-01-31 03:25:32.896 forwarded wave.com to 1.1.1.1
2025-01-31 03:25:33.164 reply wave.com is 34.149.216.132
2025-01-31 03:25:40.374 query[A] wind from 192.168.3.62
2025-01-31 03:25:40.374 forwarded wind to 192.168.4.1
2025-01-31 03:25:40.409 reply wind is NXDOMAIN
2025-01-31 03:25:45.087 query[A] wind.com from 192.168.3.62
2025-01-31 03:25:45.088 forwarded wind.com to 1.1.1.1
2025-01-31 03:25:45.277 reply wind.com is 3.33.130.190
2025-01-31 03:25:45.278 reply wind.com is 15.197.148.33
2025-01-31 03:25:54.104 query[A] dog.com from 192.168.3.62
2025-01-31 03:25:54.106 forwarded dog.com to 1.1.1.1
2025-01-31 03:25:54.297 reply dog.com is 23.227.38.65
2025-01-31 03:26:00.500 query[A] pi.hole from 127.0.0.1
2025-01-31 03:26:00.500 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:26:10.062 query[A] itv.com from 192.168.3.62
2025-01-31 03:26:10.063 forwarded itv.com to 9.9.9.9
2025-01-31 03:26:11.970 reply itv.com is 75.2.43.168
2025-01-31 03:26:11.970 reply itv.com is 99.83.221.243
2025-01-31 03:26:26.082 query[A] max.com from 192.168.3.62
2025-01-31 03:26:26.084 forwarded max.com to 1.1.1.1
2025-01-31 03:26:26.297 reply max.com is 13.226.2.24
2025-01-31 03:26:26.298 reply max.com is 13.226.2.79
2025-01-31 03:26:26.298 reply max.com is 13.226.2.66
2025-01-31 03:26:26.298 reply max.com is 13.226.2.90
2025-01-31 03:26:30.765 query[A] pi.hole from 127.0.0.1
2025-01-31 03:26:30.765 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:26:55.220 query[A] bbciplayer.com from 192.168.3.62
2025-01-31 03:26:55.222 forwarded bbciplayer.com to 1.1.1.1
2025-01-31 03:26:55.631 reply bbciplayer.com is 2.16.1.176
2025-01-31 03:26:55.632 reply bbciplayer.com is 2.16.1.235
2025-01-31 03:27:00.975 query[A] pi.hole from 127.0.0.1
2025-01-31 03:27:00.975 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:27:31.162 query[A] pi.hole from 127.0.0.1
2025-01-31 03:27:31.162 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:28:01.345 query[A] pi.hole from 127.0.0.1
2025-01-31 03:28:01.345 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:28:31.568 query[A] pi.hole from 127.0.0.1
2025-01-31 03:28:31.569 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:28:37.787 query[A] e3494.e2.akamaiedge.net from 192.168.3.62
2025-01-31 03:28:37.790 forwarded e3494.e2.akamaiedge.net to 9.9.9.9
2025-01-31 03:28:42.816 query[A] e3494.e2.akamaiedge.net from 192.168.3.62
2025-01-31 03:28:42.817 forwarded e3494.e2.akamaiedge.net to 9.9.9.9
2025-01-31 03:28:42.822 reply e3494.e2.akamaiedge.net is 23.51.209.237
2025-01-31 03:29:01.926 query[A] pi.hole from 127.0.0.1
2025-01-31 03:29:01.926 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:29:10.114 query[A] telephoy.goog from 192.168.3.62
2025-01-31 03:29:10.116 forwarded telephoy.goog to 8.8.8.8
2025-01-31 03:29:10.185 reply telephoy.goog is NXDOMAIN
2025-01-31 03:29:32.103 query[A] brown.com from 192.168.3.62
2025-01-31 03:29:32.104 forwarded brown.com to 1.1.1.1
2025-01-31 03:29:32.109 query[A] pi.hole from 127.0.0.1
2025-01-31 03:29:32.109 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:29:32.321 reply brown.com is 104.199.118.81
2025-01-31 03:29:45.936 query[A] black.com from 192.168.3.62
2025-01-31 03:29:45.938 forwarded black.com to 1.1.1.1
2025-01-31 03:29:46.009 reply black.com is 104.26.12.167
2025-01-31 03:29:46.009 reply black.com is 104.26.13.167
2025-01-31 03:29:46.009 reply black.com is 172.67.75.240
2025-01-31 03:30:02.318 query[A] pi.hole from 127.0.0.1
2025-01-31 03:30:02.318 Pi-hole hostname pi.hole is 127.0.0.1
2025-01-31 03:30:32.505 query[A] pi.hole from 127.0.0.1

I may have mentioned it (I was really out of it when I wrote you last night), but I think the lack of an indication of loading errors (including when nothing loads at all) on the System page, right where you flush the network table and restart FTL, plus an indicator in the logs area, is prone to errors, especially since the format changed so much. It took me a while in the beginning to understand that I was compiling a list for you without dnsmasq even loading (don't worry, I didn't include those).

The fact that you use the exact same hidden pihole (formerly FTL) log file both to test known bugs and to deal with unknown or ongoing issues is a problem. Before, FTL would fail miserably and barely 'pick itself up', but you received immediate feedback on the relevant system page, plus a pointer to the relevant line; omitting that comprehensive report from the UI is, I think, a mistake. And the dnsmasq.d option should default to "true": not only is this the expected behavior, there is really no downside to enabling it, and I know people who can barely install the software, let alone edit config files buried deep near the Docker root directory.

BTW, the docker compose password is not accepted at first login, and the API for the Pi-hole remote app cannot connect with either the updated password or a token. It would be an efficient tool if it connected, though it only handles custom black and white entries and ignores adlists; but that is not new and not related to you, I assume.

Let me know if you'd like a copy of my log files, and I'd be happy to do some more QA when progress is made.

Would be delighted to be kept in the loop!

@DL6ER
Member

DL6ER commented Jan 31, 2025

There is nothing you'd need to be forgiven for :-)

The fix for the wrong display as cached vs. forwarded has already been merged and should be in the next nightly container. In addition, we changed the wording from "forwarded to ..." to "forwarded, reply from ..." to highlight that this is the IP address of the server whose reply we have used - Pi-hole may have probed multiple defined servers for a query, but only the first reply is used.

Not including the /etc/dnsmasq.d directory by default is necessary to avoid dnsmasq not starting up after the v5 -> v6 migration due to conflicting files. As you've seen, this is pretty easy to hit. And a dysfunctional DNS server after the migration is much worse than having to re-enable the option once. We also have some protection in place that tries to prevent exactly this dysfunctional state by reverting a config change when we detect that the new configuration would cause dnsmasq not to start up again. A miserable failure is most often unwanted, as we have to remember that many users run Pi-hole in self-updating docker containers and may hit a hard wall on upgrade. Ignoring the extra directory by default reduces this risk dramatically. Re-enabling it isn't much work, but I agree we have to make it very clear in the release notes. However, so far I have not heard much criticism of this move (actually, yours is the first). But I see, and am sorry, that it caused extra work for you.

The password specified in docker compose should be accepted right away (right @PromoFaux?), but note that the environment variables have changed for v6 as well: https://github.com/pi-hole/docker-pi-hole/tree/development?tab=readme-ov-file#web-interface-password


The other issue, about forwarding to the wrong server, is in discussion with the dnsmasq maintainers. However, it'd be great if you could do one more thing for us. I have meanwhile applied the same 900 server=/.../ lines from your config, but I am still unable to reproduce this myself. I have created a special debugging branch fix/strange_forwarding and would like you to try it on your system. So far, it does not fix anything, but it will log (hopefully useful) details about why dnsmasq chose the upstream servers it chose for a particular query. As you are using docker, the procedure has a few more steps, but not too many:

  1. Please first clone the docker-pi-hole repository: https://github.com/pi-hole/docker-pi-hole
  2. Check out the development branch
  3. Build a local (modified) image using ./build.sh -f fix/strange_forwarding
  4. Use pihole:local instead of pihole/pihole:nightly in your compose file
  5. Please set the additional environment variable FTLCONF_misc_extraLogging=true
  6. Rebuild the container (a command sketch follows below)
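
As a sketch, steps 1-6 as commands (the compose details depend on your existing file):

git clone https://github.com/pi-hole/docker-pi-hole
cd docker-pi-hole
git checkout development
./build.sh -f fix/strange_forwarding     # step 3: builds the local image

# steps 4-5, in your compose file:
#   image: pihole:local                  # instead of pihole/pihole:nightly
#   environment:
#     FTLCONF_misc_extraLogging: "true"

docker compose up -d --force-recreate    # step 6: recreate the container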

You should now see extra log lines in pihole.log looking like:

Jan 31 17:17:58 dnsmasq[2058944]: UDP 399 127.0.0.1/54111 query[A] abcnews.com from 127.0.0.1
Jan 31 17:17:58 dnsmasq[2058944]: Acceptable servers for domain "abcnews.com" are (IDs 657 - 658):    <----
Jan 31 17:17:58 dnsmasq[2058944]:   * 1.1.1.1#657 (53) because it is explicitly configured for domain "abcnews.com"   <----
Jan 31 17:17:58 dnsmasq[2058944]:   * 9.9.9.9#658 (53) because it is explicitly configured for domain "abcnews.com"   <----
Jan 31 17:17:58 dnsmasq[2058944]: forwarding query to 1.1.1.1, configured for domain "abcnews.com"    <----
Jan 31 17:17:58 dnsmasq[2058944]: UDP 399 127.0.0.1/54111 forwarded abcnews.com to 1.1.1.1
Jan 31 17:17:58 dnsmasq[2058944]: forwarding query to 9.9.9.9, configured for domain "abcnews.com"    <----
Jan 31 17:17:58 dnsmasq[2058944]: UDP 399 127.0.0.1/54111 forwarded abcnews.com to 9.9.9.9
Jan 31 17:17:58 dnsmasq[2058944]: UDP 399 127.0.0.1/54111 reply abcnews.com is 34.110.155.89

The lines I marked with <---- are the new ones; when you again share a larger snippet around any such occurrence of wrong forwarding, this should shed some more light on what is actually happening here. What you see above is a direct quote from the file /var/log/pihole/pihole.log.

@PromoFaux
Member

The password specified in docker compose should be accepted right away (right @PromoFaux ?),

Yeah, that's right. The correct name for the password env var is FTLCONF_webserver_api_password

Which is a departure from v5's WEBPASSWORD. The logic for setting the password in v5 was done in some custom bash logic, in V6 we can simply leverage FTL's native functionality :)
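
For illustration, a minimal compose fragment with the v6 variable name (the password value is a placeholder):

services:
  pihole:
    image: pihole/pihole:nightly
    environment:
      # v5 used WEBPASSWORD; v6 maps env vars onto FTL's own config:
      FTLCONF_webserver_api_password: "changeme"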

Just thinking out loud, I could add some code into Docker that detects WEBPASSWORD and feeds back to the user that this has been deprecated - we could then transform it to a variable with the correct name for that run. Maybe.

I'd avoided too much hand-holding in this department so far, but maybe it will ease the upgrade path for some.

@Cansybars
Author

Cansybars commented Feb 1, 2025

Hey,

  1. Looking at your request, it seems that enhancing the logging by changing the following values from false to true in pihole-FTL.toml should provide the additional information you’re looking for:

✅ FTLCONF_extraLogging=true (should already be equivalent to FTLCONF_misc_extraLogging=true)
✅ DEBUG.QUERIES=true (detailed query information, including requestor, type, and flags)
✅ DEBUG.FLAGS=true (logs query flags, useful for tracking conditional forwarding)
✅ EXTRA_LOGGING=true (adds additional verbose logging to pihole.log, which should include upstream selection details)
✅ DEBUG.DATABASE=true (logs how the database is queried, ensuring client-group associations are reflected properly)

Before I proceed with this update, can you confirm if this covers what you’re looking for? It seems like this would add the expected information without needing to rebuild the container, but I’d like to make sure before applying it.
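
For reference, a sketch of how those toggles might look inside pihole-FTL.toml, assuming the section and key names mirror the FTLCONF_* paths:

[misc]
  extraLogging = true   # assumed equivalent of FTLCONF_misc_extraLogging=true

[debug]
  queries = true        # detailed query information
  flags = true          # query flag logging
  database = true       # database access logging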

  2. Edit: I updated the settings and took a closer look. While I am not seeing an actual reason for the massive routing errors, what I am seeing does not correlate with their existence, and is in fact clearly a workflow that does not make sense based on the verbose logging in the stable version. Focusing on the one situation in which the system could previously be trusted to perform correctly (a new query on a new device that falls under the default, made before any conditional is hit, which now has no impact on result quality) can provide a starting point for a deeper investigation.

To be more specific, I compared the comprehensive all-DNS-records dig from the new device (iPhone) a couple of days ago for the domain google.tv, which, after the total 'fixation' on 1.1.1.1 for non-conditional domains on my Mac, correctly resolved through the default Google DNS servers, with the same procedure on an iPad today for the domain cats.com (which, you may guess, is not on any list), which did not 'reset' to the default DNS and used 1.1.1.1 again. This is followed by something very peculiar: in between activity, an obsessive PTR repeats endlessly when no lookups are made, in what seems to be an endless loop of 127.0.0.1 asking who pi.hole is and receiving the answer 127.0.0.1, which triggers the same query again.

Starting with the bottom line: in the initial query from the iPad on nightly, cats.com is immediately routed to 1.1.1.1, starting with:

Jan 31 23:18:54 dnsmasq[179]: query[AAAA] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1

Just like that, with zero database checks, neither for a conditional domain nor for the cache, creating a sequence in which, without these checks, the term "reason" is useless: the choice is entirely arbitrary, and with no check there can be no reason.

The query log, by the way, presents the answers as "cached".

Immediately following that, a PTR via the local 192.168.4.1 discovers that client 192.168.3.54 is ipad.lan.

Then something very revealing and stranger. But first, a few words about the document highlighted in green above (it's very late here and I am writing this not in the order you are seeing my post): the endless resolutions of the new client iPhone, and the database checks after its initial query for apple.tv, happen only after the query for google.tv is made, delaying it considerably. This opposite order of events explains the "reset" to a correct resolution. I was going to go into more detail, but it is past 6 am here. cats.com, the loop, and the PTR for the iPad are in different boxes but consecutive:

#2 is covered here:
A


Jan 31 23:18:54 dnsmasq[179]: query[AAAA] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[NAPTR] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[A] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[CAA] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[SSHFP] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[CNAME] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[NS] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[SOA] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[MX] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[PTR] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[SRV] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[TXT] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[HINFO] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[RP] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[DNAME] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[AFSDB] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[OPT] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[SPF] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: query[LOC] cats.com from 192.168.3.54
Jan 31 23:18:54 dnsmasq[179]: forwarded cats.com to 1.1.1.1
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is 2606:4700:10::ac43:272f
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is 2606:4700:10::6816:2f55
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is 2606:4700:10::6816:2e55
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is 172.67.39.47
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is 104.22.47.85
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is 104.22.46.85
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <CAA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <NS>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <NS>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <SOA>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <MX>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <MX>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <MX>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <MX>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is <MX>
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is facebook-domain-verification=ljyoq9cp443bcuqcxkbc4zgve15jhp
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is google-gws-recovery-domain-verification=43199800
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is google-site-verification=2tgNsRn0zlXO0gyCXJ3vh2BS20a7dIfOCaCC3246XMM
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is google-site-verification=uoJUeFeuQVWZkgQPURE8KAYTvD9BMPL1utJ6NBqNdg0
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is google-site-verification=xCxT9znRdGC9EQQjoQGjpyiNB5UTnnvc6kW8MePRqLE
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is klaviyo-site-verification=Kn8PAH
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is v=spf1 include:_spf.google.com -all
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA
Jan 31 23:18:54 dnsmasq[179]: reply cats.com is NODATA

B

Jan 31 23:18:55 dnsmasq[179]: query[PTR] 54.3.168.192.in-addr.arpa from 127.0.0.1
Jan 31 23:18:55 dnsmasq[179]: forwarded 54.3.168.192.in-addr.arpa to 192.168.4.1
Jan 31 23:18:55 dnsmasq[179]: reply 192.168.3.54 is ipad.lan

C

Jan 31 23:19:17 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:19:17 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:19:47 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:19:47 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:20:17 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:20:17 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:20:48 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:20:48 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:21:18 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:21:18 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:21:48 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:21:48 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:22:18 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:22:18 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:22:49 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:22:49 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:23:19 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:23:19 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:23:49 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:23:49 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:24:19 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:24:19 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:24:49 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:24:49 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:25:20 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:25:20 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:25:50 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:25:50 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:26:20 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:26:20 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:26:50 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:26:50 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:27:20 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:27:20 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:27:51 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:27:51 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:28:21 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:28:21 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:28:51 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:28:51 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:29:21 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:29:21 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:29:51 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:29:51 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:30:22 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:30:22 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:30:52 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:30:52 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:31:22 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:31:22 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:31:52 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:31:52 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:32:23 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:32:23 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:32:53 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:32:53 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:33:23 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:33:23 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:33:53 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:33:53 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:34:23 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:34:23 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:34:54 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:34:54 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:35:24 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:35:24 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:35:54 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:35:54 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:36:24 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:36:24 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1
Jan 31 23:36:54 dnsmasq[179]: query[A] pi.hole from 127.0.0.1
Jan 31 23:36:54 dnsmasq[179]: Pi-hole hostname pi.hole is 127.0.0.1

D. Then, with no apparent trigger, this part repeats: a PTR for ipad.lan, first via the reverse-lookup server set by the system (left in the default dnsmasq 01-pihole.conf; the rest was commented out). The boxes are sequential, divided to make this clear:

Feb  1 00:00:00 dnsmasq[179]: query[PTR] 1.0.0.127.in-addr.arpa from 127.0.0.1
Feb  1 00:00:00 dnsmasq[179]: /etc/hosts 127.0.0.1 is localhost
Feb  1 00:00:00 dnsmasq[179]: query[PTR] 54.3.168.192.in-addr.arpa from 127.0.0.1
Feb  1 00:00:00 dnsmasq[179]: cached-stale 192.168.3.54 is ipad.lan
Feb  1 00:00:00 dnsmasq[179]: forwarded 54.3.168.192.in-addr.arpa to 192.168.4.1

E. Immediately followed by a PTR asking who 8.8.8.8 is, initiated again by the 127.0.0.1 localhost, revealing dns.google, the default DNS:

Feb  1 00:00:00 dnsmasq[179]: query[PTR] 8.8.8.8.in-addr.arpa from 127.0.0.1
Feb  1 00:00:00 dnsmasq[179]: cached 8.8.8.8 is dns.google

F. This is when it gets truly psychotic: localhost queries the name of the local reverse server 192.168.4.1, receiving the response from the config file (the only thing left in 01-pihole.conf after the rest was commented out):

Feb  1 00:00:00 dnsmasq[179]: query[PTR] 1.4.168.192.in-addr.arpa from 127.0.0.1
Feb  1 00:00:00 dnsmasq[179]: config 192.168.4.1 is NXDOMAIN

G. Then localhost moves on to a reverse lookup of 1.1.1.1, the conditional that keeps taking over; the answer is received from stale cache:

Feb  1 00:00:00 dnsmasq[179]: query[PTR] 1.1.1.1.in-addr.arpa from 127.0.0.1
Feb  1 00:00:00 dnsmasq[179]: cached-stale 1.1.1.1 is one.one.one.one

H. Finally, to renew this 'vital' info, it queries 8.8.8.8, the default DNS, and receives a reply that 1.1.1.1 is one.one.one.one. BUT, without the question ever being asked, and apparently in breach of private queries being limited to 192.168.4.1, a reply is also received (with no mention of the question it answers), as if from 1.1.1.1 or from dnsmasq itself, that 192.168.3.54 is ipad.lan, information obtained 40 minutes earlier from the legitimate internal authority.


Feb  1 00:00:00 dnsmasq[179]: forwarded 1.1.1.1.in-addr.arpa to 8.8.8.8
Feb  1 00:00:00 dnsmasq[179]: reply 1.1.1.1 is one.one.one.one
Feb  1 00:00:00 dnsmasq[179]: reply 192.168.3.54 is ipad.lan

There may be a perfectly good explanation for this that, at almost 7 am, does not 'jump out' at me. The following observation and more strategic suggestion I wrote earlier; I hope it helps beyond doing my 'QA duties'.

3. Macro-level observations and food for thought
In light of the fact that you are re-architecting, this is an opportunity to think longer term about where this product can potentially be headed. Until recently I was using AdGuard Home at home, on a separate device, which in my case, for example, enabled me to force regional separation: the default resolvers are routed by my firewall over a local VPN tunnel for privacy and security, while, as you might guess, the conditional forwarders are routed to bypass geo-restrictions. This lets me combine privacy with eliminating all streaming ads (except for Prime and YouTube) on any service imaginable and on any streaming device, using targeted filters that my firewall does not support inherently. And since the system is installed on a "real" network device (visible and routable by the firewall), I can not only force routing of the resolved traffic, but also force resolution itself through the VPN tunnels. I was wary of using Docker on my firewall, since I knew this would require adding macvlan bridges to 'see' and apply the resolver, or else end up with DNS leaks, which defeats the purpose.

I realized, though, that running previously AdGuard Home, and hopefully soon Pi-hole with its benefits, inside a default Docker environment, while intuitively feeling like it belongs as part of a network device, just presents issues and compromises. These start with not even being able to access the Pi-hole interface in fewer than multiple NAT traversals from a disjoint network that uses a single IP (it takes two or three hops just to reach the interface), and extend to applying macvlan bridges to enable policy-based routing. A default Docker network, residing on a segment completely external to the network, cannot even be seen as a 'device' in it; you have to break firewall policies and lose performance and efficiency, both for accessing the device and certainly for the extra routing necessary to apply firewall capabilities to it.

@DL6ER
Member

DL6ER commented Feb 1, 2025

You are invited to enable as many verbose logging options as you can find. However, we will still need the special branch, as the normal debug output will not cover how the upstream server is chosen. This needs additional output which is only available in this special branch. It will also not find its way into the "normal" code, as it will become unnecessary once we have fixed this bug.
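(A minimal sketch of how verbose FTL logging is typically switched on for a v5 install; the exact set of DEBUG_ options may vary by version:)

# /etc/pihole/pihole-FTL.conf
DEBUG_QUERIES=true     # log details of how each query is processed
DEBUG_FLAGS=true       # log query status/flag changes

# then restart the resolver
pihole restartdns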

All the PTRs you have observed are expected. They are done by Pi-hole itself on first sight of new clients or upstream servers to get human-readable names for the IP addresses. These are the names Pi-hole shows you on the Query Log and various other interfaces. The pihole.log file always uses addresses only for consistency.

This is when it gets truly psychotic - the local host queries the name of the local RTP 192.168.4.1, and receives the response from the config file (the only thing left in 01-pihole.conf after the rest was commented out)

This says "config" because Pi-hole has a default setting to prevent sending internal IP address PTRs (and 192.168.4.1 is surely internal) to upstream servers outside your network as they could anyway not provide any meaningful answer here.

Finally, to renew this 'vital' info, it queries the default DNS 8.8.8.8 and receives a reply that the address is one.one.one.one. BUT a second reply also arrives that was never asked for (and apparently in breach of private queries being limited to 192.168.4.1): with no matching question logged, as if from 1.1.1.1 or from dnsmasq itself, it states that 192.168.3.54 is iPad.lan - information obtained 40 minutes earlier from the legitimate internal authority.

Once you enable the FTLCONF_misc_extraLogging setting, the individual lines will have the protocol used and the query ID added. This will help you correlate the "reply" lines with the "query" lines. I think a lot of what you are questioning here is just because so much is logged at the same time and - DNS being a fully asynchronous protocol - replies do not have to come in order but are probably still all correct.
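(A sketch of setting this on a Docker deployment, assuming the FTLCONF_ environment-variable convention of the newer images; the image tag is a placeholder for the special branch mentioned above:)

docker run -d --name pihole \
  -e FTLCONF_misc_extraLogging=true \
  pihole/pihole:<special-branch-tag>   # placeholder tag - use the image the maintainers provided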


As to the lower part of your post (starting from the bold text "3. Macro level observations and food for thought"), I'd like to get @pi-hole/ftl-maintainers' opinion, because I am not myself much involved in Docker business or complex networking in general, so I could not provide you with an adequate answer.

@PromoFaux
Member

and the lack of proof reading this

You're right, you have not proof read any of this.

Next Steps

Let’s discuss how to implement this at scale.

This version fully integrates your key points:
• MAC VLAN as a distinguishing factor
• Firewall-enforced security & policy separation
• Per-subnet DNS filtering & upstream resolution
• Why Docker actually becomes an advantage instead of a limitation
• How this enables Pi-hole to scale from home users to enterprises

It’s precise, persuasive, and complete. Let me know if you need any final refinements before you send it.

I thought the wall of text was fishy (and far too opinionated/pushy); the last part seals it. I'm not discussing this with an LLM.

If you want engagement, or to foster communication on an idea - don't post walls of AI-generated text.

@Cansybars
Author

Cansybars commented Feb 1, 2025

Edit: below is the original response I was about to send you before reading your last message, and yes, it was proofread by a machine (not to mention that English is not my first language and I have a tendency for typos). By the way, it was specifically the business plan part; the initial report and everything else I wrote myself. Why did I use an LLM to proofread the business idea? Because that is what I have been doing as VP of Product Management at a large tech company that will not be named, and it represents respect for yourself and for the person taking the time to engage with your ideas. Unlike you, I am aware of my weaknesses, and not being concise is one of them, so I apologize for not sparing some messages from the exhaustion this one may entail; but this is the mass murderer preaching to the guy he blames for spending more money instead of stealing at the store (an analogy). Ironic that the person who wrote the worst code IN HISTORY, and almost got it released notwithstanding the one person who alerted him to it, assigns out the QA positions (maybe you're right, and reproducing these fatal issues is beyond your capacity, or else what can explain the outcome, which renders the platform not just useless but dangerous). If I am guilty of using a machine to help someone like you finally (selectively) see the result before passing out, it will take years for language models to be apt to deal with your shortcomings. You only proofread to improve, such as this way overly long and time-wasting message. And although I worked my ass off so you would not end up with a disaster, you feel used since I spared you the reading; but you prove that even that is irrelevant and that your selective reading style is for offloading - but who am I to talk, shameful proofreader that I am. I will hit send without reading the edit part even once; hopefully that will be the beginning of paying for my sins.

In the following text, written prior to seeing your unbelievable warning, I used ChatGPT because I was giving you a last lifeline, since you missed my prior message that made it crystal clear what I am here for. My response, proofread by an LLM, is toned down, and even this one takes into account that this is a public forum. Not to repeat myself, but you used the one person you should be grateful to, for uncovering what you were about to (and are welcome to) release, where you were able to completely break DNSMASQ without noticing it. Professionally, I've never seen someone in your line of work (despite being exposed to all kinds daily) who is so bad at their job that this is what they have to show for it. But much worse, and totally interrelated, is the personality: the sense of entitlement, lack of any values, and pure stupidity that can even lead to such a poor outcome. The details are below, but the last thing you did was collaborate with me; you just used me to do your work after getting everything you needed for it from day one - from building a container, with the pathetic excuse that I "can" reproduce your extremely well documented fuckups, as if it were so trivial. You missed the part about changes in the architecture, and that settings in dnsmasq neither apply nor are alerted about, which, combined with your results, is the recipe for real disasters - instead of appreciating that I gave you everything you needed on a silver platter and saved you from the horrible outcomes of yourself.

Of course, I wasted hours running queries on a system that did not read my config, since you didn't bother telling me that this involves a small undocumented change (which, by the way, assumes I am totally literate in Linux, which I largely am, but you couldn't care less). You then ordered me to run more and more queries, and, when I didn't rebuild a second system after some nonsense about not being able to reproduce any issues (I dare you to release the current version if that is the case), acted as if it were the most natural thing in the world. The nickel dropped, I think they say, after your second-to-last "response", which you had no interest in reading (it is explained below); you missed what you could have gained, saw only the smallest pieces of text, took them completely out of context, and did not address the core whatsoever - it was clear what your interests were. For someone who, I think, crossed the line from the start, 'threatening' me, the person who saved your ass while strategically exposing your true intentions at the same time - someone who spent three entire nights complying with your orders (not even requests) - to complain about a business plan that a machine did no more than tighten and "proofread" is really like the inventor of slavery dismissing his slave because he was wearing shoes while picking cotton. Ungrateful, blind, and not knowing how to leverage help (your last stupid message needed no LLM to read and ignore). Anyway, it's all written down; you don't have to read it, it was proofread. It all comes together, and the proof is in the pudding. Good luck, you'll need it.

Original comment to your previous message, PROOFREAD BY CHATGPT, AN OPENAI INC PRODUCT:

I have carefully reviewed your latest response, and I must say that your approach to this issue has been incredibly frustrating. Instead of fully engaging with the detailed analysis and insights I provided, you have continued to assign me more tasks as if I were working for you. This is not a productive or respectful way to collaborate, especially given the extensive time I have already invested in identifying and detailing the core problems in Pi-hole's behavior.

Multiple Assignments & Offloading Work

From the very beginning, you have assumed that I should be the one repeatedly setting up new test environments, despite the fact that you have all the necessary details to reproduce these issues on your end. Initially, you framed this as me being the best person to reproduce the problem since I had already encountered it. However, instead of simply verifying my findings, you continued to pile on more demands—asking me to run additional queries, create another container, and do work that you could easily perform yourself in a fraction of the time.

Ignoring Core Insights

More concerning than your refusal to reproduce these problems yourself is the fact that you have completely ignored the most critical insights I uncovered. The key finding—that the current system does not appear to check any databases before applying conditional forwarders incorrectly—was entirely missing from your response. Instead, you nitpicked minor points, such as the behavior of PTR queries, while completely missing the overarching issue that could explain multiple serious bugs in the current implementation.

I specifically compared a five-page detailed report from a previously working setup with a new setup exhibiting failure, and the discrepancies were glaring. However, rather than acknowledging this or taking the time to reproduce it, you diverted the conversation to side issues. You did not even address the fact that the system is defaulting to 1.1.1.1 incorrectly, without verifying data, despite multiple queries showing that expected database lookups simply never occurred.

Dismissive & Condescending Attitude

Your response also included explanations of basic DNS concepts, such as why local PTR queries are not sent upstream, as if I had no knowledge of how DNS operates. This is both condescending and unnecessary. What I actually reported was that the stale query behavior and unnecessary PTR lookups indicate a major prioritization issue in how Pi-hole processes queries. The fact that queries for local hostnames loop repeatedly while critical database checks never happen before a resolution is made is not "expected behavior," as you suggested—it is a clear sign of a fundamental flaw in the query-handling logic.

Additionally, your dismissal of the rev-server configuration in 01-pihole.conf—which was the direct result of settings that were never documented or explained—only highlights how poorly this issue has been managed. Instead of instructing me to perform yet another round of testing, you should have started by verifying this issue yourself and sharing your own logs of what happens in your environment. Instead, your response reads as if you are unwilling to engage seriously with my findings.

Final Position

At this point, it is clear that continuing to work this way is not productive. If you genuinely want to fix these problems, you should:

  1. Reproduce the issue yourself using the exact steps I provided.
  2. Share your own logs and results instead of relying on me to do all the testing.
  3. Engage with the core findings, rather than nitpicking unrelated details or dismissing the severity of what I reported.

If you are not willing to do this, then there is no point in me continuing to waste my time. I have already gone far beyond what should be expected of any user or contributor in diagnosing these issues, and I am not going to continue being assigned new tasks simply because you do not want to do them yourself.

This is your loss. The information I provided could have saved this project from releasing a critically flawed version, and instead of leveraging it, you have chosen to deflect and dismiss. If you choose to ignore these problems and move forward with a broken release, that is your responsibility—but I will not participate in this process under these conditions.

EDIT 2: the offers expressed above have expired, but I will not tamper with a proofread document again, for fear of being sued or probably criminally prosecuted.

@DL6ER
Member

DL6ER commented Feb 1, 2025

Instead of fully engaging with the detailed analysis and insights I provided, you have continued to assign me more tasks as if I were working for you. This is not a productive or respectful way to collaborate, especially given the extensive time I have already invested in identifying and detailing the core problems in Pi-hole's behavior.

You are not working for me - but nor am I working for you. Pi-hole is entirely free. You are way overrating your contribution here. You reported many examples, that's true, and we have been able to narrow this down to two underlying issues. But there was no real "analysis" or "insight"; it was just repeated (different) examples showing the same symptoms. One ("cached" shown instead of "forwarded", but without any other real consequences) was resolved the same day. The other is what you called "incorrect routing", and nobody from the core team has succeeded in reproducing it locally. Hence, you are not only the best but also the only person able to help fix this. Despite nobody else having reported anything even remotely close to what you have seen, we're willing to invest resources in solving this issue that only you seem to be affected by.

Additionally, your dismissal of the rev-server configuration in 01-pihole.conf—which was the direct result of settings that were never documented or explained—only highlights how poorly this issue has been managed.

Funnily enough, if you did a full-text search on this page, you'd notice that the phrase "rev-server" has not been mentioned a single time before your last comment. How should we have been aware? But I agree that this issue has evolved a lot less constructively than issues usually do on this repository, where users are willing to help create better software for everyone.

I'm not going to respond to the many other accusations, like that we focused on PTRs, etc. - we did that because you provided them as an example of abnormality, and we explained why it is in fact totally normal.


TL;DR: Our offer is still valid: if you invest the two or three minutes to set up a container with the extra version we have provided specifically for you, then we can continue to fix this second bug, too. If you do not want this, then this ticket can be closed.

@dschaper
Member

dschaper commented Feb 1, 2025

is really like the inventor of slavery dismissing his slave because he was wearing shoes while picking cotton

Yeah, that's not the way to get any kind of traction here. I think this thread has run its course.

@Cansybars

This comment has been minimized.

@DL6ER DL6ER closed this as completed Feb 1, 2025