Private DNS Zone with CoreDNS
This is part of my series on renovating my homelabs to ring in the roaring ’20s.
I previously wrote about setting up a PiHole and configured all my devices on the network to use it and get ads blocked at the DNS level. I also set up my WireGuard-connected clients to take advantage of that appliance even when I’m traveling, as long as I can establish a secure WireGuard tunnel.
Today I’ll be taking this local/private networking design further: using CoreDNS to serve a private hosted domain name zone so that I can get to my local devices and services via domain names instead of trying to remember IP addresses!
My private zone is going to be lz.example (for localzone), and once we’re done my PiHole admin service will be reachable at http://pihole.lz.example/admin.
Sections:
- Basic CoreDNS setup
- Moving the PiHole to a new port
- Advanced CoreDNS zone config
- Split DNS with systemd-resolved
CoreDNS
In my PiHole post, I configured the PiHole to be my whole network’s DNS server so that I could block ads for all my devices. When I did that, I bound the PiHole to 192.168.0.53:53, the default port for DNS traffic, and had my Gateway give out that IP address (192.168.0.53) to all my connected devices for their DNS server.
Now that I want to inject CoreDNS in the DNS chain before PiHole, I will move the PiHole to a different port (:153) and run CoreDNS on :53 so that I don’t have to make any changes elsewhere in my network.
Let’s prepare the CoreDNS container and service so that we can switch with minimal downtime.
CoreDNS Podman container
My coredns.sh container creation script looks like:
#!/bin/bash
podman create \
  --name coredns \
  --net host \
  -v /etc/coredns:/conf:z \
  --entrypoint="/coredns" \
  coredns/coredns \
  -conf /conf/Corefile
There are a few odd things going on here, even compared to the docker equivalent.
- --net host
  I want to bind CoreDNS to some of the IP addresses my server has. The best way to do that is to run it in the host network namespace (skipping container networking) and configure the application to bind to IPs and ports explicitly. More on that in the Corefile configuration below.
- --entrypoint="/coredns"
  Podman (due to a bug or by design) requires that an entrypoint is specified for the “trailing args” convention to work. In docker, we’re used to being able to specify args for the entrypoint right after the image name, like $ docker run library/busybox sleep 3600. There, sleep 3600 are arguments to the default image entrypoint (/bin/sh for busybox). But with Podman, trailing args are IGNORED unless the entrypoint is also specified at the command line (there’s a quick sanity check after this list). This entrypoint is the default entrypoint; I have just restated it to get the args to work, and my args are…
- -conf /conf/Corefile
  I’m mounting the host directory /etc/coredns into the container’s /conf. This arg to CoreDNS tells it to use the Corefile in /conf as its configuration.
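Since the trailing-args behavior is the most surprising part here, it doesn’t hurt to sanity-check that the entrypoint and args actually landed on the created container. This is a sketch assuming podman inspect exposes Config.Entrypoint and Config.Cmd the way docker does (it did on my version of Podman):
$ sudo podman inspect coredns --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
If the Cmd part comes back empty, the -conf argument never made it into the container.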
Management: the unit file
Like before, I also create a simple systemd service file to manage my container:
# /etc/systemd/system/coredns.service
[Unit]
Description=CoreDNS Podman container
Wants=syslog.service
[Service]
Restart=always
ExecStart=/usr/bin/podman start -a coredns
ExecStop=/usr/bin/podman stop -t 10 coredns
[Install]
WantedBy=multi-user.target
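Whenever I add or edit a unit file like this, it doesn’t hurt to have systemd re-read its unit files before enabling anything:
$ sudo systemctl daemon-reload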
Now we’re set up to run CoreDNS, but we need to create and fill in the Corefile so that it does something.
CoreDNS configuration: the Corefile
Since I’m just creating an MVP CoreDNS deployment to prevent service interruptions while I change the ports around for the PiHole (the full zone configuration comes later), my initial Corefile is very simple.
I want to bind CoreDNS to port :53, and forward all requests to my PiHole, which I will be moving to :153. The Corefile is thus:
# /etc/coredns/Corefile
.:53 {                           # handle . (everything), bind to port 53
    log                          # enables coredns logging
    errors                       # logs errors
    forward . 192.168.0.53:153   # forward . (everything) to PiHole on port 153
}
If I wanted to ensure that there is no DNS outage if the PiHole crashes, I could add additional DNS servers after it on the forward line, like:
forward . 192.168.0.53:153 1.1.1.1 8.8.8.8
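As a sketch (the fallback resolvers here are just examples), the full forwarding block with fallbacks could also use the forward plugin’s policy option so the PiHole is always tried first:
.:53 {
    log
    errors
    forward . 192.168.0.53:153 1.1.1.1 8.8.8.8 {
        policy sequential   # always try the PiHole first, then fall back in order
    }
}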
Running CoreDNS
Now we can bring up CoreDNS by creating and starting the container:
$ sudo ./coredns.sh && sudo systemctl enable --now coredns
Then, we can check the status of the service:
$ sudo systemctl status coredns
● coredns.service - CoreDNS Podman container
Loaded: loaded (/etc/systemd/system/coredns.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code)
Process: ExecStart=/usr/bin/podman start -a coredns (code=exited, status=1/FAILURE)
... host restart job, restart counter is at 5.
... host systemd[1]: Stopped CoreDNS Podman container.
... host systemd[1]: coredns.service: Start request repeated too quickly.
... host systemd[1]: coredns.service: Failed with result 'exit-code'.
... host systemd[1]: Failed to start CoreDNS Podman container.
Uh oh. Why is CoreDNS not starting?
$ sudo podman logs coredns
Listen: listen tcp 192.168.0.53:53: bind: address already in use
Listen: listen tcp 192.168.0.53:53: bind: address already in use
Listen: listen tcp 192.168.0.53:53: bind: address already in use
Listen: listen tcp 192.168.0.53:53: bind: address already in use
Listen: listen tcp 192.168.0.53:53: bind: address already in use
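If the logs hadn’t made it obvious, ss (from iproute2) will show exactly what is already bound to port 53:
$ sudo ss -tulpn | grep ':53 '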
Ah. Right…the PiHole is still using that port. Let’s move it.
Moving PiHole to a different port
Here’s my pihole.sh creation script, from my post on running PiHole.
#!/bin/bash
podman create \
  --name pihole \
  -p 53:53/tcp \
  -p 53:53/udp \
  -p 67:67/udp \
  -p 80:80/tcp \
  -p 443:443/tcp \
  -e TZ=America/Chicago \
  -v /etc/pihole/pihole:/etc/pihole:z \
  -v /etc/pihole/dnsmasq.d:/etc/dnsmasq.d:z \
  pihole/pihole:latest
I’m going to make a small change, then we’ll redeploy everything. I’m changing the host port on the port forwards from 53 to 153:
#!/bin/bash
podman create \
  --name pihole \
  -p 153:53/tcp \
  -p 153:53/udp \
  -p 67:67/udp \
  -p 80:80/tcp \
  -p 443:443/tcp \
  -e TZ=America/Chicago \
  -v /etc/pihole/pihole:/etc/pihole:z \
  -v /etc/pihole/dnsmasq.d:/etc/dnsmasq.d:z \
  pihole/pihole:latest
Note that I don’t need to change the container port, just the host port.
Now, I can stop and clean up the running PiHole container, redeploy, and start CoreDNS. I want to do this in quick succession so that there’s no DNS outage for the devices on my network.
$ sudo systemctl stop pihole coredns
$ sudo podman rm pihole
$ sudo ./pihole.sh
$ sudo systemctl start pihole coredns
Now, I should see pihole and coredns services running and happy:
$ sudo systemctl status coredns
● coredns.service - CoreDNS Podman container
Loaded: loaded (/etc/systemd/system/coredns.service; enabled; vendor preset: disabled)
Active: active (running)
CGroup: /system.slice/coredns.service
└─1731018 /usr/bin/podman start -a coredns
$ sudo systemctl status pihole
● pihole.service - PiHole Podman container
Loaded: loaded (/etc/systemd/system/pihole.service; enabled; vendor preset: disabled)
Active: active (running)
CGroup: /system.slice/pihole.service
└─3815011 /usr/bin/podman start -a pihole
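Before testing through CoreDNS, it’s easy to confirm that the PiHole itself is answering on its new port, since dig’s -p flag queries a non-standard port:
$ dig doubleclick.net @192.168.0.53 -p 153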
And I should be able to do the same ad-blocking DNS lookup test via CoreDNS that I previously did directly with the PiHole, since CoreDNS forwards all requests to the PiHole!
$ dig doubleclick.net @192.168.0.53
; <<>> DiG 9.11.14-RedHat-9.11.14-2.fc31 <<>> doubleclick.net @192.168.0.53
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58220
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;doubleclick.net. IN A
;; ANSWER SECTION:
doubleclick.net. 2 IN A 0.0.0.0
;; Query time: 0 msec
;; SERVER: 192.168.0.53#53(192.168.0.53)
;; WHEN: Tue Jan 14 13:03:02 CST 2020
;; MSG SIZE rcvd: 75
If I were watching the CoreDNS logs ($ sudo podman logs --follow coredns), I would see that DNS query come through, and I can see in the response that 0.0.0.0 was returned for the lookup, meaning everything is working great :)
CoreDNS Zone Config
Now that CoreDNS is running and we’ve moved the PiHole so that they’re both part of our query chain, we can do the fun part (and the real subject of this blog): creating a local hosted zone for private DNS name resolution. As a reminder, I’m going to be creating the lz.example zone for mapping svc.lz.example-style names to devices on my LAN with private IP addresses.
We need to create a BIND-style zone file for CoreDNS to load. This file contains all the records for our local zone. I’m not very familiar with the zonefile syntax, so I heavily referenced [this how-to](https://blog.idempotent.ca/2018/04/18/run-your-own-home-dns-on-coredns/) on CoreDNS and zone files to build mine.
My zonefile is /etc/coredns/db.example.lz and contains the following:
$TTL 604800
@   IN  SOA dns.lz.example. admin.lz.example. (
                  3       ; Serial
             604800       ; Refresh
              86400       ; Retry
            2419200       ; Expire
             604800 )     ; Negative Cache TTL

; Name Servers - NS Records
@                   IN  NS  dns

; Name Servers - A records
dns.lz.example.     IN  A   192.168.0.53

; Device mappings - A Records
host.lz.example.    IN  A   192.168.0.200
This is sufficient to map my first record: host.lz.example. -> 192.168.0.200.
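As more devices and services need names, I can just keep appending A records to this file (and bump the SOA serial so the change gets picked up on reload). For example, assuming the PiHole’s web UI is still served from the host at 192.168.0.53, the pihole.lz.example name from the intro is just one more line:
pihole.lz.example.  IN  A   192.168.0.53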
Now we just need to go back to the Corefile and tell CoreDNS how to use this zonefile.
Our Corefile currently contains:
.:53 {                           # handle . (everything), bind to port 53
    log                          # enables coredns logging
    errors                       # logs errors
    forward . 192.168.0.53:153   # forward . (everything) to PiHole on port 153
}
We need to change a couple of things. We need to tell CoreDNS to use our zonefile to add DNS information to its context. We also need to tell it to NOT forward queries about our private zone to its upstream DNS servers (the PiHole, Cloudflare, Google, etc.). These changes are:
.:53 {
    file /conf/db.example.lz lz.example   # include zone data from this file
    log
    errors
    forward . 192.168.0.53:153 1.1.1.1 {
        except lz.example                 # don't forward local zone queries
        policy sequential
    }
}
A quick restart of CoreDNS later ($ sudo systemctl restart coredns), and we can test local zone DNS resolution:
$ dig host.lz.example @192.168.0.53
; <<>> DiG 9.11.14-RedHat-9.11.14-2.fc31 <<>> host.lz.example @192.168.0.53
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7743
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 24e7a7807f3383d3 (echoed)
;; QUESTION SECTION:
;host.lz.example. IN A
;; ANSWER SECTION:
host.lz.example. 604800 IN A 192.168.0.200
;; AUTHORITY SECTION:
lz.example. 604800 IN NS dns.lz.example.
;; Query time: 0 msec
;; SERVER: 192.168.0.53#53(192.168.0.53)
;; WHEN: Tue Jan 14 13:38:45 CST 2020
;; MSG SIZE rcvd: 119
Success!
Note that Chrome does not like fake TLDs and will try to search them instead of resolving them. If you’re running web services on your private zone, always include a trailing slash to force Chrome to resolve: http://host.lz.example/
Split DNS with systemd-resolved
One cool side effect of the above is that the devices we previously configured to send DNS traffic over WireGuard just work with the new local zones. However, on some devices I don’t want to send all my DNS to my hosted CoreDNS, like if I’m on a corporate network that has its own private DNS zone that I need to be able to resolve.
To handle this, we can set up systemd-resolved to forward DNS queries for certain domains to certain resolvers. For example, when I’m off-site I want queries for *.lz.example to go to my CoreDNS server over WireGuard.
The systemd-resolved drop-ins go in /etc/systemd/resolved.conf.d (and need a .conf extension to be picked up), so let’s create one:
# /etc/systemd/resolved.conf.d/00-example.lz.conf
[Resolve]
# 172.16.4.1 is the WireGuard peer where CoreDNS is running
DNS=172.16.4.1
Domains=~lz.example
DNSSEC=false
Then, if we $ sudo systemctl restart systemd-resolved, we can dig just like above and will receive the same response even when we’re not on the LAN!
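To confirm the routing domain took effect from systemd-resolved’s point of view, resolvectl (called systemd-resolve on older releases) can show the configured domains and resolve a name through resolved itself:
$ resolvectl domain                 # should list ~lz.example among the routing domains
$ resolvectl query host.lz.example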
This lets us navigate to privately hosted services without exposing webservers or anything else inside our LAN to the public internet, so it’s an awesome system for the security and privacy of our homelab services.
Read the other articles in this series here:
- New Year, New Lab
- #TODO - Epyc EKWB liquid cooled server build
- ZFS on Linux, ZED, and Postfix
- Configuring Postfix with Gmail
- WireGuard VPN mesh
- PiHole and DNS over WireGuard
- Private DNS with CoreDNS
- #TODO - VFIO GPU Passthrough
- #TODO - Networking: Unifi, VLANs, and (Core)DNS localzones over WireGuard
- Rescuing a bad Fedora upgrade via chroot