In the previous post, we deployed the Envoy Gateway API to provide an interface to the internet. With that setup, any workload can be exposed to the public web through two IP addresses (one per VPS). In this post, we take the next step: deploying bind9 to serve an authoritative DNS zone over both UDP and TCP on port 53 via UDPRoute and TCPRoute.
Note: UDPRoute and TCPRoute are still part of the experimental channel of the Gateway API.
So why bind9 at all? I experimented with CoreDNS as well, but bind9 brings maturity, stability, and additional features right out of the box. It supports DNSSEC, DoH (DNS-over-HTTPS, RFC 8484), and DoT (DNS-over-TLS, RFC 7858). While DoT and DoH are mainly relevant for recursive resolvers rather than authoritative ones, there are standards such as RFC 9539 that describe recursive-to-authoritative encryption. This makes it worthwhile to enable them even in an authoritative setup.
We will explore DoH and DoT more deeply in a later post, especially in combination with DDR (Discovery of Designated Resolvers, RFC 9462).
For now, the task is to deploy bind9. Using Helm would have made this easier, but unfortunately ISC, the official publisher of the bind9 Docker image, does not provide a Helm chart. To complicate things further, our VPSs run on ARM chips, which means ISC's published Docker image cannot be used directly. The workaround is simple: rely on Canonical's official bind9 image, which is also built for the ARM architecture, and craft our own deployment manifests.
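If you want to double-check that the tag really ships an arm64 variant before rolling it out, docker buildx can inspect the published manifest list. This is an optional sanity check and assumes Docker with buildx is available on your workstation:
# Lists the platforms published for this tag; linux/arm64 should be among them
docker buildx imagetools inspect ubuntu/bind9:9.18-24.04_beta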
We start with creating a new namespace.
kubectl create ns bind9
Next, we deploy a minimal bind9 configuration for an authoritative DNS zone.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bind9-config
  namespace: bind9
data:
  db.example.com: |
    $TTL 60
    @   IN  SOA ns1.example.com. admin.example.com. (
            2025081101 ; Serial (YYYYMMDDnn)
            60         ; Refresh
            30         ; Retry
            604800     ; Expire
            60 )       ; Negative Cache TTL

    ; nameservers on origin
    @   IN  NS  ns1.example.com.
    @   IN  NS  ns2.example.com.

    ; nameserver 1
    ns1 IN  A   <your-public-ip-vps1>
    ; nameserver 2
    ns2 IN  A   <your-public-ip-vps2>

    ; A records for root domain
    @   IN  A   <your-public-ip-vps1>
    @   IN  A   <your-public-ip-vps2>

    ; wildcard for all undefined subdomains
    *   IN  A   <your-public-ip-vps1>
    *   IN  A   <your-public-ip-vps2>
  named.conf.local: |
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
        journal "/var/lib/bind/db.example.com.jnl";
    };
  named.conf.options: |
    options {
        directory "/var/cache/bind";

        // Disable recursion: authoritative only
        recursion no;
        allow-query { any; };

        // Disable DNSSEC validation (only relevant for recursive resolution)
        dnssec-validation no;

        // Listen on IPv4 interfaces only (we consider IPv6 in a later post)
        listen-on { any; };
        listen-on-v6 { none; };

        minimal-responses no;

        // Query logging, useful for debugging; disable if too noisy
        querylog yes;
    };
The db.example.com file defines the basic zone. It begins with a required SOA record, then sets up A records for our VPSs (ns1.example.com and ns2.example.com). A wildcard record ensures that any subdomain of example.com resolves to one of the two VPS addresses:

*   IN  A   <your-public-ip-vps1>
*   IN  A   <your-public-ip-vps2>

This means that everything inside the example.com zone, such as test.example.com, will automatically point to the Kubernetes cluster. No extra zone-file changes are needed when deploying new apps. Envoy Gateway handles the routing based on the hostname and forwards traffic to the correct backend service.
The configuration also makes bind9 authoritative only, by setting recursion no;, and restricts it to IPv4 for now. We will consider IPv6 and dual-stack support later.
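Before applying the ConfigMap, it is worth validating the zone and configuration locally. A quick sketch, assuming the bind9 utilities (named-checkzone and named-checkconf) are installed on your machine and the snippets are saved under the same file names as the ConfigMap keys:
# Check the zone for syntax errors and a valid SOA record
named-checkzone example.com db.example.com
# Check the named configuration snippet for syntax errors
named-checkconf named.conf.local
# Apply the ConfigMap (the file name is whatever you saved the manifest as)
kubectl apply -f bind9-configmap.yaml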
Next, we define a Service manifest to route traffic to the bind9 pods. Alongside it, we also create a ServiceAccount. While not strictly necessary at this stage, it is good practice and lays the groundwork for a potential service mesh integration later.
It is important to expose both UDP and TCP on port 53: RFC 7766 requires DNS implementations to support TCP as a fallback for cases where UDP alone is insufficient, for example when responses are truncated.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bind9
  namespace: bind9
---
apiVersion: v1
kind: Service
metadata:
  name: bind9
  namespace: bind9
spec:
  selector:
    app: bind9
  ports:
    - name: udp
      protocol: UDP
      port: 53
      targetPort: 53
    - name: tcp
      protocol: TCP
      port: 53
      targetPort: 53
  type: ClusterIP
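Assuming the manifest above is saved as bind9-service.yaml (the file name is up to you), apply it and confirm that both ports are exposed:
kubectl apply -f bind9-service.yaml
# Should show a ClusterIP service exposing 53/UDP and 53/TCP
kubectl get svc bind9 -n bind9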
The deployment manifest file looks huge but is fairly simple.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind9
  namespace: bind9
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bind9
  template:
    metadata:
      labels:
        app: bind9
    spec:
      serviceAccountName: bind9
      containers:
        - name: bind9
          image: ubuntu/bind9:9.18-24.04_beta
          # TCP probes only confirm that port 53 is open, not that DNS
          # queries are answered (see the note below)
          livenessProbe:
            tcpSocket:
              port: 53
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: 53
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 3
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          ports:
            - containerPort: 53
              protocol: UDP
            - containerPort: 53
              protocol: TCP
          volumeMounts:
            - name: journal
              mountPath: /var/lib/bind
            - name: config-volume
              mountPath: /etc/bind/named.conf.options
              subPath: named.conf.options
              readOnly: true
            - name: config-volume
              mountPath: /etc/bind/named.conf.local
              subPath: named.conf.local
              readOnly: true
            - name: config-volume
              mountPath: /etc/bind/zones/db.example.com
              subPath: db.example.com
              readOnly: true
          securityContext:
            # let the entrypoint start as root; it drops privileges to the
            # bind user later through setuid and setgid
            runAsUser: 0
            runAsGroup: 0
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
              add: ["NET_BIND_SERVICE", "SETUID", "SETGID"]
            seccompProfile:
              type: RuntimeDefault
      volumes:
        - name: config-volume
          configMap:
            name: bind9-config
        - name: journal
          emptyDir: {}
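As before, apply the manifest (assumed here to be saved as bind9-deployment.yaml) and wait for the rollout to finish:
kubectl apply -f bind9-deployment.yaml
kubectl rollout status deployment/bind9 -n bind9
# Tail the logs of one pod to confirm that named loaded the example.com zone
kubectl logs deploy/bind9 -n bind9 --tail=20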
We schedule two pods by default (replicas: 2). Each pod is equipped with liveness and readiness probes that check the TCP socket. Unfortunately, Kubernetes does not provide native probes for UDP, so this is the best option available at the moment.
Note that this setup is not perfect. A more robust approach would be to run dig commands against the bind9 process to verify that it can actually handle DNS queries.
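As a sketch of what such a probe could look like: an exec readiness probe that queries the local named instance for the SOA record of the zone and checks for a NOERROR response. This assumes dig is available inside the Canonical bind9 image (it ships the bind9 tooling); adjust paths and the zone name to your setup:
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      # query the local server and require an authoritative NOERROR answer
      - dig +time=2 +tries=1 @127.0.0.1 example.com SOA | grep -q 'status: NOERROR'
  initialDelaySeconds: 5
  periodSeconds: 15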
For now, two replicas are sufficient since no zone transfers are required. In the future, especially when integrating bind9 with cert-manager, we will need to move to a StatefulSet and enable zone transfers.
To tighten security, we restrict container capabilities. All capabilities are dropped except for NET_BIND_SERVICE, SETUID, and SETGID. The bind9 container initially runs its entrypoint as root to bind to port 53 for both TCP and UDP (NET_BIND_SERVICE), and then drops its privileges to the bind user using SETUID and SETGID. We also set allowPrivilegeEscalation: false to ensure the process cannot regain elevated rights. The result is a process that operates with minimal and controlled permissions.
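To sanity-check the privilege drop, one rough approach is to inspect the UID of the container's main process. This assumes named ends up as PID 1 in the Canonical image, so treat it as an informal check rather than a guarantee:
# The first number after "Uid:" is the real UID of PID 1; it should not be 0
kubectl exec deploy/bind9 -n bind9 -- grep Uid: /proc/1/status
# Map that UID back to a user name (typically the bind user)
kubectl exec deploy/bind9 -n bind9 -- getent passwd bind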
Finally, we mount the ConfigMap containing the configuration and zone files and use an emptyDir volume to store the journal entries temporarily for the lifetime of the pod.
You can now test your authoritative DNS server. Just run dig in a container and query the service.
kubectl run dig-shell -n bind9 -it --rm \
--image=ubuntu:22.04 \
--restart=Never -- bash
Inside the container shell, install the DNS utilities:
apt update && apt install -y dnsutils
Fire a DNS request:
dig @bind9.bind9.svc.cluster.local example.com
The DNS server should answer as follows.
; <<>> DiG 9.18.39-0ubuntu0.24.04.1-Ubuntu <<>> @bind9.bind9.svc.cluster.local example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30178
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 2195f6f230a739700100000068c6a4616bfd49ddaeb5b629 (good)
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 60 IN A <public-ip-vps1>
example.com. 60 IN A <public-ip-vps2>
;; AUTHORITY SECTION:
example.com. 60 IN NS ns2.example.com.
example.com. 60 IN NS ns1.example.com.
;; ADDITIONAL SECTION:
ns1.example.com. 60 IN A <public-ip-vps1>
ns2.example.com. 60 IN A <public-ip-vps2>
;; Query time: 43 msec
;; SERVER: <cluster-ip>#53(bind9.bind9.svc.cluster.local) (UDP)
;; WHEN: Sun Aug 14 13:18:04 CEST 2025
;; MSG SIZE rcvd: 165
The final step is to expose the DNS server. Since the Envoy Gateway API is already in place, this becomes straightforward. We simply define a UDPRoute and a TCPRoute and attach them to the gateway, making our authoritative DNS service available to the outside world.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: example-com-dns-udp-route
  namespace: bind9
spec:
  parentRefs:
    - name: vps1-gateway-dns
      namespace: gateway
      sectionName: dns-udp
      port: 53
    - name: vps2-gateway-dns
      namespace: gateway
      sectionName: dns-udp
      port: 53
  rules:
    - name: dns-udp
      backendRefs:
        - kind: Service
          name: bind9
          port: 53
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: example-com-dns-tcp-route
  namespace: bind9
spec:
  parentRefs:
    - name: vps1-gateway-dns
      namespace: gateway
      sectionName: dns-tcp
      port: 53
    - name: vps2-gateway-dns
      namespace: gateway
      sectionName: dns-tcp
      port: 53
  rules:
    - name: dns-tcp
      backendRefs:
        - kind: Service
          name: bind9
          port: 53
Once the routes are defined, check their status. This will tell you whether they have been accepted by the gateway.
kubectl get udproute example-com-dns-udp-route -n bind9 -o yaml
In the status output, look for a message like this:
- message: Route is accepted
  observedGeneration: 1
  reason: Accepted
  status: "True"
  type: Accepted
This indicates that the routes are active and your DNS server is now publicly reachable.
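The same check applies to the TCPRoute; to see both routes at once:
kubectl get udproute,tcproute -n bind9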
To verify, you can run a simple dig query from your local machine:
dig @<vps1-public-ip> example.com
If everything is configured correctly, you should receive a proper DNS response.
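It is also worth exercising both transports against both nameservers, since clients may fall back to TCP at any time (RFC 7766):
# UDP (default) against both VPSs
dig @<vps1-public-ip> example.com
dig @<vps2-public-ip> example.com
# Force TCP to verify the TCPRoute as well
dig +tcp @<vps1-public-ip> example.com
dig +tcp @<vps2-public-ip> example.com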
In the next post, we will deploy cert-manager to automate the issuance of TLS certificates using the DNS01 challenge. This will build directly on the authoritative DNS setup we created here. Stay tuned for that deep dive!
By Steffen Sassalla, 2025-08-06