In the previous post, we bootstrapped a k3s cluster on two VPS nodes. Both have their own public IPs bound directly to their NICs. That means we can forward UDP packets directly into Kubernetes and expose DNS zones in a standards-compliant way. Here’s a rough sketch of how the traffic flows:
The Gateway API itself is just a specification; you still need an implementation. Crucially, not every implementation supports UDPRoute, the (still experimental) resource for routing raw UDP traffic such as DNS. Envoy Gateway does support UDPRoute. That’s why we’ll be using Envoy here. The plan is to configure two gateways, one on each VPS, each bound to its own NIC and public IP.
We start by installing the Gateway API CRDs (Custom Resource Definitions):
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/experimental-install.yaml
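As a quick sanity check (not strictly required), you can confirm the CRDs landed, including the experimental UDPRoute we’ll rely on later:

kubectl get crd gateways.gateway.networking.k8s.io udproutes.gateway.networking.k8s.io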
Next, create a namespace:
kubectl create ns gateway
Envoy also ships its own CRDs. We install them via Helm, explicitly opting into the experimental channel (crds.gatewayAPI.channel=experimental) and avoiding duplicate CRD installation (crds.gatewayAPI.enabled=false):
helm template -n gateway eg-crds oci://docker.io/envoyproxy/gateway-crds-helm \
--version v1.4.2 \
--set crds.gatewayAPI.enabled=false \
--set crds.gatewayAPI.channel=experimental \
--set crds.envoyGateway.enabled=true | kubectl apply --server-side -f -
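If you want to double-check, the Envoy Gateway CRDs all live under the gateway.envoyproxy.io group:

kubectl get crds | grep gateway.envoyproxy.io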
Finally, install the Envoy Gateway controller:
helm install -n gateway eg oci://docker.io/envoyproxy/gateway-helm \
--version v1.4.2 \
--skip-crds
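Before moving on, it’s worth verifying the controller came up; you should see an envoy-gateway pod in Running state (the exact name will differ, since it carries a generated suffix):

kubectl -n gateway get pods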
At this point, the cluster is ready to host gateways — but there’s a catch.
When a Gateway is deployed, the controller schedules Envoy pods that handle the actual ingress traffic, e.g., TLS termination. Importantly, these pods must run on the node with the correct NIC and public IP. Otherwise, packets won’t reach them.
That means we need to ensure the gateway-vps1 Envoy pod only runs on VPS1, and the gateway-vps2 pod only on VPS2. We’ll solve this with node labels, which we can later reference in a nodeSelector.
kubectl label nodes vps1 node.kubernetes.io/name=vps1
kubectl label nodes vps2 node.kubernetes.io/name=vps2
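A quick check that the labels stuck (the -L flag adds the label value as a column):

kubectl get nodes -L node.kubernetes.io/name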
The EnvoyProxy CRD lets us influence how Envoy pods are deployed, including scheduling constraints. We’ll use it to apply our nodeSelector configuration.

Here’s the config for VPS1:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: gateway-config-vps1-dns
  namespace: gateway
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          # let's schedule the pod on the specific node
          # which exposes the public IP address
          nodeSelector:
            node.kubernetes.io/name: vps1
      envoyHpa:
        minReplicas: 1
        maxReplicas: 3
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 60
And for VPS2:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: gateway-config-vps2-dns
  namespace: gateway
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          # let's schedule the pod on the specific node
          # which exposes the public IP address
          nodeSelector:
            node.kubernetes.io/name: vps2
      envoyHpa:
        minReplicas: 1
        maxReplicas: 3
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 60
Note: I’ve added an HPA (Horizontal Pod Autoscaler) here so Envoy can scale out automatically when traffic grows, taking advantage of one of Kubernetes’ core features.
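Apply both manifests. The file names here are just my choice; adjust them to however you saved the resources:

kubectl apply -f envoyproxy-vps1.yaml -f envoyproxy-vps2.yaml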
A GatewayClass is a cluster-scoped definition describing a family of Gateways. Here’s a minimal one for Envoy:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoygateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
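Apply it (again, the file name is just an example) and check that the Envoy Gateway controller accepts it; the ACCEPTED column should read True before you create any Gateways:

kubectl apply -f gatewayclass.yaml
kubectl get gatewayclass envoygateway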
Now we define the Gateways themselves. Each Gateway references its corresponding EnvoyProxy resource, binds to the node’s public IP, and opens listeners on both UDP and TCP port 53.

Why both? DNS primarily uses UDP, but falls back to TCP for larger responses or specific edge cases (see RFC 7766).

Example for VPS1:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: vps1-gateway-dns
  namespace: gateway
spec:
  gatewayClassName: envoygateway
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: gateway-config-vps1-dns
  addresses:
    - type: IPAddress
      value: <public-ip-vps1>
  listeners:
    - protocol: UDP
      port: 53
      name: dns-udp
      allowedRoutes:
        namespaces:
          from: All
    - protocol: TCP
      port: 53
      name: dns-tcp
      allowedRoutes:
        namespaces:
          from: All
And VPS2 mirrors this with its own IP and EnvoyProxy reference:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: vps2-gateway-dns
  namespace: gateway
spec:
  gatewayClassName: envoygateway
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: gateway-config-vps2-dns
  addresses:
    - type: IPAddress
      value: <public-ip-vps2>
  listeners:
    - protocol: UDP
      port: 53
      name: dns-udp
      allowedRoutes:
        namespaces:
          from: All
    - protocol: TCP
      port: 53
      name: dns-tcp
      allowedRoutes:
        namespaces:
          from: All
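Apply both Gateways (file names are again placeholders) and watch them become programmed; the PROGRAMMED column should turn True once the underlying Envoy deployments are up:

kubectl apply -f gateway-vps1.yaml -f gateway-vps2.yaml
kubectl -n gateway get gateways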
Once deployed, you’ll notice two new Envoy pods in the gateway namespace and two LoadBalancer services with the VPS public IPs attached:
NAME                                      TYPE           CLUSTER-IP     EXTERNAL-IP        PORT(S)                     AGE   SELECTOR
envoy-gateway-vps1-gateway-dns-32d3fb71   LoadBalancer   10.43.77.235   <public-ip-vps1>   53:31081/UDP,53:31081/TCP   10s   app.kubernetes.io/component=proxy,...
envoy-gateway-vps2-gateway-dns-56a56f77   LoadBalancer   10.43.176.49   <public-ip-vps2>   53:32651/UDP,53:32651/TCP   10s   app.kubernetes.io/component=proxy,...
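To confirm the nodeSelector did its job, check which node each Envoy pod landed on; -o wide adds a NODE column:

kubectl -n gateway get pods -o wide

Each proxy pod should be pinned to its respective VPS. If one ended up elsewhere, double-check the node labels and the EnvoyProxy references.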
With the gateways up, the UDP plumbing is ready. In the next post, we’ll deploy bind9 into the cluster and configure it to serve real DNS zones via these Envoy-powered gateways.
This is where things get fun: Kubernetes-native DNS hosting, powered by Gateway API and Envoy.
By Steffen Sassalla, 2025-07-30