For years, exposing applications in Kubernetes meant reaching for an Ingress controller (often NGINX Ingress). That worked fine for HTTP(S) workloads, but it left gaps when you needed broader protocol support. The Kubernetes Gateway API was introduced to bridge those gaps: it is the spiritual successor to Ingress and supports a much wider range of traffic, including HTTP, HTTPS, gRPC, TCP, and even UDP. That makes it far more versatile than Ingress; it's not just an HTTP router, but a consistent, extensible way to expose workloads of all kinds.
Now, let’s add some practical flavor. I happen to have two idle VPS machines at Netcup. Instead of letting them sit unused, I decided to turn them into a Gateway API playground. A natural first use case? Serving an authoritative DNS zone, which requires exposing UDP endpoints. Perfect timing, since Gateway API recently introduced the UDPRoute resource. Pair that with bind9 running inside the cluster, and we have a nice DNS setup.
Note: UDPRoute is still in the experimental channel of the Gateway API.
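To give a feel for where this is going, here's roughly what a UDPRoute looks like. Treat it as a minimal sketch: the dns namespace and the bind9 Service are assumptions that we'll only create in the follow-up post.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: dns-udp            # hypothetical route name
  namespace: dns           # assumed namespace for the DNS workload
spec:
  parentRefs:
    - name: vps1-gateway-dns    # the Gateway we'll create later in this post
      namespace: gateway
      sectionName: dns-udp      # bind to the Gateway's UDP listener
  rules:
    - backendRefs:
        - name: bind9           # assumed bind9 Service name
          port: 53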
In my previous post, we bootstrapped a k3s cluster on these VPS nodes. Both have their own public IPs bound directly to their NICs. That means we can forward UDP packets directly into Kubernetes and expose DNS zones in a standards-compliant way. Roughly, the traffic flows like this: DNS client → public IP on the VPS NIC → Envoy gateway pod pinned to that node → UDPRoute/TCPRoute → bind9 pod inside the cluster.
Choosing a Gateway API Implementation
The Gateway API itself is just a specification. You still need an implementation.
- Istio supports the Gateway API, but not the experimental channel yet, so no UDPRoute.
- Envoy Gateway (from the Envoy project) tracks new features faster and does support UDPRoute.
That’s why we’ll be using Envoy here. The plan is to configure two gateways, one on each VPS, each bound to its own NIC and public IP.
Preparing Envoy Gateway
We start by installing the Gateway API CRDs (Custom Resource Definitions):
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/experimental-install.yaml
Next, create a namespace:
kubectl create ns gateway
Envoy also ships its own CRDs. We install them via Helm, explicitly opting into the experimental channel (crds.gatewayAPI.channel=experimental) and avoiding duplicate CRD installation (crds.gatewayAPI.enabled=false):
helm template -n gateway eg-crds oci://docker.io/envoyproxy/gateway-crds-helm \
  --version v1.4.2 \
  --set crds.gatewayAPI.enabled=false \
  --set crds.gatewayAPI.channel=experimental \
  --set crds.envoyGateway.enabled=true | kubectl apply --server-side -f -
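Before continuing, it can't hurt to verify that the experimental resources are actually present; the UDPRoute and TCPRoute CRDs should show up by name:
kubectl get crd udproutes.gateway.networking.k8s.io tcproutes.gateway.networking.k8s.io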
Finally, install the Envoy Gateway controller:
helm install -n gateway eg oci://docker.io/envoyproxy/gateway-helm \
  --version v1.4.2 \
  --skip-crds
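It's worth confirming the controller came up cleanly before moving on. With the chart defaults the deployment is named envoy-gateway; adjust if your release differs:
kubectl -n gateway rollout status deployment/envoy-gateway
kubectl -n gateway get pods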
At this point, the cluster is ready to host gateways — but there’s a catch.
Pinning Gateways to Nodes with nodeSelector
When a Gateway is deployed, the controller schedules Envoy pods that handle the incoming traffic (TLS termination, routing, and so on). Crucially, these pods must run on the node with the correct NIC and public IP; otherwise, packets won't reach them.
That means we need to ensure the Envoy pod for the VPS1 gateway only runs on VPS1, and the one for the VPS2 gateway only on VPS2. We'll solve this with node labels that we can later reference in a nodeSelector:
kubectl label nodes vps1 node.kubernetes.io/name=vps1
kubectl label nodes vps2 node.kubernetes.io/name=vps2
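A quick sanity check shows the new label as an extra column next to each node:
kubectl get nodes -L node.kubernetes.io/name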
EnvoyProxy Resource
The EnvoyProxy CRD lets us influence how Envoy pods are deployed, including scheduling constraints. We’ll use it to apply our nodeSelector configuration.
Here’s the config for VPS1:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: gateway-config-vps1-dns
  namespace: gateway
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          # let's schedule the pod on the specific node
          # which exposes the ip address
          nodeSelector:
            node.kubernetes.io/name: vps1
      envoyHpa:
        minReplicas: 1
        maxReplicas: 3
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 60
And for VPS2:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: gateway-config-vps2-dns
  namespace: gateway
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          # let's schedule the pod on the specific node
          # which exposes the ip address
          nodeSelector:
            node.kubernetes.io/name: vps2
      envoyHpa:
        minReplicas: 1
        maxReplicas: 3
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 60
Note: I’ve added an HPA (Horizontal Pod Autoscaler) here so Envoy can scale automatically when traffic grows, taking advantage of one of Kubernetes’ core features.
Defining a GatewayClass
A GatewayClass is a cluster-scoped definition describing a family of Gateways. Here’s a minimal one for Envoy:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoygateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
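Once applied, the Envoy Gateway controller should accept the class. If the ACCEPTED column stays False, double-check the controllerName value:
kubectl get gatewayclass envoygateway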
Deploying Gateways
Now we define the Gateways themselves. Each Gateway references its corresponding EnvoyProxy resource, binds to the node’s public IP, and opens listeners on both UDP and TCP port 53.
Why both? DNS primarily uses UDP, but falls back to TCP for larger responses and certain edge cases (see RFC 7766).
Example for VPS1:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: vps1-gateway-dns
  namespace: gateway
spec:
  gatewayClassName: envoygateway
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: gateway-config-vps1-dns
  addresses:
    - type: IPAddress
      value: <public-ip-vps1>
  listeners:
    - protocol: UDP
      port: 53
      name: dns-udp
      allowedRoutes:
        namespaces:
          from: All
    - protocol: TCP
      port: 53
      name: dns-tcp
      allowedRoutes:
        namespaces:
          from: All
And VPS2 mirrors this with its own IP and EnvoyProxy reference:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: vps2-gateway-dns
  namespace: gateway
spec:
  gatewayClassName: envoygateway
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: gateway-config-vps2-dns
  addresses:
    - type: IPAddress
      value: <public-ip-vps2>
  listeners:
    - protocol: UDP
      port: 53
      name: dns-udp
      allowedRoutes:
        namespaces:
          from: All
    - protocol: TCP
      port: 53
      name: dns-tcp
      allowedRoutes:
        namespaces:
          from: All
Once deployed, you’ll notice two new Envoy pods in the gateway namespace and two LoadBalancer services with the VPS public IPs attached:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
envoy-gateway-vps1-gateway-dns-32d3fb71 LoadBalancer 10.43.77.235 <public-ip-vps1> 53:31081/UDP,53:31081/TCP 10s app.kubernetes.io/component=proxy,...
envoy-gateway-vps2-gateway-dns-56a56f77 LoadBalancer 10.43.176.49 <public-ip-vps2> 53:32651/UDP,53:32651/TCP 10s app.kubernetes.io/component=proxy,...
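You can also check the Gateways themselves, which should each report an address and a programmed status, along with the HPA objects generated from the EnvoyProxy configs (exact resource names and columns may vary with your Gateway API and Envoy Gateway versions):
kubectl -n gateway get gateways
kubectl -n gateway get hpa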
Next Steps
With the gateways up, the UDP plumbing is ready. In the next post, we’ll deploy bind9 into the cluster and configure it to serve real DNS zones via these Envoy-powered gateways.
This is where things get fun: Kubernetes-native DNS hosting, powered by Gateway API and Envoy.