Wednesday, March 29, 2023

How to improve your Kubernetes security posture with GKE Dataplane V2 network policies

As more organizations adopt Kubernetes, they also embrace new paradigms for connecting and protecting their workloads. Relying on perimeter defense alone is no longer an effective strategy. With microservice architecture patterns continuing to evolve rapidly, it is imperative that organizations adopt a defense-in-depth strategy to keep their applications and data protected. 

To effectively manage a highly distributed and dynamic system, with an abundance of exposed ports and APIs, organizations need more than traditional network-perimeter firewalls. With a myriad of connections between microservices, a rogue actor could use a compromised container instance to move laterally through the network and attack other workloads, leading to cascading failures and significant data loss.

Fortunately, for those running their microservices on GKE and Anthos, GKE Dataplane V2 provides consistent network policy enforcement, logging, and monitoring without the need to install any third-party software add-ons.

GKE Dataplane V2: the what and how

GKE Dataplane V2 integrates eBPF (extended Berkeley Packet Filter), a capability that allows applications to execute code in Linux kernel space without changing the kernel source code or loading a kernel module. By safely extending the capabilities of the kernel, eBPF allows regular user-space applications to package logic, as bytecode, for execution within the Linux kernel. eBPF is a groundbreaking technology that offers several advantages:

Performance – eBPF programs run inside the kernel, avoiding costly transitions between kernel space and user space, which makes them much faster than equivalent user-space programs.

Security – eBPF programs are sandboxed and verified before they run, which ensures the underlying kernel remains protected and unchanged.

Extensibility – eBPF can be used to build features and functionality that would not be possible with traditional kernel programming techniques.

GKE Dataplane V2 harnesses the power of eBPF and Cilium (an open-source project built on eBPF) to process network packets in-kernel, flexibly and performantly, using Kubernetes-specific metadata. With GKE Dataplane V2, eBPF programs in the kernel route and process packets arriving at a GKE node without relying on kube-proxy and iptables for service routing, resulting in significant network performance improvements. GKE Dataplane V2 also helps improve your clusters' security posture with built-in network policy enforcement and real-time visibility of network activity: network packets are processed in the kernel, and annotated actions are reported back to user space for logging.
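To build intuition for the performance point above, here is a toy Python model (not actual dataplane code; the service IPs and backend names are made up) contrasting iptables-style linear rule traversal with the constant-time map lookup pattern that eBPF service routing uses:

```python
# Illustrative sketch only: why an in-kernel hash-map lookup (the
# pattern eBPF service routing uses) scales better than walking a
# linear chain of rules (the iptables approach).

IPTABLES_STYLE_RULES = [  # checked one by one, top to bottom
    ("10.0.0.1", "pod-a"),
    ("10.0.0.2", "pod-b"),
    ("10.0.0.3", "redis-backend"),
]

# eBPF programs keep the same mapping in a hash map keyed by address.
EBPF_STYLE_MAP = {vip: backend for vip, backend in IPTABLES_STYLE_RULES}

def route_linear(dest_ip):
    """O(n): walk the rule chain until a rule matches."""
    for vip, backend in IPTABLES_STYLE_RULES:
        if vip == dest_ip:
            return backend
    return None  # no rule matched

def route_map(dest_ip):
    """O(1): a single map lookup, regardless of how many services exist."""
    return EBPF_STYLE_MAP.get(dest_ip)

print(route_map("10.0.0.3"))  # -> redis-backend
```

With a handful of services the difference is negligible, but with thousands of services the per-packet cost of the linear chain grows while the map lookup stays constant.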

(You can enable GKE Dataplane V2 when you create new clusters running GKE version 1.20.6-gke.700 or later. See availability here.)

Let’s take a look at an example of how we can use network policies in GKE Dataplane V2 to control which Pods receive incoming traffic. By allowing you to limit connections between pods, network policies reduce the blast radius and provide enhanced security. To begin with, we create a GKE cluster with Dataplane V2 enabled using the following command:

```
gcloud container clusters create CLUSTER_NAME \
    --enable-dataplane-v2 \
    --enable-ip-alias \
    --release-channel CHANNEL_NAME \
    --region COMPUTE_REGION
```

Clone the GCP repo here and apply the manifests with kubectl apply. Upon running

```
kubectl get svc
```

you will notice a redis-cluster service running in your cluster.

Diagram 1: Current state of your GKE cluster

Next we’ll demonstrate how a rogue actor gains access to our cluster and launches another redis container image to steal our data and disrupt the service.

```
kubectl run rogue --image=redis --restart=Never --rm -it -- bash
```

From the rogue container’s prompt, run

```
root@rogue:/data# redis-cli -h redis-cluster ping
```

The response should come back as

```
PONG
```

which confirms that the rogue actor could illegitimately access our service and data.

Diagram 2: GKE cluster with rogue container that can access redis service

Next, we configure a NetworkPolicy that allows traffic to the redis pods only from the backend pods. All other incoming traffic to the redis pods gets blocked.

```
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: redis-allow-from-backend
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: redis
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
```

Next, we apply the above NetworkPolicy to the cluster.

```
kubectl apply -f redis-allow-from-backend.yaml
```

Now make the same request again from the rogue container’s prompt

```
kubectl run rogue --image=redis --restart=Never --rm -it -- bash
root@rogue:/data# redis-cli -h redis-cluster ping
```

You’ll notice that the above request hangs. Exit from the shell.

The diagram below shows the effect of the redis-allow-from-backend policy on two connections to the redis-cluster service. The network policy only allows connections from backend pods; all other incoming requests to the redis-cluster service are denied, since no network policy allows them.

Diagram 3: GKE cluster with Dataplane V2 Network Policy applied
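The selector logic behind this behavior can be modeled in a few lines of Python. This is an illustration of NetworkPolicy label-selector semantics, not GKE's actual enforcement code:

```python
# Minimal model of how the redis-allow-from-backend policy is evaluated.

POLICY = {
    "podSelector": {"app": "redis"},       # pods the policy protects
    "ingress_from": [{"app": "backend"}],  # allowed source pod labels
}

def selector_matches(selector, labels):
    """Every key/value pair in the selector must appear in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(src_labels, dest_labels, policies):
    applicable = [p for p in policies
                  if selector_matches(p["podSelector"], dest_labels)]
    if not applicable:
        return True  # no policy selects the pod: all traffic is allowed
    return any(selector_matches(src, src_labels)
               for p in applicable for src in p["ingress_from"])

# backend -> redis is allowed; the rogue pod is denied
print(ingress_allowed({"app": "backend"}, {"app": "redis"}, [POLICY]))  # True
print(ingress_allowed({"run": "rogue"}, {"app": "redis"}, [POLICY]))    # False
```

Note the default-allow behavior when no policy selects a pod: it is the act of applying any ingress policy to the redis pods that flips them to default-deny, after which only explicitly allowed sources get through.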

Now, even though NetworkPolicy enforcement prevented the rogue actor from accessing our data, they could still be silently probing our cluster. Besides preventing unauthorized access, we must also log such activity for alerting and analysis. GKE creates a NetworkLogging object by default in new Dataplane V2 clusters, and we can configure network logging settings by editing that object. Continuing our example, to log all denied connections, we run

```
kubectl apply -f - <<EOF
kind: NetworkLogging
apiVersion: networking.gke.io/v1alpha1
metadata:
  name: default
spec:
  cluster:
    allow:
      log: false
      delegate: false
    deny:
      log: true
      delegate: false
EOF
```

Network policy logs are automatically uploaded to Cloud Logging where we can search and analyze our network traffic to spot any malicious or unauthorized network activity (and set up alerts for quick action). 
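As a sketch of what analyzing these logs might look like, the snippet below filters denied connections out of a couple of hypothetical entries. The field names (disposition, src, dest) follow the shape of GKE network policy logs, but the pod names and entries themselves are made up for the example:

```python
import json

# Hypothetical sample entries shaped like network policy log records.
raw_logs = [
    '{"disposition": "allow", "src": {"pod_name": "backend-1"}, '
    '"dest": {"pod_name": "redis-cluster-0"}}',
    '{"disposition": "deny", "src": {"pod_name": "rogue"}, '
    '"dest": {"pod_name": "redis-cluster-0"}}',
]

def denied_sources(lines):
    """Return the source pod of every denied connection."""
    return [json.loads(line)["src"]["pod_name"]
            for line in lines
            if json.loads(line)["disposition"] == "deny"]

print(denied_sources(raw_logs))  # ['rogue']
```

In practice you would run this kind of filter as a Cloud Logging query rather than client-side code, with an alerting policy attached to the deny entries.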


GKE Dataplane V2 optimizes your cluster networking and provides an easy way to connect and protect your workloads. With network policy enforcement and logging built-in, without relying on any third-party software add-ons, GKE Dataplane V2 enables you to easily secure your network without compromising on innovation. 

Learn more:

GKE Dataplane V2 Overview (Official documentation)

Enabling Dataplane V2 for GKE clusters

Using network policy logging to monitor the traffic flow in your clusters

Get started today with a GKE tutorial
