Saturday, May 18, 2024

Scalable multi-tenancy management with Config Sync and team scopes

Ensuring application and service teams have the resources they need is crucial for platform administrators. Fleet team management features in Google Kubernetes Engine (GKE) make this easier, allowing each team to function as a separate “tenant” within a fleet. In conjunction with Config Sync, a GitOps service in GKE, platform administrators can streamline resource management for their teams across the fleet.

Specifically, with Config Sync team scopes, platform admins can define fleet-wide and team-specific cluster configurations such as resource quotas and network policies, allowing each application team to manage their own workloads within designated namespaces across clusters.

Let’s walk through a few scenarios.

Separating resources for frontend and backend teams

Let’s say you need to provision resources for frontend and backend teams, each requiring their own tenant space. Using team scopes and fleet namespaces, you can control which teams access specific namespaces on specific member clusters.

For example, the backend team might access their bookstore and shoestore namespaces on us-east-cluster and us-west-cluster clusters, while the frontend team has their frontend-a and frontend-b namespaces on all three member clusters.

Unlocking dynamic resource provisioning with Config Sync
You can enable Config Sync by default at the fleet level using Terraform. Here’s a sample Terraform configuration:

```hcl
resource "google_gke_hub_feature" "feature" {
  name     = "configmanagement"
  location = "global"
  provider = google
  fleet_default_member_config {
    configmanagement {
      config_sync {
        source_format = "unstructured"
        git {
          sync_repo   = "https://github.com/GoogleCloudPlatform/anthos-config-management-samples"
          sync_branch = "main"
          policy_dir  = "fleet-tenancy/config"
          secret_type = "none"
        }
      }
    }
  }
}
```

Note: Fleet defaults are only applied to new clusters created in the fleet.

This Terraform configuration enables Config Sync as a default fleet-level feature. It installs Config Sync and instructs it to fetch Kubernetes manifests from a Git repository (specifically, the “main” branch and the “fleet-tenancy/config” folder). This configuration automatically applies to all clusters subsequently created within the fleet. This approach offers a powerful way of configuring manifests across fleet clusters without the need for manual installation and configuration on individual clusters.

Now that you’ve configured Config Sync as a default fleet setting, you might want to sync specific Kubernetes resources to designated namespaces and clusters for each team. Integrating Config Sync with team scopes streamlines this process.

Setting up team scopes
Following this example, let’s assume you want to apply a different network policy for the backend team compared to the frontend team. Fleet team management features simplify the process of provisioning and managing infrastructure resources for individual teams, treating each team as a separate “tenant” within the fleet. 

To manage separate tenancy, first set up team scopes for the backend and frontend teams. This involves defining fleet-level namespaces and adding fleet member clusters to each team scope.
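As an illustrative sketch, the scope, namespace, and membership binding for the backend team could also be managed declaratively, alongside the fleet-level feature, using the Terraform google provider's fleet resources (the resource names and attributes below are assumed from that provider; the team, namespace, and cluster names come from this example):

```hcl
# Team scope for the backend team (assumed: google_gke_hub_scope resource).
resource "google_gke_hub_scope" "backend" {
  scope_id = "backend"
}

# A fleet-level namespace owned by the backend team scope.
resource "google_gke_hub_namespace" "bookstore" {
  scope_namespace_id = "bookstore"
  scope_id           = google_gke_hub_scope.backend.scope_id
  scope              = google_gke_hub_scope.backend.name
}

# Bind a fleet member cluster to the team scope.
resource "google_gke_hub_membership_binding" "backend_us_east" {
  membership_binding_id = "backend-us-east"
  scope                 = google_gke_hub_scope.backend.name
  membership_id         = "us-east-cluster"
  location              = "global"
}
```

The shoestore namespace and the us-west-cluster binding would follow the same pattern.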

Now, let’s dive into those Kubernetes manifests that Config Sync syncs into the clusters.

Applying team scope in Config Sync
Each fleet namespace in the cluster is automatically labeled with fleet.gke.io/fleet-scope: <scope name>. For example, the backend team scope contains the fleet namespaces bookstore and shoestore, both labeled with fleet.gke.io/fleet-scope: backend.

Config Sync’s NamespaceSelector utilizes this label to target specific namespaces within a team scope. Here’s the configuration for the backend team:

```yaml
apiVersion: configmanagement.gke.io/v1
kind: NamespaceSelector
metadata:
  name: backend-scope
spec:
  mode: dynamic
  selector:
    matchLabels:
      fleet.gke.io/fleet-scope: backend
```

Applying NetworkPolicies for the backend team
When you annotate resources with configmanagement.gke.io/namespace-selector: <NamespaceSelector name>, they're automatically applied to the right namespaces. Here's the backend team's NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: be-deny-all
  annotations:
    configmanagement.gke.io/namespace-selector: backend-scope
spec:
  ingress:
  - from:
    - podSelector: {}
  podSelector:
    matchLabels: null
```

This NetworkPolicy is automatically provisioned in the backend team’s bookstore and shoestore namespaces, adapting to fleet changes like adding or removing namespaces and member clusters.

Extending the concept: ResourceQuotas for the frontend team
Here’s how a ResourceQuota is dynamically applied to the frontend team’s namespaces:

```yaml
apiVersion: configmanagement.gke.io/v1
kind: NamespaceSelector
metadata:
  name: frontend-scope
spec:
  mode: dynamic
  selector:
    matchLabels:
      fleet.gke.io/fleet-scope: frontend
---
kind: ResourceQuota
apiVersion: v1
metadata:
  name: fe-quota
  annotations:
    configmanagement.gke.io/namespace-selector: frontend-scope
spec:
  hard:
    persistentvolumeclaims: "6"
```

Similarly, this ResourceQuota targets the frontend team’s frontend-a and frontend-b namespaces, dynamically adjusting as the fleet’s namespaces and member clusters evolve.

Delegating resource management with Config Sync: Empowering the backend team
To allow the backend team to manage their own resources within their designated bookstore namespace, you can use Config Sync’s RepoSync, and a slightly different NamespaceSelector.

Targeting a specific fleet namespace
To zero in on the backend team’s bookstore namespace, the following NamespaceSelector targets both the team scope and the namespace name by labels:

```yaml
apiVersion: configmanagement.gke.io/v1
kind: NamespaceSelector
metadata:
  name: backend-bookstore
spec:
  mode: dynamic
  selector:
    matchLabels:
      fleet.gke.io/fleet-scope: backend
      kubernetes.io/metadata.name: bookstore
```

Introducing RepoSync
Another Config Sync feature is RepoSync, which lets you delegate resource management within a specific namespace. For security reasons, RepoSync has no default access; you must explicitly grant the necessary RBAC permissions to the namespace.

Leveraging the NamespaceSelector, the following RepoSync resource and its respective RoleBinding can be applied dynamically to all bookstore namespaces across the backend team's member clusters. The RepoSync points to a repository owned by the backend team:

```yaml
kind: RepoSync
apiVersion: configsync.gke.io/v1beta1
metadata:
  name: repo-sync
  annotations:
    configmanagement.gke.io/namespace-selector: backend-bookstore
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/GoogleCloudPlatform/anthos-config-management-samples
    branch: main
    dir: fleet-tenancy/teams/backend/bookstore
    auth: none
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: be-bookstore
  annotations:
    configmanagement.gke.io/namespace-selector: backend-bookstore
subjects:
- kind: ServiceAccount
  name: ns-reconciler-bookstore
  namespace: config-management-system
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```

Note: The .spec.git section would reference the backend team’s repository.

The backend team’s repository contains a ConfigMap. Config Sync ensures that the ConfigMap is applied to the bookstore namespaces across all backend team’s member clusters, supporting a GitOps approach to management.
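For illustration, the manifest in the backend team's directory could be as simple as the following ConfigMap (the name and data here are hypothetical, not taken from the sample repository):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bookstore-config  # hypothetical name
data:
  FEATURE_FLAG: "true"    # hypothetical key/value pair
```

Because the RepoSync is provisioned in every bookstore namespace selected by the NamespaceSelector, whatever the backend team commits to their directory is reconciled into that namespace on each of their member clusters.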

Easier cross-team resource management

Managing resources across multiple teams within a fleet of clusters can be complex. Google Cloud’s fleet team management features, combined with Config Sync, provide an effective solution to streamline this process.

In this blog, we explored a scenario with frontend and backend teams, each requiring their own tenant spaces and resources (NetworkPolicies, ResourceQuotas, RepoSync). Using Config Sync in conjunction with the fleet management features, we automated the provisioning of these resources, helping to ensure a consistent and scalable setup.

Next steps

Learn how to use Config Sync to sync Kubernetes resources to team scopes and namespaces.

To experiment with this setup, visit the example repository. Config Sync configuration settings are located within the config_sync block of the Terraform google_gke_hub_feature resource.

For simplicity, this example uses a public Git repository. To use a private repository, create a Secret in each cluster to store the authentication credentials.
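As a sketch, assuming token-based authentication for the private repository (the placeholder values are yours to fill in), the Secret lives in the config-management-system namespace, and the git configuration's secret type would change from "none" to "token":

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-creds                      # the name Config Sync looks for
  namespace: config-management-system  # namespace where Config Sync runs
type: Opaque
stringData:
  username: <GIT_USERNAME>  # placeholder
  token: <GIT_TOKEN>        # placeholder
```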

To learn more about Config Sync, see Config Sync overview.

To learn more about fleets, see Fleet management overview.

Source: Google Cloud Blog
