
Deploy and manage Kubernetes applications with Workflows

Workflows is a versatile service for orchestrating and automating a wide range of use cases: microservices, business processes, data and ML pipelines, IT operations, and more. It can also be used to automate the deployment of containerized applications on Google Kubernetes Engine (GKE), and this got even easier with the newly released (in preview) Kubernetes API connector.

The new Kubernetes API connector enables access to Kubernetes objects on GKE clusters from Workflows, which in turn enables Kubernetes-based resource management and orchestration, scheduled Kubernetes jobs, and more.

In this blog post, I show how to use the Kubernetes Engine API connector to create a GKE cluster and then use the new Kubernetes API connector to create a Kubernetes deployment and a service in that cluster.

Kubernetes Engine API connector vs. Kubernetes API connector

Before we get into the details, you might be wondering: what’s the difference between the Kubernetes Engine API connector and the Kubernetes API connector?

The former is a connector for the Kubernetes Engine API; it’s for creating, deleting, and getting information about GKE clusters in Google Cloud. This connector has been available for a while.

The latter is a connector for the Kubernetes API; it’s for reading and writing Kubernetes resources, such as deployments and services, on a GKE cluster. This is the newly released (in preview) connector.

Until recently, you could create GKE clusters quite easily with Workflows using the Kubernetes Engine API connector, but you couldn’t deploy applications to those clusters as easily. The Kubernetes API connector fixes that by giving you an easy way to call the Kubernetes API from Workflows.

Create a GKE cluster with the Kubernetes Engine API connector

To get started, you can create an Autopilot-enabled GKE cluster with Workflows and the Kubernetes Engine API connector:

main:
  steps:
    - init:
        assign:
          - project_id: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
          - cluster_location: "us-central1"
          - cluster_id: "workflows-cluster"
          - cluster_full_name: ${"projects/" + project_id + "/locations/" + cluster_location + "/clusters/" + cluster_id}
    - create_k8s_cluster:
        call: googleapis.container.v1.projects.locations.clusters.create
        args:
          body:
            cluster:
              name: ${cluster_id}
              initial_node_count: 1
              autopilot:
                enabled: true
          parent: ${"projects/" + project_id + "/locations/" + cluster_location}

Note that the connector waits for the long-running operation (cluster creation) to finish, but you can optionally add a step afterwards to check that the cluster is up and running, as sketched below.
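Here’s a minimal sketch of such a check, assuming the cluster_full_name variable from the init step above. It reads the cluster with the Kubernetes Engine API connector and loops until the cluster reports RUNNING (the step names and the 10-second polling interval are illustrative, not part of the original workflow):

- assert_cluster_running:
    call: googleapis.container.v1.projects.locations.clusters.get
    args:
      name: ${cluster_full_name}
    result: cluster
- check_cluster_status:
    switch:
      # The Cluster resource reports its lifecycle state in the status field.
      - condition: ${cluster.status == "RUNNING"}
        next: create_deployment  # continue with the deployment step below
    next: wait_for_cluster
- wait_for_cluster:
    call: sys.sleep
    args:
      seconds: 10
    next: assert_cluster_running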

Create a Kubernetes deployment with the Kubernetes API connector

Once you have a GKE cluster, you can start creating Kubernetes deployments and pods in it. For example, the Kubernetes documentation has a sample Deployment that runs 3 nginx pods. You can create the same deployment from Workflows as follows:

- create_deployment:
    call: gke.request
    args:
      project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
      cluster_id: ${cluster_id}
      location: ${cluster_location}
      method: "POST"
      path: "/apis/apps/v1/namespaces/default/deployments"
      body:
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.14.2
                  ports:
                    - containerPort: 80
    result: create_deployment_result

Notice that it uses the Kubernetes API connector’s gke.request call to send an HTTP request directly to the GKE cluster’s control plane.
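The same gke.request call can also read resources back from the cluster. Here’s a minimal sketch (the step name and result variable are illustrative, not part of the original workflow) that fetches the deployment you just created with a GET request:

- get_deployment:
    call: gke.request
    args:
      project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
      cluster_id: ${cluster_id}
      location: ${cluster_location}
      method: "GET"
      # Same Kubernetes API path as the POST above, plus the resource name.
      path: "/apis/apps/v1/namespaces/default/deployments/nginx-deployment"
    result: get_deployment_result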

Create a Kubernetes service with the Kubernetes API connector

You probably want to expose the deployment and its pods to the outside world with a load balancer; that’s what a Kubernetes Service of type LoadBalancer does. You can explore the Kubernetes Service API to figure out what the API call looks like.

This is how you create a Kubernetes service from Workflows for the nginx deployment:

- create_service:
    call: gke.request
    args:
      project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
      cluster_id: ${cluster_id}
      location: ${cluster_location}
      method: "POST"
      path: "/api/v1/namespaces/default/services"
      body:
        kind: Service
        apiVersion: v1
        metadata:
          name: nginx-service
        spec:
          ports:
            - name: http
              port: 80
              targetPort: 80
          selector:
            app: nginx
          type: LoadBalancer
    result: create_service_result
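It can take a minute or two for the load balancer to be assigned an external IP address. If you want the workflow to wait for it, here’s a sketch that polls the service until an ingress IP appears. The step names are illustrative, and it assumes the connector surfaces the JSON response under a body field, the way Workflows HTTP calls do:

- get_service:
    call: gke.request
    args:
      project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
      cluster_id: ${cluster_id}
      location: ${cluster_location}
      method: "GET"
      path: "/api/v1/namespaces/default/services/nginx-service"
    result: service
- check_external_ip:
    switch:
      # The Kubernetes Service API reports the assigned address under
      # status.loadBalancer.ingress once the load balancer is ready.
      - condition: ${map.get(service.body.status.loadBalancer, "ingress") != null}
        return: ${service.body.status.loadBalancer.ingress[0].ip}
    next: wait_for_ip
- wait_for_ip:
    call: sys.sleep
    args:
      seconds: 10
    next: get_service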

Deploy and run the workflow

You can see the full workflow in workflow.yaml.

Deploy the workflow:

gcloud workflows deploy workflows-kubernetes-engine --source=workflow.yaml

Run the workflow:

gcloud workflows run workflows-kubernetes-engine

As the workflow runs, you should see the GKE cluster being created.

Once the cluster is created, Workflows will create the deployment and the service. You can check this with kubectl.

First, authenticate for kubectl:

gcloud container clusters get-credentials workflows-cluster --region us-central1 --project [your-project-id]

Now, you should be able to see the running pods:

kubectl get pods
NAME                                READY   STATUS
nginx-deployment-74858db79d-b27nk   1/1     Running
nginx-deployment-74858db79d-jkttq   1/1     Running
nginx-deployment-74858db79d-pp5md   1/1     Running

You can also see the service:

kubectl get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP
kubernetes      ClusterIP      34.118.224.1    <none>
nginx-service   LoadBalancer   34.118.238.91   34.28.175.199

If you go to the external IP, you should see the default nginx welcome page, which means the pods are running and the service is publicly reachable.
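You can also verify this from the command line, for example with the EXTERNAL-IP of nginx-service from the output above:

curl -s http://34.28.175.199 | grep title
<title>Welcome to nginx!</title>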

There are many ways of creating and managing Kubernetes applications in Google Cloud. In this post, I showed you how to use the newly released Kubernetes API connector and the existing Kubernetes Engine API connector to manage the full lifecycle of Kubernetes applications from Workflows.

For more information, see the Access Kubernetes API objects using a connector tutorial, and if you have questions or feedback, feel free to reach out to me on Twitter @meteatamel.
