Saturday, May 21, 2022

Deploy Amazon RDS databases for applications in Kubernetes

The Kubernetes container orchestration system provides numerous resources for managing applications in distributed environments. Many of these applications need a searchable storage system for their data that is secure, durable, and performant. Developers want to focus on continuously improving their apps rather than having to worry about the operational functions of their databases. They also need a way to connect to and manage their database directly from Kubernetes.

You can get a flexible application deployment environment with ease of database administration by combining Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Relational Database Service (Amazon RDS). Amazon EKS provides a robust, managed Kubernetes service for deploying applications in all phases of their lifecycle (development, QA/UAT, staging, production). Meanwhile, Amazon RDS lets developers choose their preferred database engine (Amazon Aurora, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, Amazon RDS for MariaDB, Amazon RDS for Oracle, Amazon RDS for SQL Server) for the application, complete with essential production features like security, high availability, automatic backups, enhanced monitoring, and performant storage.

AWS Controllers for Kubernetes (ACK) provides an interface for using other AWS services directly from Kubernetes. To manage Amazon RDS database instances from Kubernetes, we can use ACK. With the ACK service controller for Amazon RDS, you can provision and manage database instances with kubectl and custom resources!

In this post, we walk you through deploying Jira, a project management tool, into a Kubernetes cluster provided by Amazon EKS. We use Amazon RDS for PostgreSQL as the database system for Jira.

Prerequisites

We need a few tools to set up our production-ready Jira deployment. Ensure you have each of the following tools in your working environment:

kubectl
eksctl
AWS Command Line Interface (AWS CLI)
helm

You must have the appropriate AWS Identity and Access Management (IAM) permissions to interact with the different AWS services. For more information, refer to the following:

Actions, resources, and condition keys for Amazon Elastic Kubernetes Service
Actions, resources, and condition keys for Amazon RDS
Amazon VPC policy examples

This post uses shell variables to make it easier to substitute the actual names for your deployment. When you see placeholders like NAME=<your xyz name>, substitute in the name for your environment.

When installed into a Kubernetes cluster, Jira requires Kubernetes 1.19 or later. As of this writing, Jira is only verified to work up to PostgreSQL 13.

Set up the Amazon EKS cluster

First, we need to set up our Amazon EKS cluster. Use eksctl to create an Amazon EKS cluster and ensure that the IAM OIDC provider is enabled:

EKS_CLUSTER_NAME="<your cluster name>"
REGION="<your region>"

eksctl create cluster \
  --name "${EKS_CLUSTER_NAME}" \
  --region "${REGION}" \
  --with-oidc \
  --instance-selector-vcpus 4 \
  --instance-selector-memory "8Gi"

We want to ensure there are enough resources to run the Jira application, so we request 4 vCPUs and 8 GiB of RAM on our nodes. It may take 15-30 minutes to provision the Amazon EKS cluster. When your cluster is ready, try accessing it by running kubectl get nodes:

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-0-58.us-east-2.compute.internal     Ready    <none>   2m17s   v1.21.5-eks-9017834
ip-192-168-33-177.us-east-2.compute.internal   Ready    <none>   2m16s   v1.21.5-eks-9017834
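Recall that Jira needs Kubernetes 1.19 or later. The VERSION column above shows the node versions; if you want to make that check scriptable, a small helper works (the `meets_minimum` function is our own sketch, not part of kubectl, and assumes you pass it a plain "major.minor" string — note that Amazon EKS can report the minor version with a trailing `+`, which you would strip first):

```shell
# Hypothetical helper: succeeds when the cluster version meets a minimum.
# You can obtain the cluster version with, for example:
#   kubectl version -o json | jq -r '.serverVersion | "\(.major).\(.minor)"'
meets_minimum() {
  # $1: cluster version as "major.minor", $2: required minimum
  # sort -V orders version strings; if the minimum sorts first, we are at or above it
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

meets_minimum "1.21" "1.19" && echo "Kubernetes version OK for Jira"
```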

Set up shared storage with the Amazon EFS CSI driver

When running in cluster mode, which is typical for a production deployment, Jira needs to use a shared file system. Amazon EKS provides the Amazon EFS CSI driver to give Pods a shared file storage system. For instructions on how to connect an Amazon Elastic File System (Amazon EFS) system to your Kubernetes cluster, refer to Amazon EFS CSI driver.

The Amazon EFS installation guide uses a Kubernetes storage class called efs-sc, which is used for dynamic provisioning of Amazon EFS storage. We reference the efs-sc storage class later in this example.
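For reference, the efs-sc storage class from that guide looks roughly like the following. This is a sketch based on the EFS CSI driver's dynamic provisioning example, not taken from this post; the file system ID is a placeholder you must fill in with your own Amazon EFS file system's ID:

```shell
# Sketch of the efs-sc storage class, using EFS access point provisioning.
# EFS_FILESYSTEM_ID is a placeholder; substitute your own file system ID.
EFS_FILESYSTEM_ID="<your EFS filesystem ID>"

cat <<-EOF > efs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: ${EFS_FILESYSTEM_ID}
  directoryPerms: "700"
EOF

# To apply it to your cluster:
# kubectl apply -f efs-storage-class.yaml
```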

Allow for external web access with the AWS Load Balancer Controller

We also need a way to access the Jira application externally. We can do this using the AWS Load Balancer Controller. For installation instructions, refer to Load Balancer Controller Installation.

Install the ACK services controller for Amazon RDS

The ACK services controller for Amazon RDS allows us to manage our Amazon RDS for PostgreSQL instance directly from Kubernetes. For installation instructions, refer to Install the ACK service controller for Amazon RDS.

Create a namespace for Jira

We put our Jira application and any associated objects into their own Kubernetes namespace. For simplicity, let’s name our namespace jira:

APP_NAMESPACE=jira
kubectl create ns "${APP_NAMESPACE}"

For convenience, we also manage our Amazon RDS custom resources in the jira namespace.

Set up the database networking

Let’s associate our VPC subnets to a database subnet group. This is a building block to allow for our Pods to securely access any Amazon RDS databases that are provisioned in this Amazon EKS cluster. This is also our first example of how we can interface directly with Amazon RDS from Amazon EKS using the ACK service controller for Amazon RDS.

The following snippet finds the subnets that the Amazon EKS cluster is using. It then generates a Kubernetes manifest for a DBSubnetGroup custom resource with the list of subnets to add to the DB subnet group:

RDS_SUBNET_GROUP_NAME="<your subnet group name>"
RDS_SUBNET_GROUP_DESCRIPTION="<your subnet group description>"

# Look up the VPC that the Amazon EKS cluster uses
EKS_VPC_ID=$(aws eks describe-cluster \
  --name "${EKS_CLUSTER_NAME}" \
  --region "${REGION}" \
  --query 'cluster.resourcesVpcConfig.vpcId' \
  --output text
)

EKS_SUBNET_IDS=$(aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=${EKS_VPC_ID}" \
  --query 'Subnets[*].SubnetId' \
  --output text
)

cat <<-EOF > db-subnet-groups.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBSubnetGroup
metadata:
  name: ${RDS_SUBNET_GROUP_NAME}
  namespace: ${APP_NAMESPACE}
spec:
  name: ${RDS_SUBNET_GROUP_NAME}
  description: ${RDS_SUBNET_GROUP_DESCRIPTION}
  subnetIDs:
$(printf '    - %s\n' ${EKS_SUBNET_IDS})
  tags: []
EOF

kubectl apply -f db-subnet-groups.yaml

When applied, Kubernetes creates this DBSubnetGroup custom resource in the jira namespace. The ACK service controller for Amazon RDS detects the new DBSubnetGroup resource, and then interfaces with the Amazon RDS API to create the subnet group.
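You can confirm that the controller has reconciled the resource by inspecting its status conditions; ACK controllers set a condition of type ACK.ResourceSynced. The small helper below is our own convenience, not part of ACK, and assumes you hand it a command that prints conditions one per line:

```shell
# Hypothetical helper: succeeds when an ACK resource reports
# the ACK.ResourceSynced condition with status True.
is_synced() {
  # $@: a command that prints the resource's conditions as "type=status" lines
  "$@" | grep -q '^ACK.ResourceSynced=True$'
}

# Usage against the cluster:
# is_synced kubectl get dbsubnetgroup -n "${APP_NAMESPACE}" "${RDS_SUBNET_GROUP_NAME}" \
#   -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}' \
#   && echo "subnet group synced"
```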

We now need to create the security group that allows the Pods in this Amazon EKS cluster to access your provisioned Amazon RDS databases. You can do this with the following commands:

RDS_SECURITY_GROUP_NAME="<your security group name>"
RDS_SECURITY_GROUP_DESCRIPTION="<your security group description>"

EKS_CIDR_RANGE=$(aws ec2 describe-vpcs \
  --vpc-ids "${EKS_VPC_ID}" \
  --query "Vpcs[].CidrBlock" \
  --output text
)

RDS_SECURITY_GROUP_ID=$(aws ec2 create-security-group \
  --group-name "${RDS_SECURITY_GROUP_NAME}" \
  --description "${RDS_SECURITY_GROUP_DESCRIPTION}" \
  --vpc-id "${EKS_VPC_ID}" \
  --output text
)

aws ec2 authorize-security-group-ingress \
  --group-id "${RDS_SECURITY_GROUP_ID}" \
  --protocol tcp \
  --port 5432 \
  --cidr "${EKS_CIDR_RANGE}"

This was a lot, but now we’re ready to deploy our production application.

Provision an Amazon RDS for PostgreSQL database instance

Before we deploy Jira, we set up our Amazon RDS for PostgreSQL database instance.

With the ACK service controller for Amazon RDS, we can provision an Amazon RDS for PostgreSQL database instance using the Kubernetes API. We can do this by creating a DBInstance custom resource. The DBInstance custom resource definition follows the Amazon RDS API, so you can also use that as a reference while constructing your custom resource.

Before we create a DBInstance custom resource, we must first create a Kubernetes Secret that contains the primary database user name and password. Both DBInstance and the Jira installer need to use this Secret. Provide your desired user name and password and create the Secret:

RDS_DB_USERNAME="<your username>"
RDS_DB_PASSWORD="<your secure password>"

kubectl create secret generic -n "${APP_NAMESPACE}" jira-postgres-creds \
  --from-literal=username="${RDS_DB_USERNAME}" \
  --from-literal=password="${RDS_DB_PASSWORD}"

After you create the Secret, clear the password from your environment:

unset RDS_DB_PASSWORD

Now we can create the Amazon RDS database instance! The following manifest provisions a high availability Amazon RDS for PostgreSQL Multi-AZ database, with backups, enhanced monitoring, and encrypted storage:

RDS_DB_INSTANCE_NAME="jira-db"
RDS_DB_INSTANCE_CLASS="db.m6g.large"
RDS_DB_STORAGE_SIZE=100

cat <<-EOF > jira-db.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: ${RDS_DB_INSTANCE_NAME}
  namespace: ${APP_NAMESPACE}
spec:
  allocatedStorage: ${RDS_DB_STORAGE_SIZE}
  autoMinorVersionUpgrade: true
  backupRetentionPeriod: 7
  dbInstanceClass: ${RDS_DB_INSTANCE_CLASS}
  dbInstanceIdentifier: ${RDS_DB_INSTANCE_NAME}
  dbName: jira
  dbSubnetGroupName: ${RDS_SUBNET_GROUP_NAME}
  enablePerformanceInsights: true
  engine: postgres
  engineVersion: "13"
  masterUsername: ${RDS_DB_USERNAME}
  masterUserPassword:
    namespace: ${APP_NAMESPACE}
    name: jira-postgres-creds
    key: password
  multiAZ: true
  publiclyAccessible: false
  storageEncrypted: true
  storageType: gp2
  vpcSecurityGroupIDs:
    - ${RDS_SECURITY_GROUP_ID}
EOF

kubectl apply -f jira-db.yaml

It may take 5-10 minutes for your Amazon RDS for PostgreSQL instance to be ready. To check on its availability, you can use the following command:

kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
  -o jsonpath='{.status.dbInstanceStatus}'

The DBInstance custom resource contains detailed information about the current status and other attributes of your Amazon RDS for PostgreSQL instance in the status section. You can view this information using kubectl describe dbinstance; for example:

kubectl describe dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}"

For more information, refer to the status documentation for the ACK service controller for Amazon RDS.
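If you prefer to block until the instance is ready rather than polling by hand, a loop like the following works. The wait_for_db function is our own sketch (not part of ACK or kubectl); it repeatedly runs whatever status command you hand it until the status reads available:

```shell
# Hypothetical helper: poll a status command every 30 seconds
# until it reports "available".
wait_for_db() {
  # $@: a command that prints the current DB instance status
  local status=""
  until [ "${status}" = "available" ]; do
    status=$("$@")
    echo "current status: ${status:-unknown}"
    [ "${status}" = "available" ] || sleep 30
  done
}

# Usage against the cluster:
# wait_for_db kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
#   -o jsonpath='{.status.dbInstanceStatus}'
```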

When your Amazon RDS for PostgreSQL instance is available, store the values of the PostgreSQL endpoint and port. We need them to connect Jira to the database.

RDS_DB_INSTANCE_HOST=$(kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
  -o jsonpath='{.status.endpoint.address}'
)
RDS_DB_INSTANCE_PORT=$(kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
  -o jsonpath='{.status.endpoint.port}'
)

Now that your Amazon RDS for PostgreSQL instance is up and running and ready for a production workload, we can connect Jira!
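Optionally, you can spot-check connectivity from inside the cluster before installing Jira. This sketch is our own addition, not from the Jira or Amazon RDS documentation: the pg_conn_string helper and the throwaway Pod name pg-check are hypothetical choices, and psql prompts you for the password so it never lands in your shell history:

```shell
# Hypothetical helper: assemble a libpq-style connection string.
pg_conn_string() {
  # $1: host, $2: port, $3: user, $4: database name
  printf 'host=%s port=%s user=%s dbname=%s' "$1" "$2" "$3" "$4"
}

# Usage against the cluster: run a one-off psql client Pod
# (enter the password when psql prompts for it):
# kubectl run -n "${APP_NAMESPACE}" pg-check --rm -it --restart=Never \
#   --image=postgres:13 -- \
#   psql "$(pg_conn_string "${RDS_DB_INSTANCE_HOST}" "${RDS_DB_INSTANCE_PORT}" "${RDS_DB_USERNAME}" jira)" \
#   -c 'SELECT 1;'
```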

Deploy Jira to Amazon EKS

To install Jira in our Amazon EKS environment, we first need to download the Jira Helm chart. You can do this with the following command:

helm repo add atlassian-data-center \
  https://atlassian.github.io/data-center-helm-charts
helm repo update

The Jira installation instructions provide details for how to configure your Helm values.yaml file. To use the environment variables we set up earlier, we generate the values.yaml file with the following command:

JIRA_VERSION="1.1.0"

cat <<-EOF > jira.yaml
# The Jira documentation states to set replicaCount to a higher number after
# the initial configuration from your browser
replicaCount: 1
image:
  repository: atlassian/jira-software
  pullPolicy: IfNotPresent
serviceAccount:
  create: true
database:
  type: postgres72
  url: jdbc:postgresql://${RDS_DB_INSTANCE_HOST}:${RDS_DB_INSTANCE_PORT}/jira
  driver: org.postgresql.Driver
  credentials:
    secretName: jira-postgres-creds
    usernameSecretKey: username
    passwordSecretKey: password
volumes:
  localHome:
    persistentVolumeClaim:
      create: true
      storageClassName: gp2
      resources:
        requests:
          storage: 1Gi
    customVolume: {}
    mountPath: "/var/atlassian/application-data/jira"
  sharedHome:
    persistentVolumeClaim:
      create: true
      storageClassName: efs-sc
      resources:
        requests:
          storage: 1Gi
    customVolume: {}
    mountPath: "/var/atlassian/application-data/shared-home"
    nfsPermissionFixer:
      enabled: true
      mountPath: "/shared-home"
      command:
  additional: []
ingress:
  create: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
jira:
  service:
    port: 80
    type: ClusterIP
    contextPath:
    annotations: {}
  securityContext:
    fsGroup: 2001
  containerSecurityContext: {}
  setPermissions: true
  ports:
    http: 8080
    ehcache: 40001
    ehcacheobject: 40011
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
    failureThreshold: 30
  accessLog:
    mountPath: "/opt/atlassian/jira/logs"
    localHomeSubPath: "log"
  clustering:
    enabled: true
  license:
    secretName:
    secretKey: license-key
  shutdown:
    terminationGracePeriodSeconds: 30
    command: "/shutdown-wait.sh"
  resources:
    jvm:
      maxHeap: "768m"
      minHeap: "384m"
      reservedCodeCache: "512m"
    container:
      requests:
        cpu: "2" # If changing the cpu value update 'ActiveProcessorCount' below
        memory: "2G"
  additionalJvmArgs:
    - -XX:ActiveProcessorCount=2
  additionalLibraries: []
  additionalBundledPlugins: []
  additionalVolumeMounts: []
  additionalEnvironmentVariables: []
fluentd:
  enabled: false
podAnnotations: {}
nodeSelector: {}
tolerations: []
affinity: {}
schedulerName:
additionalContainers: []
additionalInitContainers: []
additionalLabels: {}
additionalFiles: []
EOF

helm install jira atlassian-data-center/jira -n "${APP_NAMESPACE}" \
  --version="${JIRA_VERSION}" --values=jira.yaml

This example creates an Ingress using the AWS Load Balancer controller to provide public access to your Jira instance. By default, the Ingress is created without a TLS endpoint. For more information, refer to Setting up end-to-end TLS encryption on Amazon EKS with the new AWS Load Balancer Controller. Based on your security requirements, you may not want to provide public access. If that is the case, don’t create the Ingress.

It may take 2-5 minutes for Jira to initialize. You can get the name of the endpoint to access Jira in your web browser using the following command:

kubectl get ingress -n "${APP_NAMESPACE}" jira \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Copy this value and navigate to it in your web browser. You should now see the Jira initial setup page.

Congratulations, you have now deployed both Jira and its Amazon RDS for PostgreSQL Multi-AZ instance directly in Amazon EKS!

Cleanup

If you want to delete your Jira instance and the AWS Load Balancer instance, you can do so using helm:

helm delete jira -n "${APP_NAMESPACE}"

Artifacts that were not created by the Jira Helm chart are not deleted. You can delete your database instance using the following command:

kubectl delete -f jira-db.yaml

You will also have to remove the Kubernetes Secret containing the Amazon RDS for PostgreSQL user credentials. You can remove this Secret with the following command:

kubectl delete secret -n "${APP_NAMESPACE}" jira-postgres-creds

To delete the database subnet groups, use the following command:

kubectl delete -f db-subnet-groups.yaml

To uninstall the ACK service controller for Amazon RDS, refer to Uninstall an ACK Controller. Following the example, set the value of SERVICE to rds.

You can delete the AWS Load Balancer controller using the following command:

helm delete aws-load-balancer-controller -n kube-system

You can delete the efs-sc storage class and the Amazon EFS CSI driver with the following commands:

kubectl delete storageclass efs-sc
helm delete aws-efs-csi-driver -n kube-system

To delete your Amazon EKS cluster, refer to Deleting an Amazon EKS cluster.

eksctl delete cluster \
  --name "${EKS_CLUSTER_NAME}" \
  --region "${REGION}"

Conclusion

We saw how AWS Controllers for Kubernetes lets you deploy an Amazon RDS for PostgreSQL instance directly from your Amazon EKS environment and connect an application to it. You can use this example for other Amazon RDS database engines—Jira supports Amazon Aurora with PostgreSQL compatibility, Amazon RDS for MySQL, Amazon RDS for Oracle, and Amazon RDS for SQL Server. For more information, see Supported platforms.

This post shows just one example of how you can use ACK service controllers to interface with AWS services like Amazon RDS directly from Kubernetes. We could also build out this example using the ACK Amazon EC2 controller.

AWS Controllers for Kubernetes provides a convenient way to connect your Kubernetes applications to AWS services directly from Kubernetes. Let us know your experience! ACK is open source: you can request new features and report issues on the ACK community GitHub repository or add comments in the comments section of this post.

About the Author

Jonathan Katz is a Principal PMT on the Amazon RDS team and is based in New York. He is a Core Team member of the open source PostgreSQL project and is an active open source contributor.

This post originally appeared on the AWS Database Blog.
