A common challenge customers face as their business grows is maintaining the same level of service for their end-users. Most often, databases become bottlenecks as usage outgrows capacity. Caching strategies can improve performance by offloading frequently used data to a cache like Redis, but they add the overhead of keeping the cache up to date. If you plan to modernize your application, consider using purpose-built databases to serve the underlying business need. You can take advantage of a durable in-memory database such as Amazon MemoryDB for Redis to store frequently accessed data such as reference data or product catalogs. This offloads the burden from your transactional database, keeps the data in a single location, and provides durability, while maintaining the high performance that your end-users expect.
You can modernize your applications using AWS database services, such as Amazon Relational Database Service (Amazon RDS), Amazon Aurora, and MemoryDB for Redis, and overcome the reliability and operational challenges associated with heavy and demanding workloads. In this post, we show you how to migrate data from Amazon Aurora PostgreSQL-Compatible Edition to Amazon MemoryDB using AWS Database Migration Service (AWS DMS). We consider a use case of a retail website that stores the transactional data in Aurora PostgreSQL and caches the product catalog in MemoryDB.
Solution overview
Before we dive deep into the solution, let’s review the concepts of some of the key components used in this solution:
AWS Database Migration Service (AWS DMS) is a service for migrating data between source and target data stores. The source and target can use the same database engine or different engines, and can reside either on premises or in the AWS Cloud. One requirement of AWS DMS is that at least one of the data stores must be in the AWS Cloud.
Amazon Aurora PostgreSQL-Compatible Edition is a fully managed PostgreSQL-compatible relational database engine that combines the speed, reliability, and manageability of Amazon Aurora and cost-effectiveness of an open-source PostgreSQL database.
Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. It’s purpose-built for modern applications created with microservices architectures. MemoryDB is compatible with Redis, a popular open-source data store, enabling you to quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands that you already use. With MemoryDB, all your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput.
This solution improves an application's performance by offloading the frequently accessed product catalog data from Amazon Aurora PostgreSQL-Compatible Edition to MemoryDB, where it is stored durably in memory. We use AWS DMS to perform a one-time migration of the product catalog data from Amazon Aurora PostgreSQL-Compatible Edition to MemoryDB. After the migration, the application reads and writes transactional data in Amazon Aurora PostgreSQL-Compatible Edition and product catalog data in MemoryDB. The following diagram illustrates this architecture.
Prerequisites
Make sure you complete the following prerequisite steps:
Set up the AWS Command Line Interface (AWS CLI) to run commands for interacting with your AWS resources.
Have the appropriate permissions to interact with resources in your AWS account.
Create resources with AWS CloudFormation
The AWS CloudFormation template for this solution deploys the following key resources:
An Amazon Aurora PostgreSQL-Compatible Edition cluster
An AWS Cloud9 environment
AWS DMS replication resources (replication instance, source and target endpoints, and a migration task)
An AWS Key Management Service (AWS KMS) key
A MemoryDB cluster
AWS Secrets Manager secrets for database credentials
Use the AWS Pricing Calculator to estimate the cost before you run this solution. The resources deployed are not eligible for the Free Tier, but if you choose the stack defaults, as of February 2023, this solution has an hourly cost of $3.00 in the us-east-1 Region.
To create the resources, complete the following steps:
Clone the GitHub project by running the following commands from your terminal:
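Based on the folder name referenced later in this post, the clone step looks like the following; the aws-samples GitHub organization is an assumption, so substitute the repository URL linked from this post if it differs:

```shell
# Clone the sample repository (the aws-samples organization is an assumption;
# use the repository URL referenced in this post if it differs)
git clone https://github.com/aws-samples/aws-dms-postgresql-to-memorydb-migration.git
cd aws-dms-postgresql-to-memorydb-migration
```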
Deploy AWS CloudFormation resources with the following code:
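A sketch of the deployment command follows; the template path is a placeholder for the template file in the cloned repository, and IAM capabilities are required because the stack creates roles:

```shell
# Create the stack; replace the template path with the one in the repository
aws cloudformation create-stack \
  --stack-name DMSPostgreSQLMemoryDB \
  --template-body file://templates/dms-postgresql-memorydb.yaml \
  --capabilities CAPABILITY_NAMED_IAM

# Block until the stack reaches CREATE_COMPLETE (approximately 15-20 minutes)
aws cloudformation wait stack-create-complete --stack-name DMSPostgreSQLMemoryDB
```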
Provisioning the resources takes approximately 15–20 minutes to complete. If you plan to use a different stack name, replace DMSPostgreSQLMemoryDB in the install-db-tools.sh script with your stack name.
You can ensure successful stack deployment by going to the AWS CloudFormation console and verifying that the status is CREATE_COMPLETE.
Migrate product catalog data
We want to migrate the product catalog data from the transactional database (Amazon Aurora PostgreSQL-Compatible Edition) to MemoryDB. To mimic this scenario, we first stage the data in Amazon Aurora PostgreSQL-Compatible Edition by importing it from an Amazon Simple Storage Service (Amazon S3) bucket. Then we migrate the data to MemoryDB using AWS DMS.
Stage data in Amazon Aurora PostgreSQL-Compatible Edition
Let’s complete the following steps to stage the data in Amazon Aurora PostgreSQL-Compatible Edition:
On the AWS Cloud9 console, under My environments, select the environment PostgreSQLInstance.
Choose Open in Cloud9 to access the AWS Cloud9 IDE.
In your AWS Cloud9 terminal, run the following command to clone the repository and install the required tools:
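In the Cloud9 terminal, the clone step is the same as before; the aws-samples organization is an assumption, so use the repository URL referenced in this post if it differs:

```shell
# Clone the sample repository into the Cloud9 environment
git clone https://github.com/aws-samples/aws-dms-postgresql-to-memorydb-migration.git
```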
Navigate to the aws-dms-postgresql-to-memorydb-migration/scripts folder to install the client tools to access Amazon Aurora PostgreSQL-Compatible Edition and MemoryDB:
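The install step uses the install-db-tools.sh script mentioned earlier in this post:

```shell
# Install the PostgreSQL and Redis client tools used in later steps
cd aws-dms-postgresql-to-memorydb-migration/scripts
./install-db-tools.sh
```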
The script takes about 5 minutes to install all the necessary tools. After the installation, your terminal window should look like the following screenshot.
Initialize the environment variables by running the following command:
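The variable names below are assumptions used for illustration in the rest of this post; the repository's own setup script typically derives these values from the CloudFormation stack outputs:

```shell
# Hypothetical sketch: export connection details for the later steps.
# Replace the placeholder values with your CloudFormation stack outputs,
# or source the setup script provided in the repository's scripts folder.
export STACK_NAME="DMSPostgreSQLMemoryDB"
export AURORA_ENDPOINT="<aurora-cluster-endpoint>"
export MEMORYDB_ENDPOINT="<memorydb-cluster-endpoint>"
```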
Let’s migrate the frequently used product catalog data from Amazon Aurora PostgreSQL-Compatible Edition to MemoryDB. Remember, we are in the aws-dms-postgresql-to-memorydb-migration/scripts folder.
Run the following script to migrate the data:
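The script name below is hypothetical (check the repository's scripts folder for the actual name); it performs the download, table creation, and load described next:

```shell
# Hypothetical script name: downloads the catalog data from Amazon S3,
# creates the product_catalog table, and loads the rows into Aurora PostgreSQL
./stage-product-catalog.sh
```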
This script downloads the data from an S3 bucket, creates a product_catalog table, and stages the data in Amazon Aurora PostgreSQL-Compatible Edition. The following screenshot shows the output of a successful run of the script.
Connect to the Aurora PostgreSQL database to validate the data has been staged in the product_catalog table by running the following command:
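A sketch of the connection command, assuming the endpoint environment variable from the earlier setup step; the user and database names are placeholders, so use the credentials stored for your stack (for example, in AWS Secrets Manager):

```shell
# Connect to the Aurora PostgreSQL cluster; user and database are placeholders
psql -h "$AURORA_ENDPOINT" -U postgres -d postgres
```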
After successfully connecting to the database, run the following SQL to make sure that the records are successfully copied:
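A simple count against the product_catalog table confirms the load; the expected row count depends on the sample dataset:

```sql
SELECT COUNT(*) FROM product_catalog;
```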
The output on your terminal window should look like the following screenshot. After checking the record count, exit the psql prompt using the \q command.
Migrate data to MemoryDB
In this section, we go through the steps to migrate the data from Amazon Aurora PostgreSQL-Compatible Edition to MemoryDB.
On the AWS DMS console, choose Database migration tasks in the navigation pane.
Select the task replicate-products and on the Actions menu, choose Restart/Resume. This DMS task performs a full load of data from Aurora PostgreSQL to MemoryDB. It includes table mappings to copy all the tables and data within the public schema.
Note: We encourage you to review the replication task and endpoint configurations to learn more.
The data migration starts, and the AWS DMS task status is displayed as Running.
While the data is getting copied, you can validate the data migration in MemoryDB by connecting to it using the following command:
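A sketch of the connection command, assuming the endpoint environment variable from the earlier setup step; MemoryDB encrypts data in transit, so the TLS flag is required:

```shell
# Connect to the MemoryDB cluster; -c enables cluster mode, --tls is required
# because MemoryDB enforces encryption in transit
redis-cli -c -h "$MEMORYDB_ENDPOINT" -p 6379 --tls
```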
Once you are connected to MemoryDB, you can spot check the migration by retrieving a specific record from MemoryDB using the following command:
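The key format written by AWS DMS depends on your target endpoint settings, so the MATCH pattern below is an assumption; a SCAN locates migrated keys before inspecting one:

```
SCAN 0 MATCH *product_catalog* COUNT 100
HGETALL <one-of-the-keys-returned-above>
```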
The output on your terminal should look like the following screenshot.
Next steps
After you have migrated the frequently used product catalog data to MemoryDB, modify your application code to read and write the transactional data in Amazon Aurora PostgreSQL-Compatible Edition and the product catalog data in MemoryDB. The frequently accessed product catalog data can now be read with microsecond latency, speeding up the application and improving the overall end-user experience.
Clean up
To avoid incurring ongoing charges, clean up your infrastructure by deleting the DMSPostgreSQLMemoryDB stack on the AWS CloudFormation console. Alternatively, you can use the following command to delete the CloudFormation stack.
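The delete step from the command line:

```shell
# Delete the stack and all resources it created
aws cloudformation delete-stack --stack-name DMSPostgreSQLMemoryDB

# Optionally block until the deletion finishes
aws cloudformation wait stack-delete-complete --stack-name DMSPostgreSQLMemoryDB
```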
Conclusion
Using a relational database alone in an enterprise application may not scale well for all customers. Applications that need microsecond read and single-digit millisecond write latency can benefit from using MemoryDB. In this post, we demonstrated how you can migrate data from Amazon Aurora PostgreSQL-Compatible Edition to Amazon MemoryDB for Redis using AWS DMS. Frequently looked-up data can be stored in MemoryDB and transactional data in Amazon Aurora PostgreSQL-Compatible Edition. To learn more about MemoryDB and its use cases, refer to Amazon MemoryDB for Redis.
About the authors
Kishore Dhamodaran is a Senior Solutions Architect at AWS. Kishore helps strategic customers with their cloud enterprise strategy and migration journey, leveraging his years of industry and cloud experience.
Prathap Thoguru is a Technical Leader and an Enterprise Solutions Architect at AWS. He’s an AWS certified professional in nine areas and specializes in data and analytics. He helps customers get started on and migrate their on-premises workloads to the AWS Cloud. He holds a Master’s degree in Information Technology from the University of Newcastle, Australia.
Kishore Vinjam is a Partner Solutions Architect focusing on AWS Service Catalog, AWS Control Tower, and AWS Marketplace. He is passionate about working in cloud technologies, working with customers, and building solutions for them. When not working, he likes to spend time with his family, hike, and play volleyball and ping-pong.
Sandeep Kashyap is a Principal Tech Business Development Manager with AWS marketplace. In his role, Sandeep works with customers to help them adopt cloud management best practices such as multi-account frameworks using AWS Services and partner solutions from AWS Marketplace. Sandeep also works with partners to develop Independent Software Vendor Solutions with AWS Services in the management and tools category.