Monday, April 29, 2024

Unlock on-demand, cost-optimized performance with Amazon ElastiCache Serverless

Amazon ElastiCache is a fully managed in-memory caching service, compatible with the popular open source projects Redis and Memcached. Hundreds of thousands of customers use it to achieve real-time, cost-optimized performance.

We built Amazon ElastiCache Serverless to address some of the most common customer wish-list items: simplifying the setup process, eliminating capacity planning and management, and providing fast, automatic scaling. With ElastiCache Serverless, you can create a cache in under a minute by providing just a name, without complex capacity planning or management, and you pay per use, based on the amount of data in the cache and the requests you run against it. ElastiCache Serverless automatically right-sizes your cache to meet your application's performance needs and eliminates the need to overprovision capacity.

In this post, we expand on the most common questions we hear from customers: how to estimate ElastiCache Serverless for Redis costs under this new pay-per-use pricing model, and how to compare them to the costs of running workloads on alternative options.

ElastiCache pricing model

With ElastiCache Serverless, you are billed on two separate dimensions: the data stored in the cache and the requests you run against the cache.

Data storage: You are billed for the data stored in your cache in GB-hours. ElastiCache Serverless continuously monitors the amount of data in your cache by sampling multiple times every minute, and calculates an hourly average to determine the cache’s data storage usage in GB-hours. ElastiCache Serverless always meters for a minimum of 1 GB storage on every cache.

Requests: You are billed for the requests you run on the cache in ElastiCache Processing Units (ECPUs). An ECPU is a new unit that combines the vCPU and network usage of your requests. If your application uses simple SET and GET requests on Strings, then each request that transfers up to 1 KB of data consumes 1 ECPU. Although ECPUs are metered in partial units, each request consumes at least 1 ECPU. For example, a GET request that reads 0.5 KB of data consumes 1 ECPU, and a GET request that reads 2.3 KB of data consumes 2.3 ECPUs. If your application uses complex data structures like SortedSets or Hashes, cache requests consume ECPUs in proportion to the vCPU time taken and the amount of data transferred. To learn more, refer to the ElastiCache Serverless pricing dimensions. There is no minimum metering for ECPUs; if you don't run any requests, you are not billed for ECPUs.

To get a general sense of how much ElastiCache Serverless could cost you per month, consider that storing an average of 10 GB of data will cost approximately $900/month in GB-hours ($0.125/GB-hour x 10 GB x 730 hours in a month = $912.50), and running a workload with an average request rate of 10,000 simple SET/GET requests (each consuming 1 ECPU) per second will cost approximately an additional $90/month ($0.0034/million ECPUs x 10,000 ECPUs/sec x 3,600 seconds in an hour x 730 hours in a month = $89.35). These price points assume you're running in the us-east-1 Region, and regional prices will vary. See Amazon ElastiCache pricing for more information.
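The arithmetic above can be wrapped in a small helper to try your own numbers. This is a rough sketch using the us-east-1 prices quoted in this post; regional prices vary, and it ignores any more complex commands that consume more than 1 ECPU per request.

```python
# Back-of-envelope ElastiCache Serverless monthly cost estimate.
# Prices are the us-east-1 rates quoted in this post; regional prices vary.

GB_HOUR_PRICE = 0.125      # $ per GB-hour of data storage
ECPU_PRICE = 0.0034        # $ per million ECPUs
HOURS_PER_MONTH = 730

def monthly_cost(avg_gb: float, avg_requests_per_sec: float,
                 ecpus_per_request: float = 1.0) -> tuple[float, float]:
    """Return (storage_dollars, request_dollars) for one month."""
    # ElastiCache Serverless meters a minimum of 1 GB of storage.
    storage = max(avg_gb, 1.0) * GB_HOUR_PRICE * HOURS_PER_MONTH
    ecpus = avg_requests_per_sec * ecpus_per_request * 3600 * HOURS_PER_MONTH
    requests = ecpus / 1_000_000 * ECPU_PRICE
    return storage, requests

storage, requests = monthly_cost(avg_gb=10, avg_requests_per_sec=10_000)
print(f"storage: ${storage:.2f}/month, requests: ${requests:.2f}/month")
# storage: $912.50/month, requests: $89.35/month
```

Plugging in the post's example (10 GB average, 10,000 simple requests per second) reproduces the $912.50 and $89.35 figures above.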

Estimating costs on ElastiCache Serverless

Now that we have covered the basics of the ElastiCache Serverless pricing model, let's look at how you can estimate the storage and ECPU usage for your workloads.

Data storage

For data storage, if you are using the node-based option for ElastiCache today, the Amazon CloudWatch metric BytesUsedForCache will give you a sense of the logical dataset size. If you're using other Redis-compatible offerings, you can inspect the output of the Redis info command for the current dataset size: the used_memory_dataset field displays the size of your dataset in bytes.
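If you have captured the output of the Redis info command (for example, via redis-cli), extracting the dataset size is a one-line parse. This is a minimal sketch; the sample text below is illustrative, not output from any specific cache.

```python
# Sketch: extract used_memory_dataset from the text produced by the
# Redis INFO memory command (e.g. captured with redis-cli).
def dataset_bytes(info_text: str) -> int:
    """Return the dataset size in bytes from INFO output."""
    for line in info_text.splitlines():
        # The trailing colon avoids matching used_memory_dataset_perc.
        if line.startswith("used_memory_dataset:"):
            return int(line.split(":", 1)[1])
    raise KeyError("used_memory_dataset not found in INFO output")

# Illustrative sample, not real output from a specific cache.
sample = "used_memory:1326592\nused_memory_dataset:1098224\n"
print(dataset_bytes(sample))  # 1098224
```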

To estimate the average cache data size by looking at the BytesUsedForCache metric in ElastiCache, complete the following steps:

Open the CloudWatch console, and choose Metrics > All metrics in the navigation pane.
Search for the cluster name for which you want to calculate the cache data size, and select ElastiCache > Cache Cluster ID to see all metrics for that cache.
Search for the BytesUsedForCache metric name. Additionally, search for ‘-001‘ to limit the search to one node per shard. Select all the resulting metrics to add them to the graph.
View the selected metrics by clicking on the Graphed metrics tab.
Set the statistic to Average and period to 1 hour for all the metrics.
Choose Add math > Common > Sum to create a new metric adding up the BytesUsedForCache metric across all shards. Select only the SUM metric, and the plotted graph will now show you the total data storage size across all shards in your cluster.
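The console steps above can also be scripted with the CloudWatch GetMetricData API. The sketch below assumes you already know the node IDs for one node per shard (the "-001" nodes, e.g. "my-cache-0001-001"; the names here are hypothetical) and sums their hourly-average BytesUsedForCache with a metric math expression.

```python
# Sketch: sum hourly-average BytesUsedForCache across one node per shard,
# mirroring the CloudWatch console steps above. Node IDs are hypothetical.
import datetime

def storage_query(shard_node_ids: list[str]) -> list[dict]:
    """Build GetMetricData queries: one per shard, plus a metric math SUM."""
    queries = [
        {
            "Id": f"m{i}",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/ElastiCache",
                    "MetricName": "BytesUsedForCache",
                    "Dimensions": [{"Name": "CacheClusterId", "Value": node_id}],
                },
                "Period": 3600,       # 1-hour period
                "Stat": "Average",    # hourly average, as in the console steps
            },
            "ReturnData": False,
        }
        for i, node_id in enumerate(shard_node_ids)
    ]
    expr = " + ".join(q["Id"] for q in queries)  # metric math: m0 + m1 + ...
    queries.append({"Id": "total_bytes", "Expression": expr, "ReturnData": True})
    return queries

def fetch_total_bytes(shard_node_ids: list[str]) -> list[float]:
    """Fetch the summed series; requires boto3 and configured AWS credentials."""
    import boto3
    cw = boto3.client("cloudwatch")
    end = datetime.datetime.now(datetime.timezone.utc)
    resp = cw.get_metric_data(
        MetricDataQueries=storage_query(shard_node_ids),
        StartTime=end - datetime.timedelta(days=7),
        EndTime=end,
    )
    return resp["MetricDataResults"][0]["Values"]
```

Averaging the returned values over your observation window gives the GB figure to multiply by the GB-hour price.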

ElastiCache Processing Units

ECPU usage will vary based on your workload, the size of the items being accessed, and the computational complexity of serving your requests. There's no one-size-fits-all way to estimate ECPU usage, and ultimately the most precise estimates will come from testing your workload on ElastiCache Serverless. In the absence of that, if you assume your workload consists of simple read and write requests to a single key (that is, the Redis GET and SET commands) whose value is 1 KB or smaller in size, then each request will consume 1 ECPU. If you are using the node-based option for ElastiCache today, the GetTypeCmds and SetTypeCmds CloudWatch metrics will reflect the number of read and write requests your workload is issuing. If you're using other Redis-compatible offerings, you can inspect the output of the Redis info command to determine the number of read and write requests: in the commandstats section, the cmdstat_get and cmdstat_set fields return the number of reads and writes, respectively.
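Parsing the commandstats section is similarly mechanical. A minimal sketch, using illustrative sample text rather than real output from a specific cache:

```python
# Sketch: pull cumulative read/write counts from the commandstats section
# of Redis INFO output (lines look like "cmdstat_get:calls=N,usec=...").
def read_write_counts(commandstats: str) -> tuple[int, int]:
    counts = {}
    for line in commandstats.splitlines():
        if line.startswith("cmdstat_"):
            name, stats = line.split(":", 1)
            calls = int(stats.split(",")[0].split("=")[1])  # "calls=N"
            counts[name] = calls
    return counts.get("cmdstat_get", 0), counts.get("cmdstat_set", 0)

# Illustrative sample, not real output from a specific cache.
sample = ("cmdstat_get:calls=41210,usec=60452,usec_per_call=1.47\n"
          "cmdstat_set:calls=8041,usec=21306,usec_per_call=2.65\n")
print(read_write_counts(sample))  # (41210, 8041)
```

Note that these counters are cumulative since the server started, so take the difference between two snapshots to derive a request rate.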

To estimate the average simple request rate by calculating the average SET and GET command request rates for ElastiCache, complete the following steps.

Open the CloudWatch console, and choose Metrics > All metrics in the navigation pane.
Search for the cluster name for which you want to calculate the request rate, and select ElastiCache > Cache Cluster ID to see all metrics for that cache.
Search for the GetTypeCmds metric name and select all metrics from all nodes to add them to the graph.
Search for the SetTypeCmds metric name. Additionally, search for ‘-001‘ to limit the search to one node per shard. Select all the resulting metrics to add them to the graph.
View the selected metrics by opening the Graphed metrics tab.
Set the statistic to Sum and period to 1 hour for all the metrics.
Choose Add math > Common > Sum to create a new metric adding up both metrics across all nodes. Select only the SUM metric, and the plotted graph will now show you the total number of GET and SET requests per hour across all shards in your cluster.
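Once you have the hourly totals from the graph above, converting them to an estimated ECPU charge is straightforward under the simple-request assumption (each GET or SET of 1 KB or less consumes 1 ECPU). The counts below are hypothetical placeholders:

```python
# Sketch: convert hourly GET+SET totals (from the summed CloudWatch metric)
# into an estimated hourly ECPU charge, assuming 1 ECPU per simple request.
ECPU_PRICE_PER_MILLION = 0.0034  # us-east-1; regional prices vary

def hourly_ecpu_cost(get_cmds_per_hour: int, set_cmds_per_hour: int) -> float:
    ecpus = get_cmds_per_hour + set_cmds_per_hour  # 1 ECPU per simple request
    return ecpus / 1_000_000 * ECPU_PRICE_PER_MILLION

# Hypothetical workload: 30M reads + 6M writes per hour = 36M ECPUs/hr.
print(f"${hourly_ecpu_cost(30_000_000, 6_000_000):.3f}/hr")  # $0.122/hr
```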

If you are using more complex Redis commands on in-memory data structures like sorted sets or hashes, estimating the ECPU costs can be trickier. The best way to estimate ECPU costs is by running a test workload on ElastiCache Serverless and observing the ElastiCacheProcessingUnits metric in CloudWatch to measure the number of ECPUs consumed by your application. ElastiCache Serverless normalizes the vCPU time taken by commands to the time taken by simple SET and GET requests. If your Redis command takes more vCPU time, it consumes proportionally more ECPUs. Commands that consume more vCPU time and transfer more data consume ECPUs based on the higher of the two dimensions. For example, if your application uses the HMGET command and it consumes 3 times the vCPU time of a simple SET/GET command while transferring 3.2 KB of data, it will consume 3.2 ECPUs. Alternatively, if it transfers only 2 KB of data, it will consume 3 ECPUs.
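The metering rule described above reduces to a one-line formula: a request consumes the larger of its vCPU-time multiple (relative to a simple GET/SET) and the KB it transfers, with a minimum of 1 ECPU. A sketch:

```python
# Sketch of the ECPU metering rule described above: the higher of the
# vCPU-time multiple and the KB transferred, with a 1 ECPU minimum.
def ecpus_for_request(vcpu_multiple: float, kb_transferred: float) -> float:
    return max(vcpu_multiple, kb_transferred, 1.0)

print(ecpus_for_request(3.0, 3.2))  # HMGET-style example above: 3.2
print(ecpus_for_request(3.0, 2.0))  # same command, less data: 3.0
print(ecpus_for_request(1.0, 0.5))  # simple GET of 0.5 KB: 1.0
```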

Comparing costs between ElastiCache Serverless and self-designed ElastiCache clusters (node-based ElastiCache)

ElastiCache offers two deployment options: serverless caching and self-designed clusters. You should choose serverless caching if you're creating a cache for a new workload or a workload that is not yet completely defined, if you have unpredictable application traffic, or if you want the easiest way to get started with a cache. You can choose to design your own ElastiCache cluster if you want finer-grained control over the type of node running Redis, the number of nodes, and the placement of nodes, if you don't expect your application traffic to fluctuate much, or if you can easily forecast your capacity requirements to control costs. To learn more about choosing the appropriate deployment option, refer to Choosing between deployment options.

In this section, we compare costs for two different types of workloads.

Example 1: Getting started with a cache in a new application with fluctuating traffic

Imagine you are building an application that requires a cache to provide fast data access for a responsive, real-time user experience on an e-commerce website. You estimate that the application caches 10 GB of data most of the time, growing to 50 GB during a 2-hour peak each day. Your application uses the Redis SET and GET commands to read and write objects 500 bytes in size. You estimate that your typical request rate is 10,000 requests per second, with a daily peak of 100,000 requests per second for 2 hours. You choose to deploy your workload in the US East (N. Virginia) Region.

Serverless option

Your total charges are calculated as follows:

Data storage charges

Average hourly data storage usage = ((10 GB * 22 hours) + (50 GB * 2 hours))/24 hrs in a day = 13.3 GB-hours
Average hourly data storage charges = 13.3 GB-hours * $0.125 / GB-hour = $1.67/hr

ECPU charges

Because your workload consists of Redis SET and GET requests, and each request transfers 500 bytes, each request will consume 1 ECPU.

Average hourly ECPU usage = (10,000 ECPUs/sec * 3,600 seconds in an hour * 22 hours + 100,000 ECPUs/sec * 3,600 seconds in an hour * 2 hours)/24 hrs in a day = 63,000,000 ECPUs

Average hourly ECPU charges = (63,000,000/1,000,000) * $0.0034 / million ECPUs = $0.21/hr

Data transfer charges

You access your serverless cache in the Availability Zones you select, and therefore do not incur any cross-zone data transfer charges.

Total serverless charges

Data storage = $1.67/hr

ECPU charges = $0.21/hr

Total = $1.88/hour or $1,372.40/month.
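The serverless line items above can be checked with a few lines of arithmetic (prices are the us-east-1 rates quoted earlier in this post):

```python
# Recomputing the Example 1 serverless charges above.
STORAGE_PRICE = 0.125   # $/GB-hour
ECPU_PRICE = 0.0034     # $/million ECPUs

avg_gb = (10 * 22 + 50 * 2) / 24                 # ~13.3 GB average
storage_hr = avg_gb * STORAGE_PRICE              # ~$1.67/hr
avg_ecpus = (10_000 * 3600 * 22 + 100_000 * 3600 * 2) / 24  # 63,000,000/hr
ecpu_hr = avg_ecpus / 1_000_000 * ECPU_PRICE     # ~$0.21/hr
print(f"${storage_hr + ecpu_hr:.2f}/hr")         # $1.88/hr
```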

On-demand nodes option

With this option, you design your cluster using r7g.xlarge nodes. To accommodate your peaks of 50 GB and 100,000 requests per second, and to keep a buffer of roughly 20% for unpredictable peaks, you require three shards, each with 19.74 GB of available storage (75% of the node's 26.32 GB; when configuring your own cluster, ElastiCache recommends reserving 25% of the node's memory for non-data use), for a total storage capacity of 59.22 GB. You use two nodes per shard for high availability. Your total charges are calculated as follows:

On-demand node charges

cache.r7g.xlarge = $0.437/hr

Total = $0.437 * 6 nodes = $2.62/hr

Data transfer charges

(10,000 requests/sec * 3,600 secs/hr * 22 hours + 100,000 requests/sec * 3,600 secs/hr * 2 hours)/24 = 63,000,000 requests/hr

Data transferred = 63,000,000 requests/hr * 500 bytes/request = 29.34 GB/hr

Approximately 50% of your data will cross Availability Zones due to the Multi-AZ architecture:

29.34 GB/hr * 50% * $0.01/GB = $0.14/hr

Total on-demand charges

Node charges = $2.62/hr

Data Transfer charges = $0.14/hr

Total = $2.76/hr or $2,014.80/month.

Summary: As this calculation shows, ElastiCache Serverless offers a simpler, more cost-effective way to operate a cache for fluctuating workloads. In this example, it results in approximately 32% lower caching costs compared to running a self-designed cluster.
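The self-designed side of the comparison can be checked the same way. Note that with unrounded intermediates the total comes to about $2.77/hr; rounding each line item, as the figures above do, gives $2.76/hr, so the ~32% savings holds either way:

```python
# Recomputing the Example 1 self-designed cluster charges and the savings.
node_hr = 0.437 * 6                         # six cache.r7g.xlarge nodes
gb_per_hr = 63_000_000 * 500 / 1024**3      # ~29.34 GB transferred per hour
xfer_hr = gb_per_hr * 0.50 * 0.01           # ~50% crosses AZs at $0.01/GB
on_demand = node_hr + xfer_hr               # ~$2.77/hr unrounded
serverless = 1.88                           # serverless total from above
print(f"savings: {(on_demand - serverless) / on_demand:.0%}")  # savings: 32%
```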

Example 2: Deploying a cache for a steady workload

In this use case, you are building an application that requires a Redis-compatible datastore to provide fast data access for a responsive, real-time user experience in a healthcare application for a single healthcare provider. You estimate that the application has an average cache dataset size of 20 GB, which occasionally grows to 25 GB. Your application accesses this cache using Redis SET and GET commands to read and write objects 1 KB in size. You estimate that your average request rate is 10,000 requests per second. You see occasional spikes, but your peaks are always under 20,000 requests per second. You choose to deploy your workload in the US East (N. Virginia) Region.

You choose to start building your application with ElastiCache Serverless for its simplicity. You can also choose to run the workload by configuring your own cluster using on-demand nodes. You compare the pricing of both options.

Serverless option

Your total charges are calculated as follows:

Data storage charges

Average hourly data storage usage = 20 GB-hours

Average hourly data storage charges = 20 GB-hours * $0.125 / GB-hour = $2.50/hr

ECPU charges

Because your workload consists of Redis SET and GET requests, and each request transfers 1 KB, each request will consume 1 ECPU.

Average hourly ECPU usage = (10,000 ECPUs/sec * 3,600 seconds in an hour) = 36,000,000 ECPUs

Average hourly ECPU charges = (36,000,000/1,000,000) * $0.0034 / million ECPUs = $0.12/hr

Data transfer charges

You access your serverless cache in the Availability Zones you select, and therefore don’t incur any cross-zone data transfer charges.

Total serverless charges

Data storage = $2.50/hr

ECPU charges = $0.12/hr

Total = $2.62/hour or $1,912.60/month.

On-demand nodes option

For this option, you design your cluster using r7g.large nodes. To accommodate your dataset of 20 GB with occasional spikes to 25 GB, and 10,000-20,000 requests per second, you require three shards, each with 9.80 GB of available storage (75% of the node's 13.07 GB), for a total storage capacity of 29.41 GB. You use two nodes per shard for high availability. Your total charges are calculated as follows:

On-demand node charges

cache.r7g.large = $0.219/hr

Total = $0.219 * 6 nodes = $1.314/hr

Data transfer charges

(10,000 requests/sec * 3,600 secs/hr) = 36,000,000 requests/hr

Data transferred = 36,000,000 requests/hr * 1 KB/request = 34.33 GB/hr

Approximately 50% of your data will cross Availability Zones due to the Multi-AZ architecture:

34.33 GB/hr * 50% * $0.01/GB = $0.17/hr

Total on-demand charges

Node charges = $1.31/hr

Data Transfer charges = $0.17/hr

Total = $1.48/hr or $1,080.40/month.

Summary: Operating a self-designed cluster can be a more cost-effective caching solution for workloads with minimal fluctuation in traffic. In this example, a self-designed cluster is approximately 44% more cost-effective than ElastiCache Serverless. Note that if your capacity requirements evolve, you must adjust your cluster's cache capacity to avoid performance issues and application impact.

Conclusion

When getting started with ElastiCache Serverless, you may want to estimate your caching costs, and in this post, we demonstrated how to estimate costs when considering moving a Redis workload to ElastiCache Serverless. Serverless is the simplest way to get started with a cache in your application. You can start with a highly available, highly scalable cache by providing just a name, without any capacity planning or management. For workloads with unpredictable and fluctuating traffic patterns, ElastiCache Serverless automatically and seamlessly scales to meet application demands. Get started by visiting the ElastiCache console. If you have questions or feedback about ElastiCache Serverless, email us at [email protected].

About the author

Abhay Saxena is a Product Manager for Amazon MemoryDB for Redis on the In-Memory Databases team at Amazon Web Services. He works with AWS customers to identify needs where they might benefit from the ultra-fast performance of in-memory databases. Prior to joining the MemoryDB team, Abhay was a Product Manager at Amazon for over 13 years.

AWS Database Blog
