Today, we are adding support for Valkey 8.0 on Amazon ElastiCache. ElastiCache version 8.0 for Valkey brings faster scaling for ElastiCache Serverless and memory optimizations for node-based clusters. In this post, we discuss these improvements and how you can benefit from them.
For those unfamiliar, Valkey is an open source, in-memory key-value data store and a drop-in replacement for Redis OSS. It is stewarded by the Linux Foundation and rapidly improving with contributions from a vibrant developer community. AWS is actively contributing to Valkey; to learn more about AWS contributions to Valkey, see Amazon ElastiCache and Amazon MemoryDB announce support for Valkey. Last month, we announced ElastiCache version 7.2 for Valkey, with improved pricing. ElastiCache is a fully managed, Valkey-, Memcached-, and Redis OSS-compatible caching service that delivers real-time, cost-optimized performance for modern applications with 99.99% availability. ElastiCache speeds up database and application performance, scaling to millions of operations per second with microsecond read and write response times.
Scaling improvements for ElastiCache Serverless
ElastiCache Serverless enables you to create a cache in under a minute and scale capacity based on application traffic patterns. Previously, right-sizing a cache was a balancing act between cost and performance. You could provision capacity for your peaks with enough buffer, but you paid for that capacity even when it sat unused. Alternatively, you could provision just enough capacity and scale manually when needed; however, you could then run into performance bottlenecks during sudden, unanticipated bursts in traffic. We launched ElastiCache Serverless to operate a cache for even the most demanding workloads without requiring any capacity planning. Since launch, many customers have asked for the ability to scale even faster for sudden bursts in traffic, for example during a viral live event.
ElastiCache Serverless version 8.0 for Valkey brings improvements to how it handles spiky and fast-scaling workloads. Previously, ElastiCache Serverless could double the supported requests per second (RPS) every 10–12 minutes. With Valkey 8.0, ElastiCache Serverless can now scale from 0 to 5 million RPS in under 13 minutes, doubling the supported RPS every 2–3 minutes. Let's look at how we achieved these improvements, and then review some benchmarks.
With Valkey 8.0, AWS contributed significant performance enhancements, notably achieving over 1 million RPS through a new multi-threaded architecture. The new I/O threading implementation allows the main thread and I/O threads to operate concurrently, offloading tasks like reading and parsing commands, writing responses, and polling for I/O events. This design minimizes synchronization mechanisms, maintaining Valkey’s simplicity while enhancing performance. To learn more about the specific improvements in I/O multi-threading in open source Valkey 8.0, see Unlock 1 Million RPS: Experience Triple the Speed with Valkey.
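ElastiCache manages thread allocation for you, but on a self-managed Valkey 8.0 server the equivalent knob is the io-threads directive. A minimal valkey.conf fragment follows; the thread count shown is an illustrative value, typically sized to the cores available to the server:

```
# valkey.conf (self-managed Valkey 8.0 only; ElastiCache tunes this for you)
# Number of I/O threads used to read, parse, and write client traffic.
io-threads 8
```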
We used these improvements in ElastiCache Serverless version 8.0 for Valkey to handle faster burst scaling and faster scale-out operations. By enabling more cores on-demand and optimizing I/O threading, ElastiCache Serverless can dynamically allocate additional I/O threads to support additional throughput. Furthermore, ElastiCache Serverless can allocate additional I/O threads to handle horizontal scaling, which involves migrating data to new shards. This enhanced scaling ability allows ElastiCache Serverless to handle high-traffic scenarios and sudden spikes more effectively, benefiting applications requiring high throughput and minimal latency.
Benchmark data
To demonstrate how fast you can scale your application workload with ElastiCache Serverless for Valkey 8.0, we picked a typical caching use case: the application caches 512-byte values, with a roughly 80/20 split between read and write traffic. In our test, the application request rate grows from 0 to 5 million RPS.
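As a rough sketch of how such a workload could be driven with the valkey-benchmark tool: valkey-benchmark has no single read/write-ratio flag, so one way to approximate the 80/20 mix is to run GET and SET phases with proportional request counts. The total request count, client count, and ENDPOINT below are illustrative assumptions, not the exact harness behind the published numbers:

```shell
# Approximate an 80/20 read/write mix by splitting the request budget.
TOTAL=1000000
READS=$(( TOTAL * 80 / 100 ))
WRITES=$(( TOTAL - READS ))
echo "GET requests: $READS, SET requests: $WRITES"

# Against a real endpoint (ElastiCache Serverless requires TLS):
# valkey-benchmark -h "$ENDPOINT" -p 6379 --tls -t set -d 512 -n "$WRITES" -c 50 --threads 4
# valkey-benchmark -h "$ENDPOINT" -p 6379 --tls -t get -d 512 -n "$READS" -c 50 --threads 4
```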
ElastiCache Serverless version 7.2 for Valkey
First, let’s look at the workload running on the previous version, ElastiCache Serverless version 7.2 for Valkey. The following graphs show the RPS executed by the workload on the cache, and the p50 and p99 latencies.
As the graph shows, ElastiCache Serverless version 7.2 for Valkey takes approximately 50 minutes to scale to the peak of 5 million RPS, shown as the purple line, effectively doubling RPS every 10 minutes. During this time, p50 read latency (blue line) remains under 1 millisecond, while p99 latency rises to a maximum of 9 milliseconds (orange and red lines).
ElastiCache Serverless version 8.0 for Valkey
Next, let’s look at the same workload running on ElastiCache Serverless version 8.0 for Valkey.
As the graph shows, ElastiCache Serverless version 8.0 for Valkey scales much faster, taking under 13 minutes to scale from zero to a peak of 5 million RPS, effectively doubling supported RPS every 2 minutes. The latency profile remains consistent: p50 read latency stays under 1 millisecond, and p99 latency rises to 7–8 milliseconds.
Summary of results
The following table summarizes our benchmarking results.
| ElastiCache Mode | Time to Reach Peak RPS from Baseline | p50 Read Latency | p99 Read Latency |
|---|---|---|---|
| ElastiCache Serverless version 7.2 for Valkey | 50 minutes | <800 μs | 8–9 ms |
| ElastiCache Serverless version 8.0 for Valkey | 13 minutes | <800 μs | 7–8 ms |
Memory usage improvements
Valkey 8.0 introduces key memory efficiency improvements that help reduce memory usage and resource consumption for your workloads.
First, let’s explore the improvements that AWS contributed to OSS Valkey in this area. One of the main enhancements in OSS Valkey 8.0 is the optimized memory management in cluster configurations. Previously, Valkey’s cluster mode distributed data across 16,384 hash slots, which required significant metadata overhead, especially in larger deployments. Valkey 8.0 now adopts a “dictionary per slot” model, reducing metadata requirements per slot and thereby lowering memory usage. This update translates to a 32-byte memory reduction per key, which, depending on key size in your workload, can lead to significant memory savings. To learn more about the technical details behind the improvements in Valkey 8.0, see Storing more with less: Memory Efficiency in Valkey 8.
Now let’s look at how this manifests in ElastiCache version 8.0 for Valkey. When pricing ElastiCache Serverless for Valkey, we anticipated these memory optimizations and passed on the potential savings. We lowered the per-GB and per-ECPU price, reduced the storage minimum from 1 GB to 100 MB, and priced ElastiCache Serverless for Valkey 33% lower than its Redis OSS counterpart. Additionally, with Valkey 8.0, applications using the SET data structure benefit from 24 fewer bytes per element, allowing for further savings.
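To get a feel for the scale of these savings, here is a back-of-the-envelope estimate. The 32 bytes-per-key (cluster mode) and 24 bytes-per-set-element figures come from the Valkey 8.0 release notes above; the key and element counts below are made-up illustrative numbers, not measurements:

```shell
# Illustrative dataset sizes (assumptions, not benchmark data):
keys=10000000          # 10 million keys in a cluster-mode deployment
set_elements=50000000  # 50 million elements across all SETs

key_savings_mb=$(awk -v k="$keys" 'BEGIN { printf "%.0f", k * 32 / 1000000 }')
set_savings_mb=$(awk -v e="$set_elements" 'BEGIN { printf "%.0f", e * 24 / 1000000 }')

echo "per-key savings:     ${key_savings_mb} MB"   # 32 bytes saved per key
echo "set-element savings: ${set_savings_mb} MB"   # 24 bytes saved per element
```

At this hypothetical scale, the per-key metadata reduction alone frees several hundred megabytes, before counting any SET savings.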
For node-based clusters running Valkey 8.0 in cluster mode, this memory efficiency results in lower memory consumption across the board. In our benchmark, we tested a workload with strings using 16-byte keys and 100-byte values. We stored identical data first on a cluster running Valkey 7.2 and then on one running Valkey 8.0, highlighting the memory savings available with this new release. We used the valkey-benchmark tool to add data to each cluster.
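Conveniently, valkey-benchmark generates keys of the form "key:" followed by a 12-digit zero-padded counter, which yields exactly the 16-byte keys used in this test. A sketch of the load step (ENDPOINT is a placeholder for your cluster's configuration endpoint):

```shell
# valkey-benchmark's generated keys are "key:" + a 12-digit number: 16 bytes.
key=$(printf 'key:%012d' 42)
echo "sample key: $key (${#key} bytes)"

# Load one million 100-byte string values (hypothetical endpoint):
# valkey-benchmark -h "$ENDPOINT" -p 6379 -t set -d 100 -n 1000000 -r 1000000
```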
We then compared the memory usage reported by the Valkey 7.2 cluster and the Valkey 8.0 cluster, using the INFO memory command.
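As a sketch of how to read this yourself: the used_memory_human field of INFO memory reports the allocator's view of memory in use. To keep the snippet self-contained, it parses a captured sample line (the value mirrors the Valkey 8.0 result discussed below); the live command against a real endpoint is shown as a comment:

```shell
# Parse used_memory_human from a captured INFO memory sample.
info_sample='used_memory_human:1.11G'
mem_human=$(printf '%s\n' "$info_sample" | grep '^used_memory_human:' | cut -d: -f2)
echo "used memory: $mem_human"

# Live equivalent (ENDPOINT is a placeholder):
# valkey-cli -h "$ENDPOINT" -p 6379 INFO memory | grep '^used_memory_human'
```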
In this specific workload, we observed memory usage drop from 1.39 GB to 1.11 GB on the node-based cluster, a 20% reduction. This reduction is specific to this workload type; depending on your workload details, you may see a different reduction in memory usage.
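As a rough sanity check on these numbers, if we assume most of the delta comes from the 32 bytes-per-key cluster-mode savings (an assumption; other optimizations also contribute), the observed drop implies a dataset on the order of millions of keys:

```shell
# Implied key count if the 0.28 GB delta were entirely 32 bytes/key savings.
implied=$(awk 'BEGIN { printf "%.2f", (1.39 - 1.11) * 1e9 / 32 / 1e6 }')
echo "implied key count: ~${implied} million"
```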
Conclusion
ElastiCache version 8.0 for Valkey is now available in all AWS Regions. We encourage you to upgrade your ElastiCache Serverless caches and node-based ElastiCache clusters from Redis OSS and Valkey 7.2 to Valkey 8.0 and take advantage of these performance and memory usage improvements.
Apart from the scaling and memory efficiency improvements, moving to ElastiCache for Valkey can also help you save money. ElastiCache Serverless for Valkey is priced 33% lower than ElastiCache Serverless for other engines, and node-based ElastiCache for Valkey is priced 20% lower. You can get started with ElastiCache Serverless for Valkey for as low as $6 per month.
Read more about ElastiCache version 8.0 for Valkey in our What’s new page. For a step-by-step guide on how to get started with ElastiCache for Valkey, refer to Get started with Amazon ElastiCache for Valkey.
About the author
Abhay Saxena is a Product Manager in the In-Memory Databases team at Amazon Web Services, focused on ElastiCache Serverless. Prior to joining the ElastiCache team, Abhay spent over 13 years at Amazon as a Product Manager.
Rashim Gupta is a Senior Manager, Product Management at AWS and is the head of product for Amazon ElastiCache and Amazon MemoryDB. He has over six years of experience in AWS, working as a PM across compute, storage, and databases.