Monday, May 20, 2024

Turbocharge applications with Memorystore’s persistence and flexible node types

Since the general availability (GA) of Memorystore for Redis Cluster in November 2023, we have seen rapid adoption and growth of the service. At Google Cloud Next 2024, we announced several major launches for Memorystore for Redis Cluster:

Public preview of data persistence for both RDB (Redis Database) and AOF (Append Only File)

General availability of new node types of 1.4 GB, 6.5 GB and 58 GB

General availability of ultra-fast vector search on Memorystore for Redis

Public preview of new configuration options

Rapidly growing customers like Statsig are choosing Memorystore for its performance, zero-downtime scalability, 99.99% SLA, and integration with Google Cloud services. According to their calculations, adopting Memorystore for Redis Cluster resulted in a “70% reduction in the cost of Redis, as compared to the costs of running the same workloads with our previous cloud provider.”

“Memorystore for Redis Cluster has allowed us to accomplish our business goals without compromising on cost or predictability… It has become an invaluable asset, delivering robust scalability and versatility for our operations.” – Jason Wang, Software Engineer, Statsig

Other customers like Character.AI are choosing Memorystore for Redis Cluster to offload complex or tedious management tasks to Google and take advantage of the optimized infrastructure: 

“To provide an optimal user experience and reduce load on their production database, Character.AI implemented a caching layer. As we rapidly scaled our business we tried other horizontal sharding solutions, but ran into performance and up-time issues. In just six days we were able to migrate to Memorystore for Redis Cluster and it has met and exceeded all of our latency and reliability requirements. We no longer have to worry about our operational data cache.” – James Groeneveld, Research Engineer, Character.AI

AXON Networks is a leading Google Cloud Partner that specializes in real-time OAAS (Operator as a Service) platforms. AXON adopted Memorystore for Redis Cluster to supercharge performance, take advantage of scaling, and move away from self-management:

“In developing our software-defined network and orchestration system with Redis, we strategically transitioned to Google Cloud Memorystore for Redis Cluster. This move has significantly amplified our application’s performance capabilities. Previously, we utilized a self-managed open-source Redis setup. Our choice to switch to Memorystore was driven by its seamless, zero-downtime scaling features, which perfectly align with our evolving workload demands. Furthermore, Memorystore stands out for its exceptional price-to-performance ratio, offering us an unparalleled value proposition.” – Vito Sansevero, Vice President, Information Technology & Cloud Operations, AXON Networks

These new Memorystore features and capabilities give customers even more reasons to choose Memorystore for caching, vector search, real-time analytics, and much more. Continue reading to learn more about these exciting launches!

AOF and RDB Persistence 

The introduction of data persistence, with fully-managed and zero-cost AOF (Append Only File) and RDB (Redis Database) files, provides Memorystore for Redis Cluster users with new robust data durability options. Customers can now easily achieve near-zero Recovery Point Objectives (RPO) through continuous write logging or configure periodic snapshots, ensuring data resiliency against node and zonal failures. Both AOF and RDB are available at no additional cost to Memorystore for Redis Cluster customers. 

You can configure AOF to persist writes at a granularity of every write, every second, or let the OS decide (which typically results in a flush approximately every 30 seconds). RDB snapshots can be configured at 1, 6, 12, or 24 hour intervals. Once enabled, data from each cluster node is persisted to durable storage to ensure fast recovery in the case of node or zonal failures. All of this orchestration is fully managed by Google behind the scenes so you can focus on your applications instead of infrastructure choreography.
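As a sketch of what enabling persistence looks like, the commands below use `gcloud redis clusters update`. The cluster name, region, and exact flag spellings here are assumptions for illustration; verify them against the current gcloud reference before use.

```shell
# Hypothetical sketch: enable AOF persistence with per-second fsync.
# Flag names are assumed from preview docs and may differ; verify first.
gcloud redis clusters update my-cluster \
    --region=us-central1 \
    --persistence-mode=AOF \
    --aof-append-fsync=EVERY_SEC

# Alternatively, configure periodic RDB snapshots (e.g., every 12 hours):
gcloud redis clusters update my-cluster \
    --region=us-central1 \
    --persistence-mode=RDB \
    --rdb-snapshot-period=TWELVE_HOURS
```

Per-second fsync is the common middle ground: it bounds the RPO to roughly one second of writes without paying the latency cost of fsyncing on every command.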

This release is the first of many persistence features planned on Memorystore for Redis Cluster. Stay tuned for more features coming to Memorystore later in the year.

New node types

Memorystore for Redis Cluster accommodates a wide range of use cases, from microservices requiring rapid access to cached data, to executing real-time analytics for some of the world’s largest organizations. 

To provide additional flexibility, we are excited to introduce three new node types alongside the existing 13 GB nodes (redis-highmem-medium):

redis-shared-core-nano – 1.4 GB shared-core nodes, ideal for pre-production or workloads with less stringent performance requirements

redis-standard-small – 6.5 GB nodes, providing superior price-performance as well as smaller (and cheaper) cluster sizing

redis-highmem-xlarge – 58 GB nodes for the largest clusters, scaling out to 14.5 TB for a single instance

With the new node sizes, you can tailor your deployments more precisely to your workload requirements and take advantage of zero-downtime scaling to grow clusters out to 14.5 TB for a single instance. In addition to sizing flexibility, we’ve also introduced useful new configurations so you can tune max clients, max memory, max memory policies, and more. You can find pricing information on the new nodes here.
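The sizing arithmetic behind the 14.5 TB figure is simply per-node capacity multiplied by shard count. The sketch below illustrates this; the 250-shard count is our back-of-the-envelope assumption (14.5 TB ÷ 58 GB), not an official limit.

```python
# Sketch of Memorystore for Redis Cluster capacity math.
# Node sizes come from the announcement above; shard counts are
# illustrative assumptions, not documented maximums.
NODE_SIZES_GB = {
    "redis-shared-core-nano": 1.4,
    "redis-standard-small": 6.5,
    "redis-highmem-medium": 13.0,
    "redis-highmem-xlarge": 58.0,
}

def cluster_capacity_gb(node_type: str, shards: int) -> float:
    """Keyspace capacity = per-node memory x number of shards (primaries)."""
    return NODE_SIZES_GB[node_type] * shards

# 250 shards of 58 GB nodes reach the 14.5 TB figure cited above.
print(cluster_capacity_gb("redis-highmem-xlarge", 250) / 1000)  # 14.5 (TB)
```

The same helper makes it easy to compare shapes: ten redis-standard-small shards give 65 GB, while the same keyspace on redis-highmem-xlarge needs only two shards, trading shard count against blast radius per node.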

Turbocharge your gen AI applications

We also announced the general availability (GA) of vector search on Memorystore for Redis to provide ultra-fast vector search for applications where performance or user experience really matters. Since the preview launch, Memorystore for Redis customers have been taking advantage of both the ultra-fast in-memory vector search capabilities and the open-source LangChain integrations. A Memorystore for Redis instance can perform vector search at single-digit millisecond latency over tens of millions of vectors, making Memorystore a sure-fire way to speed up your gen AI workflows. And with LangChain’s surging popularity, we are pleased to offer open-source LangChain integrations for vector store, document loaders, and memory storage to make integrations easy.
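Conceptually, a vector search query ranks stored embeddings by their distance to a query embedding and returns the nearest k. The pure-Python sketch below shows the idea with brute-force cosine distance; Memorystore's actual in-memory indexes use optimized structures (and far higher dimensions), not this loop, and the document ids and embeddings are made up for illustration.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def knn(query, docs, k=2):
    """Return the ids of the k documents whose embeddings are
    closest to the query embedding (brute-force scan)."""
    ranked = sorted(docs, key=lambda d: cosine_distance(query, d["embedding"]))
    return [d["id"] for d in ranked[:k]]

# Toy corpus: in practice these would be model-generated embeddings.
docs = [
    {"id": "doc1", "embedding": [0.9, 0.1, 0.0]},
    {"id": "doc2", "embedding": [0.0, 1.0, 0.0]},
    {"id": "doc3", "embedding": [0.8, 0.2, 0.1]},
]
print(knn([1.0, 0.0, 0.0], docs))  # doc1 and doc3 are nearest
```

A brute-force scan is O(n) per query, which is why production systems build approximate indexes in memory to hit single-digit millisecond latency at tens of millions of vectors.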

Try Memorystore today

Experience how Memorystore for Redis Cluster can accelerate your applications by trying it out today. Ditch complex Redis management and focus on what’s important to your business — building impactful apps!

Get started with Memorystore.
