
Introducing Valkey GLIDE, an open source client library for Valkey and Redis open source

We’re excited to announce Valkey General Language Independent Driver for the Enterprise (GLIDE), an open source, permissively licensed (Apache 2.0 license) Valkey client library. Valkey is open source software, permissively licensed under the Berkeley Software Distribution (BSD) license, that provides a high-performance key-value data store supporting a variety of workloads such as caching, session stores, leaderboards, and message queues. Valkey GLIDE is one of the official client libraries for Valkey; it supports all Valkey commands, and its GitHub repository is hosted under the Valkey project. GLIDE supports Valkey 7.2 and Redis open source 6.2, 7.0, and 7.2, and will continue to support future releases of Valkey. Application programmers can use GLIDE to safely and reliably connect their applications to services that are Valkey- and Redis OSS-compatible. Valkey GLIDE has been designed and configured by AWS to embed best practices learned from over a decade of operating Redis OSS-compatible services used by hundreds of thousands of customers. Implemented as a common core with language-specific wrappers, GLIDE delivers functionality and qualities of service that are consistent across programming languages.

In this post, we discuss the benefits of Valkey GLIDE.

The end-to-end operational excellence challenge

Customers running their database and caching workloads across cloud and on-premises environments demand enterprise-grade reliability and seamless operational excellence, including optimal connectivity and communication between the client library and the service. With over 13 years of operational experience serving hundreds of thousands of customers, the in-memory databases team at AWS found that many operational issues that caused customer outages stem from client-side failures, such as incorrect error handling, faulty connection retry logic, and incorrect default configuration, which either cause or exacerbate operational issues. Moreover, several client libraries aren’t optimized for performance or aren’t fully compatible with the service. For example, some don’t support reading from replicas, which can significantly impact client-side read latency. Finally, customers running microservices or applications in various programming languages face another challenge: they must use different client libraries that differ in behavior, and they need to develop and maintain custom code and configurations for each client library separately.

Introducing Valkey GLIDE

With Valkey GLIDE, developers can build resilient Valkey- and Redis OSS-based applications, and provide a consistent client experience, which reduces the frequency of impactful operational events and simplifies remediation when they occur. Valkey GLIDE is sponsored and supported by AWS. GLIDE supports all Valkey and Redis OSS commands, is designed for reliability, and is preconfigured based on best operational practices. For example, it gracefully handles node failures, cluster topology changes, and connection reestablishment through optimized DNS configuration and connection handling logic.

To help achieve consistency in development and operations, Valkey GLIDE is implemented using a core driver framework, written in Rust, with extensions made available for supported programming languages. This design reduces the time to market of new features in multiple languages. In this release, GLIDE is available in Java and Python, with support for more programming languages in the future. Visit the GitHub repo for details on supported language versions and prerequisites.

Valkey GLIDE client library design

Valkey GLIDE was built using a core engine coded in Rust, complemented by language-specific bindings called wrappers, and a communication layer that connects it all. GLIDE’s Rust core is based on redis-rs, a leading Rust Redis OSS client library. We chose Rust for its built-in memory safety features and high-performance capabilities. The following diagram shows the high-level design.

The Rust core is responsible for communicating with Valkey or Redis OSS, covering aspects such as connection handling, topology adjustments, error management, parsing the RESP protocol, and message encapsulation. The language wrappers are designed to be lightweight and serve as language-specific interfaces for the core. The communication layer provides seamless transmission of requests and responses between the core and the wrappers. This design provides a uniform interface and consistent client behavior across programming languages, which is important if you have applications written in various languages that connect to Valkey or Redis OSS: developers get a similar client experience in each.

Feature highlights

Valkey GLIDE is compatible with, and supports, all Valkey and Redis OSS commands. GLIDE also supports advanced features implemented according to best practices and industry standards. The following are some examples:

Improved availability with cluster topology change discovery – The Valkey cluster topology can change over time: new nodes can be added or removed, and the primary node owning a specific slot may change. Valkey GLIDE uses best practices to automatically rediscover the cluster topology whenever Valkey indicates a change in slot ownership. GLIDE uses a majority-rule algorithm to determine the new cluster topology by querying several nodes, which avoids a storm of CLUSTER commands (which can increase latency), reduces potential downtime, and avoids split-brain errors. In addition, GLIDE runs periodic checks to proactively identify topology changes. These features help GLIDE stay in sync with the cluster topology.
Reduced latency with read from replica – Reading from replicas in databases allows for improved read performance, scalability, high availability, and offloading of read workloads from the primary instance. By default, Valkey GLIDE directs read commands to the primary node that owns a specific slot, to avoid reading potentially stale data, and this is aligned with most client libraries. For applications that prioritize read throughput and can tolerate eventual consistency, GLIDE provides the option to route reads to replica nodes. GLIDE supports the following read from replica settings, so you can choose the one that fits your specific use case:

PRIMARY (default) – Always read from primary, in order to get the freshest data.
PREFER_REPLICA – Spread requests between all replicas in a round-robin manner. If no replica is available, route the requests to the primary.
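The PREFER_REPLICA behavior above can be illustrated with a small routing sketch. The class and method names here are hypothetical, invented for the example; GLIDE's actual read routing lives in its Rust core.

```python
from itertools import cycle

class ReplicaRouter:
    """Round-robin reads across replicas, falling back to the primary."""

    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        # cycle() yields replicas in order, repeating forever.
        self._replicas = cycle(replicas) if replicas else None

    def route_read(self) -> str:
        # No replica available: route the read to the primary instead.
        if self._replicas is None:
            return self.primary
        return next(self._replicas)

router = ReplicaRouter("primary:6379", ["replica-1:6379", "replica-2:6379"])
assert [router.route_read() for _ in range(4)] == [
    "replica-1:6379", "replica-2:6379", "replica-1:6379", "replica-2:6379"
]
# With no replicas, reads fall back to the primary.
assert ReplicaRouter("primary:6379", []).route_read() == "primary:6379"
```

The trade-off is the one stated above: replicas replicate asynchronously, so a read routed this way may return slightly stale data in exchange for higher read throughput.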

Automatic pub/sub resubscription with stateful connection – Pub/sub channels in Valkey GLIDE are stateful. On disconnect, or in the event of a topology update, such as scaling in or out, GLIDE will automatically resubscribe the connections to the new node. The advantage is that the application code is simplified, and doesn’t have to take care of resubscribing to new nodes during reconnects.

In addition to these features, GLIDE supports Lua scripts, async APIs, transactions, and connection handling best practices, such as timeouts and exponential back-off.

Getting started

Valkey GLIDE is available for Java and Python, and can be downloaded using the standard package managers.

For Python, use the following command to install GLIDE using pip:

$ pip install valkey-glide

To help you get started, the following code demonstrates how to integrate GLIDE into your Python application:

import asyncio

from glide import GlideClusterClient, GlideClusterClientConfiguration, NodeAddress

async def main():
    # Replace with your cluster endpoint
    addresses = [NodeAddress("", 6379)]
    config = GlideClusterClientConfiguration(addresses=addresses)
    client = await GlideClusterClient.create(config)
    await client.set("test_key", "Hello, Valkey GLIDE!")
    value = await client.get("test_key")
    print(value)  # Output: "Hello, Valkey GLIDE!"

asyncio.run(main())

For Java, to install Valkey GLIDE using Maven, follow the steps described in the GitHub repo.

The following is the same code example in Java:

/** Copyright Valkey GLIDE Project Contributors - SPDX Identifier: Apache-2.0 */
package glide.examples;

import glide.api.GlideClusterClient;
import glide.api.models.configuration.GlideClusterClientConfiguration;
import glide.api.models.configuration.NodeAddress;
import java.util.concurrent.ExecutionException;

public class ExamplesApp {

    // main application entry point
    public static void main(String[] args) {
        // Replace with your cluster endpoint
        String host = "";
        Integer port = 6379;

        GlideClusterClientConfiguration config =
                GlideClusterClientConfiguration.builder()
                        .address(NodeAddress.builder().host(host).port(port).build())
                        .build();

        try {
            GlideClusterClient client = GlideClusterClient.createClient(config).get();
            client.set("test_key", "Hello, Valkey GLIDE!").get();
            var value = client.get("test_key").get();
            System.out.println(value); // Output: "Hello, Valkey GLIDE!"
        } catch (ExecutionException | InterruptedException e) {
            System.out.println("GLIDE example failed with an exception: " + e);
        }
    }
}


Conclusion

Customers running their database and caching workloads across cloud and on-premises environments demand enterprise-grade reliability and operational excellence. Valkey GLIDE is designed to provide a client experience that helps meet these objectives. It is supported by AWS and comes preconfigured with best practices. In this release, GLIDE is available for Java and Python, with support for additional languages actively under development. Valkey GLIDE is open source, permissively licensed (Apache 2.0 license), and can be used with any Valkey- or Redis OSS-compatible distribution supporting versions 6.2, 7.0, and 7.2, including Amazon ElastiCache and Amazon MemoryDB. You can get started by downloading it from the major open source package managers. Learn more about it and submit contributions on the Valkey GLIDE GitHub repository.

About the authors

Asaf Porat Stoler is a software development manager on the Amazon In-Memory Databases team. Asaf has over 20 years of experience in storage systems, data reduction, and in-memory databases. Currently, he is focused on Amazon ElastiCache performance and on Valkey GLIDE. Outside of work, he enjoys sports, hiking, and spending time with his family.

Mickey Hoter is a principal product manager on the Amazon In-Memory Databases team. Mickey has over 20 years of experience in building software products – as a developer, team lead, group manager and product manager. Prior to joining AWS, Mickey worked for large companies such as SAP and Informatica, and startups like Zend and ClickTale. Off work, he spends most of his time in nature, where most of his hobbies are.
