Applications designed for the cloud need to be able to scale. For stateless resources like application servers, this is straightforward: add compute resources behind a load balancer. For stateful resources such as databases, scaling can be more challenging.
With the release of Amazon Aurora in 2015, customers could run relational databases in an Aurora cluster consisting of one writer and up to 15 low-latency reader nodes. This enabled applications to scale reads significantly. However, as with any database that exposes multiple endpoints, developers have had to build complex application logic or rely on a proxy to distribute connections across nodes. Additionally, asynchronous readers deliver the best performance for most workloads, but developers sometimes need to run synchronous reads.
In 2020, Amazon introduced read replica write forwarding for Amazon Aurora Global Database. This functionality enabled developers to issue DML commands to reader nodes in a secondary Region of an Aurora global database with no special network configuration and minimal impact on application design. Additionally, it enabled developers to perform reads after their writes with a developer-specified level of consistency.
The functionality developed for write forwarding between the clusters of a global database is now available within a single cluster in a single Region. Applications can send write commands to any reader node in the cluster, and those commands are forwarded to the cluster's writer node.
In this post, we discuss the benefits of this new functionality.
Introducing local write forwarding
Depending on the needs of the application, local write forwarding can greatly diminish, if not eliminate, the need for a proxy or special modifications to application code. Applications can now connect to any reader node in the cluster and issue both reads and writes. Reads are served directly by the reader, and writes are automatically forwarded to the writer to run, as illustrated in the following diagram.
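As a minimal sketch (assuming local write forwarding is enabled on the cluster and the session is connected to a reader instance endpoint; the orders table is hypothetical), a single session might look like this:

    -- Connected to a reader instance endpoint
    INSERT INTO orders (customer_id, total) VALUES (42, 19.99);  -- forwarded to the writer
    SELECT SUM(total) FROM orders WHERE customer_id = 42;        -- served by this reader
    -- Note: with the default eventual consistency, the SELECT may not yet see
    -- the INSERT (see the Read consistency section below)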
Read consistency
The aurora_replica_read_consistency parameter was initially introduced with read replica write forwarding for Aurora Global Database. It carries forward to local write forwarding and lets you control read consistency by specifying eventual, session, or global. The parameter is set either in a parameter group or at the session level with a command that looks like the following:
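    -- Aurora MySQL syntax; valid values are 'eventual', 'session', and 'global'
    SET aurora_replica_read_consistency = 'session';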
By default, due to the asynchronous nature of Aurora replication, readers in an Aurora cluster operate at the eventual consistency level. This is true whether write forwarding is enabled or not.
When the writer node in an Aurora cluster performs a write, it sends redo log records to the storage volume using a 4/6 quorum. It also sends the same log records to each reader in the cluster. Each reader applies the log records to pages in its buffer cache, if applicable pages are present. Along with the log records, the writer sends the VDL (Volume Durable Log Sequence Number), which tells the reader that the records it just received have been durably stored to the storage volume and identifies the LSN (Log Sequence Number) of the last record written. The time between when those records were sent to the storage nodes and when the readers become aware of them is the replication latency between the writer and the readers, which is typically within 20 milliseconds.
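As an illustrative check (assuming Aurora MySQL, which exposes per-instance lag through the information_schema.replica_host_status view), you can observe this lag with a query like:

    -- Approximate replication lag for each instance in the cluster
    SELECT server_id, replica_lag_in_milliseconds
    FROM information_schema.replica_host_status;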
By setting the aurora_replica_read_consistency parameter on a reader node, you can specify the read consistency after a write. With eventual consistency, a read on a reader node returns immediately based on whatever the latest VDL on that reader is. For many applications, this degree of staleness is acceptable, and a fast response from the reader matters more than the very latest view of the data. For example, a weather monitoring application where the weather data is updated once per minute is well suited to eventual consistency: the data is not transactional in nature and not particularly sensitive to staleness.
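A sketch of that scenario (the weather_readings table is hypothetical) might look like the following:

    SET aurora_replica_read_consistency = 'eventual';
    -- Returns immediately with the reader's current view; a write committed
    -- moments ago may not be visible yet, which is acceptable here
    SELECT temperature, observed_at
    FROM weather_readings
    ORDER BY observed_at DESC
    LIMIT 1;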
For applications that require read-after-write consistency, but only within the current session, use the session value for aurora_replica_read_consistency. The reader then waits for the VDL associated with the latest write generated by the current session. This adds slight latency but provides a view consistent with the session's own writes. Session consistency is useful when a consistent read after a write is important but the session only needs to see its own changes, not changes made by others, or when no other sessions are modifying the same data. An example is updating a user profile, where it's important to echo back to the application the changes that were just made to that profile.
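A sketch of the profile-update scenario under session consistency (the user_profiles table is hypothetical):

    SET aurora_replica_read_consistency = 'session';
    -- The UPDATE is forwarded to the writer
    UPDATE user_profiles SET display_name = 'Jane' WHERE user_id = 7;
    -- This read waits until the session's own write is visible on the reader,
    -- so it reliably echoes back the change
    SELECT display_name FROM user_profiles WHERE user_id = 7;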
Applications that require reads to be globally consistent with all other writes in the database can use the global value for aurora_replica_read_consistency. In this mode, when the reader receives the first read in a transaction, it asks the writer for the current VDL and waits until replication catches up to that VDL. This happens only for the first read in a transaction because all transactions using local write forwarding run at the REPEATABLE READ isolation level. The synchronization adds a delay equal to the round-trip time between reader and writer (requesting the VDL) plus the replication lag (waiting for replication to catch up to that VDL). For some applications, the convenience of maintaining a single connection to the reader may outweigh the increased latency; for those that require the absolute lowest latency, reading from the writer node is still the best option. An example of an application that may require global consistency is a banking application where multiple sessions are potentially modifying a single table and it's imperative that all reads and writes operate on the most recent data.
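A sketch of the banking scenario under global consistency (the accounts table is hypothetical):

    SET aurora_replica_read_consistency = 'global';
    START TRANSACTION;
    -- The first read asks the writer for its current VDL and waits for this
    -- reader to catch up, so the result reflects all writes committed
    -- cluster-wide before the read
    SELECT balance FROM accounts WHERE account_id = 1001;
    COMMIT;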
Enhanced resource utilization
Prior to local write forwarding, users who wanted a consistent read after a write had to issue the read on the writer. The additional resources the writer spends handling reads diminish its capacity for new write traffic. By enabling reader nodes to serve consistent reads, developers can more easily achieve read scaling using up to 15 Aurora read replicas and let the writer focus exclusively on writes.
Improving application design
For read-heavy workloads that require occasional writes, write forwarding can greatly simplify application design. Because writes can now be forwarded from readers to the writer, an application can maintain a single session on a reader for both reads and writes.
To avoid overwhelming the writer, take note of the aurora_fwd_writer_max_connections_pct parameter. It specifies the percentage of the writer's connections that may be used for write forwarding. For example, if the writer can accept up to 1,000 connections and aurora_fwd_writer_max_connections_pct is set to 10, then up to 100 connections on the writer can be used for writes forwarded from readers.
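To check the current values on the writer (assuming these parameters are surfaced as MySQL system variables, as Aurora cluster parameters generally are), a query like the following may help:

    SHOW VARIABLES LIKE 'max_connections';
    SHOW VARIABLES LIKE 'aurora_fwd_writer_max_connections_pct';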
Write prioritization
Prior to local write forwarding, all connections writing to the database connected to a single writer endpoint and sent DML statements that the writer then processed. With local write forwarding, DML statements received by reader nodes are relayed to the writer, which receives and processes the same statements just as though the application had connected to it directly.
Summary
Local write forwarding enables developers to issue read and write statements to any node in a regional Aurora cluster, thereby reducing the complexity of application code or the need to deploy a proxy to differentiate between read and write queries. Additionally, developers can now make trade-offs between read consistency and speed by specifying the aurora_replica_read_consistency parameter.
Get started with Amazon Aurora and local write forwarding today!
About the author
Steve Abraham is a Principal Solutions Architect for Amazon Web Services. He works with our customers to provide guidance and technical assistance on database projects, helping them improve the value of their solutions when using AWS.