Cloud Bigtable schema tips: Key salting

Cloud Bigtable is a low-latency, high-throughput NoSQL database. With NoSQL, you want to design a schema that can scale and adapt as your business grows. When working with large real-world datasets, some access patterns will be outliers with significantly more activity, and those require extra planning. In this article, we are going to learn how to optimize a Bigtable schema to improve performance for highly active rows in an otherwise well-balanced schema.

Row key design refresher

Bigtable performs best when throughput is evenly distributed across the entire row key space and can spread across all the nodes. Bigtable rows are physically stored in tablets containing groups of contiguous row keys, and each tablet is distributed to the available nodes. If the rows on one tablet receive a disproportionately large share of requests compared to other tablets, performance can suffer.

Typically, row keys are designed to be optimized for particular queries. For example, to center queries around individual users, you may put a user id at the beginning, like so: user_id-xxx-yyy. When some users are significantly more active than others, as is the case for celebrity accounts, writes and reads on their rows could cause hotspotting by putting too much pressure on specific nodes.

If we can distribute a logical row by physically dividing it among multiple tablets, the rows can be balanced across the available nodes, reducing the hotspot.

Key prefix salting

A well-distributed user id would typically work as a row key prefix, so we can use this as the starting point for our row key design:
user_id-xxx-yyy

One strategy to distribute this unbalanced throughput across all Bigtable nodes is to prepend an additional value to the row key design: 
01-user_id-xxx-yyy
02-user_id-xxx-yyy

This example has two physical rows corresponding to one logical row, which divides the throughput in half. This will distribute all the rows for a particular user id across the rest of the keyspace. Since their prefixes differ, they should be able to live on different tablets and are more likely to be hosted on multiple nodes. Note that it is possible for both prefixes to be on the same node or for one prefix to be split across multiple nodes, since this setup’s goal is to provide more options to the load balancing mechanism.

Choosing a prefix

It’s important to choose a prefix that doesn’t add much complexity to requests. If we used random prefixes, each get request would turn into multiple get requests to locate the correct row. If the prefix is deterministic from the row key, then single-row read and write requests need only minimal changes.

If we would like N divisions, we can take the hash of the entire existing row key, modulo N. We will also refer to N as our salt range.

// Math.floorMod keeps the prefix non-negative even when hashCode() is negative.
int prefix = Math.floorMod(rowKey.hashCode(), saltRange);
String saltedRowKey = prefix + "-" + rowKey;

A point lookup and write will still work as the physical key can be computed from the logical key. Salting won’t eliminate the hotspots, but it spreads them into N hotspots of strength 1/N. These less severe hotspots can be more easily processed by individual nodes. 
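The snippets later in this article call a getSaltedRowKey helper. A minimal sketch of it, consistent with the logic above, might look like this:

private static String getSaltedRowKey(String rowKey, int saltRange) {
  // Math.floorMod keeps the prefix in [0, saltRange) even for negative hash codes.
  int prefix = Math.floorMod(rowKey.hashCode(), saltRange);
  return prefix + "-" + rowKey;
}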

Prefix options

If you have common scans over prefixes that you would like to keep intact, you can also hash just part of the row key rather than the entire row key.

For a row key of the format user_id-site-timestamp, you might want efficient scans over user_id and site combinations. Here, we can leave off the timestamp when creating the hash, so the time-series data for those combinations will always be grouped together.

// Hash only the base of the key (user_id-site) so those rows share a salt.
String rowKeyBase = rowKey.substring(0, rowKey.lastIndexOf("-"));
int prefix = Math.floorMod(rowKeyBase.hashCode(), saltRange);
String saltedRowKey = prefix + "-" + rowKey;

Keys that share a frequently scanned logical prefix receive the same salt, so they can still be scanned efficiently with a single prefix scan.
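For instance, since the salt for a given user_id and site combination is deterministic, reading all rows for that combination still takes only one request. A sketch, reusing the partial-key hashing above (the userId and site variables are illustrative):

// The salt is derived from the user_id-site portion, so one prefix scan suffices.
String userSitePrefix = userId + "-" + site;
int salt = Math.floorMod(userSitePrefix.hashCode(), SALT_RANGE);
Query query = Query.create(tableId).prefix(salt + "-" + userSitePrefix + "-");
for (Row row : dataClient.readRows(query)) {
  // Rows for this user_id-site combination arrive together, ordered by timestamp.
}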

This strategy is less resistant to hotspots, though: the same problem that salting is supposed to mitigate can come up again if an individual user_id and site combination gets significant access.

Implementation

To implement this in your code, you’ll need to change the areas where you are making requests to Bigtable data. You can view the full source code example on GitHub.

Writing

Using this new technique, if you want to write data, follow these steps:

Take the row key you intend to write to

Compute the prefix using your hash function

Construct the salted row key by concatenating the prefix and row key

Then use the salted row key for writing your data

You will need to integrate this flow anywhere you are writing data.

In Java, it would look something like this:

String saltedRowKey = getSaltedRowKey(rowKey, SALT_RANGE);
RowMutation rowMutation = RowMutation.create(tableId, saltedRowKey)
    .setCell(family, qualifier, value); // placeholder family, qualifier, and value
dataClient.mutateRow(rowMutation);

Reading

Gets

To read individual rows in a table with salted keys, follow the same initial steps as when writing the data:

Take the row key you intend to read

Compute the prefix using your hash function

Construct the salted row key by concatenating the prefix and row key

Then use the salted row key for reading your data

Since the physical row key is computed deterministically from the logical row key, only one read needs to be issued for each logical key.

In Java, it would look something like this:

Row row = dataClient.readRow(tableId, getSaltedRowKey(rowKey, SALT_RANGE));

Scans

You can follow these steps for each scan:

Take the row key prefix you intend to scan

For each salt value from 0 to N-1, construct a salted prefix by concatenating the salt value and the row key prefix

Issue a prefix scan for each salted prefix, in parallel

Combine the results of all the scans

Let’s look at an example. Say you wanted to get all the data for a user and one subcategory; you would do a prefix scan on “user_id-xxx-”. If you’re working with salted rows, you would need one prefix scan per salt value. If our salt range is 4, then we would do 4 prefix scans:

0-user_id-xxx-
1-user_id-xxx-
2-user_id-xxx-
3-user_id-xxx-

For the best performance you would want to issue each scan in parallel rather than sending all the prefixes into one request. Since the requests are done in parallel, the rows may not be returned in sorted order. If row order is important you will have to do some additional sorting once the results are received. 

Because the physical row keys are no longer a contiguous range, these scans may consume more Bigtable CPU, which is an important consideration when choosing a salt range for a scan-heavy workload. Large scans, however, may be more performant since more resources can serve the request in parallel.

In Java, it would look something like this:

List<Query> queries = new ArrayList<>();
for (int i = 0; i < SALT_RANGE; i++) {
  queries.add(Query.create(tableId).prefix(i + "-" + prefix));
}

List<ApiFuture<List<Row>>> futures = new ArrayList<>();
for (Query q : queries) {
  futures.add(dataClient.readRowsCallable().all().futureCall(q));
}

List<Row> rows = new ArrayList<>();
for (ApiFuture<List<Row>> future : futures) {
  rows.addAll(future.get());
}

for (Row row : rows) {
  // Access your row data here.
}
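If row order matters, one way to restore it is to sort the combined results by their logical key. A sketch, assuming UTF-8 string keys where the salt is everything before the first “-” (uses java.util.Comparator):

// Sort by the logical key, i.e. the row key with the salt prefix stripped.
rows.sort(Comparator.comparing((Row row) -> {
  String key = row.getKey().toStringUtf8();
  return key.substring(key.indexOf('-') + 1);
}));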

Forward-looking migrations

It can be difficult to make a large change to existing datasets, so one way to migrate is to apply the salt only going forward. If you have timestamps at the end of the key, change the code to salt row keys past a certain fixed point in time, and use unsalted keys for old/existing data.
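A minimal sketch of that cutover, assuming the timestamp is the last “-”-separated segment of the key and a hypothetical MIGRATION_START cutoff:

// Hypothetical cutover: salt only keys whose timestamp falls after the migration start.
static String maybeSaltRowKey(String rowKey, int saltRange) {
  long timestamp = Long.parseLong(rowKey.substring(rowKey.lastIndexOf('-') + 1));
  return timestamp >= MIGRATION_START
      ? getSaltedRowKey(rowKey, saltRange)
      : rowKey; // old/existing keys stay unsalted
}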

Next steps

Read more about reading and writing data to Bigtable

Learn more about Bigtable performance

See if alternative solutions like adding a cache layer to Bigtable could help
