
Store and analyze time-series data with multi-measure records, magnetic storage writes, and scheduled queries in Amazon Timestream

Time series is one of the most common data formats, particularly when collecting information from applications, infrastructure, and IoT devices. Amazon Timestream is a serverless time series database that scales with your data and makes it easy to store and analyze your events at a much lower cost than relational databases. During AWS re:Invent 2021, Timestream announced new updates that make it easier and cheaper to implement many use cases:

Multi-measure records – You can now include more than one measure in a Timestream record. This makes it easy to load your data and reduces the number of records to write and query for many use cases. For example, you can use multi-measure records when you’re tracking more than one metric emitted from the same source at the same time. Multi-measure records also simplify migrating data from relational databases to Timestream because the database schema needs fewer or no changes.
Magnetic storage writes – Timestream manages the lifecycle of your data with storage tiers: a memory store for recent data and a magnetic store for historical data. The transfer of data from the memory store to the magnetic store is automated and based on configurable policies. Previously, data could only be loaded to the memory store. Now, you can optimize storage costs by sending late-arriving data to the magnetic store instead of keeping a large memory store.
Scheduled queries – You can point dashboards and reports to a table containing the result of a scheduled query instead of querying the source table directly. For example, if you’re running frequent aggregates in your queries, such as sums and averages over a large dataset, you can now materialize these results into another table for lower latency and cost.
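To make the difference between the two data models concrete, here is a minimal sketch in plain Python (no AWS calls; the metric names and values are illustrative) of the same observation expressed as four single-measure records versus one multi-measure record:

```python
# One observation: four metrics emitted by the same host at the same time.
timestamp = "1636986409000"
metrics = {"cpu": 35.2, "memory": 62.7, "swap": 2.1, "disk": 80.5}

# Single-measure model: one record per metric, four records per timestamp.
single_measure_records = [
    {
        "Time": timestamp,
        "MeasureName": name,
        "MeasureValue": str(value),
        "MeasureValueType": "DOUBLE",
    }
    for name, value in metrics.items()
]

# Multi-measure model: all four metrics in a single record.
multi_measure_record = {
    "Time": timestamp,
    "MeasureName": "utilization",
    "MeasureValueType": "MULTI",
    "MeasureValues": [
        {"Name": name, "Value": str(value), "Type": "DOUBLE"}
        for name, value in metrics.items()
    ],
}

print(len(single_measure_records))                  # 4 records to write
print(len(multi_measure_record["MeasureValues"]))   # 1 record carrying 4 values
```

The same information is stored either way; the multi-measure model simply writes and stores fewer records.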

Let’s see how to use these new capabilities by applying them to a use case. In this post, we show how to use multi-measure records to simplify the data format of an existing database, enable magnetic storage writes to manage late-arriving data, and implement scheduled queries to improve performance and reduce costs for dashboards and reports.

Simplify the data format with multi-measure records

In the announcement of the general availability of Timestream, we built a simple IT monitoring application collecting CPU, memory, swap, and disk usage from a server. These four measurements were each written into their own record using a data model we call single-measure records. Now, with multi-measure records, these four measurements can be ingested, stored, and queried together because they’re emitted at the same time.

While collecting data, each server is identified by a host name and a location expressed as a country and a city. For this use case, the dimensions used to categorize data are the same for all records:

country (VARCHAR)
city (VARCHAR)
hostname (VARCHAR)

There are four measures to collect, all expressed as a percentage of utilization:

cpu (DOUBLE)
memory (DOUBLE)
swap (DOUBLE)
disk (DOUBLE)
To create a new Timestream database and table, we use the following AWS CloudFormation template:

AWSTemplateFormatVersion: '2010-09-09'
Description: Create a Timestream database and a table
Resources:
  MyDatabase:
    Type: AWS::Timestream::Database
  MyTable:
    Type: AWS::Timestream::Table
    Properties:
      DatabaseName: !Ref MyDatabase
      RetentionProperties:
        MemoryStoreRetentionPeriodInHours: "24"
        MagneticStoreRetentionPeriodInDays: "7"
Outputs:
  DatabaseName:
    Description: Timestream Database Name
    Value: !Ref MyDatabase
  TableName:
    Description: Timestream Table Name
    Value: !GetAtt MyTable.Name

We create the stack with the AWS Command Line Interface (AWS CLI):

aws cloudformation create-stack \
    --stack-name devops-time-series \
    --template-body file://template.yaml

When the stack is ready, we update the application to load data using multi-measure records. Previously, for each timestamp, we had to insert four records, one for each measure. Now, we can collect all measures at the same time and store them in one record.

The following is the Python code for the updated application:

import time
import boto3
import psutil
import os

from botocore.config import Config

DATABASE_NAME = os.environ['DATABASE_NAME']
TABLE_NAME = os.environ['TABLE_NAME']

COUNTRY = "UK"
CITY = "London"
HOSTNAME = "MyHostname"  # You can make it dynamic using socket.gethostname()

INTERVAL = 1  # Seconds

def prepare_common_attributes():
    common_attributes = {
        'Dimensions': [
            {'Name': 'country', 'Value': COUNTRY},
            {'Name': 'city', 'Value': CITY},
            {'Name': 'hostname', 'Value': HOSTNAME}
        ],
        'MeasureName': 'utilization',
        'MeasureValueType': 'MULTI'
    }
    return common_attributes

def prepare_record(current_time):
    record = {
        'Time': str(current_time),
        'MeasureValues': []
    }
    return record

def prepare_measure(measure_name, measure_value):
    measure = {
        'Name': measure_name,
        'Value': str(measure_value),
        'Type': 'DOUBLE'
    }
    return measure

def write_records(records, common_attributes):
    try:
        result = write_client.write_records(DatabaseName=DATABASE_NAME,
                                            TableName=TABLE_NAME,
                                            CommonAttributes=common_attributes,
                                            Records=records)
        status = result['ResponseMetadata']['HTTPStatusCode']
        print("Processed %d records. WriteRecords HTTPStatusCode: %s" %
              (len(records), status))
    except Exception as err:
        print("Error:", err)

if __name__ == '__main__':

    print("writing data to database {} table {}".format(
        DATABASE_NAME, TABLE_NAME))

    session = boto3.Session()
    write_client = session.client('timestream-write', config=Config(
        read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10}))

    common_attributes = prepare_common_attributes()

    records = []

    while True:

        current_time = int(time.time() * 1000)
        cpu_utilization = psutil.cpu_percent()
        memory_utilization = psutil.virtual_memory().percent
        swap_utilization = psutil.swap_memory().percent
        disk_utilization = psutil.disk_usage('/').percent

        record = prepare_record(current_time)
        record['MeasureValues'].append(prepare_measure('cpu', cpu_utilization))
        record['MeasureValues'].append(prepare_measure('memory', memory_utilization))
        record['MeasureValues'].append(prepare_measure('swap', swap_utilization))
        record['MeasureValues'].append(prepare_measure('disk', disk_utilization))

        records.append(record)

        print("records {} - cpu {} - memory {} - swap {} - disk {}".format(
            len(records), cpu_utilization, memory_utilization,
            swap_utilization, disk_utilization))

        if len(records) == 100:
            write_records(records, common_attributes)
            records = []

        time.sleep(INTERVAL)

In the code, we set a measure name for the whole record (we use utilization) and use the MULTI measure value type. Then we can add multiple measure values, each with its own name, value, and type.

To run the application, we use the following script, which uses the AWS CLI to fill the DATABASE_NAME and TABLE_NAME environment variables with the outputs of the CloudFormation stack:

export TABLE_NAME=`aws cloudformation describe-stacks --stack-name devops-time-series --query 'Stacks[0].Outputs' --output text | grep TableName | cut -f 3`
export DATABASE_NAME=`aws cloudformation describe-stacks --stack-name devops-time-series --query 'Stacks[0].Outputs' --output text | grep DatabaseName | cut -f 3`

We start the script on a laptop to collect some meaningful usage data. Measures are written to the Timestream table in batches of 100 records. Each record contains a single timestamp and four measures (CPU, memory, disk, swap):

writing data to database MyDatabase-0XM3OgacnHQR table MyTable-XAX9sYiaV5Ig
records 1 - cpu 40.0 - memory 67.6 - swap 62.7 - disk 26.2
records 2 - cpu 14.4 - memory 67.6 - swap 62.7 - disk 26.2
records 3 - cpu 13.1 - memory 67.6 - swap 62.7 - disk 26.2
records 4 - cpu 12.1 - memory 67.7 - swap 62.7 - disk 26.2
records 5 - cpu 10.0 - memory 66.9 - swap 62.7 - disk 26.2

records 95 - cpu 9.2 - memory 66.2 - swap 62.7 - disk 26.2
records 96 - cpu 10.3 - memory 66.2 - swap 62.7 - disk 26.2
records 97 - cpu 11.4 - memory 66.2 - swap 62.7 - disk 26.2
records 98 - cpu 11.7 - memory 66.2 - swap 62.7 - disk 26.2
records 99 - cpu 13.7 - memory 66.2 - swap 62.7 - disk 26.2
records 100 - cpu 9.3 - memory 66.2 - swap 62.7 - disk 26.2
Processed 100 records. WriteRecords HTTPStatusCode: 200

On the Timestream console, we run a query to see the last few records inserted in the table.

The following screenshot shows the records using the new multi-measure format. As expected, each record has the three dimensions used to categorize the data (hostname, city, and country), a time, a measure_name equal to utilization, and four measures with their values: cpu, memory, disk, and swap.

With multi-measure records, we reduce the number of records written and stored in the table. Previously, for each timestamp, we were writing four records, one for each measure. Now, we write the four measures emitted at the same time in one multi-measure record. We also use less storage, but we have the same level of information as before.
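The reduction is easy to quantify. Assuming the 1-second collection interval used above, a quick back-of-the-envelope calculation shows how many records each model writes per day:

```python
SECONDS_PER_DAY = 24 * 60 * 60
METRICS_PER_OBSERVATION = 4  # cpu, memory, swap, disk

# Single-measure model: one record per metric per second.
single_measure_records_per_day = SECONDS_PER_DAY * METRICS_PER_OBSERVATION

# Multi-measure model: one record per second, carrying all four metrics.
multi_measure_records_per_day = SECONDS_PER_DAY

print(single_measure_records_per_day)  # 345600
print(multi_measure_records_per_day)   # 86400
```

One quarter of the records, with the same level of information.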

Running queries on multi-measure records is easier and more efficient because less data needs to be processed. For example, we don’t need to use joins to return multiple measures with the same query. This also reduces our overall costs.

Manage late-arriving data and data corrections with magnetic storage writes

When working with time series, you have to manage late-arriving data, which is data with a timestamp in the past. There can be many reasons why you receive late-arriving data, and they’re often outside of your control. For example, an application or a device might be stopped or disconnected from the network for some time, and then send all its buffered data at once when it comes back online.

Previously with Timestream, you had to increase the memory store retention of your table to accept late arrivals. That’s because data outside of the memory retention period was rejected when you tried to write it into a table.

Now, you can send late-arriving data to the magnetic store by simply enabling a property on your tables. You use the same WriteRecords API call to write data to Timestream. Depending on the timestamp on the data point and the table’s configured memory store retention, the data is automatically routed to the appropriate store.

With this feature, you can now right-size your memory store data retention period to match your data ingestion throughput and the query requirements of your application. Magnetic store writes are applied asynchronously, so the data written cannot be queried immediately. However, the data is durable for any write request that the system has acknowledged.
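Conceptually, the routing works like this: a record whose timestamp falls within the memory store retention window goes to the memory store; an older record goes to the magnetic store when magnetic storage writes are enabled, and is rejected otherwise. The following is a simplified local sketch of that decision, not actual Timestream code (the real service applies additional checks, such as rejecting timestamps too far in the future):

```python
from datetime import datetime, timedelta, timezone

def route_record(record_time: datetime,
                 now: datetime,
                 memory_retention_hours: int = 24,
                 magnetic_writes_enabled: bool = False) -> str:
    """Mimic where a write lands, based on the record timestamp."""
    memory_cutoff = now - timedelta(hours=memory_retention_hours)
    if record_time >= memory_cutoff:
        return "memory"           # within the memory store retention window
    if magnetic_writes_enabled:
        return "magnetic"         # late arrival routed to the magnetic store
    return "rejected"             # previous behavior without magnetic writes

now = datetime(2021, 12, 1, 12, 0, tzinfo=timezone.utc)
fresh = now - timedelta(minutes=5)
late = now - timedelta(days=3)

print(route_record(fresh, now))                               # memory
print(route_record(late, now))                                # rejected
print(route_record(late, now, magnetic_writes_enabled=True))  # magnetic
```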

For the IT monitoring application, when network issues occur in remote locations, data can be sent that is a few days old. Memory retention for the table is currently 24 hours (see the MemoryStoreRetentionPeriodInHours property in the CloudFormation template used to create the table).

To store data that is older than 1 day, on the Timestream console, we select our table and choose Edit. Under Data Retention options, there is the new Magnetic Storage Writes section. We enable the setting and add a location on Amazon Simple Storage Service (Amazon S3) for the error logs. It’s important to add a location for the logs because magnetic storage writes are asynchronous and you don’t have another way to get feedback if some records can’t be written (for example, because there is a version mismatch).

Improve performance and reduce costs for dashboards and reports with scheduled queries

To further optimize costs, we can use a scheduled query to prepare the data we use most frequently for reports and dashboards. For example, we previously created a Grafana dashboard for the IT monitoring application. We connected the Timestream table as a data source and used the following query to collect data for the dashboard ($__database and $__table are macros defined by the data source):

SELECT country, hostname, city, avg(cpu) AS avg_cpu,
avg(memory) AS avg_memory, avg(disk) AS avg_disk,
avg(swap) AS avg_swap, bin(time, 10m) AS binned_time
FROM $__database.$__table
GROUP BY country, hostname, city, bin(time, 10m)
ORDER BY binned_time
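In this query, bin(time, 10m) truncates each timestamp down to the start of its 10-minute interval, and the GROUP BY then averages all the samples that fall in that interval. The same logic can be sketched in plain Python over hypothetical sample data:

```python
from collections import defaultdict

BIN_MS = 10 * 60 * 1000  # 10 minutes in milliseconds

def bin_time(timestamp_ms: int) -> int:
    """Equivalent of bin(time, 10m): truncate to the 10-minute boundary."""
    return timestamp_ms - (timestamp_ms % BIN_MS)

# Hypothetical samples: (timestamp_ms, cpu utilization)
samples = [(0, 10.0), (1_000, 20.0), (599_000, 30.0),  # first 10-minute bin
           (600_000, 50.0), (601_000, 70.0)]           # second bin

# Accumulate (sum, count) per bin, then compute the averages.
sums = defaultdict(lambda: [0.0, 0])
for ts, cpu in samples:
    b = bin_time(ts)
    sums[b][0] += cpu
    sums[b][1] += 1

averages = {b: total / count for b, (total, count) in sums.items()}
print(averages)  # {0: 20.0, 600000: 60.0}
```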

The following screenshot shows our output:

To understand how the query is run by Timestream, we inspect the query metadata.

In the query metadata, we see that about 4 MB has been scanned by Timestream (CumulativeBytesScanned) to populate the dashboard, with a latency of 400 milliseconds (fetchTime). The amount of data scanned affects both the cost and the performance of the query.

To optimize costs and improve performance, we can configure a scheduled query to populate another table with pre-aggregated data. Then we can point dashboards and reports to this new table instead of querying the considerably larger source table.

First, we create a table to store the result of the scheduled query. To do so, we add a new resource (here named MyAggregatedTable) to the CloudFormation template:

  MyAggregatedTable:
    Type: AWS::Timestream::Table
    Properties:
      DatabaseName: !Ref MyDatabase
      RetentionProperties:
        MemoryStoreRetentionPeriodInHours: "24"
        MagneticStoreRetentionPeriodInDays: "7"

Then, using the AWS CLI, we update the CloudFormation stack:

aws cloudformation update-stack --stack-name devops-time-series --template-body file://template.yaml

On the Timestream console, we create a scheduled query. We select the new table as the destination and enter a name for the scheduled query. We can also create scheduled queries using CloudFormation, but following the process in the console makes it easier to understand how they work.

In the Query statement section, we use the following SQL query to aggregate data in 10-minute intervals. The source table has data with 1-second intervals.

SELECT country, hostname, city,
avg(cpu) AS avg_cpu,
avg(memory) AS avg_memory,
avg(disk) AS avg_disk,
avg(swap) AS avg_swap,
bin(time, 10m) AS binned_timestamp
FROM "MyDatabase-0XM3OgacnHQR"."MyTable-XAX9sYiaV5Ig"
GROUP BY country, hostname, city, bin(time, 10m)

We check that the query is correct by choosing Validate. Then, in the Visual Editor, we leave the default measure name for the multi-measure record and set the attribute type of country, host name, and city to Dimension.

We choose to run the query every 10 minutes. Depending on your use case, you can pick a shorter or longer interval, or use a CRON expression to avoid running the query outside of your business hours.

In the security settings, we choose an AWS Identity and Access Management (IAM) role that we prepared earlier.

The role has three policies attached:

Timestream permissions to read from the source table and write to the destination table (the action list below is representative; scope the resources to your own table ARNs):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "timestream:Select",
                "timestream:DescribeTable"
            ],
            "Resource": "arn:aws:timestream:eu-west-1:123412341234:database/MyDatabase-0XM3OgacnHQR/table/MyTable-XAX9sYiaV5Ig"
        },
        {
            "Effect": "Allow",
            "Action": [
                "timestream:WriteRecords"
            ],
            "Resource": "arn:aws:timestream:eu-west-1:123412341234:database/MyDatabase-0XM3OgacnHQR/table/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "timestream:DescribeEndpoints"
            ],
            "Resource": "*"
        }
    ]
}

Amazon SNS permissions to publish to the selected SNS topic:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sns:Publish",
            "Resource": "arn:aws:sns:eu-west-1:123412341234:timestream-scheduled-query"
        }
    ]
}

Amazon S3 permissions to write logs to an S3 bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}

Finally, the following trust relationship allows Timestream to assume the role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "timestream.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

As the policies above show, in addition to Timestream access, the role lets the scheduled query publish to an Amazon Simple Notification Service (Amazon SNS) topic to send a notification when a query runs, and write logs to an S3 bucket.

We use the default AWS Key Management Service (AWS KMS) key to encrypt the query statement. Under SNS notifications, we select the SNS topic for the notifications. These notifications are important because they tell you if the scheduled query ran correctly or if an error occurred.

We configure the error report logging to store error logs in an S3 bucket.

Optionally, we can add tags to search and filter resources or track AWS costs. Finally, we review all the configurations and create the scheduled query.

After the scheduled query runs at least once, we create a new dashboard pointing to the aggregated data table. We use the same query as before, with the measure fields renamed; for example, cpu is now avg_cpu, and so on. The result of the query is exactly the same as before.

To see the benefits of using the aggregated table, we review the query metadata again.

About 9 KB has been scanned for the query used to draw the same chart, down from 4 MB when running the query on the source table. The fetch time has also improved, down to 250 milliseconds from 400 milliseconds. As the amount of data stored in the database grows, we will start to see more latency improvements. In this way, the dashboard is faster to render and cheaper to run. For more usage patterns with end-to-end examples of how to use scheduled queries, see Scheduled Query Patterns and Examples.
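The savings follow from the aggregation ratio: each 10-minute bin replaces up to 600 one-second samples, so dashboard queries read far fewer rows. Taking the two scanned-bytes figures from the metadata above at face value, the reduction works out to roughly two orders of magnitude and a half:

```python
source_scanned = 4 * 1024 * 1024  # ~4 MB scanned on the source table
aggregated_scanned = 9 * 1024     # ~9 KB scanned on the aggregated table

# Since Timestream query pricing is based on bytes scanned,
# this ratio translates directly into cost savings per dashboard refresh.
reduction = source_scanned / aggregated_scanned
print(round(reduction))  # roughly 455
```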

Clean up

To avoid future costs, we remove all the resources created for this post. First, we delete the CloudFormation stack with the following command. This deletes the database and the two tables.

aws cloudformation delete-stack --stack-name devops-time-series

On the Timestream console, we select the scheduled query and choose Delete on the Actions menu. You can also create scheduled queries using CloudFormation to manage all your Timestream resources as code.


Conclusion

Multi-measure records, magnetic storage writes, and scheduled queries make it easier and cheaper to store and analyze time series data for your applications, and are available in all AWS Regions where Timestream is offered.

There are no additional costs for using these new features—in fact, they can all help you optimize your costs. Running scheduled queries means you pay the cost to run the query and then materialize the results. However, if you’re repeatedly loading your dashboards, the cost savings from querying the precomputed results are usually more significant than the cost of running the queries.

Timestream provides the flexibility to model your data in different ways to suit your application’s requirements. For patterns and guidelines for you to optimize your costs and performance, we added a new Data Modeling section in the Timestream documentation. Let us know in the comments or on Twitter what you’re going to build with these new features!

About the Authors

Danilo Poccia works with startups and companies of any size to support their innovation. In his role as Chief Evangelist (EMEA) at Amazon Web Services, he leverages his experience to help people bring their ideas to life, focusing on serverless architectures and event-driven programming, and on the technical and business impact of machine learning and edge computing. He is the author of AWS Lambda in Action from Manning.

Sreenath Gotur is a Sr. Timestream Specialist Solutions Architect at AWS, based out of Charlotte, NC. Prior to joining AWS, he headed enterprise data management, enterprise data services, and the data innovation portfolio at a large financial firm. Sreenath has a special interest in Data & Analytics, Document DB, and Graph DB. In his spare time, he enjoys spending quality time with his family.

AWS Database Blog


