We are excited to invite customers to join a preview of new SQL features that simplify time series analysis in BigQuery. These features make it easier to write queries that perform two of the most common time series operations: windowing and gap filling. We are also introducing the RANGE data type and supporting functions. The RANGE type represents a continuous window of time and is useful for recording the time-based state of a value. Together, these features simplify time series queries over your data in BigQuery.
Time-series analytics and data warehouses
Time-series data is an important and growing class of data for many enterprises. Operational data like debug logs, metrics, and event logs are inherently time-oriented and full of valuable information. Additionally, connected devices common in manufacturing, scientific research, and other domains continually produce important metrics that need to be analyzed. All of this data follows the same pattern: values are reported at a specific time, and deriving meaning from them requires understanding how they change over time.
Users with time-series analytics problems have traditionally relied on domain-specific solutions that result in harder-to-access data silos. These solutions often have their own query languages, which require additional expertise to use. Organizations looking to analyze their time-series data alongside the rest of their data must invest in pipelines that first clean and normalize the data in one store before exporting it to their data warehouse. This slows down time-to-insight and potentially hides data from the users who could derive value from it. Adding more time-series analytics features to BigQuery SQL democratizes access to this valuable data. Organizations can now leverage BigQuery’s scalable serverless architecture to store and analyze their time-series data alongside the rest of their business data, without having to build and maintain expensive pipelines first.
Introducing time-series windowing and gap filling functions
The first step in most time-series queries is to map individual data points to output windows of a specific time duration and alignment. For example, you might want to know the average of a sampled value every ten minutes over the past twelve hours. The underlying data is often sampled at arbitrary times, so the query must include logic to map the input times to the output time windows. BigQuery’s new time bucketing functions provide a full-featured method for performing this mapping.
Let’s look at an example using a time series data set showing the air quality index (AQI) and temperature recorded over part of the day. Each row in the table represents a single sample of time-series data. In this case, we have a single time series for the zip code 60606, but the table can easily hold data for multiple time series, all represented by their zip code.
CREATE OR REPLACE TABLE mydataset.environmental_data_hourly AS
SELECT * FROM UNNEST(
  ARRAY<STRUCT<zip_code INT64, time TIMESTAMP, aqi INT64, temperature INT64>>[
    STRUCT(60606, TIMESTAMP '2020-09-08 00:30:51', 22, 66),
    STRUCT(60606, TIMESTAMP '2020-09-08 01:32:10', 23, 63),
    STRUCT(60606, TIMESTAMP '2020-09-08 02:30:35', 22, 60),
    STRUCT(60606, TIMESTAMP '2020-09-08 03:29:39', 21, 58),
    STRUCT(60606, TIMESTAMP '2020-09-08 06:31:14', 22, 56),
    STRUCT(60606, TIMESTAMP '2020-09-08 07:31:06', 28, 55)
]);
Before doing additional processing, we’d like to assign each row to a specific time bucket that is two hours wide. We’ll also aggregate within those windows to compute the average AQI and maximum temperature.
SELECT
  TIMESTAMP_BUCKET(time, INTERVAL 2 HOUR) AS time,
  zip_code,
  CAST(AVG(aqi) AS INT64) AS aqi,
  MAX(temperature) AS max_temperature
FROM mydataset.environmental_data_hourly
GROUP BY zip_code, time
ORDER BY zip_code, time;

+---------------------+----------+-----+-----------------+
| time                | zip_code | aqi | max_temperature |
+---------------------+----------+-----+-----------------+
| 2020-09-08 00:00:00 | 60606    | 23  | 66              |
| 2020-09-08 02:00:00 | 60606    | 22  | 60              |
| 2020-09-08 06:00:00 | 60606    | 25  | 56              |
+---------------------+----------+-----+-----------------+
Note that we can use any valid INTERVAL to define the width of the output window. We can also define an arbitrary origin for aligning the windows if we want to start the windows at something other than 00:00:00.
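For instance, here is a minimal sketch of the latter, passing a custom origin as the optional third argument of TIMESTAMP_BUCKET so that the two-hour windows start at 00:30:00 instead of 00:00:00 (consult the documentation for the exact default alignment):

SELECT
  -- The optional third argument sets the origin used to align bucket boundaries.
  TIMESTAMP_BUCKET(time, INTERVAL 2 HOUR, TIMESTAMP '2020-09-08 00:30:00') AS time,
  zip_code,
  CAST(AVG(aqi) AS INT64) AS aqi,
  MAX(temperature) AS max_temperature
FROM mydataset.environmental_data_hourly
GROUP BY zip_code, time
ORDER BY zip_code, time;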
Gap filling is another common step in time-series analysis that addresses gaps in the underlying data over time. A server producing metrics may reset, stopping the flow of data in the process. A lack of user traffic may mean gaps in event data. Or, data may be inherently sparse. Users want to control how these gaps are filled in before joining with other data or preparing it for graphing. The newly released GAP_FILL table-valued function (TVF) does just that.
In our previous example, we have a gap between 02:00 and 06:00 in the output data because there is no raw data between 04:00:00 and 05:59:59. We would like to fill in that gap before joining this data with other time-series data. Below, we demonstrate two modes of backfill supported by the GAP_FILL TVF: “last observation carried forward” (locf) and linear interpolation. The TVF also supports inserting nulls.
WITH aggregated_2_hr AS (
  SELECT
    TIMESTAMP_BUCKET(time, INTERVAL 2 HOUR) AS time,
    zip_code,
    CAST(AVG(aqi) AS INT64) AS aqi,
    MAX(temperature) AS max_temperature
  FROM mydataset.environmental_data_hourly
  GROUP BY zip_code, time
  ORDER BY zip_code, time)

SELECT *
FROM GAP_FILL(
  TABLE aggregated_2_hr,
  ts_column => 'time',
  bucket_width => INTERVAL 2 HOUR,
  partitioning_columns => ['zip_code'],
  value_columns => [
    ('aqi', 'locf'),
    ('max_temperature', 'linear')
  ]
)
ORDER BY zip_code, time;

+---------------------+----------+-----+-----------------+
| time                | zip_code | aqi | max_temperature |
+---------------------+----------+-----+-----------------+
| 2020-09-08 00:00:00 | 60606    | 23  | 66              |
| 2020-09-08 02:00:00 | 60606    | 22  | 60              |
| 2020-09-08 04:00:00 | 60606    | 22  | 58              |
| 2020-09-08 06:00:00 | 60606    | 25  | 56              |
+---------------------+----------+-----+-----------------+
GAP_FILL can also be used to produce aligned output data without having to bucket the input data first.
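As a hedged sketch of that alignment mode, you can point GAP_FILL directly at the raw table and let it emit values at aligned bucket boundaries using the chosen interpolation method; the origin argument used here to pin the boundaries is taken from the GAP_FILL documentation rather than from this post, so check the documentation for the exact signature:

SELECT *
FROM GAP_FILL(
  TABLE mydataset.environmental_data_hourly,  -- raw, unbucketed samples
  ts_column => 'time',
  bucket_width => INTERVAL 2 HOUR,
  partitioning_columns => ['zip_code'],
  value_columns => [
    ('aqi', 'locf'),          -- carry the last observed AQI forward to each boundary
    ('temperature', 'linear') -- linearly interpolate temperature at each boundary
  ],
  origin => TIMESTAMP '2020-09-08 00:00:00'
)
ORDER BY zip_code, time;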
Combined, these new bucketing and gap-filling operations provide key building blocks of time-series analytics. Users can now leverage BigQuery’s serverless architecture, high-throughput streaming ingestion, massive parallelism, and built-in AI to derive valuable insights from their time-series data alongside the rest of their business data, all without relying on a separate data silo.
Importantly, these new functions align with the standard SQL syntax and semantics that analytics experts are already familiar with. There is no need to learn a new language or a new computation model. Simply add these functions into your existing workflows and start delivering insights.
Introducing the RANGE data type
The RANGE type and functions make it easier to work with contiguous windows of data. For example, RANGE<DATE> "[2024-01-01, 2024-02-01)" represents all DATE values starting from 2024-01-01 up to and excluding 2024-02-01. A RANGE can also be unbounded at the start or the end, representing the beginning or the end of time, respectively. RANGE is useful for representing a state that is true, or valid, over a contiguous period of time. Quota assigned to a user, the exchange rate of a currency over time, the value of a system setting, or the version number of an algorithm are all great examples.
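Here is a small, self-contained sketch of constructing and inspecting RANGE values; the RANGE() constructor, RANGE_CONTAINS, and RANGE_START are existing RANGE functions, but treat this as an illustration and consult the documentation for their full signatures:

SELECT
  RANGE<DATE> '[2024-01-01, 2024-02-01)' AS january,         -- bounded range literal
  RANGE(DATE '2024-01-01', NULL) AS jan_1_onward,            -- NULL end makes the range unbounded
  RANGE_CONTAINS(
    RANGE<DATE> '[2024-01-01, 2024-02-01)',
    DATE '2024-01-15') AS contains_jan_15,                   -- TRUE: the date falls inside the range
  RANGE_START(RANGE<DATE> '[2024-01-01, 2024-02-01)') AS range_start;  -- 2024-01-01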
We are also introducing features that allow you to combine or expand RANGEs at query time. RANGE_SESSIONIZE is a TVF that allows you to combine overlapping or adjacent RANGEs into a single row. The table below shows how users can use this feature to create a smaller table with the same information:
CREATE OR REPLACE TABLE mydataset.sensor_metrics AS
SELECT * FROM UNNEST(
  ARRAY<STRUCT<sensor_id INT64, duration RANGE<DATETIME>, flow INT64, spins INT64>>[
    (1, RANGE<DATETIME> "[2020-01-01 12:00:01, 2020-01-01 12:05:23)", 10, 1),
    (1, RANGE<DATETIME> "[2020-01-01 12:05:12, 2020-01-01 12:10:46)", 10, 20),
    (1, RANGE<DATETIME> "[2020-01-01 12:10:27, 2020-01-01 12:15:56)", 11, 4),
    (2, RANGE<DATETIME> "[2020-01-01 12:05:08, 2020-01-01 12:10:30)", 21, 2),
    (2, RANGE<DATETIME> "[2020-01-01 12:10:22, 2020-01-01 12:15:42)", 21, 10)
]);
RANGE_SESSIONIZE finds all overlapping ranges and outputs the session range, which is the union of all the ranges that overlap within the (sensor_id, flow) partition. Finally, we group by the session ranges to output a table that combines the overlapping ranges:
SELECT sensor_id, session_range, flow
FROM RANGE_SESSIONIZE(
  (SELECT sensor_id, duration, flow FROM mydataset.sensor_metrics),
  "duration",
  ["sensor_id", "flow"],
  "OVERLAPS")
ORDER BY sensor_id, session_range;

+-----------+--------------------------------------------+------+
| sensor_id | session_range                              | flow |
+-----------+--------------------------------------------+------+
| 1         | [2020-01-01 12:00:01, 2020-01-01 12:10:46) | 10   |
| 1         | [2020-01-01 12:00:01, 2020-01-01 12:10:46) | 10   |
| 1         | [2020-01-01 12:10:27, 2020-01-01 12:15:56) | 11   |
| 2         | [2020-01-01 12:05:08, 2020-01-01 12:15:42) | 21   |
| 2         | [2020-01-01 12:05:08, 2020-01-01 12:15:42) | 21   |
+-----------+--------------------------------------------+------+

SELECT sensor_id, session_range, flow, SUM(spins) AS spins
FROM RANGE_SESSIONIZE(
  TABLE mydataset.sensor_metrics,
  "duration",
  ["sensor_id", "flow"],
  "OVERLAPS")
GROUP BY sensor_id, session_range, flow
ORDER BY sensor_id, session_range;

+-----------+--------------------------------------------+------+-------+
| sensor_id | session_range                              | flow | spins |
+-----------+--------------------------------------------+------+-------+
| 1         | [2020-01-01 12:00:01, 2020-01-01 12:10:46) | 10   | 21    |
| 1         | [2020-01-01 12:10:27, 2020-01-01 12:15:56) | 11   | 4     |
| 2         | [2020-01-01 12:05:08, 2020-01-01 12:15:42) | 21   | 12    |
+-----------+--------------------------------------------+------+-------+
You can do much more with RANGE. For example, you can use RANGE to JOIN a timestamp table with a RANGE table or emulate the behavior of an AS OF JOIN. See more examples in the RANGE and time series documentation.
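For instance, here is a hedged sketch of such a join, using hypothetical trades and exchange_rates tables (neither appears in this post) where each exchange rate carries a RANGE<TIMESTAMP> validity window:

-- Hypothetical schemas for illustration only:
--   mydataset.trades(trade_time TIMESTAMP, currency STRING, amount NUMERIC)
--   mydataset.exchange_rates(currency STRING, validity RANGE<TIMESTAMP>, rate NUMERIC)
SELECT
  t.trade_time,
  t.currency,
  t.amount * r.rate AS amount_usd
FROM mydataset.trades AS t
JOIN mydataset.exchange_rates AS r
  ON t.currency = r.currency
  AND RANGE_CONTAINS(r.validity, t.trade_time);  -- match each trade to the rate valid at that time

When the validity ranges within a currency do not overlap, this join returns at most one rate per trade, which is how RANGE can emulate AS OF JOIN behavior.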
Getting started
To use the new time series analysis features, all you need is a regular table with some time series data in it. You can even try experimenting with one of the existing public datasets. Check out the documentation to learn more and get started. Give these new features a try and let us know if you have feedback.