Announcing BigQuery and BigQuery ML operators for Vertex AI Pipelines

Developers (especially ML engineers) looking to orchestrate BigQuery and BigQuery ML (BQML) operations as part of a Vertex AI Pipeline have previously needed to write their own custom components. Today we are excited to announce the release of new BigQuery and BQML components for Vertex AI Pipelines that make it easier to operationalize BigQuery and BQML jobs in a Vertex AI Pipeline. As official components created by Google Cloud, they let you include BigQuery and BigQuery ML in your Vertex AI Pipelines with much less effort. For example, with Vertex AI Pipelines you can automate and monitor the entire model life cycle of your BQML models, from training to serving. In addition, using these components as part of a Vertex AI Pipeline provides extra data and model governance, because each time you run a pipeline, Vertex AI Pipelines automatically tracks any artifacts produced.

For BigQuery, the following components are now available:

BigqueryQueryJobOp

Allows users to submit an arbitrary BigQuery query, writing the results to either a temporary or a permanent table. The component launches a BigQuery query job and waits for it to finish.

For BigQuery ML (BQML), the following components are now available:

BigqueryCreateModelJobOp

Allows users to submit a DDL statement to create a BigQuery ML model.

BigqueryEvaluateModelJobOp

Allows users to evaluate a BigQuery ML model.

BigqueryPredictModelJobOp

Allows users to make predictions using a BigQuery ML model.

BigqueryExportModelJobOp

Allows users to export a BigQuery ML model to a Google Cloud Storage bucket.

Learn how to create a simple Vertex AI Pipeline that trains a BQML model and deploys it to Vertex AI for online predictions:

https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_bqml_pipeline_components.ipynb

In addition to the notebook above, check out the end-to-end example below, which uses Dataflow, BigQuery, and BigQuery ML components to predict the topic label of text documents.

End-to-end example using BigQuery and BQML components in a Vertex AI Pipeline

In this section, we’ll show an end-to-end example of using BigQuery and BQML components in a Vertex AI Pipeline. The pipeline predicts the topic of raw text documents by first converting them into embeddings using the pre-trained Swivel TensorFlow model with BQML. Then, it trains a logistic regression model in BQML on the text embeddings to predict the label, which is the topic of the document. For simplicity, the model just predicts whether the topic is “acq” (1) or not (0), where “acq” means the document is related to the topic of “corporate acquisitions” as defined in the dataset.

Below you can see a high-level picture of the pipeline.

A high-level picture of the pipeline

From left to right: 

Start with news-related text documents stored in Google Cloud Storage

Create a dataset in BigQuery using BigqueryQueryJobOp

Extract title, content and topic of (HTML) documents using Dataflow and ingest into BigQuery

Using BigQuery ML, apply the Swivel TensorFlow model to generate embeddings for each document’s content

Train a logistic regression model to predict if a document’s embedding is related to a pre-defined topic (1 if related, 0 if not)

Evaluate the model 

Apply the model to a dataset to make predictions

Let’s dive into them. 

ETL with DataflowPythonJobOp component

Imagine that you have raw text documents from Reuters stored in Google Cloud Storage (GCS). You want to preprocess them and import the text into BigQuery to train a classification model using BQML.

First, you will need to create a BQ dataset, which you can do by running the SQL query CREATE SCHEMA IF NOT EXISTS mydataset. In a Vertex AI Pipeline you can execute this query within a BigqueryQueryJobOp.

from google_cloud_pipeline_components.v1.bigquery import (
    BigqueryQueryJobOp,
    BigqueryCreateModelJobOp,
    BigqueryEvaluateModelJobOp,
    BigqueryPredictModelJobOp)

BQ_DATASET = "mydataset"

# create the BQ dataset
bq_dataset_op = BigqueryQueryJobOp(
    query=f"CREATE SCHEMA IF NOT EXISTS {BQ_DATASET}",
    project=project,
    location="US",
)

For the full code, please see this notebook.

After creating the dataset, you will need to prepare the text documents for model training. In particular, you need to parse out the relevant sections (“Title”, “Body”, “Topics”) of text from the raw documents. In our case we use a Beam pipeline on Dataflow, which uses the Beautiful Soup (bs4) library in Python to extract the following:

Title, the title of the article

Body, the full text content of the article

Topics, one or more categories that the article belongs to

This preprocessing step is based on the ETL pipeline with Apache Beam and can be wrapped in a DataflowPythonJobOp to be used in a Vertex AI Pipeline. DataflowPythonJobOp components enable you to submit Apache Beam jobs written in Python to Dataflow for execution on Google Cloud. In other words, you can use this component to run your data pre-processing step to extract the relevant sections of text via Dataflow.

# Dataflow job
dataflow_python_op = DataflowPythonJobOp(
    requirements_file_path=requirements_file_path,
    python_module_path=python_file_path,
    args=build_dataflow_args_op.output,
    project=project,
    location=region,
    temp_location=temp_location,
).after(build_dataflow_args_op)

# Wait for Dataflow job to finish running
dataflow_wait_op = WaitGcpResourcesOp(
    gcp_resources=dataflow_python_op.outputs["gcp_resources"]
).after(dataflow_python_op)

For the full code, please see this notebook.

To clarify what’s happening in the component above: requirements_file_path contains the GCS URI of a requirements file, python_module_path contains the GCS URI of the Apache Beam pipeline Python script, and args contains a list of arguments that are passed via the Beam runner to your Apache Beam code. Also notice that WaitGcpResourcesOp is required; it allows the pipeline to wait for the Dataflow job to finish before continuing to the next part of the pipeline.
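The args list is produced by an upstream step, build_dataflow_args_op, which is not one of the released components. Here is a minimal sketch of what such a helper could look like as a lightweight KFP component; the flag names and the exact structure are assumptions that depend on what your Beam script actually parses.

from typing import List
from kfp.v2.dsl import component

# Hypothetical helper (not part of google_cloud_pipeline_components): builds the
# argument list that the Beam preprocessing script expects. The flag names below
# are illustrative only.
@component
def build_dataflow_args(
    project: str,
    region: str,
    gcs_input_path: str,
    bq_dataset: str,
) -> List[str]:
    return [
        "--runner=DataflowRunner",
        f"--project={project}",
        f"--region={region}",
        f"--input-path={gcs_input_path}",
        f"--bq-dataset={bq_dataset}",
    ]

# Inside the pipeline definition:
#     build_dataflow_args_op = build_dataflow_args(
#         project=project, region=region,
#         gcs_input_path=gcs_input_path, bq_dataset=BQ_DATASET)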

Feature Engineering with BigqueryQueryJobOp and DataflowPythonJobOp

Once the documents have been pre-processed in Dataflow, the next step is to generate text embeddings from the pre-processed documents. The embeddings can then be used as training data for the logistic regression model that predicts the topic.

To generate embeddings from text, you can use the Swivel model, a pre-trained TensorFlow model publicly available on TensorFlow Hub. How do you use a pre-trained TensorFlow SavedModel on text in BigQuery? You can import it using BQML and apply it to the text of our documents to generate the embeddings. Then you split the dataset to get the training sample you will use to train the text classifier.

The following shows what the component looks like in the pipeline:

# run preprocessing job
bq_preprocess_op = BigqueryQueryJobOp(
    query=bq_preprocess_query,
    project=project,
    location="US",
).after(dataflow_wait_op)

For the full code, please see this notebook.

Here, bq_preprocess_query contains the preprocessing query, and project and location specify where the BigQuery job runs.

For example, within bq_preprocess_query, the first step is to import the Swivel model using BigQuery ML. A later step retrieves the embeddings, which serve as the training data for the document classifier model in the next section.

# Create the embedding model by importing the model from GCS
CREATE OR REPLACE MODEL
  mydataset.swivel_model
OPTIONS(model_type='tensorflow',
        model_path='{MODEL_PATH}');

...

# Retrieve embeddings from text using ML.PREDICT
SELECT
  title,
  sentences,
  output_0 AS content_embeddings,
  topics
FROM ML.PREDICT(MODEL `{PROJECT_ID}.{BQ_DATASET}.{MODEL_NAME}`, (
  SELECT topics, title, content AS sentences
  FROM `{PROJECT_ID}.{BQ_DATASET}.{BQ_TABLE}`
));

For the full code, please see this notebook.
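The preprocessing query in the notebook also adds the split column that the training step filters on later (WHERE split = 'TRAIN'). As a hedged sketch, the split could be assigned with a deterministic hash like the following; the 80/20 ratio, hashing on title, and the table name EMBEDDINGS_TABLE are illustrative assumptions, not the notebook’s exact query.

# Illustrative split logic: deterministically assign roughly 80% of rows to
# TRAIN and the rest to EVAL by hashing the document title.
split_query = f"""
SELECT
  *,
  CASE
    WHEN MOD(ABS(FARM_FINGERPRINT(title)), 10) < 8 THEN 'TRAIN'
    ELSE 'EVAL'
  END AS split
FROM `{PROJECT_ID}.{BQ_DATASET}.{EMBEDDINGS_TABLE}`
"""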

Training a document classifier with BigqueryCreateModelJobOp

Now that you have the training data (as a table), you are ready to build a document classifier, using logistic regression to determine whether the label of the document is ‘acq’ or not, where ‘acq’ means that the document is related to the topic of “corporate acquisitions”. You can automate this BQML model creation operation within a Vertex AI Pipeline using the BigqueryCreateModelJobOp. This component allows you to pass the BQML training query and, if needed, parameterize both the query and the associated job in order to schedule model training on BigQuery. The component returns a google.BQMLModel, and the BQML model is automatically tracked in Vertex ML Metadata, which provides extra insight into model and data lineage.

create_bq_model_query = f"""
CREATE OR REPLACE MODEL `{PROJECT_ID}.{BQ_DATASET}.{CLASSIFICATION_MODEL_NAME}`
  OPTIONS (
    model_type='logistic_reg',
    input_label_cols=['label']) AS
  SELECT
    label,
    feature.*
  FROM
    `{PROJECT_ID}.{BQ_DATASET}.{PREPROCESSED_TABLE}`
  WHERE split = 'TRAIN';
"""

For the full code, please see this notebook.
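For context, here is a minimal sketch of how this query could be passed to the component inside the pipeline; the variable name bq_model_op matches how the model output is consumed by the later steps, and the exact dependency wiring is an assumption.

# Train the BQML model; the component returns a google.BQMLModel artifact.
bq_model_op = BigqueryCreateModelJobOp(
    query=create_bq_model_query,
    project=project,
    location="US",
).after(bq_preprocess_op)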

Evaluate the model using BigqueryEvaluateModelJobOp

Once you train the model, you would probably evaluate it before deploying it into production for generating predictions. With the BigqueryEvaluateModelJobOp, you just need to pass the google.BQMLModel output you received from the previous training BigqueryCreateModelJobOp, and it will generate various evaluation metrics based on the type of the model. In our case we get precision, recall, f1_score, log_loss, and roc_auc, which are available in the pipeline lineage in Vertex ML Metadata.

A view of metrics in Vertex ML Metadata
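The evaluation step itself is a one-liner in the pipeline. A minimal sketch, assuming the same project and location as the other steps (bq_evaluate_op is the task the predict step later waits on):

# Evaluate the trained BQML model; metrics are surfaced in Vertex ML Metadata.
bq_evaluate_op = BigqueryEvaluateModelJobOp(
    project=project,
    location="US",
    model=bq_model_op.outputs["model"],
).after(bq_model_op)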

Notice that, thanks to these components, you can implement conditional logic in order to decide what you want to do with the model downstream. For example, if the performance is above a certain threshold, you can have the pipeline proceed to make predictions directly with BigQuery ML, or register your model into a model registry service, or deploy the model to a staging endpoint. This will really depend on the logic you want to implement with your ML pipeline. 
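As a rough illustration of that pattern (it is not part of this example’s pipeline), one approach is to surface a single scalar metric with a small custom component and branch on it with dsl.Condition. The helper below, its name get_roc_auc, and the 0.9 threshold are assumptions for the sketch, not released components.

from kfp.v2 import dsl
from kfp.v2.dsl import component

# Hypothetical helper: run ML.EVALUATE directly and return one scalar that the
# pipeline can branch on. Not one of the released BigQuery/BQML components.
@component(packages_to_install=["google-cloud-bigquery"])
def get_roc_auc(project: str, dataset: str, model_name: str) -> float:
    from google.cloud import bigquery

    client = bigquery.Client(project=project)
    query = (
        f"SELECT roc_auc "
        f"FROM ML.EVALUATE(MODEL `{project}.{dataset}.{model_name}`)"
    )
    rows = list(client.query(query).result())
    return float(rows[0]["roc_auc"])

# Inside the pipeline definition, you could then gate the downstream steps:
#     roc_auc_op = get_roc_auc(
#         project=project, dataset=BQ_DATASET,
#         model_name=CLASSIFICATION_MODEL_NAME).after(bq_evaluate_op)
#     with dsl.Condition(roc_auc_op.output >= 0.9):
#         ...  # predict, register, or deploy the model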

In this example pipeline with document classification, you’ll just be generating batch predictions directly without conditional logic. 

Predict using BigqueryPredictModelJobOp

In order to operationalize BQML model prediction, the bigquery module of the google_cloud_pipeline_components library provides the BigqueryPredictModelJobOp, which allows you to launch a BigQuery predict model job by consuming the google.BQMLModel output of the training component.

# simulate prediction
bq_predict_op = BigqueryPredictModelJobOp(
    model=bq_model_op.outputs["model"],
    query_statement=create_bq_prediction_query,
    job_configuration_query=job_config,
    project=project,
    location="US",
).after(bq_evaluate_op)

For the full code, please see this notebook.

In this component, you also pass a job_config to define the destination table (project ID, dataset ID, and table ID), in addition to the query statement that formats the columns you want in the prediction table.
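For illustration, job_config is a BigQuery query-job configuration dictionary. A minimal sketch, where the destination table name reuters_predictions and the write disposition are assumptions:

# Query-job configuration following the BigQuery JobConfigurationQuery fields;
# it tells the predict job where to write the prediction table.
job_config = {
    "destinationTable": {
        "projectId": PROJECT_ID,
        "datasetId": BQ_DATASET,
        "tableId": "reuters_predictions",
    },
    "writeDisposition": "WRITE_TRUNCATE",
}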

Below you can see the visualization of the overall pipeline you get in the Vertex AI Pipelines UI.

The visualization of the pipeline in the Vertex AI Pipelines UI.
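To produce a run like this, you compile the pipeline function and submit it to Vertex AI Pipelines. A minimal sketch, assuming the pipeline function is named pipeline and that pipeline_root points to a GCS path you own:

from kfp.v2 import compiler
from google.cloud import aiplatform

# Compile the pipeline definition into a job spec...
compiler.Compiler().compile(
    pipeline_func=pipeline,
    package_path="bqml_pipeline.json",
)

# ...and submit it to Vertex AI Pipelines.
aiplatform.init(project=project, location=region)
job = aiplatform.PipelineJob(
    display_name="bqml-text-classification",
    template_path="bqml_pipeline.json",
    pipeline_root=pipeline_root,  # e.g. "gs://<your-bucket>/pipeline_root"
)
job.submit()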

Conclusion

In this blogpost, we described the new BigQuery and BQML components now available for Vertex AI Pipelines. We also showed an end-to-end example of using the components for document classification involving BigQuery ML and Vertex AI Pipelines. 

What’s Next

Are you ready to run your BQML pipeline with Vertex AI Pipelines? Check out the following resources and give it a try:

Documentation

BigQuery ML

Vertex AI Pipelines

BigQuery and BigQuery ML components for Vertex AI Pipelines

Code Labs

Intro to Vertex Pipelines

Using Vertex ML Metadata with Pipelines

Github samples

Video Series: AI Simplified: Vertex AI

Quick Lab: Build and Deploy Machine Learning Solutions on Vertex AI

References 

https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.2.2/google_cloud_pipeline_components.experimental.bigquery.html

https://cloud.google.com/architecture/analyzing-text-semantic-similarity-using-tensorflow-and-cloud-dataflow?hl=en

https://towardsdatascience.com/how-to-do-text-similarity-search-and-document-clustering-in-bigquery-75eb8f45ab65 

Special thanks to Bo Yang, Andrew Ferlitsch, Abhinav Khushraj, Ivan Cheung, and Gus Martins for their contributions to this blogpost.
