Developers and ML engineers face a variety of challenges when operationalizing Spark ML workloads. One set of challenges concerns infrastructure: for example, how to provision clusters in advance, and how to ensure there are enough resources to run different kinds of tasks, such as data preparation, model training, and model evaluation, in a reasonable time. Another set of challenges comes from task orchestration and data handling: for example, how to ensure that the most up-to-date features are available when a model training task runs.
To solve those challenges, you can use Vertex AI Pipelines to automate ML workflows in conjunction with Dataproc for running serverless Spark workloads. For example, this article shows you how to train and deploy a Spark ML model to achieve near real-time predictions, without provisioning any infrastructure in advance. In particular, it shows how to launch and orchestrate Dataproc jobs from Vertex AI Pipelines by using custom Python components.
Today we are excited to announce the official release of new Dataproc Serverless components for Vertex AI Pipelines that further simplify MLOps for Spark, PySpark, Spark SQL, and SparkR jobs.
The following components are now available:
DataprocPySparkBatchOp: PySpark batch workloads
DataprocSparkBatchOp: Spark batch workloads
DataprocSparkSqlBatchOp: Spark SQL batch workloads
DataprocSparkRBatchOp: SparkR batch workloads
With these components, you have native KFP operators to easily orchestrate Spark-based ML pipelines with Vertex AI Pipelines and Dataproc Serverless. You just need to build your preprocessing, training, and postprocessing steps and compile the KFP pipeline. Then, thanks to Vertex AI Pipelines and Dataproc Serverless, your ML workflow runs in a reliable, scalable, and reproducible way without requiring you to provision and manage any infrastructure.
To learn more about how to use these components, you can find a getting started tutorial below.
In addition, below is an end-to-end example that uses PySpark and the DataprocPySparkBatchOp component.
Training a Loan Eligibility Model using PySpark and Dataproc Serverless in a Vertex AI Pipeline
In this section, we show you how to build a Spark ML pipeline using Spark MLlib and the DataprocPySparkBatchOp component to determine a customer's eligibility for a loan from a banking company. In particular, the pipeline covers a full Spark MLlib workflow, from data preprocessing to hyperparameter tuning of a random forest classifier that predicts the probability of a customer being eligible for a loan.
Below is the pipeline view:
As shown in the diagram, the pipeline:
Stores data in Cloud Storage
Imputes categorical and numerical variables with DataprocPySparkBatchOp
Trains a RandomForestClassifier with DataprocPySparkBatchOp
Runs a custom component to evaluate the model and render its metrics in the Vertex AI Pipelines UI
If the model meets the performance condition (area under the precision-recall curve, auPR for short, higher than a minimum value of 0.5), then:
Hypertunes the RandomForestClassifier with DataprocPySparkBatchOp
Registers the model in the Vertex AI Model Registry
To keep things simple, let's focus on the training step to show how DataprocPySparkBatchOp works.
Train PySpark MLlib model using DataprocPySparkBatchOp component
In the example, we train a Random Forest Classifier using PySpark and Spark MLlib as a pipeline step. To use the DataprocPySparkBatchOp component to execute the training in Dataproc Serverless, you first need to create the training script.
Below is the training script you will find in the GitHub repo:
For the full code, please see this notebook.
Once the Spark session has been initialized, the script builds the Spark ML pipeline, loads preprocessed data, generates train and test samples, trains the model and saves artifacts and metrics to a Cloud Storage bucket.
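The script itself is only summarized here; as a rough sketch of that flow (column names, the split ratio, output formats, and the helper structure are illustrative assumptions, not the notebook's exact code), the driver might look like this:

```python
"""Hypothetical train.py driver for Dataproc Serverless.

Column names, split ratio, and output formats are illustrative
assumptions, not the exact code from the notebook.
"""
import argparse
import sys


def parse_args(argv=None):
    # Pure-Python argument parsing, kept separate from the Spark code.
    parser = argparse.ArgumentParser()
    parser.add_argument("--input-path", required=True)    # preprocessed data (gs://...)
    parser.add_argument("--model-path", required=True)    # where to save the fitted model
    parser.add_argument("--metrics-path", required=True)  # where to save metrics
    return parser.parse_args(argv)


def main(args):
    # Spark imports are local so the module can be inspected without a JVM.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import StringIndexer, VectorAssembler
    from pyspark.ml.classification import RandomForestClassifier
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("loan-eligibility-train").getOrCreate()

    # Load preprocessed data and generate train and test samples.
    df = spark.read.parquet(args.input_path)
    train_df, test_df = df.randomSplit([0.8, 0.2], seed=42)

    # Build the Spark ML pipeline: index the label, assemble features, fit the model.
    stages = [
        StringIndexer(inputCol="loan_status", outputCol="label"),
        VectorAssembler(
            inputCols=[c for c in df.columns if c != "loan_status"],
            outputCol="features",
        ),
        RandomForestClassifier(featuresCol="features", labelCol="label"),
    ]
    model = Pipeline(stages=stages).fit(train_df)

    # Evaluate with the area under the precision-recall curve (auPR).
    evaluator = BinaryClassificationEvaluator(labelCol="label", metricName="areaUnderPR")
    au_pr = evaluator.evaluate(model.transform(test_df))

    # Save artifacts and metrics to the Cloud Storage bucket.
    model.write().overwrite().save(args.model_path)
    spark.createDataFrame([(au_pr,)], ["auPR"]).write.mode("overwrite").json(args.metrics_path)


# Dataproc Serverless invokes this file as the Spark driver; the argv
# check keeps an argument-less import or dry run harmless.
if __name__ == "__main__" and len(sys.argv) > 1:
    main(parse_args())
```
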
Before you can use the PySpark script, you need to upload it to a Cloud Storage bucket:
For the full code, please see this notebook.
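The upload is typically a single copy command; as a hedged Python equivalent (the bucket and object names here are placeholders, not the notebook's values), it could look like this:

```python
"""Hypothetical helper to stage train.py in a Cloud Storage bucket."""


def script_uri(bucket_name: str, blob_name: str) -> str:
    # The gs:// URI that main_python_file_uri will later point at.
    return f"gs://{bucket_name}/{blob_name}"


def upload_script(bucket_name: str, local_path: str, blob_name: str) -> str:
    # The google-cloud-storage import is local so script_uri stays
    # importable even without the client library installed.
    from google.cloud import storage

    client = storage.Client()
    client.bucket(bucket_name).blob(blob_name).upload_from_filename(local_path)
    return script_uri(bucket_name, blob_name)


# Example (requires credentials and an existing bucket):
# main_python_file_uri = upload_script("your-bucket", "train.py", "src/train.py")
```
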
Once the script has been uploaded to Cloud Storage, you can use the DataprocPySparkBatchOp to define your training component. The value of the main_python_file_uri argument should be the location of the PySpark script within Cloud Storage.
Here is what we define:
For the full code, please see this notebook.
DataprocPySparkBatchOp requires you to specify values for the following parameters:
project: the project to run the Dataproc batch workload in
batch_id: a unique ID to use for the batch job
main_python_file_uri: the HCFS URI of the main Python file to use as the Spark driver
The DataprocPySparkBatchOp component accepts other optional parameters that might be necessary for your workload. To learn more about the component, check out the documentation.
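Putting the required parameters together, the training step definition might look roughly like this (the region, batch ID scheme, and the driver arguments are assumptions for illustration):

```python
"""Sketch of a training step built on DataprocPySparkBatchOp."""


def training_args(input_path: str, model_path: str, metrics_path: str) -> list:
    # Arguments forwarded to the PySpark driver script.
    return [
        "--input-path", input_path,
        "--model-path", model_path,
        "--metrics-path", metrics_path,
    ]


def build_training_op(project_id: str, batch_id: str, script_uri: str, bucket: str):
    # Local import: requires google-cloud-pipeline-components to be installed.
    from google_cloud_pipeline_components.v1.dataproc import DataprocPySparkBatchOp

    return DataprocPySparkBatchOp(
        project=project_id,
        location="us-central1",           # assumed region
        batch_id=batch_id,                # must be unique per batch workload
        main_python_file_uri=script_uri,  # gs:// path of the uploaded train.py
        args=training_args(
            f"gs://{bucket}/data/preprocessed",
            f"gs://{bucket}/model",
            f"gs://{bucket}/metrics",
        ),
    )
```
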
Finally, you integrate this component with other tasks in a pipeline definition by using the dsl.pipeline decorator. You then compile the pipeline definition and run it using the Vertex AI SDK.
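Assuming a pipeline function is already defined, compiling and submitting it could be sketched as follows (the display name, package path, and parameter keys are placeholders):

```python
"""Sketch: compile a KFP pipeline and submit it with the Vertex AI SDK."""


def pipeline_run_params(project_id: str, bucket: str) -> dict:
    # Runtime parameters passed to the pipeline; the keys are illustrative.
    return {"project_id": project_id, "bucket": bucket}


def compile_and_run(pipeline_func, project_id: str, region: str, bucket: str):
    # Local imports: requires kfp and google-cloud-aiplatform to be installed.
    from kfp import compiler
    from google.cloud import aiplatform

    package_path = "loan_eligibility_pipeline.json"
    compiler.Compiler().compile(pipeline_func=pipeline_func, package_path=package_path)

    aiplatform.init(project=project_id, location=region)
    job = aiplatform.PipelineJob(
        display_name="loan-eligibility-spark",
        template_path=package_path,
        pipeline_root=f"gs://{bucket}/pipeline_root",
        parameter_values=pipeline_run_params(project_id, bucket),
    )
    job.run()  # blocks until the pipeline run completes
```
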
While running, the pipeline submits the training workload to the Dataproc Serverless service when the model_training_op step is executed. You can see the batch workload details in the Cloud Console after the job runs successfully.
At this point, you can see how simple it is to integrate Spark jobs into your machine learning workflow by using the Dataproc Serverless components for Vertex AI Pipelines. Dataproc Serverless takes care of managing all the infrastructure, and the training workload consumes only the resources it requires for the time it is running.
You can integrate other tasks into your pipeline using a similar approach. The code below is an example pipeline definition that executes a complete machine learning workflow, including data preprocessing and dataset creation, model training, model evaluation, hyperparameter tuning, and uploading the model to Vertex AI Model Registry. The pipeline uses DataprocPySparkBatchOp to execute PySpark workloads on Dataproc Serverless, and other components that are part of the Google Cloud Pipeline Components SDK.
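As a rough sketch of such a pipeline definition (the component wiring, script names, and batch ID scheme are assumptions based on the steps described above, not the notebook's exact code):

```python
"""Sketch of the end-to-end pipeline definition (names are placeholders)."""
import time


def unique_batch_id(prefix: str) -> str:
    # Dataproc batch IDs must be unique, so suffix them with a timestamp.
    return f"{prefix}-{int(time.time())}"


def build_pipeline(scripts_uri_prefix: str):
    # Local imports: requires kfp and google-cloud-pipeline-components.
    from kfp import dsl
    from google_cloud_pipeline_components.v1.dataproc import DataprocPySparkBatchOp

    @dsl.pipeline(name="loan-eligibility-spark")
    def pipeline(project_id: str):
        preprocess = DataprocPySparkBatchOp(
            project=project_id,
            batch_id=unique_batch_id("preprocess"),
            main_python_file_uri=f"{scripts_uri_prefix}/preprocess.py",
        )
        train = DataprocPySparkBatchOp(
            project=project_id,
            batch_id=unique_batch_id("train"),
            main_python_file_uri=f"{scripts_uri_prefix}/train.py",
        ).after(preprocess)
        # The evaluation step, the auPR >= 0.5 condition (dsl.Condition),
        # hyperparameter tuning, and the Model Registry upload would follow
        # the same pattern, chained with .after() or via task outputs.
        _ = train

    return pipeline
```
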
For the full code, please see this notebook
Vertex AI Pipelines allows you to visualize your machine learning workflow as it executes. The following image shows a completed end-to-end execution using the pipeline definition above.
Conclusion
In this blog post, we announced new Dataproc Serverless components now available for Vertex AI Pipelines. We also provided an end-to-end example of how to use the DataprocPySparkBatchOp to preprocess data, and to train and hypertune a PySpark MLlib model for loan eligibility.
What’s Next
Do you want to know more about Dataproc Serverless, Vertex AI Pipelines, and how to deploy Spark models on Vertex AI? Check out the following resources:
Documentation
Serving Spark ML models using Vertex AI | Cloud Architecture Center
Code Labs
Using Vertex ML Metadata with Pipelines
Video Series: AI Simplified: Vertex AI
Quick Lab: Build and Deploy Machine Learning Solutions on Vertex AI
Special thanks to Abhishek Kashyap, Henry Tappen, Mikhail Chrestkha, Karthik Ramachadran, Yang Pan, Karl Weinmeister, Andrew Ferlitsch for their support and contributions to this blogpost.