
Simplify model serving with custom prediction routines on Vertex AI

The data received at serving time is rarely in the format your model expects. Numerical columns need to be normalized, features created, image bytes decoded, and input values validated. Transforming the data can be as important as the prediction itself. That's why we're excited to announce custom prediction routines on Vertex AI, which simplify the process of writing pre- and post-processing code.

With custom prediction routines, you provide your data transformations as Python code, and behind the scenes the Vertex AI SDK builds a custom container that you can test locally and deploy to the cloud.

Understanding custom prediction routines

The Vertex AI pre-built containers handle prediction requests by performing the prediction operation of the machine learning framework. Prior to custom prediction routines, if you wanted to preprocess the input before the prediction was performed, or postprocess the model's prediction before returning the result, you had to build a custom container from scratch.

Building a custom serving container requires writing an HTTP server that wraps the trained model, translates HTTP requests into model inputs, and translates model outputs into responses. You can see an example here showing how to build a model server with FastAPI.
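
To make that concrete, here is a minimal sketch of what such a server might look like with FastAPI. The route name, model path, and request schema are illustrative assumptions, not the exact code from the linked example.

# Hypothetical minimal model server: load the model once, translate the
# JSON request body into model inputs, and wrap outputs in a JSON response.
import joblib
from fastapi import FastAPI, Request

app = FastAPI()
model = joblib.load("model.joblib")  # assumes a local scikit-learn artifact

@app.post("/predict")
async def predict(request: Request):
    body = await request.json()
    instances = body["instances"]  # Vertex AI-style request body
    predictions = model.predict(instances)
    return {"predictions": predictions.tolist()}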

With custom prediction routines, Vertex AI provides the serving-related components for you, so that you can focus on your model and data transformations.

The predictor

The predictor class is responsible for the ML-related logic in a prediction request: loading the model, getting predictions, and applying custom preprocessing and postprocessing. To write custom prediction logic, you'll subclass the Vertex AI Predictor interface. In most cases, customizing the predictor is all you'll need, but check out this notebook if you'd like to see an example of customizing the request handler.

This release of custom prediction routines comes with reusable XGBoost and Sklearn predictors, but if you need to use a different framework you can create your own by subclassing the base predictor.

You can see an example predictor implementation below, specifically the reusable Sklearn predictor. This is all the code you would need to write in order to build this custom model server.

import joblib
import numpy as np

from google.cloud.aiplatform.utils import prediction_utils
from google.cloud.aiplatform.prediction.predictor import Predictor

class SklearnPredictor(Predictor):
    """Default Predictor implementation for Sklearn models."""

    def __init__(self):
        return

    def load(self, artifacts_uri: str):
        prediction_utils.download_model_artifacts(artifacts_uri)
        self._model = joblib.load("model.joblib")

    def preprocess(self, prediction_input: dict) -> np.ndarray:
        instances = prediction_input["instances"]
        return np.asarray(instances)

    def predict(self, instances: np.ndarray) -> np.ndarray:
        return self._model.predict(instances)

    def postprocess(self, prediction_results: np.ndarray) -> dict:
        return {"predictions": prediction_results.tolist()}

A predictor implements four methods: 

Load: Loads the model artifacts and any optional preprocessing artifacts, such as an encoder you saved to a pickle file.

Preprocess: Performs the logic to preprocess the input data before the prediction request. By default, the preprocess method receives a dictionary which contains all the data in the request body after it has been deserialized from JSON.

Predict: Performs the prediction, which will look something like model.predict(instances) depending on what framework you’re using.

Postprocess: Postprocesses the prediction results before returning them to the end user. By default, the output of the postprocess method will be serialized into a JSON object and returned as the response body.

You can customize as many of the above methods as your use case requires. To customize, all you need to do is subclass the predictor and save your new custom predictor to a Python file. 
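
For instance, a custom predictor that reuses the Sklearn predictor's loading and prediction logic but overrides only postprocess might look like the following sketch. The class name and label mapping are hypothetical, and the import path assumes the reusable Sklearn predictor that ships with the SDK.

# Hypothetical custom predictor: keep the reusable load, preprocess, and
# predict behavior, and customize only the postprocessing step.
from google.cloud.aiplatform.prediction.sklearn.predictor import SklearnPredictor

class MyCustomPredictor(SklearnPredictor):

    def postprocess(self, prediction_results):
        # Illustrative mapping from numeric class IDs to readable labels.
        label_dict = {0: "low_risk", 1: "high_risk"}
        return {"predictions": [label_dict[int(c)] for c in prediction_results]}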

Let’s take a deeper look at how you might customize each one of these methods.

Load

The load method is where you load in any artifacts from Cloud Storage. This includes the model, but can also include custom preprocessors. 

For example, let’s say you wrote the following preprocessor to scale numerical features, and stored it as a pickle file called preprocessor.pkl in Cloud Storage.

class MySimpleScaler(object):
    def __init__(self):
        self._means = None
        self._stds = None

    def preprocess(self, data):
        if self._means is None:  # during training only
            self._means = np.mean(data, axis=0)

        if self._stds is None:  # during training only
            self._stds = np.std(data, axis=0)
            if not self._stds.all():
                raise ValueError("At least one column has standard deviation of 0.")

        return (data - self._means) / self._stds

When customizing the predictor, you would write a load method to read the pickle file, similar to the following, where artifacts_uri is the Cloud Storage path to your model and preprocessing artifacts.

# Requires: from google.cloud import storage; import pickle
def load(self, artifacts_uri: str):
    """Loads the preprocessor artifacts."""
    super().load(artifacts_uri)
    gcs_client = storage.Client()
    # Download the pickled preprocessor from Cloud Storage to the local disk.
    with open("preprocessor.pkl", "wb") as preprocessor_f:
        gcs_client.download_blob_to_file(
            f"{artifacts_uri}/preprocessor.pkl", preprocessor_f
        )

    with open("preprocessor.pkl", "rb") as f:
        preprocessor = pickle.load(f)

    self._preprocessor = preprocessor

Preprocess

The preprocess method is where you write the logic to perform any preprocessing needed for your serving data. It can be as simple as applying the preprocessor you loaded in the load method, as shown below:

def preprocess(self, prediction_input):
    inputs = super().preprocess(prediction_input)
    return self._preprocessor.preprocess(inputs)

Instead of loading in a preprocessor, you might write the preprocessing directly in the preprocess method. For example, you might need to check that your inputs are in the format you expect. Let's say your model expects the feature at index 3 to be a string in its abbreviated form. You want to check that, at serving time, the value for that feature is abbreviated.

def preprocess(self, prediction_input):
    inputs = super().preprocess(prediction_input)
    clarity_dict = {"Flawless": "FL",
                    "Internally Flawless": "IF",
                    "Very Very Slightly Included": "VVS1",
                    "Very Slightly Included": "VS2",
                    "Slightly Included": "S12",
                    "Included": "I3"}
    for sample in inputs:
        if sample[3] not in clarity_dict.values():
            sample[3] = clarity_dict[sample[3]]
    return inputs

There are numerous other ways you could customize the preprocessing logic. You might need to tokenize text for a language model, generate new features, or load data from an external source.
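
As one sketch, generating a derived feature directly in preprocess could look like this. The column indices and the ratio feature are purely illustrative, and numpy is assumed to be imported as in the predictor above.

def preprocess(self, prediction_input):
    inputs = super().preprocess(prediction_input)
    # Illustrative feature engineering: append the ratio of two existing
    # numeric columns as an extra feature before prediction.
    ratios = inputs[:, 0] / inputs[:, 1]
    return np.column_stack([inputs, ratios])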

Predict

This method usually just calls model.predict, and generally doesn’t need to be customized unless you’re building your predictor from scratch instead of with a reusable predictor.
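
If you did want to change it, for example to have a scikit-learn classifier return class probabilities instead of hard labels, a sketch could be as small as:

def predict(self, instances):
    # Return per-class probabilities rather than predicted labels.
    return self._model.predict_proba(instances)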

Postprocess

Sometimes the model prediction is only the first step. After you get a prediction from the model, you might need to transform it to make it valuable to the end user. This might be something as simple as converting the numerical class label returned by the model to a string label, as shown below.

def postprocess(self, prediction_results):
    label_dict = {0: "rose",
                  1: "daisy",
                  2: "dandelion",
                  3: "tulip",
                  4: "sunflower"}
    return {"predictions": [label_dict[class_num] for class_num in prediction_results]}

Or you could implement additional business logic. For example, you might want to return a prediction only if the model's confidence is above a certain threshold. If it's below, you might want the input sent to a human to double-check instead.

def postprocess(self, prediction_results):
    returned_predictions = []
    for result in prediction_results:
        if result > self._confidence_threshold:
            returned_predictions.append(result)
        else:
            returned_predictions.append("confidence too low for prediction")
    return {"predictions": returned_predictions}

Just like with preprocessing, there are numerous ways you can postprocess your data with custom prediction routines. You might need to detokenize text for a language model, convert the model output into a more readable format for the end user, or even call a Vertex AI Matching Engine index endpoint to search for data with a similar embedding.

Local Testing

When you've written your predictor, you'll want to save the class to a Python file. Then you can build your image with the command below, where LOCAL_SOURCE_DIR is a local directory that contains the Python file where you saved your custom predictor.

from google.cloud.aiplatform.prediction import LocalModel
from src_dir.predictor import MyCustomPredictor
import os

local_model = LocalModel.build_cpr_model(
    LOCAL_SOURCE_DIR,
    f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}",
    predictor=MyCustomPredictor,
    requirements_path=os.path.join(LOCAL_SOURCE_DIR, "requirements.txt"),
)

Once the image is built, you can test it out by deploying it to a local endpoint and then calling the predict method and passing in the request data. You’ll set artifact_uri to the path in Cloud Storage where you’ve saved your model and any artifacts needed for preprocessing or postprocessing. You can also use a local path for testing.

with local_model.deploy_to_local_endpoint(
    artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}",
    credential_path=CREDENTIALS_FILE,
) as local_endpoint:
    predict_response = local_endpoint.predict(
        request_file=INPUT_FILE,
        headers={"Content-Type": "application/json"},
    )
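
The local endpoint's predict call returns an HTTP response, so during testing you can inspect it with something like the following (assuming it behaves like a standard requests response):

print(predict_response.status_code)  # expect 200 if the container is healthy
print(predict_response.content)      # the JSON body produced by postprocess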

Deploy to Vertex AI

After testing the model locally to confirm that the predictions work as expected, the next steps are to push the image to Artifact Registry, import the model to the Vertex AI Model Registry, and optionally deploy it to an endpoint if you want online predictions.

from google.cloud import aiplatform

# push image
local_model.push_image()

# upload to registry
model = aiplatform.Model.upload(
    local_model=local_model,
    display_name=MODEL_DISPLAY_NAME,
    artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}",
)

# deploy
endpoint = model.deploy(machine_type="n1-standard-4")

When the model has been uploaded to Vertex AI and deployed, you'll be able to see it in the Model Registry. You can then make prediction requests just like you would with any other model deployed on Vertex AI.

# get prediction
endpoint.predict(instances=PREDICTION_DATA)
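
For example, if your model expects the diamond-style features from the preprocessing example above, PREDICTION_DATA might be a list of instances like the following (the values are purely illustrative):

# Purely illustrative instances; feature order and values depend on your model.
PREDICTION_DATA = [
    [0.23, "Ideal", "E", "VS2", 61.5, 55.0, 3.95, 3.98, 2.43],
    [0.29, "Premium", "I", "S12", 62.4, 58.0, 4.20, 4.23, 2.63],
]
endpoint.predict(instances=PREDICTION_DATA)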

What’s next

You now know the basics of how to use custom prediction routines to add powerful customization to your serving workflows without having to worry about model servers or building Docker containers. To get hands-on experience with an end-to-end example, check out this codelab. It's time to start writing some custom prediction code of your own!
