Editor’s note: This post features third-party projects built with AI Platform. At Google I/O on May 18, 2021, Google Cloud announced Vertex AI, a unified UI for the entire ML workflow, which includes equivalent functionality from AI Platform plus new MLOps services. Most of the sample code and materials introduced in this post will also be applicable to Vertex AI products.
Do you know the Google Developers Experts (GDE) program? It is a network of highly experienced technology experts, influencers, and thought leaders who are passionate about sharing their knowledge and experience with fellow developers. Among the many GDEs specialized in various Google technologies, ML (Machine Learning) GDEs have been very active across the globe, so we would like to share some of the great demos, samples, and blog posts these ML GDEs have recently published for learning Cloud AI technologies. If you are interested in becoming an ML GDE, please check the bottom of this article to apply.
Try the live demo and learn how to train and serve scikit-learn models
Victor Dibia created a great live demo, NYC Taxi Trip Advisor, with Cloud AI tools, and anyone can try it out. With this demo, you choose a starting point and a destination (e.g. from JFK Airport to Central Park), and the tool shows a predicted trip time and fare using a multitask ML model built with scikit-learn.
In the notebooks published in the GitHub repo, Victor explains how he designed the demo with Vertex AI Notebooks, Prediction, and App Engine, covering the process of downloading the training data, preprocessing it, training the ML models (Random Forest and MLP) with scikit-learn, deploying them to Prediction, and serving with App Engine. The repo will be improved to further refine the user experience and the underlying ML models (e.g. use of a Bayesian prediction model that allows for principled measures of uncertainty).
Visual sanity checks on the MLP model predictions.
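To give a flavor of the multitask setup described above, here is a minimal sketch (not Victor's actual code) of a scikit-learn model that predicts trip duration and fare together; the feature names and toy data are purely illustrative:

```python
# Minimal sketch of a multi-output scikit-learn regressor that
# predicts two targets (trip duration and fare) from one feature set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy stand-in for preprocessed taxi features:
# [pickup_lat, pickup_lon, dropoff_lat, dropoff_lon, hour_of_day]
X = rng.uniform(size=(500, 5))
# Two targets per trip: duration (minutes) and fare (dollars).
y = np.column_stack([
    X[:, :4].sum(axis=1) * 30 + rng.normal(scale=1.0, size=500),
    X[:, :4].sum(axis=1) * 10 + 2.5 + rng.normal(scale=0.5, size=500),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RandomForestRegressor handles multi-output targets natively,
# so a single model learns both duration and fare.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(preds.shape)  # one (duration, fare) pair per test trip
```

The same pickled model can then be uploaded to Cloud Storage and served by AI Platform / Vertex AI Prediction, which is the deployment path the notebooks walk through.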
AutoML + Notebooks + BigQuery = fast and efficient ML
Minori Matsuda published a blog post, Empowering Google Cloud AI Platform Notebooks by powerful AutoML, where he explains how you can integrate Vertex AI Notebooks and AutoML Tables with BigQuery using the New York City taxi trips public dataset. He says: “Combining these, we can quickly implement efficient iterations of feature engineering, modeling, evaluation, and prediction to increase the accuracy.”
In the post, Minori explains how AutoML technology works, using Model Search, which Google published recently: “The article says the concept of model search uses greedy beam-search across multiple trainers (even trying RNNs such as LSTMs), tunes the depth of the layers and the connections, and eventually does ensembles. It finally creates a model written in TensorFlow.” Minori actually tries out the framework and shows how it works in a video:
Also, Minori points out that one of the easiest ways to create an AutoML model from a dataset on BigQuery is to use BigQuery ML on Vertex AI Notebooks.
Creating an AutoML Tables model from BigQuery ML on Vertex AI Notebooks
This is a great example of an integrated solution you can compose with the powerful platform and services on Google Cloud.
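As a rough illustration of that integration, a BigQuery ML statement along the following lines creates an AutoML Tables regression model directly from a BigQuery table. The dataset, model, and column names below are illustrative (they are not from Minori's post), and actually running the query requires the google-cloud-bigquery client and a GCP project:

```python
# Hedged sketch: defining a BigQuery ML statement that trains an
# AutoML Tables model. Dataset/model/column names are hypothetical.
create_model_sql = """
CREATE OR REPLACE MODEL `mydataset.taxi_fare_automl`
OPTIONS (
  model_type = 'AUTOML_REGRESSOR',
  input_label_cols = ['fare_amount'],
  budget_hours = 1.0
) AS
SELECT
  passenger_count,
  trip_distance,
  pickup_location_id,
  dropoff_location_id,
  fare_amount
FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2018`
"""

# From a Vertex AI / AI Platform notebook you would typically run:
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   client.query(create_model_sql).result()
print(create_model_sql.strip().splitlines()[0])
```

Once the query completes, the trained model can be evaluated with `ML.EVALUATE` and used for batch prediction with `ML.PREDICT`, all from the same notebook.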
Video tutorials on Google Cloud AI Platform and services
Srivatsan Srinivasan has been posting a great series of videos on YouTube, Artificial Intelligence on Google Cloud Platform, with sample code. One of those videos features a telecom churn prediction use case where he trains an XGBoost model and deploys it to Vertex AI Prediction.
This is not only sample code but also great online learning content. The video includes introductions to the following concepts:
- Google Cloud Vertex AI Overview
- Creating Cloud AI Notebook Instance
- Developing Your First ML Model on Google Cloud
- Creating Custom Predictor for Inference
- Bundling Dependency for Deployment
- Deploying model on Vertex AI Prediction
- Cloud Storage
Feature importance with the XGBoost model
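The “custom predictor” idea mentioned above follows AI Platform's custom prediction routine interface: a class exposing a `from_path()` factory and a `predict()` method. Here is a minimal, self-contained sketch of that shape; the `DummyModel` below is a stand-in for the trained churn model, and all names are illustrative rather than taken from Srivatsan's code:

```python
# Sketch of the custom prediction routine interface used by
# AI Platform Prediction: from_path() loads the model artifacts,
# predict() handles a list of instances per request.
import os
import pickle


class DummyModel:
    """Stand-in for a trained churn model with a predict_proba()."""
    def predict_proba(self, instances):
        # Pretend every customer has a 0.5 churn probability.
        return [[0.5, 0.5] for _ in instances]


class ChurnPredictor:
    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        """Called by the prediction service with a list of instances."""
        probs = self._model.predict_proba(instances)
        return [{"churn_probability": p[1]} for p in probs]

    @classmethod
    def from_path(cls, model_dir):
        """Called with the directory the model artifacts were copied to."""
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            model = pickle.load(f)
        return cls(model)


# Local smoke test, bypassing from_path():
predictor = ChurnPredictor(DummyModel())
print(predictor.predict([[1, 2, 3]]))  # [{'churn_probability': 0.5}]
```

Packaging this class with its dependencies into a source distribution is the “bundling dependency for deployment” step the video covers.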
In addition to Google Cloud AI Platform and AI Platform Prediction, the video tutorials cover:
- Deploying models on Google Cloud Run, App Engine and GKE
- BigQuery ML
- Cloud AutoML Vision
- Speech-to-Text
- MLOps on Google Cloud
Distributed Training in TensorFlow with AI Platform and Docker
Last April, Sayak Paul posted a full-fledged piece, Distributed Training in TensorFlow with AI Platform & Docker. He starts with: “Operating with a Jupyter Notebook environment can get very challenging if you are working your way through large-scale training workflows as is common in deep learning.” He uses AI Platform and Docker to solve this problem, providing a training workflow that is fully managed by a secure and reliable service with high availability.
Sayak says: “While developing this workflow, I considered the following aspects for services I used to develop the workflow:”
- The service should automatically provision and deprovision the resources we ask it to configure, allowing us to be charged only for what’s been truly consumed.
- The service should also be very flexible. It must not introduce too much technical debt into our existing pipelines.
In the post, he explains the end-to-end process, starting with designing the data pipeline that takes images of cats and dogs and converts them to TFRecords stored on Cloud Storage.
Data pipeline with TensorFlow
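The TFRecord step can be sketched as follows; this is not Sayak's actual code, and the file path, feature names, and dummy image bytes are illustrative (real code would read the cat/dog JPEGs and write to a `gs://` path):

```python
# Hedged sketch: serialize (image, label) pairs to a TFRecord file,
# then read them back with tf.data.
import tensorflow as tf

def make_example(image_bytes, label):
    feature = {
        "image": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        "label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Write a couple of dummy records (0 = cat, 1 = dog).
path = "/tmp/train.tfrecord"
with tf.io.TFRecordWriter(path) as writer:
    for label in (0, 1):
        writer.write(
            make_example(b"fake-jpeg-bytes", label).SerializeToString())

feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(record):
    return tf.io.parse_single_example(record, feature_spec)

dataset = tf.data.TFRecordDataset(path).map(parse).batch(2)
for batch in dataset:
    print(batch["label"].numpy())  # [0 1]
```

Keeping the training data in TFRecords on Cloud Storage lets the containerized training job stream it efficiently with `tf.data`.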
Also, his published repository contains all the code required to implement the workflow, with rich documentation explaining how the files are organized and packaged into a Docker container to be submitted to AI Platform Training.
Dockerfile for the container packaging
Training logs on Cloud Logging
If you are a TensorFlow user, Sayak’s post could be the best way to learn what benefits you can get from AI Platform and how to get started with actual sample code.
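For a sense of what the container packaging involves, a minimal Dockerfile for such a custom training container might look like the following sketch; the base image, paths, and module name are assumptions for illustration, not Sayak's actual Dockerfile:

```dockerfile
# Illustrative sketch only; see the repo for the real Dockerfile.
FROM tensorflow/tensorflow:2.4.1-gpu

WORKDIR /root

# Copy the training package into the image.
COPY trainer/ /root/trainer/
RUN pip install --no-cache-dir google-cloud-storage

# AI Platform Training runs this as the container's entry point.
ENTRYPOINT ["python", "-m", "trainer.task"]
```

The built image is pushed to a container registry and then referenced when submitting the job, e.g. via `gcloud ai-platform jobs submit training` with the `--master-image-uri` flag.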
SNS curation with AI Platform + GKE
Chansung Park’s project Curated Personal Newsletter is a great sample, with an actual demo app and source code, that aims at “collecting all the posts from one’s SNS wall (including personal note/shared/retweeted)” and then sending an automatically curated periodic newsletter.
The system combines AI Platform Training and Prediction with Google Kubernetes Engine to build an end-to-end MLOps pipeline for continuous training and deployment whenever a new version of the data or the model code is integrated.
Although the project is still in development, it is a useful example of an end-to-end ML pipeline built with various Google Cloud services. Chansung also published a great write-up, MLOps in Google Cloud, which helps you understand how to build a production ML pipeline with various Cloud AI tools.
If you are interested in joining a community near you, please check the Google Cloud community page to find relevant information on meetups, tutorials, and discussions.
If you share the same passion for sharing your Cloud AI knowledge and experience with fellow developers and are interested in joining the ML GDE network, please check the GDE Program website, watch the ML GDE program intro video, and send an email to [email protected] with your introduction and relevant activity information.