
How you can automate ML experiment tracking with Vertex AI Experiments autologging

Practical machine learning (ML) is a trial-and-error process. ML practitioners compare performance metrics across many experiment runs until they find the best model for a given set of parameters. Because of this experimental nature, there are many reasons to track ML experiments and make them reproducible, including debugging and compliance.

But tracking experiments is challenging: you need to organize them so that other team members can quickly understand, reproduce, and compare them. That adds overhead you don't need.

We are happy to announce Vertex AI Experiments autologging, a solution that provides automated experiment tracking for your models and streamlines your ML experimentation.

With Vertex AI Experiments autologging, you can now log parameters, performance metrics, and lineage artifacts by adding a single line of code to your training script, without explicitly calling any other logging methods.

How to use Vertex AI autologging

As a data scientist or ML practitioner, you conduct your experiment in a notebook environment such as Colab or Vertex AI Workbench. To enable Vertex AI Experiments autologging, you call aiplatform.autolog() in your Vertex AI Experiment session. After that call, any parameters, metrics, and artifacts associated with model training are automatically logged and then accessible within the Vertex AI Experiment console.
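For context, here is a minimal sketch of setting up an experiment session before enabling autologging; the project, region, and experiment names below are placeholders, not values from this post:

from google.cloud import aiplatform

# Initialize the Vertex AI SDK against a project, region, and experiment.
# These values are placeholders; substitute your own.
aiplatform.init(
    project="your-project-id",
    location="us-central1",
    experiment="your-experiment-name",
)

# One line to enable automated experiment tracking
aiplatform.autolog()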

Here’s how to enable autologging in your training session with a scikit-learn model.

from google.cloud import aiplatform
from sklearn.pipeline import Pipeline

# Enable autologging
aiplatform.autolog()

# Build training pipeline
ml_pipeline = Pipeline(...)

# Train model
ml_pipeline.fit(x_train, y_train)

This video shows parameters and training/post-training metrics in the Vertex AI Experiment console.

Vertex AI Experiments – Autologging

Vertex AI SDK autologging uses MLflow’s autologging in its implementation, and it supports several frameworks including XGBoost, Keras, and PyTorch Lightning. See the documentation for all supported frameworks.

Vertex AI Experiments autologging also automatically logs model time series metrics when you train models across multiple epochs, thanks to the integration between Vertex AI Experiments autologging and Vertex AI TensorBoard.
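As an illustration, here is a hedged sketch with Keras: with autologging enabled, each epoch’s metrics from fit() are captured as time series. The model and data below are made up purely for the example:

import numpy as np
import tensorflow as tf
from google.cloud import aiplatform

# Assumes aiplatform.init(...) has already set project, location, and experiment
aiplatform.autolog()

# Toy model and data, purely illustrative
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(100, 4)
y = np.random.rand(100, 1)

# Each epoch's loss is logged as a time series metric in the experiment run
model.fit(x, y, epochs=5)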

Furthermore, you can adapt Vertex AI Experiments autologging to your needs. For example, say your team has a specific experiment naming convention. By default, Vertex AI Experiments autologging creates Experiment Runs for you without requiring you to call aiplatform.start_run() or aiplatform.end_run(). If you’d like to specify your own Experiment Run names, you can manually initialize a specific run within the experiment using aiplatform.start_run() and aiplatform.end_run() after autologging has been enabled, as sketched below.
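Here is a minimal sketch of that pattern; the run name is a hypothetical example of a team convention, not one from this post:

# Enable autologging first
aiplatform.autolog()

# Start a run with an explicit name matching your convention
# ("sklearn-baseline-v1" is a hypothetical example)
aiplatform.start_run("sklearn-baseline-v1")

# Anything trained here is logged under the named run
ml_pipeline.fit(x_train, y_train)

# Close the run when the experiment is done
aiplatform.end_run()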

What’s next

You can access Vertex AI Experiments autologging with the latest version of the Vertex AI SDK for Python. To learn more, check out these resources:

Documentation: Autolog data to an experiment run

GitHub: Get started with Vertex AI Experiments autologging

While I’m thinking about the next blog post, let me know if there’s Vertex AI content you’d like to see on LinkedIn or Twitter.
