
Easier experimenting in Python



Last Updated on February 23, 2022

When we work on a machine learning project, we often need to experiment with multiple alternatives. Some features in Python allow us to try out different options without much effort. In this tutorial, we will look at some tips for making our experiments faster.

After finishing this tutorial, you will learn:

How to leverage duck typing to easily swap functions and objects
How making components drop-in replacements for each other can speed up experiments

Let’s get started.

Easier experimenting in Python. Photo by Jake Givens. Some rights reserved.

Overview

This tutorial is in three parts; they are:

Workflow of a machine learning project
Functions as objects
Caveats

Workflow of a machine learning project

Consider a very simple machine learning project, as follows:

from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
y = array[:,4]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.20, random_state=1, shuffle=True)

# Train
clf = SVC()
clf.fit(X_train, y_train)

# Test
score = clf.score(X_val, y_val)
print("Validation accuracy", score)

This is a typical machine learning project workflow: we preprocess the data, train a model, and then evaluate the result. But at each step, we may want to try something different. For example, we may wonder whether normalizing the data would improve the result. So we may rewrite the code above into the following:

from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
y = array[:,4]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.20, random_state=1, shuffle=True)

# Train
clf = Pipeline([('scaler',StandardScaler()), ('classifier',SVC())])
clf.fit(X_train, y_train)

# Test
score = clf.score(X_val, y_val)
print("Validation accuracy", score)

So far so good. But what if we keep experimenting with different datasets, different models, or different score functions? Flipping back and forth between using a scaler and not using one would mean a lot of code changes each time, and it would be quite easy to make mistakes.

Because Python supports duck typing, we can see that the following two classifier models implement the same interface:

clf = SVC()
clf = Pipeline([('scaler',StandardScaler()), ('classifier',SVC())])

Therefore, we can simply select between these two versions and keep everything else intact. We can say these two models are drop-in replacements for each other.
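As a quick illustration, the following sketch (assuming the X_train/X_val split from the script above) treats both versions identically; nothing downstream needs to know which one was chosen:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Both objects expose the same fit()/score() interface, so the
# surrounding code works with either one unchanged
for candidate in [SVC(),
                  Pipeline([('scaler',StandardScaler()), ('classifier',SVC())])]:
    candidate.fit(X_train, y_train)
    print(type(candidate).__name__, candidate.score(X_val, y_val))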

Making use of this property, we can create a toggle variable to control the design choice we make:

USE_SCALER = True

if USE_SCALER:
    clf = Pipeline([('scaler',StandardScaler()), ('classifier',SVC())])
else:
    clf = SVC()

By toggling the variable USE_SCALER between True and False, we can select whether a scaler should be applied. A more complex example would be to select among different scalers and classifier models, such as:

SCALER = "standard"
CLASSIFIER = "svc"

if CLASSIFIER == "svc":
    model = SVC()
elif CLASSIFIER == "cart":
    model = DecisionTreeClassifier()
else:
    raise NotImplementedError

if SCALER == "standard":
    clf = Pipeline([('scaler',StandardScaler()), ('classifier',model)])
elif SCALER == "maxmin":
    clf = Pipeline([('scaler',MinMaxScaler()), ('classifier',model)])
elif SCALER is None:
    clf = model
else:
    raise NotImplementedError

A complete example is as follows:

from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Toggle between options
SCALER = "maxmin"    # "standard", "maxmin", or None
CLASSIFIER = "cart"  # "svc" or "cart"

# Load dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
y = array[:,4]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.20, random_state=1, shuffle=True)

# Create model
if CLASSIFIER == "svc":
    model = SVC()
elif CLASSIFIER == "cart":
    model = DecisionTreeClassifier()
else:
    raise NotImplementedError

if SCALER == "standard":
    clf = Pipeline([('scaler',StandardScaler()), ('classifier',model)])
elif SCALER == "maxmin":
    clf = Pipeline([('scaler',MinMaxScaler()), ('classifier',model)])
elif SCALER is None:
    clf = model
else:
    raise NotImplementedError

# Train
clf.fit(X_train, y_train)

# Test
score = clf.score(X_val, y_val)
print("Validation accuracy", score)
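As the number of options grows, the if/elif chains can get long. One variation (a sketch, not code from the original script) is to map option names to the classes implementing them with dictionaries, so every choice lives in one place:

# Map option names to classes; an unknown name raises KeyError,
# which plays the same role as NotImplementedError above
classifiers = {"svc": SVC, "cart": DecisionTreeClassifier}
scalers = {"standard": StandardScaler, "maxmin": MinMaxScaler}

model = classifiers[CLASSIFIER]()
if SCALER is None:
    clf = model
else:
    clf = Pipeline([('scaler',scalers[SCALER]()), ('classifier',model)])

This behaves the same as the if/elif version above, reusing the imports and toggle variables from the complete example.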

Functions as objects

In Python, functions are first-class citizens: you can assign a function to a variable. Indeed, functions are objects in Python, and so are classes (the classes themselves, not just their instances). Therefore, we can use the same technique as above to experiment with similar functions.

import numpy as np

DIST = "normal"

if DIST == "normal":
    rangen = np.random.normal
elif DIST == "uniform":
    rangen = np.random.uniform
else:
    raise NotImplementedError

random_data = rangen(size=(10,5))
print(random_data)

The above is similar to calling np.random.normal(size=(10,5)), but we hold the function in a variable for the convenience of swapping one function for another. Note that since we call the functions with the same argument, we have to make sure all variations accept it. If not, we may need a few additional lines of code to make a wrapper. For example, to generate samples from Student's t-distribution, we need an additional parameter for the degrees of freedom:

import numpy as np

DIST = "t"

if DIST == "normal":
    rangen = np.random.normal
elif DIST == "uniform":
    rangen = np.random.uniform
elif DIST == "t":
    def t_wrapper(size):
        # Student's t-distribution with 3 degrees of freedom
        return np.random.standard_t(df=3, size=size)
    rangen = t_wrapper
else:
    raise NotImplementedError

random_data = rangen(size=(10,5))
print(random_data)

This works because np.random.normal, np.random.uniform, and the t_wrapper we defined are all drop-in replacements for each other.
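The same trick applies to classes: a class is itself an object that can be assigned to a variable and called later to create instances. A minimal sketch, assuming the scikit-learn classifiers used earlier:

from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

CLASSIFIER = "cart"

# Assign the class itself to a variable; no instance is created yet
if CLASSIFIER == "svc":
    algorithm = SVC
elif CLASSIFIER == "cart":
    algorithm = DecisionTreeClassifier
else:
    raise NotImplementedError

# Instantiate only when needed, possibly more than once
model = algorithm()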

Caveats

Machine learning projects differ from other programming projects because the workflow carries more uncertainty. When you build a web page or a game, you have a picture in your mind of what you want to achieve. But in machine learning projects, there is exploratory work involved.

In other projects, you will probably use a source code control system such as git or Mercurial to manage your development history. In machine learning projects, however, we try out different combinations of many steps. Using git to manage these variations may not fit well and can even be overkill. Therefore, using a toggle variable to control the flow lets us try out different things faster. This is especially handy when we are working in Jupyter notebooks.

However, as we put multiple versions of code together, the program becomes clumsy and less readable. It is better to do some cleanup after we have confirmed what to do. This will help with maintenance in the future.

Further reading

This section provides more resources on the topic if you are looking to go deeper.

Books

Fluent Python, second edition, by Luciano Ramalho, https://www.amazon.com/dp/1492056359/

Summary

In this tutorial, you've seen how the duck-typing property of Python helps us create drop-in replacements. Specifically, you learned:

Duck typing can help us switch between alternatives easily in a machine learning workflow
We can use a toggle variable to experiment with alternatives


