
Using Learning Rate Schedules for Deep Learning Models in Python with Keras



Last Updated on July 12, 2022

Training a neural network or large deep learning model is a difficult optimization task.

The classical algorithm to train neural networks is called stochastic gradient descent. It has been well established that you can achieve increased performance and faster training on some problems by using a learning rate that changes during training.

In this post you will discover how you can use different learning rate schedules for your neural network models in Python using the Keras deep learning library.

After reading this post you will know:

How to configure and evaluate a time-based learning rate schedule.
How to configure and evaluate a drop-based learning rate schedule.

Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Jun/2016: First published
Update Mar/2017: Updated for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
Update Sep/2019: Updated for Keras 2.2.5 API.
Update Jul/2022: Updated for TensorFlow 2.x API

Using Learning Rate Schedules for Deep Learning Models in Python with Keras
Photo by Columbia GSAPP, some rights reserved.

Learning Rate Schedule For Training Models

Adapting the learning rate for your stochastic gradient descent optimization procedure can increase performance and reduce training time.

Sometimes this is called learning rate annealing or adaptive learning rates. Here we will call this approach a learning rate schedule, where the default schedule is to use a constant learning rate to update network weights for each training epoch.

The simplest and perhaps most commonly used adaptation of the learning rate during training is a technique that reduces the learning rate over time. This has the benefit of making large changes at the beginning of the training procedure, when larger learning rate values are used, and decreasing the learning rate so that smaller updates are made to the weights later in the training procedure.

This has the effect of quickly learning good weights early and fine tuning them later.

Two popular and easy to use learning rate schedules are as follows:

Decrease the learning rate gradually based on the epoch.
Decrease the learning rate using punctuated large drops at specific epochs.

Next, we will look at how you can use each of these learning rate schedules in turn with Keras.

Need help with Deep Learning in Python?

Take my free 2-week email course and discover MLPs, CNNs and LSTMs (with code).

Click to sign-up now and also get a free PDF Ebook version of the course.

Time-Based Learning Rate Schedule

Keras has a time-based learning rate schedule built in.

The stochastic gradient descent optimization algorithm implementation in the SGD class has an argument called decay. This argument is used in the time-based learning rate decay schedule equation as follows:

LearningRate = LearningRate * 1/(1 + decay * epoch)

When the decay argument is zero (the default), this has no effect on the learning rate.

LearningRate = 0.1 * 1/(1 + 0.0 * 1)
LearningRate = 0.1

When a non-zero decay argument is specified, the learning rate from the previous epoch is reduced according to the equation above.

For example, if we use the initial learning rate value of 0.1 and the decay of 0.001, the first 5 epochs will adapt the learning rate as follows:

Epoch Learning Rate
1 0.1
2 0.0999000999
3 0.0997006985
4 0.09940249103
5 0.09900646517
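
To make the arithmetic concrete, the snippet below is a minimal sketch that reproduces the first five values in this table, assuming the decay is applied once per epoch as described in this post and using the same initial learning rate of 0.1 and decay of 0.001.

# Sketch: reproduce the first five learning rates of the time-based schedule
# (assumes the decay is applied once per epoch, as described above)
learning_rate = 0.1
decay = 0.001
print(1, learning_rate)
for epoch in range(1, 5):
    learning_rate = learning_rate * 1.0 / (1.0 + decay * epoch)
    print(epoch + 1, learning_rate)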

Extending this out to 100 epochs will produce the following graph of learning rate (y axis) versus epoch (x axis):

Time-Based Learning Rate Schedule

You can create a nice default schedule by setting the decay value as follows:

Decay = LearningRate / Epochs
Decay = 0.1 / 100
Decay = 0.001

The example below demonstrates using the time-based learning rate adaptation schedule in Keras.

It is demonstrated on the Ionosphere binary classification problem. This is a small dataset that you can download from the UCI Machine Learning repository. Place the data file in your working directory with the filename ionosphere.csv.

The ionosphere dataset is good for practicing with neural networks because all of the input values are small numerical values of the same scale.

A small neural network model is constructed with a single hidden layer of 34 neurons, using the rectifier activation function. The output layer has a single neuron and uses the sigmoid activation function in order to output probability-like values.

The learning rate for stochastic gradient descent has been set to a higher value of 0.1. The model is trained for 50 epochs and the decay argument has been set to 0.002, calculated as 0.1/50. Additionally, it can be a good idea to use momentum when using an adaptive learning rate. In this case we use a momentum value of 0.8.

The complete example is listed below.

# Time Based Learning Rate Decay
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from sklearn.preprocessing import LabelEncoder
# load dataset
dataframe = read_csv("ionosphere.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:34].astype(float)
Y = dataset[:,34]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
Y = encoder.transform(Y)
# create model
model = Sequential()
model.add(Dense(34, input_shape=(34,), activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
epochs = 50
learning_rate = 0.1
decay_rate = learning_rate / epochs
momentum = 0.8
sgd = SGD(learning_rate=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=epochs, batch_size=28, verbose=2)

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The model is trained on 67% of the dataset and evaluated using a 33% validation dataset.

Running the example shows a classification accuracy of 99.14% on the validation dataset. This is higher than the baseline of 95.69% achieved without the learning rate decay or momentum.


Epoch 45/50
0s – loss: 0.0622 – acc: 0.9830 – val_loss: 0.0929 – val_acc: 0.9914
Epoch 46/50
0s – loss: 0.0695 – acc: 0.9830 – val_loss: 0.0693 – val_acc: 0.9828
Epoch 47/50
0s – loss: 0.0669 – acc: 0.9872 – val_loss: 0.0616 – val_acc: 0.9828
Epoch 48/50
0s – loss: 0.0632 – acc: 0.9830 – val_loss: 0.0824 – val_acc: 0.9914
Epoch 49/50
0s – loss: 0.0590 – acc: 0.9830 – val_loss: 0.0772 – val_acc: 0.9828
Epoch 50/50
0s – loss: 0.0592 – acc: 0.9872 – val_loss: 0.0639 – val_acc: 0.9828

Drop-Based Learning Rate Schedule

Another popular learning rate schedule used with deep learning models is to systematically drop the learning rate at specific times during training.

Often this method is implemented by dropping the learning rate by half every fixed number of epochs. For example, we may have an initial learning rate of 0.1 and drop it by a factor of 0.5 every 10 epochs. The first 10 epochs of training would use a value of 0.1; in the next 10 epochs, a learning rate of 0.05 would be used, and so on.

If we plot the learning rates for this example out to 100 epochs, we get the graph below showing learning rate (y axis) versus epoch (x axis).

Drop Based Learning Rate Schedule

We can implement this in Keras using the LearningRateScheduler callback when fitting the model.

The LearningRateScheduler callback allows us to define a function to call that takes the epoch number as an argument and returns the learning rate to use in stochastic gradient descent. When used, the learning rate specified by stochastic gradient descent is ignored.

In the code below, we use the same example as before: a single hidden layer network on the Ionosphere dataset. A new step_decay() function is defined that implements the equation:

LearningRate = InitialLearningRate * DropRate^floor(Epoch / EpochDrop)

where InitialLearningRate is the initial learning rate (such as 0.1), DropRate is the factor by which the learning rate is modified each time it is changed (such as 0.5), Epoch is the current epoch number, and EpochDrop is how often to change the learning rate (such as every 10 epochs).
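
For example, with an initial learning rate of 0.1, a drop rate of 0.5, and a drop every 10 epochs, epoch 25 would use a learning rate of 0.1 * 0.5^floor(25/10) = 0.1 * 0.25 = 0.025.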

Notice that we set the learning rate in the SGD class to 0 to clearly indicate that it is not used. Nevertheless, you can set a momentum term in SGD if you want to use momentum with this learning rate schedule.

# Drop-Based Learning Rate Decay
from pandas import read_csv
import math
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.callbacks import LearningRateScheduler

# learning rate schedule
def step_decay(epoch):
    initial_lrate = 0.1
    drop = 0.5
    epochs_drop = 10.0
    lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate

# load dataset
dataframe = read_csv("ionosphere.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:34].astype(float)
Y = dataset[:,34]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
Y = encoder.transform(Y)
# create model
model = Sequential()
model.add(Dense(34, input_shape=(34,), activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
sgd = SGD(learning_rate=0.0, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
# learning schedule callback
lrate = LearningRateScheduler(step_decay)
callbacks_list = [lrate]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=50, batch_size=28, callbacks=callbacks_list, verbose=2)

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example results in a classification accuracy of 99.14% on the validation dataset, again an improvement over the baseline for the model on the problem.


Epoch 45/50
0s – loss: 0.0546 – acc: 0.9830 – val_loss: 0.0634 – val_acc: 0.9914
Epoch 46/50
0s – loss: 0.0544 – acc: 0.9872 – val_loss: 0.0638 – val_acc: 0.9914
Epoch 47/50
0s – loss: 0.0553 – acc: 0.9872 – val_loss: 0.0696 – val_acc: 0.9914
Epoch 48/50
0s – loss: 0.0537 – acc: 0.9872 – val_loss: 0.0675 – val_acc: 0.9914
Epoch 49/50
0s – loss: 0.0537 – acc: 0.9872 – val_loss: 0.0636 – val_acc: 0.9914
Epoch 50/50
0s – loss: 0.0534 – acc: 0.9872 – val_loss: 0.0679 – val_acc: 0.9914

Tips for Using Learning Rate Schedules

This section lists some tips and tricks to consider when using learning rate schedules with neural networks.

Increase the initial learning rate. Because the learning rate will very likely decrease, start with a larger value to decrease from. A larger learning rate will result in much larger changes to the weights, at least in the beginning, allowing you to benefit from fine-tuning later.
Use a large momentum. Using a larger momentum value will help the optimization algorithm to continue to make updates in the right direction when your learning rate shrinks to small values.
Experiment with different schedules. It will not be clear which learning rate schedule to use, so try a few with different configuration options and see what works best on your problem. Also try schedules that change exponentially (a minimal sketch is shown after this list) and even schedules that respond to the accuracy of your model on the training or test datasets.
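
As an illustration of the last tip, below is a minimal sketch of an exponential schedule plugged into the same LearningRateScheduler callback used earlier. The exp_decay() name and the decay constant k are illustrative assumptions, not values taken from the examples above.

# Sketch: an exponential learning rate schedule (illustrative only;
# the exp_decay name and the constant k are assumptions, not from this post)
import math
from tensorflow.keras.callbacks import LearningRateScheduler

def exp_decay(epoch):
    initial_lrate = 0.1
    k = 0.1
    # the learning rate shrinks smoothly as the epoch number grows
    return initial_lrate * math.exp(-k * epoch)

# pass callbacks=[lrate] to model.fit() exactly as in the drop-based example above
lrate = LearningRateScheduler(exp_decay)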

Summary

In this post you discovered learning rate schedules for training neural network models.

After reading this post you learned:

How to configure and use a time-based learning rate schedule in Keras.
How to develop your own drop-based learning rate schedule in Keras.

Do you have any questions about learning rate schedules for neural networks or about this post? Ask your question in the comments and I will do my best to answer.


