
The Attention Mechanism from Scratch



Last Updated on September 20, 2021

The attention mechanism was introduced to improve the performance of the encoder-decoder model for machine translation. The idea behind the attention mechanism was to permit the decoder to utilize the most relevant parts of the input sequence in a flexible manner, through a weighted combination of all the encoded input vectors, with the most relevant vectors receiving the highest weights.

In this tutorial, you will discover the attention mechanism and its implementation. 

After completing this tutorial, you will know:

How the attention mechanism uses a weighted sum of all of the encoder hidden states to flexibly focus the attention of the decoder on the most relevant parts of the input sequence.
How the attention mechanism can be generalized for tasks where the information may not necessarily be related in a sequential fashion.
How to implement the general attention mechanism in Python with NumPy and SciPy. 

Let’s get started. 

Photo by Nitish Meena, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

The Attention Mechanism
The General Attention Mechanism
The General Attention Mechanism with NumPy and SciPy

The Attention Mechanism

The attention mechanism was introduced by Bahdanau et al. (2014), to address the bottleneck problem that arises with the use of a fixed-length encoding vector, where the decoder would have limited access to the information provided by the input. This is thought to become especially problematic for long and/or complex sequences, where the dimensionality of their representation would be forced to be the same as for shorter or simpler sequences.

Bahdanau et al.'s attention mechanism is divided into the step-by-step computations of the alignment scores, the weights, and the context vector:

Alignment scores: The alignment model takes the encoded hidden states, $\mathbf{h}_i$, and the previous decoder output, $\mathbf{s}_{t-1}$, to compute a score, $e_{t,i}$, that indicates how well the elements of the input sequence align with the current output at position $t$. The alignment model is represented by a function, $a(\cdot)$, which can be implemented by a feedforward neural network:

$$e_{t,i} = a(\mathbf{s}_{t-1}, \mathbf{h}_i)$$

Weights: The weights, $\alpha_{t,i}$, are computed by applying a softmax operation to the previously computed alignment scores:

$$\alpha_{t,i} = \text{softmax}(e_{t,i})$$

Context vector: A unique context vector, $\mathbf{c}_t$, is fed into the decoder at each time step. It is computed by a weighted sum of all $T$ encoder hidden states:

$$\mathbf{c}_t = \sum_{i=1}^T \alpha_{t,i} \mathbf{h}_i$$
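
To make these three steps concrete, the following is a minimal NumPy sketch of a single decoder step. It assumes the additive feedforward alignment model of Bahdanau et al., with the parameters W_a, U_a, and v_a initialized randomly here purely for illustration; in practice, they would be learned during training.

from numpy import random, tanh, exp

random.seed(42)

T, d = 4, 3              # sequence length and state dimensionality
h = random.rand(T, d)    # encoder hidden states, h_1 ... h_T
s_prev = random.rand(d)  # previous decoder output, s_{t-1}

# hypothetical alignment model parameters (learned in practice)
W_a = random.rand(d, d)
U_a = random.rand(d, d)
v_a = random.rand(d)

# alignment scores: e_{t,i} = a(s_{t-1}, h_i)
e = tanh(s_prev @ W_a + h @ U_a) @ v_a

# weights: softmax over the alignment scores
alpha = exp(e) / exp(e).sum()

# context vector: weighted sum of all T encoder hidden states
c = alpha @ h
print(c)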

Bahdanau et al. implemented an RNN for both the encoder and the decoder.

However, the attention mechanism can be reformulated into a general form that can be applied to any sequence-to-sequence (abbreviated to seq2seq) task, where the information may not necessarily be related in a sequential fashion.

In other words, the database doesn’t have to consist of the hidden RNN states at different steps, but could contain any kind of information instead.

– Advanced Deep Learning with Python, 2019.

The General Attention Mechanism

The general attention mechanism makes use of three main components, namely the queries, $\mathbf{Q}$, the keys, $\mathbf{K}$, and the values, $\mathbf{V}$.

If we had to compare these three components to the attention mechanism as proposed by Bahdanau et al., then the query would be analogous to the previous decoder output, $\mathbf{s}_{t-1}$, while the values would be analogous to the encoded inputs, $\mathbf{h}_i$. In the Bahdanau attention mechanism, the keys and values are the same vector.

In this case, we can think of the vector $\mathbf{s}_{t-1}$ as a query executed against a database of key-value pairs, where the keys are vectors and the hidden states $\mathbf{h}_i$ are the values.

– Advanced Deep Learning with Python, 2019.
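
To make the database analogy concrete, here is a toy sketch with made-up numbers. A hard lookup would retrieve only the single best-matching value, whereas attention retrieves a softmax-weighted blend of all the values:

from numpy import array
from scipy.special import softmax

# a toy database of three key-value pairs
keys = array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = array([10.0, 20.0, 30.0])

query = array([1.0, 0.2])
scores = keys @ query

# a hard lookup returns only the single best-matching value
hard = values[scores.argmax()]

# attention instead returns a weighted blend of all the values
soft = softmax(scores) @ values

print(hard, soft)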

The general attention mechanism then performs the following computations:

Each query vector, $\mathbf{q} = \mathbf{s}_{t-1}$, is matched against a database of keys to compute a score value. This matching operation is computed as the dot product of the specific query under consideration with each key vector, $\mathbf{k}_i$:

$$e_{\mathbf{q},\mathbf{k}_i} = \mathbf{q} \cdot \mathbf{k}_i$$

The scores are passed through a softmax operation to generate the weights:

$$\alpha_{\mathbf{q},\mathbf{k}_i} = \text{softmax}(e_{\mathbf{q},\mathbf{k}_i})$$

The generalized attention is then computed by a weighted sum of the value vectors, $\mathbf{v}_{\mathbf{k}_i}$, where each value vector is paired with a corresponding key:

$$\text{attention}(\mathbf{q}, \mathbf{K}, \mathbf{V}) = \sum_i \alpha_{\mathbf{q},\mathbf{k}_i} \mathbf{v}_{\mathbf{k}_i}$$

Within the context of machine translation, each word in an input sentence would be attributed its own query, key, and value vectors. These vectors are generated by multiplying the encoder's representation of the specific word under consideration with three different weight matrices that are learned during training.

In essence, when the generalized attention mechanism is presented with a sequence of words, it takes the query vector attributed to some specific word in the sequence and scores it against each key in the database. In doing so, it captures how the word under consideration relates to the others in the sequence. It then scales the values according to the attention weights (computed from the scores), in order to retain focus on those words that are relevant to the query, and produces an attention output for the word under consideration.

The General Attention Mechanism with NumPy and SciPy

In this section, we will explore how to implement the general attention mechanism using the NumPy and SciPy libraries in Python. 

For simplicity, we will initially calculate the attention for the first word in a sequence of four. We will then generalize the code to calculate an attention output for all four words in matrix form. 

Hence, let’s start by first defining the word embeddings of the four different words for which we will be calculating the attention. In actual practice, these word embeddings would have been generated by an encoder, however for this particular example we shall be defining them manually. 

from numpy import array

# encoder representations of four different words
word_1 = array([1, 0, 0])
word_2 = array([0, 1, 0])
word_3 = array([1, 1, 0])
word_4 = array([0, 0, 1])

The next step generates the weight matrices, which we will eventually be multiplying to the word embeddings to generate the queries, keys and values. Here, we shall be generating these weight matrices randomly, however in actual practice these would have been learned during training. 


from numpy import random

# generating the weight matrices
random.seed(42)  # to allow us to reproduce the same attention values
W_Q = random.randint(3, size=(3, 3))
W_K = random.randint(3, size=(3, 3))
W_V = random.randint(3, size=(3, 3))

Notice how the number of rows of each of these matrices is equal to the dimensionality of the word embeddings (which in this case is three) to allow us to perform the matrix multiplication.
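
As a quick check, the shapes line up as follows (continuing from the snippets above):

print(W_Q.shape)             # (3, 3): rows match the embedding dimensionality
print((word_1 @ W_Q).shape)  # (3,): one query vector per word embedding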

Subsequently, the query, key and value vectors for each word are generated by multiplying each word embedding by each of the weight matrices. 


# generating the queries, keys and values
query_1 = word_1 @ W_Q
key_1 = word_1 @ W_K
value_1 = word_1 @ W_V

query_2 = word_2 @ W_Q
key_2 = word_2 @ W_K
value_2 = word_2 @ W_V

query_3 = word_3 @ W_Q
key_3 = word_3 @ W_K
value_3 = word_3 @ W_V

query_4 = word_4 @ W_Q
key_4 = word_4 @ W_K
value_4 = word_4 @ W_V

Considering only the first word for the time being, the next step scores its query vector against all of the key vectors using a dot product operation. 


from numpy import dot

# scoring the first query vector against all key vectors
scores = array([dot(query_1, key_1), dot(query_1, key_2), dot(query_1, key_3), dot(query_1, key_4)])

The score values are subsequently passed through a softmax operation to generate the weights. Before doing so, it is common practice to divide the score values by the square root of the dimensionality of the key vectors (in this case, three), to keep the gradients stable. 
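
In equation form, this scaled dot-product weighting is:

$$\alpha_{\mathbf{q},\mathbf{k}_i} = \text{softmax}\left(\frac{\mathbf{q} \cdot \mathbf{k}_i}{\sqrt{d_k}}\right)$$

where $d_k$ is the dimensionality of the key vectors.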


from scipy.special import softmax

# computing the weights by a softmax operation
weights = softmax(scores / key_1.shape[0] ** 0.5)

Finally, the attention output is calculated by a weighted sum of all four value vectors. 


# computing the attention by a weighted sum of the value vectors
attention = (weights[0] * value_1) + (weights[1] * value_2) + (weights[2] * value_3) + (weights[3] * value_4)

print(attention)

[0.98522025 1.74174051 0.75652026]

For faster processing, the same calculations can be implemented in matrix form to generate an attention output for all four words in one go:

from numpy import array
from numpy import random
from numpy import dot
from scipy.special import softmax

# encoder representations of four different words
word_1 = array([1, 0, 0])
word_2 = array([0, 1, 0])
word_3 = array([1, 1, 0])
word_4 = array([0, 0, 1])

# stacking the word embeddings into a single array
words = array([word_1, word_2, word_3, word_4])

# generating the weight matrices
random.seed(42)
W_Q = random.randint(3, size=(3, 3))
W_K = random.randint(3, size=(3, 3))
W_V = random.randint(3, size=(3, 3))

# generating the queries, keys and values
Q = words @ W_Q
K = words @ W_K
V = words @ W_V

# scoring the query vectors against all key vectors
scores = Q @ K.transpose()

# computing the weights by a softmax operation
weights = softmax(scores / K.shape[1] ** 0.5, axis=1)

# computing the attention by a weighted sum of the value vectors
attention = weights @ V

print(attention)

[[0.98522025 1.74174051 0.75652026]
[0.90965265 1.40965265 0.5 ]
[0.99851226 1.75849334 0.75998108]
[0.99560386 1.90407309 0.90846923]]
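
As a sanity check, the first row of this result matches the attention output computed for the first word earlier, since the matrix form simply performs the same per-word computation for all four words at once. For example, appending the following to the script above confirms this:

from numpy import allclose

# the batched result for word_1 equals the step-by-step result
print(allclose(attention[0], [0.98522025, 1.74174051, 0.75652026]))  # True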

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

Advanced Deep Learning with Python, 2019.
Deep Learning Essentials, 2018.

Papers

Neural Machine Translation by Jointly Learning to Align and Translate, 2014.

Summary

In this tutorial, you discovered the attention mechanism and its implementation.

Specifically, you learned:

How the attention mechanism uses a weighted sum of all of the encoder hidden states to flexibly focus the attention of the decoder on the most relevant parts of the input sequence.
How the attention mechanism can be generalized for tasks where the information may not necessarily be related in a sequential fashion.
How to implement the general attention mechanism with NumPy and SciPy.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


