
Introduction to the Python Deep Learning Library TensorFlow



Last Updated on June 15, 2022

TensorFlow is a Python library for fast numerical computing created and released by Google.

It is a foundation library that can be used to create Deep Learning models directly or through wrapper libraries built on top of TensorFlow that simplify the process.

In this post you will discover the TensorFlow library for Deep Learning.

Updates:

Jun 2022: Update to TensorFlow 2.x

Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by Nicolas Raymond, some rights reserved.

What is TensorFlow?

TensorFlow is an open source library for fast numerical computing.

It was created and is maintained by Google and released under the Apache 2.0 open source license. The API is nominally for the Python programming language, although there is access to the underlying C++ API.

Unlike other numerical libraries intended for use in Deep Learning, such as Theano, TensorFlow was designed for use both in research and development and in production systems, including RankBrain in Google search and the fun DeepDream project.

It can run on single-CPU systems and GPUs, as well as on mobile devices and large-scale distributed systems of hundreds of machines.

How to Install TensorFlow

Installation of TensorFlow is straightforward if you already have a Python SciPy environment.

TensorFlow 2.x works with Python 3.7 or later. You can follow the Download and Setup instructions on the TensorFlow website. Installation is probably simplest via PyPI, and specific instructions for the pip command to use on your Linux or macOS platform are on the Download and Setup webpage. In the simplest case, you just need to enter the following on your command line:

pip install tensorflow

The exception is a newer Mac with an Apple Silicon CPU. For this architecture, the package name is tensorflow-macos instead:

pip install tensorflow-macos

There are also virtualenv and Docker images that you can use if you prefer.

To make use of the GPU, you also need to have the CUDA Toolkit installed.
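
Once TensorFlow is installed, a quick sanity check is to list the GPUs it can see. This is a minimal sketch, assuming a standard TensorFlow 2.x installation; an empty list simply means computation will fall back to the CPU:

import tensorflow as tf

# List the GPU devices TensorFlow detects; an empty list means it will use the CPU
gpus = tf.config.list_physical_devices('GPU')
print("GPUs detected:", gpus)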

Your First Examples in TensorFlow

Computation is described in terms of data flow and operations in the structure of a directed graph.

Nodes: Nodes perform computation and have zero or more inputs and outputs. The data that move between nodes are known as tensors, which are multi-dimensional arrays of real values.
Edges: The graph defines the flow of data, branching, looping, and updates to state. Special edges can be used to synchronize behavior within the graph, for example, waiting for computation on a number of inputs to complete.
Operation: An operation is a named abstract computation that can take input attributes and produce output attributes. For example, you could define an add or multiply operation.
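
In TensorFlow 2.x you rarely build such graphs by hand. Wrapping a Python function in tf.function traces it into a graph of operations, which makes the node/edge/operation idea concrete. The sketch below is only an illustration; the function name scale_and_add is an arbitrary choice:

import tensorflow as tf

@tf.function
def scale_and_add(x, y):
    # Each arithmetic step becomes an operation (a node) in the traced graph;
    # the tensors flowing between them are the edges
    return x * 2.0 + y

# Tracing produces a concrete graph for these input types, whose operations we can list
concrete = scale_and_add.get_concrete_function(tf.constant(1.0), tf.constant(3.0))
for op in concrete.graph.get_operations():
    print(op.name, op.type)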

Computation with TensorFlow

This first example is a modified version of the example on the TensorFlow website. It shows how you can define values as tensors and execute an operation.

import tensorflow as tf
a = tf.constant(10)
b = tf.constant(32)
print(a+b)

Running this example displays:

tf.Tensor(42, shape=(), dtype=int32)
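
Because TensorFlow 2.x executes eagerly, the result is a tf.Tensor object. If you want the plain numeric value, you can call .numpy() on it; this is a small sketch repeating the constants from above:

import tensorflow as tf

a = tf.constant(10)
b = tf.constant(32)
c = a + b
# .numpy() extracts the value from the eager tensor as a NumPy scalar
print(c.numpy())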

Linear Regression with TensorFlow

This next example comes from the introduction on the TensorFlow tutorial.

This example shows how you can define variables (e.g., W and b) as well as values that are the result of computation (y).

It gives some sense of how TensorFlow separates the definition of a computation from its execution, with automatic differentiation happening under the hood. When the function mse_loss() computes the difference between y and y_data, a graph is created connecting the value produced by the function to the TensorFlow variables W and b. TensorFlow uses this graph to deduce how to update the variables inside the minimize() function.

import tensorflow as tf
import numpy as np

# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but Tensorflow will
# figure that out for us.)
W = tf.Variable(tf.random.normal([1]))
b = tf.Variable(tf.zeros([1]))

# A function to compute mean squared error between y_data and computed y
def mse_loss():
    y = W * x_data + b
    loss = tf.reduce_mean(tf.square(y - y_data))
    return loss

# Minimize the mean squared errors.
optimizer = tf.keras.optimizers.Adam()
for step in range(5000):
    optimizer.minimize(mse_loss, var_list=[W,b])
    if step % 500 == 0:
        print(step, W.numpy(), b.numpy())

# Learns best fit is W: [0.1], b: [0.3]

Running this example prints the following output:

0 [-0.35913563] [0.001]
500 [-0.04056413] [0.3131764]
1000 [0.01548613] [0.3467598]
1500 [0.03492216] [0.3369852]
2000 [0.05408324] [0.32609695]
2500 [0.07121297] [0.316361]
3000 [0.08443557] [0.30884594]
3500 [0.09302785] [0.3039626]
4000 [0.09754606] [0.3013947]
4500 [0.09936733] [0.3003596]
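
The minimize() call hides the gradient computation. If you want to see the automatic differentiation more explicitly, a roughly equivalent sketch uses tf.GradientTape to compute the gradients and apply them yourself; this is an illustration of the same idea rather than the exact code from the tutorial:

import tensorflow as tf
import numpy as np

# Same phony data and variables as above
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3
W = tf.Variable(tf.random.normal([1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.Adam()

for step in range(5000):
    with tf.GradientTape() as tape:
        y = W * x_data + b
        loss = tf.reduce_mean(tf.square(y - y_data))
    # Gradients of the loss with respect to the variables, followed by one update step
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    if step % 500 == 0:
        print(step, W.numpy(), b.numpy())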

You can learn more about the mechanics of TensorFlow in the Basic Usage guide.

More Deep Learning Models

Your TensorFlow installation comes with a number of Deep Learning models that you can use and experiment with directly.

Firstly, you need to find out where TensorFlow was installed on your system. For example, you can run the following one-liner from the command line:

python -c 'import os; import inspect; import tensorflow; print(os.path.dirname(inspect.getfile(tensorflow)))'

For example, this could be:

/usr/lib/python3.9/site-packages/tensorflow

Change to this directory and take note of the models subdirectory. Included are a number of deep learning models with tutorial-like comments, such as:

Multi-threaded word2vec mini-batched skip-gram model.
Multi-threaded word2vec unbatched skip-gram model.
CNN for the CIFAR-10 network.
Simple, end-to-end, LeNet-5-like convolutional MNIST model example.
Sequence-to-sequence model with an attention mechanism.

Also check the examples directory as it contains an example using the MNIST dataset.
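
If you just want a quick look at the MNIST data itself, the tf.keras datasets module ships a loader; this is a minimal sketch, independent of the bundled example scripts:

import tensorflow as tf

# Download (on first use) and load the MNIST images and labels as NumPy arrays
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)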

There is also an excellent list of tutorials on the main TensorFlow website. They show how to use different network types and datasets, and how to use the framework in various ways.

Finally, there is the TensorFlow playground where you can experiment with small networks right in your web browser.

Need help with Deep Learning in Python?

Take my free 2-week email course and discover MLPs, CNNs and LSTMs (with code).

Click to sign-up now and also get a free PDF Ebook version of the course.

TensorFlow Resources

TensorFlow Official Homepage
TensorFlow Project on GitHub
TensorFlow Tutorials

More Resources

TensorFlow Course on Udacity
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (2015)

Summary

In this post you discovered the TensorFlow Python library for deep learning.

You learned that it is a library for fast numerical computation, specifically designed for the types of operations that are required in the development and evaluation of large deep learning models.

Do you have any questions about TensorFlow or about this post? Ask your questions in the comments and I will do my best to answer them.


