Introduction to TensorFlow 2 and Keras#

Objectives#

The goal of this notebook is to teach some basics of the TensorFlow framework and the Keras API.

What is TensorFlow?#

TensorFlow is an open-source framework released by Google in late 2015 for building various machine learning and deep learning models. TensorFlow is free to use under the Apache 2.0 license.

The main objective of using TensorFlow is to reduce the complexity of implementing computations on large numerical data sets. In practice, these large computations can manifest as training and inference with machine learning or deep learning models.

TensorFlow was designed to operate with multiple CPUs or GPUs, as well as a growing number of mobile operating systems. The framework includes wrappers in Python, C++, and Java.

How does it work?#

TensorFlow accepts inputs as an n-dimensional array called a Tensor. These units can be used in quick, singular computations or in structured graphs. The framework was initially designed around building a flowchart of operations (a graph) to be applied to input Tensors, which flow through the graph from input to output. Importantly, TensorFlow 2.0 made eager execution the default mode of the framework. Eager execution is based on the immediate evaluation of operations, as opposed to graph mode, in which operations are chained together and outputs are produced only when an input is passed through the whole graph. Thus, eager execution allows intermediate values to be returned from sequential operations. Stated otherwise, with eager execution, the values of tensors are calculated as they occur in your code.

Eager execution does not build graphs; operations return actual values instead of computational graphs to run later. With eager execution, TensorFlow calculates the values of tensors wherever they occur in a program. One benefit of eager execution is simplified debugging: errors are easy to isolate because a program breaks down into individually executable components. Secondly, eager execution reduces boilerplate code, as functions are immediately callable and no longer have to be run inside sessions first. The third benefit worth mentioning is that eager execution is especially useful for rapid research and experimentation, as it allows components to be swapped easily without rewriting the entire graph.
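A minimal sketch of the contrast described above: in eager mode each operation returns a concrete value immediately, while wrapping a function in `tf.function` traces it into a reusable graph (the function name here is illustrative):

```python
import tensorflow as tf

# Eager mode (the default in TF 2.x): each operation runs immediately
# and returns a concrete tensor we can print or inspect.
x = tf.constant(3.0)
y = x * x + 1.0
print(float(y))  # 10.0

# Graph mode on demand: tf.function traces the Python function into a
# graph the first time it is called, then reuses that graph.
@tf.function
def square_plus_one(t):
    return t * t + 1.0

z = square_plus_one(tf.constant(3.0))  # same result, executed as a graph
print(float(z))  # 10.0
```

This is why debugging is simpler in eager mode: intermediate tensors such as `y` can be printed as ordinary values at any point in the program.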

TensorFlow’s structure#

There are five main components to TensorFlow’s structure.

  1. Preprocessing the data using tf.keras.preprocessing

  2. Building the model using tf.keras.Model

  3. Training / estimating the model using tf.keras.Model.fit()

  4. Evaluating the model using tf.keras.Model.evaluate()

  5. Serving / distributing the model using TFServing

The name TensorFlow derives from the way in which the framework receives input in the form of multi-dimensional arrays, i.e. tensors. These tensors flow through TensorFlow operations (hence the framework’s name), producing intermediate and output tensors.
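The five steps above can be sketched end to end as follows. This is a minimal illustration, not a recommended recipe: the data is synthetic, and the layer sizes, optimizer, and epoch count are arbitrary choices.

```python
import numpy as np
import tensorflow as tf

# 1. Prepare the data (synthetic here; tf.keras.preprocessing and
#    tf.data handle real datasets): 100 samples, 4 features, binary labels.
x = np.random.rand(100, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

# 2. Build the model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# 3. Train / estimate the model.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=16, verbose=0)

# 4. Evaluate the model.
loss, accuracy = model.evaluate(x, y, verbose=0)

# 5. For serving, export a SavedModel that TF Serving can load, e.g.:
#    tf.saved_model.save(model, "exported_model")
```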

Beyond these five components, there is a plethora of utilities and extensions that enrich the TensorFlow ecosystem. Some noteworthy mentions include TensorBoard for model visualization and metrics logging, TensorFlow Data Validation for dataset inspection and statistics, and TensorFlow Datasets, which provides a catalog of pre-built datasets ready for machine learning.

Why do so many people like TensorFlow?#

TensorFlow is intentionally user-friendly (especially since the integration of the Keras API), with helpful plugins to visualize model training and a useful software debugging tool. As well, TensorFlow is highly scalable, with easy deployment on both CPUs and GPUs.

What is Keras?#

Keras is a Python API that reduces the cognitive load associated with programming models through human readability and simple, consistent structures.

Keras is what some might call a wrapper for TensorFlow. It is intended for rapid experimentation. Starting with TensorFlow 2.0, the Keras API was built directly into the TensorFlow framework (tf.keras) and is henceforth recommended as the main interface for deep learning model development with TensorFlow. This integration is distinct from the original standalone Keras library, which was relatively framework-agnostic. In the tf.keras API, the Keras abstraction is built directly upon TensorFlow’s low-level protocols. The difference in functionality between tf.keras and the standalone Keras library is minimal when it comes to building TensorFlow programs. Recently, Keras announced a multi-framework release, Keras 3.0, for Fall 2023. This version will presumably make it easier to switch between PyTorch, TensorFlow, and JAX backends for training deep learning models, and to convert models from one framework format to another.

With tf.keras, we can address all components of the machine learning life cycle:

  • model definition (architecture selection, network topology, activation functions)

  • model compilation (loss function, optimizer)

  • model fitting, a.k.a. training or estimation (number of epochs, batch size)

  • model evaluation (evaluation metrics)

  • model prediction (class label, numerical value, or probability, depending on the domain)

The main components of Keras include:

  1. tf.keras.Model, used to execute the main steps in a modeling workflow, such as training and evaluation.

  2. tf.keras.layers, used to define the tensor-in/tensor-out computation functions and the network topology. With this API, we can construct models of varying complexity. Two main approaches factor into model design when working with the layers API: a. the tf.keras.Sequential model, used when stacking layers to be executed one after another, where each layer has exactly one input tensor and one output tensor, and b. the Functional API (building a tf.keras.Model from explicitly connected inputs and outputs), used for constructing networks with non-linear topology (e.g. branching or shared layers), layer reuse, and support for multiple inputs or outputs.

../_images/Keras_functional_vs_sequential.png

Fig. 20 Image source#

  3. tf.keras.callbacks, used to program specific actions to occur during training, such as logging training metrics, visualizing interim/internal states and statistics of the model, and performing early stopping when the model converges.

  4. tf.keras.preprocessing, which offers support for prepping raw data from disk into model-ready Tensor format.

  5. tf.keras.optimizers, where state-of-the-art optimizers can be plugged in. Learning-rate decay / scheduling can also be implemented as part of this API.

  6. tf.keras.metrics, used for assessing the performance of the model during training and evaluation. Unlike the loss, a metric is monitored rather than directly optimized, with specific metrics chosen for specific modeling objectives.

  7. tf.keras.losses, which quantifies the error the model should try to minimize during training. Similar to metrics, specific loss functions are selected for specific modeling objectives.
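The optimizer, loss, metric, and callback components above can be combined in a single `compile`/`fit` call. A short sketch (the layer sizes, schedule parameters, and patience value are arbitrary choices for illustration):

```python
import numpy as np
import tensorflow as tf

# Toy regression data.
x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# tf.keras.optimizers with a learning-rate schedule.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=100, decay_rate=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)

# tf.keras.losses quantifies the error to minimize;
# tf.keras.metrics is monitored but not directly optimized.
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=[tf.keras.metrics.MeanAbsoluteError()])

# tf.keras.callbacks: stop early once the loss stops improving.
stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=2)
history = model.fit(x, y, epochs=5, verbose=0, callbacks=[stop])
```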

As an example using the Keras Functional API, our main workflow will follow the diagram below.

../_images/Keras_functional_API.jpg

Fig. 21 Keras Functional API diagram (from https://miro.com/app/board/o9J_lhnKhVE=/).#

The main difference between the Sequential and Functional APIs for model definition is that the latter, albeit more complex, supports multiple input and output paths (e.g. a scalar and a matrix) because it requires layers to be connected explicitly (i.e. if the input to layer x is the output of layer y, that must be defined). The Sequential API, in comparison, assumes a linear stack of layers, each building on the output of the previous one.
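A minimal Functional API sketch with two inputs (the input names and layer sizes are illustrative, not prescribed by the source):

```python
import tensorflow as tf

# Two separate inputs: a 4-feature vector and a 3-feature vector.
inp_a = tf.keras.Input(shape=(4,), name="features_a")
inp_b = tf.keras.Input(shape=(3,), name="features_b")

# Each input flows through its own branch before the branches merge —
# a topology the Sequential API cannot express.
h_a = tf.keras.layers.Dense(8, activation="relu")(inp_a)
h_b = tf.keras.layers.Dense(8, activation="relu")(inp_b)
merged = tf.keras.layers.concatenate([h_a, h_b])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

# The explicit input/output connections define the model graph.
model = tf.keras.Model(inputs=[inp_a, inp_b], outputs=out)
```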

User notes#

  • TensorFlow 2.x requires Python 3.6 or newer (more recent TensorFlow releases require correspondingly newer Python versions).