
TensorFlow Computer Vision & Deep Learning Examples

9 min read · Jan 25, 2021

Reading code is one effective way to become proficient in TensorFlow (TF). In this article, we reuse the examples in the TensorFlow tutorials, but we keep them condensed and remove code that exists only for tracing or demonstration purposes. We also keep the discussion to a minimum so you can browse through as many examples as possible and get a complete picture. If you have trouble following, in particular after reading the first example, please read the articles in this series first.

Yet, in most examples, we keep all the boilerplate code the examples require, so skip it as needed. We show the code as snapshots, so you cannot cut-and-paste it; because of potential TF API changes, refer to the link at the top of each example instead. Here is the list of examples, tested with TF 2.4.0, released in December 2020.

  • Keras MNIST data: Sequential Model using Dense Layers,
  • Keras MNIST data: Custom CNN Model Class trained with GradientTape & Dataset,
  • Custom layer creating new model parameters,
  • Dataset performance,
  • Overfitting,
  • Save and load,
  • Classify flowers with data augmentation layers,
  • Classify flowers with data augmentation using dataset mapping,
  • Transfer learning,
  • Image segmentation,
  • Regression on CSV file: Using Pandas to process data,
  • CSV preprocessing.

Keras MNIST data: Sequential Model using Dense Layers

Classify MNIST NumPy data using a Sequential Model with dense layers.

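The snapshot is not reproduced here; below is a condensed sketch of what this example does (the layer sizes follow the public MNIST quickstart and are illustrative, not the exact snapshot):

```python
import tensorflow as tf

# Load MNIST as NumPy arrays and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully-connected classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),   # 10 logits, one per digit class
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
model.summary()
```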

The model summary, printed by model.summary(), lists each layer's output shape and parameter count.

Keras MNIST data: Custom CNN Model Class trained with GradientTape & Dataset

In this example, we:

  • train with a Dataset (useful when the samples cannot fit into memory),
  • define a custom CNN model class, and
  • write a custom training loop with GradientTape.

First, we create the dataset and define the custom CNN model.

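A sketch of this step, following the TF quickstart-for-experts tutorial (layer sizes are illustrative):

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, Flatten

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channel dimension: (28, 28) -> (28, 28, 1).
x_train = x_train[..., tf.newaxis].astype('float32')
x_test = x_test[..., tf.newaxis].astype('float32')

# Datasets stream samples in batches instead of holding one big tensor.
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10)

    def call(self, x):
        return self.d2(self.d1(self.flatten(self.conv1(x))))

model = MyModel()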

Custom training with GradientTape and testing step:

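Continuing from the sketch above, the gradient is computed inside a GradientTape context and applied by the optimizer:

```python
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_object(labels, predictions)
    # Gradients of the loss w.r.t. the model weights, then one update step.
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)

@tf.function
def test_step(images, labels):
    predictions = model(images, training=False)
    test_loss(loss_object(labels, predictions))
    test_accuracy(labels, predictions)
```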

Training loop:

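A minimal loop over epochs, resetting the metrics each time:

```python
EPOCHS = 5
for epoch in range(EPOCHS):
    for metric in (train_loss, train_accuracy, test_loss, test_accuracy):
        metric.reset_states()

    for images, labels in train_ds:
        train_step(images, labels)
    for images, labels in test_ds:
        test_step(images, labels)

    print(f'Epoch {epoch + 1}: '
          f'loss {train_loss.result():.4f}, '
          f'accuracy {train_accuracy.result():.4f}, '
          f'test accuracy {test_accuracy.result():.4f}')
```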

Custom layer creating new model parameters

A custom layer can contain other layers and/or create its own layer parameters. However, the input shape may not be known when the layer is instantiated. A separate method, build, is called to create the parameters once TF knows the input shape, for example when the layer is invoked on an input for the first time.

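A minimal sketch of such a layer (the Dense-like math is illustrative):

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    """A Dense-like layer that creates its weights lazily in build()."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Called once, when the input shape is first known.
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer='random_normal', trainable=True, name='w')
        self.b = self.add_weight(
            shape=(self.units,), initializer='zeros', trainable=True, name='b')

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

layer = MyDense(4)
y = layer(tf.ones((2, 3)))   # build() runs here, with the last dim = 3
print([v.shape for v in layer.trainable_variables])  # [(3, 4), (4,)]
```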

Overfitting

In this example:

  • Load Higgs CSV dataset using tf.data,
  • Train with a custom learning rate decay for the optimizer,
  • Log data for TensorBoard,
  • Apply regularization and dropout to avoid overfitting, and
  • Apply early stopping.

The detection of Higgs boson particles confirms the presence of the Higgs field, which gives mass to the fundamental particles (quarks, leptons, etc.). This example predicts the class of an event: a “signal” or a “background”. “Signal” indicates a 4-lepton event produced by decays involving a Higgs boson; otherwise, it is “background”, from decays not involving a Higgs boson. First, we load a CSV dataset using tf.data. Each row contains the label and 28 features: 21 kinematic properties measured by the particle detectors and 7 features derived from those measurements.

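A sketch following the overfit-and-underfit tutorial (the download URL is the one that tutorial uses):

```python
import tensorflow as tf

# The HIGGS dataset is a gzipped CSV with 11 million rows.
gz = tf.keras.utils.get_file(
    'HIGGS.csv.gz',
    'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')

FEATURES = 28
# Parse every column as a float; column 0 is the label.
ds = tf.data.experimental.CsvDataset(
    gz, [float(),] * (FEATURES + 1), compression_type='GZIP')
```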

Create a dataset with the features and labels separated. ds is batched first so the mapping runs once per batch instead of once per row, and the result is un-batched afterwards.

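Roughly:

```python
def pack_row(*row):
    # Each element of `row` is a batch of values for one CSV column.
    label = row[0]
    features = tf.stack(row[1:], axis=1)
    return features, label

# Map over large batches for efficiency, then un-batch back to single rows.
packed_ds = ds.batch(10000).map(pack_row).unbatch()
```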

Create the training and validation datasets, using “take” and “skip” to partition the original training samples.

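A sketch (the sample counts are illustrative):

```python
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN // BATCH_SIZE

# First N_VALIDATION rows become the validation set; the next N_TRAIN
# rows become the training set. cache() avoids re-reading the CSV.
validate_ds = packed_ds.take(N_VALIDATION).cache().batch(BATCH_SIZE)
train_ds = (packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
            .shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE))
```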

The optimizer uses a custom learning rate schedule that decays over time.

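For example, an inverse-time decay schedule passed to Adam (the constants follow the tutorial and are illustrative):

```python
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    0.001,
    decay_steps=STEPS_PER_EPOCH * 1000,  # halve the rate every ~1000 epochs
    decay_rate=1,
    staircase=False)

def get_optimizer():
    return tf.keras.optimizers.Adam(lr_schedule)
```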

compile_and_fit configures the training and fits the model. We also set up callbacks to log data for TensorBoard.

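A sketch, reusing train_ds, validate_ds, and STEPS_PER_EPOCH from above (the callback settings are illustrative):

```python
import pathlib
import tempfile

logdir = pathlib.Path(tempfile.mkdtemp()) / 'tensorboard_logs'

def get_callbacks(name):
    return [
        # Stop when the validation cross-entropy no longer improves.
        tf.keras.callbacks.EarlyStopping(
            monitor='val_binary_crossentropy', patience=200),
        # Log metrics under a per-model directory for TensorBoard.
        tf.keras.callbacks.TensorBoard(logdir / name),
    ]

def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
    optimizer = optimizer or get_optimizer()
    model.compile(
        optimizer=optimizer,
        loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
        metrics=[tf.keras.losses.BinaryCrossentropy(
                     from_logits=True, name='binary_crossentropy'),
                 'accuracy'])
    return model.fit(
        train_ds,
        steps_per_epoch=STEPS_PER_EPOCH,
        epochs=max_epochs,
        validation_data=validate_ds,
        callbacks=get_callbacks(name),
        verbose=0)
```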

We build a “combined” model with L2 regularization and dropout to avoid overfitting.

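A sketch (the width, depth, and regularization constants are illustrative):

```python
from tensorflow.keras import layers, regularizers

combined_model = tf.keras.Sequential([
    layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
                 activation='elu', input_shape=(FEATURES,)),
    layers.Dropout(0.5),
    layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
                 activation='elu'),
    layers.Dropout(0.5),
    layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
                 activation='elu'),
    layers.Dense(1),   # a single logit: signal vs. background
])
```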

Finally, we train the model.

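With the helper above, that is a single call:

```python
history = compile_and_fit(combined_model, 'regularizers/combined')
```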

We store the TensorBoard data for this training and model (called combined) under $tmp/tensorboard_logs/regularizers/combined. We can log a different model under a different directory, say regularizers/other_model.


We can then review this training run, and any other runs under “regularizers”, in TensorBoard.

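In a notebook, something like:

```python
%load_ext tensorboard
%tensorboard --logdir {logdir}/regularizers
```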

Dataset Performance

To improve Dataset performance, we should cache and prefetch the data. Note that validation and test datasets usually do not need to be shuffled.

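A sketch, assuming train_ds and val_ds are unbatched datasets of preprocessed examples:

```python
AUTOTUNE = tf.data.AUTOTUNE  # tf.data.experimental.AUTOTUNE before TF 2.4

# Cache after the expensive preprocessing, shuffle only the training data,
# and prefetch so the accelerator never waits on the input pipeline.
train_ds = train_ds.cache().shuffle(1000).batch(32).prefetch(AUTOTUNE)
val_ds = val_ds.cache().batch(32).prefetch(AUTOTUNE)
```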

Save & Load

In this example, we save and restore a model. First, the boilerplate code loads the MNIST data and creates a dense model.

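A condensed version of that boilerplate (the 1000-sample slice and layer sizes follow the save-and-load tutorial):

```python
import tensorflow as tf

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.mnist.load_data()
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
train_labels, test_labels = train_labels[:1000], test_labels[:1000]

def create_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(
                      from_logits=True),
                  metrics=['accuracy'])
    return model

model = create_model()
```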

Next, we create a callback for model.fit that saves the model during training. For these checkpoints, we save the weights only.

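A sketch (the checkpoint path is illustrative):

```python
checkpoint_path = 'training_1/cp.ckpt'

# Save only the weights, once per epoch.
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path, save_weights_only=True, verbose=1)

model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])
```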

The callback writes the checkpoint as an index file plus data shard files (e.g. cp.ckpt.index and cp.ckpt.data-00000-of-00001) in the checkpoint directory.

We can instantiate a new model and reload the saved weights.

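Using the create_model factory from the boilerplate above:

```python
new_model = create_model()               # same architecture, fresh weights
new_model.load_weights(checkpoint_path)  # restore the trained weights
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
```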

We can add the epoch number to the checkpoint filename, and we can change the checkpoint frequency (every 5 epochs below).

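A sketch; note that save_freq is measured in batches, so “every 5 epochs” is expressed as 5 times the number of batches per epoch:

```python
checkpoint_path = 'training_2/cp-{epoch:04d}.ckpt'
batch_size = 32
n_batches = len(train_images) // batch_size  # batches per epoch

cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    verbose=1,
    save_freq=5 * n_batches)  # i.e. every 5 epochs

model.save_weights(checkpoint_path.format(epoch=0))
model.fit(train_images, train_labels, epochs=50, batch_size=batch_size,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback], verbose=0)
```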

After 50 epochs, the checkpoint directory holds one checkpoint for every fifth epoch.

And we can load the latest checkpoint with:

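```python
latest = tf.train.latest_checkpoint('training_2')
model = create_model()
model.load_weights(latest)
```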

To save the weights manually instead:

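```python
model.save_weights('./checkpoints/my_checkpoint')
# ... later, on a model with the same architecture:
model.load_weights('./checkpoints/my_checkpoint')
```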

Or save the whole model. In that case, we do not need the original Python code to restore the model.

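```python
# SavedModel format: architecture + weights + optimizer state.
model.save('saved_model/my_model')
restored = tf.keras.models.load_model('saved_model/my_model')
restored.summary()
```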

CheckpointManager

If we have access to the optimizer and the data iterator, as in a GradientTape training loop, we can use CheckpointManager to save checkpoints. Here is the boilerplate code first.

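A condensed version, in the spirit of the tf.train.Checkpoint guide (the toy model and data are illustrative):

```python
import tensorflow as tf

class Net(tf.keras.Model):
    """A simple linear model."""
    def __init__(self):
        super().__init__()
        self.l1 = tf.keras.layers.Dense(5)

    def call(self, x):
        return self.l1(x)

def toy_dataset():
    inputs = tf.range(10.)[:, None]
    labels = inputs * 5. + tf.range(5.)[None, :]
    return tf.data.Dataset.from_tensor_slices(
        dict(x=inputs, y=labels)).repeat().batch(2)

net = Net()
opt = tf.keras.optimizers.Adam(0.1)
iterator = iter(toy_dataset())
```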

Then, we can configure a CheckpointManager.

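The Checkpoint object tracks everything we want restored, including the data iterator position:

```python
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt,
                           net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
```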

The max_to_keep option retains only the most recent checkpoints (three here); older ones are deleted automatically.

And we can use it to save and restore the model.

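A sketch of the restore-then-train pattern (the loss and step counts are illustrative):

```python
def train_step(net, example, optimizer):
    """One optimization step on a single batch."""
    with tf.GradientTape() as tape:
        output = net(example['x'])
        loss = tf.reduce_mean(tf.abs(output - example['y']))
    grads = tape.gradient(loss, net.trainable_variables)
    optimizer.apply_gradients(zip(grads, net.trainable_variables))
    return loss

# Resume from the latest checkpoint, if any.
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
    print('Restored from', manager.latest_checkpoint)

for _ in range(50):
    example = next(iterator)
    loss = train_step(net, example, opt)
    ckpt.step.assign_add(1)
    if int(ckpt.step) % 10 == 0:
        print(f'step {int(ckpt.step)}: saved', manager.save())
```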

Classify flowers with data augmentation layers

In this example, we use data augmentation to improve model performance. First, prepare the training and validation datasets:

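A sketch using the flowers archive from the image-classification tutorial (URL and image size follow that tutorial):

```python
import pathlib
import tensorflow as tf

dataset_url = ('https://storage.googleapis.com/download.tensorflow.org/'
               'example_images/flower_photos.tgz')
data_dir = pathlib.Path(tf.keras.utils.get_file(
    'flower_photos', origin=dataset_url, untar=True))

img_size = (180, 180)
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir, validation_split=0.2, subset='training', seed=123,
    image_size=img_size, batch_size=32)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir, validation_split=0.2, subset='validation', seed=123,
    image_size=img_size, batch_size=32)
```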

Build and train a model with Keras pre-processing layers:

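A condensed sketch; in TF 2.4 these layers live under layers.experimental.preprocessing, while newer releases expose them directly as tf.keras.layers.RandomFlip and friends:

```python
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

data_augmentation = tf.keras.Sequential([
    preprocessing.RandomFlip('horizontal', input_shape=(180, 180, 3)),
    preprocessing.RandomRotation(0.1),
    preprocessing.RandomZoom(0.1),
])

model = tf.keras.Sequential([
    data_augmentation,                    # random transforms, training only
    preprocessing.Rescaling(1. / 255),    # scale pixels to [0, 1]
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(5),                      # tf_flowers has 5 classes
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=15)
```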

Perform a test:

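For instance, on a single image (the file path here is hypothetical):

```python
import numpy as np

img = tf.keras.preprocessing.image.load_img('some_flower.jpg',
                                            target_size=(180, 180))
img_array = tf.expand_dims(
    tf.keras.preprocessing.image.img_to_array(img), 0)  # batch of one

score = tf.nn.softmax(model.predict(img_array)[0])
print('class', np.argmax(score), 'confidence', float(np.max(score)))
```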

Classify flowers with data augmentation using dataset mapping

We perform the data augmentation with a map in the dataset pipeline. In this example, we apply resizing, rescaling, flipping, and rotation. Here is the boilerplate code that creates the dataset and the augmentation layers.

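A condensed sketch (the tf_flowers split percentages are illustrative):

```python
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.layers.experimental import preprocessing

# Unbatched (image, label) pairs.
(train_ds, val_ds), info = tfds.load(
    'tf_flowers', split=['train[:80%]', 'train[80%:]'],
    as_supervised=True, with_info=True)

IMG_SIZE = 180
resize_and_rescale = tf.keras.Sequential([
    preprocessing.Resizing(IMG_SIZE, IMG_SIZE),
    preprocessing.Rescaling(1. / 255),
])
data_augmentation = tf.keras.Sequential([
    preprocessing.RandomFlip('horizontal_and_vertical'),
    preprocessing.RandomRotation(0.2),
])
```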

Then, we use dataset mapping for data augmentation.

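A sketch of the pipeline; only the training set is augmented, and training=True activates the random transformations:

```python
AUTOTUNE = tf.data.AUTOTUNE
batch_size = 32

def prepare(ds, shuffle=False, augment=False):
    # Resize/rescale everything (train and validation alike).
    ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
                num_parallel_calls=AUTOTUNE)
    if shuffle:
        ds = ds.shuffle(1000)
    ds = ds.batch(batch_size)
    if augment:
        ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
                    num_parallel_calls=AUTOTUNE)
    return ds.prefetch(AUTOTUNE)

train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
```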

This is the model and the training.

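A small CNN in the same spirit as the previous example (sizes illustrative):

```python
from tensorflow.keras import layers

num_classes = info.features['label'].num_classes

model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```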

Transfer Learning

Here is the boilerplate code that loads the cats-and-dogs pictures and creates the datasets.

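A condensed sketch following the transfer-learning tutorial (the archive URL and 160x160 image size come from that tutorial):

```python
import os
import tensorflow as tf

_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL,
                                      extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

IMG_SIZE = (160, 160)
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    os.path.join(PATH, 'train'), shuffle=True, batch_size=32,
    image_size=IMG_SIZE)
validation_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    os.path.join(PATH, 'validation'), shuffle=True, batch_size=32,
    image_size=IMG_SIZE)

# Carve a test set out of the validation batches.
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
```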

Then, we construct a model with data augmentation layers, the MobileNet v2 preprocessing layer, a pre-trained MobileNet v2 base, and a classification head. In the first phase of transfer learning, we freeze the MobileNet v2 layers and train the classification head only. We also call the base model with training=False so that it runs in inference mode: dropout is not applied, and batch normalization uses the means and variances learned during the original training.

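A sketch of the model assembly:

```python
IMG_SHAPE = IMG_SIZE + (3,)

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'),
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.2),
])

# Pre-trained MobileNet V2, without its ImageNet classification head.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights='imagenet')
base_model.trainable = False  # phase 1: freeze the whole base

preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input

inputs = tf.keras.Input(shape=IMG_SHAPE)
x = data_augmentation(inputs)
x = preprocess_input(x)                # scale pixels to [-1, 1]
x = base_model(x, training=False)      # inference mode: frozen BN statistics
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1)(x)  # one logit: cat vs. dog
model = tf.keras.Model(inputs, outputs)
```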

Next, we train the model.

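Phase 1 trains only the new head (the learning rate follows the tutorial):

```python
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_dataset, epochs=10,
                    validation_data=validation_dataset)
```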

Now, we go to the second phase of the training. We keep only the first 100 layers of the MobileNet v2 frozen and fine-tune the rest of the model.

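A sketch; the much lower learning rate keeps the fine-tuning from destroying the pre-trained weights:

```python
base_model.trainable = True
# Keep the first 100 layers frozen; fine-tune the layers above them.
for layer in base_model.layers[:100]:
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate / 10),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'])
history_fine = model.fit(train_dataset, epochs=20, initial_epoch=10,
                         validation_data=validation_dataset)
```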

Once it is fine-tuned, we use it to make predictions on test images.

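For example, on one batch of the test set:

```python
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
logits = model.predict_on_batch(image_batch).flatten()
# One logit per image: apply a sigmoid, then threshold at 0.5.
predictions = tf.where(tf.nn.sigmoid(logits) < 0.5, 0, 1)
```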

Image Segmentation

Image segmentation creates a mask to segment an object.


First, we load the dataset and create the preprocessing layers.

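A sketch using the Oxford-IIIT Pet dataset, as in the segmentation tutorial:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)

def normalize(input_image, input_mask):
    input_image = tf.cast(input_image, tf.float32) / 255.0
    input_mask -= 1  # mask labels {1, 2, 3} -> {0, 1, 2}
    return input_image, input_mask

def load_image(datapoint):
    input_image = tf.image.resize(datapoint['image'], (128, 128))
    input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
    return normalize(input_image, input_mask)
```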

Then, we create the datasets.

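Using load_image from above:

```python
TRAIN_LENGTH = info.splits['train'].num_examples
BATCH_SIZE = 64
BUFFER_SIZE = 1000

train = dataset['train'].map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
test = dataset['test'].map(load_image)

train_dataset = (train.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
                 .repeat().prefetch(tf.data.AUTOTUNE))
test_dataset = test.batch(BATCH_SIZE)
```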

The image segmentation model contains downsampling layers followed by upsampling layers. Here is the downsampling.

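The tutorial reuses a pre-trained MobileNet V2 as the encoder, tapping activations at progressively coarser resolutions (these layer names exist in Keras's MobileNetV2):

```python
base_model = tf.keras.applications.MobileNetV2(
    input_shape=[128, 128, 3], include_top=False)

layer_names = [
    'block_1_expand_relu',   # 64x64
    'block_3_expand_relu',   # 32x32
    'block_6_expand_relu',   # 16x16
    'block_13_expand_relu',  # 8x8
    'block_16_project',      # 4x4
]
base_model_outputs = [base_model.get_layer(name).output
                      for name in layer_names]

down_stack = tf.keras.Model(inputs=base_model.input,
                            outputs=base_model_outputs)
down_stack.trainable = False
```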

and the upsampling part.

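A self-contained sketch; the tutorial itself uses an upsample helper from the tensorflow_examples package, and the version below is a plain Conv2DTranspose block with the same shape behavior:

```python
def upsample(filters):
    # Conv2DTranspose with stride 2 doubles the spatial resolution.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2DTranspose(filters, 3, strides=2,
                                        padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
    ])

up_stack = [upsample(512), upsample(256), upsample(128), upsample(64)]

def unet_model(output_channels):
    inputs = tf.keras.layers.Input(shape=[128, 128, 3])

    # Downsampling: collect the skip activations.
    skips = down_stack(inputs)
    x = skips[-1]                  # the coarsest 4x4 feature map
    skips = reversed(skips[:-1])

    # Upsampling, concatenating each skip of matching resolution.
    for up, skip in zip(up_stack, skips):
        x = up(x)
        x = tf.keras.layers.Concatenate()([x, skip])

    # Final 2x upsample back to 128x128, one channel per class.
    last = tf.keras.layers.Conv2DTranspose(output_channels, 3, strides=2,
                                           padding='same')
    return tf.keras.Model(inputs=inputs, outputs=last(x))

model = unet_model(output_channels=3)
```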

The more complicated logic above handles the skip connections between the downsampling and upsampling paths: horizontal connections that join downsampling and upsampling layers with the same spatial resolution, giving the model its characteristic U shape.

Finally, we train the model and make predictions.

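A sketch; per pixel, the predicted class is the argmax over the three output channels:

```python
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
model.fit(train_dataset, epochs=20, steps_per_epoch=STEPS_PER_EPOCH,
          validation_data=test_dataset)

def create_mask(pred_mask):
    # Per pixel, pick the class with the highest logit.
    return tf.argmax(pred_mask, axis=-1)[..., tf.newaxis]

for image, mask in test_dataset.take(1):
    pred_mask = create_mask(model.predict(image))
```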

Regression on CSV file: Using Pandas to process data

This example is a simple regression problem: predicting the MPG of a car from features contained in a CSV file. First, prepare the dataset from the CSV file using Pandas:

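A sketch using the Auto MPG file from the UCI repository, as in the regression tutorial:

```python
import pandas as pd

url = ('http://archive.ics.uci.edu/ml/machine-learning-databases/'
       'auto-mpg/auto-mpg.data')
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
                'Weight', 'Acceleration', 'Model Year', 'Origin']
raw = pd.read_csv(url, names=column_names, na_values='?',
                  comment='\t', sep=' ', skipinitialspace=True)

dataset = raw.dropna()
# Origin is categorical (1=USA, 2=Europe, 3=Japan): one-hot encode it.
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
dataset = pd.get_dummies(dataset, prefix='', prefix_sep='')
```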

Visualize the correlations between features.

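For example, with a seaborn pairplot over a few columns:

```python
import seaborn as sns
sns.pairplot(dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']],
             diag_kind='kde')
```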

Create training/testing features/labels.

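```python
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

train_features = train_dataset.copy()
test_features = test_dataset.copy()
# pop() removes the label column and returns it.
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
```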

Create a model with a Normalization layer to normalize each feature.

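A sketch; the Normalization layer learns per-feature means and variances from the training data via adapt (in TF 2.4 it lives under layers.experimental.preprocessing):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

normalizer = preprocessing.Normalization()
normalizer.adapt(np.array(train_features))  # learn mean/variance per feature

model = tf.keras.Sequential([
    normalizer,
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1),   # a single regression output: MPG
])
```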

Fit, evaluate & predict:

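```python
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='mean_absolute_error')
history = model.fit(train_features, train_labels,
                    validation_split=0.2, epochs=100, verbose=0)

print(model.evaluate(test_features, test_labels, verbose=0))
predictions = model.predict(test_features).flatten()
```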

Save & load a model:

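```python
model.save('dnn_model')
reloaded = tf.keras.models.load_model('dnn_model')
```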

The model is saved as a SavedModel directory containing saved_model.pb together with the assets and variables subdirectories.

CSV preprocessing

In this example, we will predict who survived the Titanic disaster. It demonstrates how to preprocess CSV data.

First, we will load the data from a CSV file.

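A sketch using the Titanic CSV hosted for the TF tutorials:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

titanic = pd.read_csv(
    'https://storage.googleapis.com/tf-datasets/titanic/train.csv')
titanic_features = titanic.copy()
titanic_labels = titanic_features.pop('survived')
```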

We collect all the numeric fields, normalize them together, and add the result to preprocessed_inputs.

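A sketch; one symbolic Keras Input is created per CSV column, and the float-typed ones are concatenated and normalized:

```python
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

inputs = {}
for name, column in titanic_features.items():
    dtype = tf.string if column.dtype == object else tf.float32
    inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)

# Concatenate the numeric inputs and normalize them together.
numeric_inputs = {name: inp for name, inp in inputs.items()
                  if inp.dtype == tf.float32}
x = layers.Concatenate()(list(numeric_inputs.values()))
norm = preprocessing.Normalization()
norm.adapt(np.array(titanic[numeric_inputs.keys()]))
preprocessed_inputs = [norm(x)]
```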

Convert all the categorical fields into one-hot vectors and append them to preprocessed_inputs.

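A sketch using the TF 2.4 API; note that newer releases rename these calls (CategoryEncoding takes num_tokens and output_mode='one_hot', and vocab_size() becomes vocabulary_size()):

```python
for name, inp in inputs.items():
    if inp.dtype != tf.string:
        continue
    # Map each string to an integer index, then encode it as a vector.
    lookup = preprocessing.StringLookup(
        vocabulary=np.unique(titanic_features[name]))
    one_hot = preprocessing.CategoryEncoding(max_tokens=lookup.vocab_size())
    x = lookup(inp)
    x = one_hot(x)
    preprocessed_inputs.append(x)
```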

Concatenate all the preprocessed inputs and create a preprocessing model.

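```python
preprocessed_inputs_cat = layers.Concatenate()(preprocessed_inputs)
titanic_preprocessing = tf.keras.Model(inputs, preprocessed_inputs_cat)
```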

Create a classification model, then train it and make predictions.

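A sketch; the model feeds the preprocessing head into a small dense body, and Keras accepts a dict of feature arrays matching the named inputs:

```python
def titanic_model(preprocessing_head, inputs):
    body = tf.keras.Sequential([
        layers.Dense(64, activation='relu'),
        layers.Dense(1),   # a single logit: survived or not
    ])
    x = preprocessing_head(inputs)
    return tf.keras.Model(inputs, body(x))

model = titanic_model(titanic_preprocessing, inputs)
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.Adam())

titanic_features_dict = {name: np.array(value)
                         for name, value in titanic_features.items()}
model.fit(x=titanic_features_dict, y=titanic_labels, epochs=10)
```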

Credits and References

All the source code originates from, or is modified from, the TensorFlow tutorials.
