{ "cells": [ { "cell_type": "markdown", "id": "919d4913", "metadata": { "origin_pos": 1 }, "source": [ "# Convolutional Neural Networks (LeNet)\n", ":label:`sec_lenet`\n", "\n", "We now have all the ingredients required to assemble\n", "a fully-functional CNN.\n", "In our earlier encounter with image data, we applied\n", "a linear model with softmax regression (:numref:`sec_softmax_scratch`)\n", "and an MLP (:numref:`sec_mlp-implementation`)\n", "to pictures of clothing in the Fashion-MNIST dataset.\n", "To make such data amenable we first flattened each image from a $28\\times28$ matrix\n", "into a fixed-length $784$-dimensional vector,\n", "and thereafter processed them in fully connected layers.\n", "Now that we have a handle on convolutional layers,\n", "we can retain the spatial structure in our images.\n", "As an additional benefit of replacing fully connected layers with convolutional layers,\n", "we will enjoy more parsimonious models that require far fewer parameters.\n", "\n", "In this section, we will introduce *LeNet*,\n", "among the first published CNNs\n", "to capture wide attention for its performance on computer vision tasks.\n", "The model was introduced by (and named for) Yann LeCun,\n", "then a researcher at AT&T Bell Labs,\n", "for the purpose of recognizing handwritten digits in images :cite:`LeCun.Bottou.Bengio.ea.1998`.\n", "This work represented the culmination\n", "of a decade of research developing the technology;\n", "LeCun's team published the first study to successfully\n", "train CNNs via backpropagation :cite:`LeCun.Boser.Denker.ea.1989`.\n", "\n", "At the time LeNet achieved outstanding results\n", "matching the performance of support vector machines,\n", "then a dominant approach in supervised learning, achieving an error rate of less than 1% per digit.\n", "LeNet was eventually adapted to recognize digits\n", "for processing deposits in ATM machines.\n", "To this day, some ATMs still run the code\n", "that Yann LeCun and his colleague Leon Bottou wrote in the 1990s!\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "08e44e08", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:57:49.669910Z", "iopub.status.busy": "2023-08-18T19:57:49.669347Z", "iopub.status.idle": "2023-08-18T19:57:53.053954Z", "shell.execute_reply": "2023-08-18T19:57:53.052649Z" }, "origin_pos": 3, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "import torch\n", "from torch import nn\n", "from d2l import torch as d2l" ] }, { "cell_type": "markdown", "id": "2e5bc270", "metadata": { "origin_pos": 6 }, "source": [ "## LeNet\n", "\n", "At a high level, (**LeNet (LeNet-5) consists of two parts:\n", "(i) a convolutional encoder consisting of two convolutional layers; and\n", "(ii) a dense block consisting of three fully connected layers**).\n", "The architecture is summarized in :numref:`img_lenet`.\n", "\n", "![Data flow in LeNet. 
The input is a handwritten digit; the output is a probability over 10 possible outcomes.](../img/lenet.svg)\n", ":label:`img_lenet`\n", "\n", "The basic units in each convolutional block\n", "are a convolutional layer, a sigmoid activation function,\n", "and a subsequent average pooling operation.\n", "Note that while ReLUs and max-pooling work better,\n", "they had not yet been discovered.\n", "Each convolutional layer uses a $5\\times 5$ kernel\n", "and a sigmoid activation function.\n", "These layers map spatially arranged inputs\n", "to a number of two-dimensional feature maps, typically\n", "increasing the number of channels.\n", "The first convolutional layer has 6 output channels,\n", "while the second has 16.\n", "Each $2\\times2$ pooling operation (stride 2)\n", "reduces dimensionality by a factor of $4$ via spatial downsampling.\n", "The convolutional block emits an output with shape given by\n", "(batch size, number of channels, height, width).\n", "\n", "In order to pass output from the convolutional block\n", "to the dense block,\n", "we must flatten each example in the minibatch.\n", "In other words, we take this four-dimensional input and transform it\n", "into the two-dimensional input expected by fully connected layers:\n", "as a reminder, the two-dimensional representation that we desire uses the first dimension to index examples in the minibatch\n", "and the second to give the flat vector representation of each example.\n", "LeNet's dense block has three fully connected layers,\n", "with 120, 84, and 10 outputs, respectively.\n", "Because we are still performing classification,\n", "the 10-dimensional output layer corresponds\n", "to the number of possible output classes.\n", "\n", "While getting to the point where you truly understand\n", "what is going on inside LeNet may have taken a bit of work,\n", "we hope that the following code snippet will convince you\n", "that implementing such models with modern deep learning frameworks\n", "is remarkably simple.\n", "We need only to instantiate a `Sequential` block\n", "and chain together the appropriate layers,\n", "using Xavier initialization as\n", "introduced in :numref:`subsec_xavier`.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "5cd242a5", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:57:53.059640Z", "iopub.status.busy": "2023-08-18T19:57:53.058558Z", "iopub.status.idle": "2023-08-18T19:57:53.066072Z", "shell.execute_reply": "2023-08-18T19:57:53.064963Z" }, "origin_pos": 7, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "def init_cnn(module): #@save\n", " \"\"\"Initialize weights for CNNs.\"\"\"\n", " if type(module) == nn.Linear or type(module) == nn.Conv2d:\n", " nn.init.xavier_uniform_(module.weight)" ] }, { "cell_type": "code", "execution_count": 3, "id": "20dc7869", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:57:53.070462Z", "iopub.status.busy": "2023-08-18T19:57:53.069656Z", "iopub.status.idle": "2023-08-18T19:57:53.078800Z", "shell.execute_reply": "2023-08-18T19:57:53.077655Z" }, "origin_pos": 8, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class LeNet(d2l.Classifier): #@save\n", " \"\"\"The LeNet-5 model.\"\"\"\n", " def __init__(self, lr=0.1, num_classes=10):\n", " super().__init__()\n", " self.save_hyperparameters()\n", " self.net = nn.Sequential(\n", " nn.LazyConv2d(6, kernel_size=5, padding=2), nn.Sigmoid(),\n", " nn.AvgPool2d(kernel_size=2, stride=2),\n", " nn.LazyConv2d(16, kernel_size=5), nn.Sigmoid(),\n", " nn.AvgPool2d(kernel_size=2, 
stride=2),\n", " nn.Flatten(),\n", " nn.LazyLinear(120), nn.Sigmoid(),\n", " nn.LazyLinear(84), nn.Sigmoid(),\n", " nn.LazyLinear(num_classes))" ] }, { "cell_type": "markdown", "id": "084d8710", "metadata": { "origin_pos": 10 }, "source": [ "We have taken some liberty in the reproduction of LeNet insofar as we have replaced the Gaussian activation layer by\n", "a softmax layer. This greatly simplifies the implementation, not least due to the\n", "fact that the Gaussian decoder is rarely used nowadays. Other than that, this network matches\n", "the original LeNet-5 architecture.\n" ] }, { "cell_type": "markdown", "id": "54aac677", "metadata": { "origin_pos": 11, "tab": [ "pytorch" ] }, "source": [ "Let's see what happens inside the network. By passing a\n", "single-channel (black and white)\n", "$28 \\times 28$ image through the network\n", "and printing the output shape at each layer,\n", "we can [**inspect the model**] to ensure\n", "that its operations line up with\n", "what we expect from :numref:`img_lenet_vert`.\n" ] }, { "cell_type": "markdown", "id": "9e7befc6", "metadata": { "origin_pos": 13 }, "source": [ "![Compressed notation for LeNet-5.](../img/lenet-vert.svg)\n", ":label:`img_lenet_vert`\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "8f2c67b9", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:57:53.083550Z", "iopub.status.busy": "2023-08-18T19:57:53.082696Z", "iopub.status.idle": "2023-08-18T19:57:53.135302Z", "shell.execute_reply": "2023-08-18T19:57:53.134064Z" }, "origin_pos": 14, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Conv2d output shape:\t torch.Size([1, 6, 28, 28])\n", "Sigmoid output shape:\t torch.Size([1, 6, 28, 28])\n", "AvgPool2d output shape:\t torch.Size([1, 6, 14, 14])\n", "Conv2d output shape:\t torch.Size([1, 16, 10, 10])\n", "Sigmoid output shape:\t torch.Size([1, 16, 10, 10])\n", "AvgPool2d output shape:\t torch.Size([1, 16, 5, 5])\n", "Flatten output shape:\t torch.Size([1, 400])\n", "Linear output shape:\t torch.Size([1, 120])\n", "Sigmoid output shape:\t torch.Size([1, 120])\n", "Linear output shape:\t torch.Size([1, 84])\n", "Sigmoid output shape:\t torch.Size([1, 84])\n", "Linear output shape:\t torch.Size([1, 10])\n" ] } ], "source": [ "@d2l.add_to_class(d2l.Classifier) #@save\n", "def layer_summary(self, X_shape):\n", " X = torch.randn(*X_shape)\n", " for layer in self.net:\n", " X = layer(X)\n", " print(layer.__class__.__name__, 'output shape:\\t', X.shape)\n", "\n", "model = LeNet()\n", "model.layer_summary((1, 1, 28, 28))" ] }, { "cell_type": "markdown", "id": "a30a6b0f", "metadata": { "origin_pos": 17 }, "source": [ "Note that the height and width of the representation\n", "at each layer throughout the convolutional block\n", "is reduced (compared with the previous layer).\n", "The first convolutional layer uses two pixels of padding\n", "to compensate for the reduction in height and width\n", "that would otherwise result from using a $5 \\times 5$ kernel.\n", "As an aside, the image size of $28 \\times 28$ pixels in the original\n", "MNIST OCR dataset is a result of *trimming* two pixel rows (and columns) from the\n", "original scans that measured $32 \\times 32$ pixels. 
This was done primarily to\n", "save space (a 30% reduction) at a time when megabytes mattered.\n", "\n", "In contrast, the second convolutional layer forgoes padding,\n", "and thus the height and width are both reduced by four pixels.\n", "As we go up the stack of layers,\n", "the number of channels increases layer-over-layer\n", "from 1 in the input to 6 after the first convolutional layer\n", "and 16 after the second convolutional layer.\n", "However, each pooling layer halves the height and width.\n", "Finally, each fully connected layer reduces dimensionality,\n", "ultimately emitting an output whose dimension\n", "matches the number of classes.\n", "\n", "\n", "## Training\n", "\n", "Now that we have implemented the model,\n", "let's [**run an experiment to see how the LeNet-5 model fares on Fashion-MNIST**].\n", "\n", "While CNNs have fewer parameters,\n", "they can still be more expensive to compute\n", "than similarly deep MLPs\n", "because each parameter participates in many more\n", "multiplications.\n", "If you have access to a GPU, this might be a good time\n", "to put it into action to speed up training.\n", "Note that\n", "the `d2l.Trainer` class takes care of all details.\n", "By default, it initializes the model parameters on the\n", "available devices.\n", "Just as with MLPs, our loss function is cross-entropy,\n", "and we minimize it via minibatch stochastic gradient descent.\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "8f82d79c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:57:53.139574Z", "iopub.status.busy": "2023-08-18T19:57:53.139010Z", "iopub.status.idle": "2023-08-18T19:58:55.458380Z", "shell.execute_reply": "2023-08-18T19:58:55.457425Z" }, "origin_pos": 18, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "image/svg+xml": [ "<SVG figure omitted: training progress plot produced by trainer.fit (Matplotlib v3.7.2, generated 2023-08-18T19:58:55)>" ], "text/plain": [ "
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "trainer = d2l.Trainer(max_epochs=10, num_gpus=1)\n", "data = d2l.FashionMNIST(batch_size=128)\n", "model = LeNet(lr=0.1)\n", "model.apply_init([next(iter(data.get_dataloader(True)))[0]], init_cnn)\n", "trainer.fit(model, data)" ] }, { "cell_type": "markdown", "id": "4f396baf", "metadata": { "origin_pos": 20 }, "source": [ "## Summary\n", "\n", "We have made significant progress in this chapter. We moved from the MLPs of the 1980s to the CNNs of the 1990s and early 2000s. The architectures proposed, e.g., in the form of LeNet-5 remain meaningful, even to this day. It is worth comparing the error rates on Fashion-MNIST achievable with LeNet-5 both to the very best possible with MLPs (:numref:`sec_mlp-implementation`) and those with significantly more advanced architectures such as ResNet (:numref:`sec_resnet`). LeNet is much more similar to the latter than to the former. One of the primary differences, as we shall see, is that greater amounts of computation enabled significantly more complex architectures.\n", "\n", "A second difference is the relative ease with which we were able to implement LeNet. What used to be an engineering challenge worth months of C++ and assembly code, engineering to improve SN, an early Lisp-based deep learning tool :cite:`Bottou.Le-Cun.1988`, and finally experimentation with models can now be accomplished in minutes. It is this incredible productivity boost that has democratized deep learning model development tremendously. In the next chapter we will journey down this rabbit to hole to see where it takes us.\n", "\n", "## Exercises\n", "\n", "1. Let's modernize LeNet. Implement and test the following changes:\n", " 1. Replace average pooling with max-pooling.\n", " 1. Replace the softmax layer with ReLU.\n", "1. Try to change the size of the LeNet style network to improve its accuracy in addition to max-pooling and ReLU.\n", " 1. Adjust the convolution window size.\n", " 1. Adjust the number of output channels.\n", " 1. Adjust the number of convolution layers.\n", " 1. Adjust the number of fully connected layers.\n", " 1. Adjust the learning rates and other training details (e.g., initialization and number of epochs).\n", "1. Try out the improved network on the original MNIST dataset.\n", "1. Display the activations of the first and second layer of LeNet for different inputs (e.g., sweaters and coats).\n", "1. What happens to the activations when you feed significantly different images into the network (e.g., cats, cars, or even random noise)?\n" ] }, { "cell_type": "markdown", "id": "44e195cb", "metadata": { "origin_pos": 22, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/74)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }