{ "cells": [ { "cell_type": "markdown", "id": "53fef961", "metadata": { "origin_pos": 1 }, "source": [ "# Layers and Modules\n", ":label:`sec_model_construction`\n", "\n", "When we first introduced neural networks,\n", "we focused on linear models with a single output.\n", "Here, the entire model consists of just a single neuron.\n", "Note that a single neuron\n", "(i) takes some set of inputs;\n", "(ii) generates a corresponding scalar output;\n", "and (iii) has a set of associated parameters that can be updated\n", "to optimize some objective function of interest.\n", "Then, once we started thinking about networks with multiple outputs,\n", "we leveraged vectorized arithmetic\n", "to characterize an entire layer of neurons.\n", "Just like individual neurons,\n", "layers (i) take a set of inputs,\n", "(ii) generate corresponding outputs,\n", "and (iii) are described by a set of tunable parameters.\n", "When we worked through softmax regression,\n", "a single layer was itself the model.\n", "However, even when we subsequently\n", "introduced MLPs,\n", "we could still think of the model as\n", "retaining this same basic structure.\n", "\n", "Interestingly, for MLPs,\n", "both the entire model and its constituent layers\n", "share this structure.\n", "The entire model takes in raw inputs (the features),\n", "generates outputs (the predictions),\n", "and possesses parameters\n", "(the combined parameters from all constituent layers).\n", "Likewise, each individual layer ingests inputs\n", "(supplied by the previous layer)\n", "generates outputs (the inputs to the subsequent layer),\n", "and possesses a set of tunable parameters that are updated\n", "according to the signal that flows backwards\n", "from the subsequent layer.\n", "\n", "\n", "While you might think that neurons, layers, and models\n", "give us enough abstractions to go about our business,\n", "it turns out that we often find it convenient\n", "to speak about components that are\n", "larger than an individual layer\n", "but smaller than the entire model.\n", "For example, the ResNet-152 architecture,\n", "which is wildly popular in computer vision,\n", "possesses hundreds of layers.\n", "These layers consist of repeating patterns of *groups of layers*. Implementing such a network one layer at a time can grow tedious.\n", "This concern is not just hypothetical---such\n", "design patterns are common in practice.\n", "The ResNet architecture mentioned above\n", "won the 2015 ImageNet and COCO computer vision competitions\n", "for both recognition and detection :cite:`He.Zhang.Ren.ea.2016`\n", "and remains a go-to architecture for many vision tasks.\n", "Similar architectures in which layers are arranged\n", "in various repeating patterns\n", "are now ubiquitous in other domains,\n", "including natural language processing and speech.\n", "\n", "To implement these complex networks,\n", "we introduce the concept of a neural network *module*.\n", "A module could describe a single layer,\n", "a component consisting of multiple layers,\n", "or the entire model itself!\n", "One benefit of working with the module abstraction\n", "is that they can be combined into larger artifacts,\n", "often recursively. This is illustrated in :numref:`fig_blocks`. 
By defining code to generate modules\n", "of arbitrary complexity on demand,\n", "we can write surprisingly compact code\n", "and still implement complex neural networks.\n", "\n", "![Multiple layers are combined into modules, forming repeating patterns of larger models.](../img/blocks.svg)\n", ":label:`fig_blocks`\n", "\n", "\n", "From a programming standpoint, a module is represented by a *class*.\n", "Any subclass of it must define a forward propagation method\n", "that transforms its input into output\n", "and must store any necessary parameters.\n", "Note that some modules do not require any parameters at all.\n", "Finally, a module must possess a backpropagation method,\n", "for purposes of calculating gradients.\n", "Fortunately, due to some behind-the-scenes magic\n", "supplied by automatic differentiation\n", "(introduced in :numref:`sec_autograd`),\n", "when defining our own module\n", "we only need to worry about parameters\n", "and the forward propagation method.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "3b911ed7", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:38.169811Z", "iopub.status.busy": "2023-08-18T19:31:38.169219Z", "iopub.status.idle": "2023-08-18T19:31:40.246403Z", "shell.execute_reply": "2023-08-18T19:31:40.245375Z" }, "origin_pos": 3, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "import torch\n", "from torch import nn\n", "from torch.nn import functional as F" ] }, { "cell_type": "markdown", "id": "b80abfaa", "metadata": { "origin_pos": 6 }, "source": [ "[**To begin, we revisit the code\n", "that we used to implement MLPs**]\n", "(:numref:`sec_mlp`).\n", "The following code generates a network\n", "with one fully connected hidden layer\n", "with 256 units and ReLU activation,\n", "followed by a fully connected output layer\n", "with ten units (no activation function).\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "7df830d6", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.251527Z", "iopub.status.busy": "2023-08-18T19:31:40.250671Z", "iopub.status.idle": "2023-08-18T19:31:40.284734Z", "shell.execute_reply": "2023-08-18T19:31:40.283757Z" }, "origin_pos": 8, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "torch.Size([2, 10])" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))\n", "\n", "X = torch.rand(2, 20)\n", "net(X).shape" ] }, { "cell_type": "markdown", "id": "fe16ebaf", "metadata": { "origin_pos": 12, "tab": [ "pytorch" ] }, "source": [ "In this example, we constructed\n", "our model by instantiating an `nn.Sequential`,\n", "passing layers as arguments in the order in which they should be executed.\n", "In short, (**`nn.Sequential` defines a special kind of `Module`**),\n", "the class that represents a module in PyTorch.\n", "It maintains an ordered list of constituent `Module`s.\n", "Note that each of the two fully connected layers is an instance of the `Linear` class,\n", "which is itself a subclass of `Module`.\n", "The forward propagation (`forward`) method is also remarkably simple:\n", "it chains each module in the list together,\n", "passing the output of each as input to the next.\n", "Note that until now, we have been invoking our models\n", "via the construction `net(X)` to obtain their outputs.\n", "This is actually just shorthand for `net.__call__(X)`.\n" ] }, { "cell_type": "markdown", "id": "93754877", "metadata": { "origin_pos": 14 }, 
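"source": [ "Before building modules of our own, it helps to make these last two points concrete.\n", "The following minimal sketch (an illustrative check that uses only the `net` and `X` defined above) verifies that `net(X)` and `net.__call__(X)` return the same output,\n", "and lists the constituent modules that `nn.Sequential` maintains, in order.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "7df830d6-check", "metadata": { "tab": [ "pytorch" ] }, "outputs": [], "source": [ "# net(X) dispatches to net.__call__(X), which in turn calls forward\n", "print(torch.equal(net(X), net.__call__(X)))\n", "# Sequential keeps its constituent modules in execution order\n", "for idx, module in enumerate(net.children()):\n", "    print(idx, type(module).__name__)" ] }, { "cell_type": "markdown", "id": "93754877-custom", "metadata": { "origin_pos": 14 },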
"source": [ "## [**A Custom Module**]\n", "\n", "Perhaps the easiest way to develop intuition\n", "about how a module works\n", "is to implement one ourselves.\n", "Before we do that,\n", "we briefly summarize the basic functionality\n", "that each module must provide:\n", "\n", "\n", "1. Ingest input data as arguments to its forward propagation method.\n", "1. Generate an output by having the forward propagation method return a value. Note that the output may have a different shape from the input. For example, the first fully connected layer in our model above ingests an input of arbitrary dimension but returns an output of dimension 256.\n", "1. Calculate the gradient of its output with respect to its input, which can be accessed via its backpropagation method. Typically this happens automatically.\n", "1. Store and provide access to those parameters necessary\n", " for executing the forward propagation computation.\n", "1. Initialize model parameters as needed.\n", "\n", "\n", "In the following snippet,\n", "we code up a module from scratch\n", "corresponding to an MLP\n", "with one hidden layer with 256 hidden units,\n", "and a 10-dimensional output layer.\n", "Note that the `MLP` class below inherits the class that represents a module.\n", "We will heavily rely on the parent class's methods,\n", "supplying only our own constructor (the `__init__` method in Python) and the forward propagation method.\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "9c5f010e", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.289115Z", "iopub.status.busy": "2023-08-18T19:31:40.288828Z", "iopub.status.idle": "2023-08-18T19:31:40.295756Z", "shell.execute_reply": "2023-08-18T19:31:40.294461Z" }, "origin_pos": 16, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class MLP(nn.Module):\n", " def __init__(self):\n", " # Call the constructor of the parent class nn.Module to perform\n", " # the necessary initialization\n", " super().__init__()\n", " self.hidden = nn.LazyLinear(256)\n", " self.out = nn.LazyLinear(10)\n", "\n", " # Define the forward propagation of the model, that is, how to return the\n", " # required model output based on the input X\n", " def forward(self, X):\n", " return self.out(F.relu(self.hidden(X)))" ] }, { "cell_type": "markdown", "id": "6b7eaced", "metadata": { "origin_pos": 19 }, "source": [ "Let's first focus on the forward propagation method.\n", "Note that it takes `X` as input,\n", "calculates the hidden representation\n", "with the activation function applied,\n", "and outputs its logits.\n", "In this `MLP` implementation,\n", "both layers are instance variables.\n", "To see why this is reasonable, imagine\n", "instantiating two MLPs, `net1` and `net2`,\n", "and training them on different data.\n", "Naturally, we would expect them\n", "to represent two different learned models.\n", "\n", "We [**instantiate the MLP's layers**]\n", "in the constructor\n", "(**and subsequently invoke these layers**)\n", "on each call to the forward propagation method.\n", "Note a few key details.\n", "First, our customized `__init__` method\n", "invokes the parent class's `__init__` method\n", "via `super().__init__()`\n", "sparing us the pain of restating\n", "boilerplate code applicable to most modules.\n", "We then instantiate our two fully connected layers,\n", "assigning them to `self.hidden` and `self.out`.\n", "Note that unless we implement a new layer,\n", "we need not worry about the backpropagation method\n", "or parameter initialization.\n", "The system 
will generate these methods automatically.\n", "Let's try this out.\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "8c8301dc", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.300597Z", "iopub.status.busy": "2023-08-18T19:31:40.300120Z", "iopub.status.idle": "2023-08-18T19:31:40.308051Z", "shell.execute_reply": "2023-08-18T19:31:40.307090Z" }, "origin_pos": 20, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "torch.Size([2, 10])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = MLP()\n", "net(X).shape" ] }, { "cell_type": "markdown", "id": "98cef5ea", "metadata": { "origin_pos": 22 }, "source": [ "A key virtue of the module abstraction is its versatility.\n", "We can subclass a module to create layers\n", "(such as the fully connected layer class),\n", "entire models (such as the `MLP` class above),\n", "or various components of intermediate complexity.\n", "We exploit this versatility\n", "throughout the coming chapters,\n", "such as when addressing\n", "convolutional neural networks.\n", "\n", "\n", "## [**The Sequential Module**]\n", ":label:`subsec_model-construction-sequential`\n", "\n", "We can now take a closer look\n", "at how the `Sequential` class works.\n", "Recall that `Sequential` was designed\n", "to daisy-chain other modules together.\n", "To build our own simplified `MySequential`,\n", "we just need to define two key methods:\n", "\n", "1. A method for appending modules one by one to a list.\n", "1. A forward propagation method for passing an input through the chain of modules, in the same order as they were appended.\n", "\n", "The following `MySequential` class delivers the same\n", "functionality as the default `Sequential` class.\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "09b7f913", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.312685Z", "iopub.status.busy": "2023-08-18T19:31:40.312400Z", "iopub.status.idle": "2023-08-18T19:31:40.318061Z", "shell.execute_reply": "2023-08-18T19:31:40.317031Z" }, "origin_pos": 24, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class MySequential(nn.Module):\n", " def __init__(self, *args):\n", " super().__init__()\n", " for idx, module in enumerate(args):\n", " self.add_module(str(idx), module)\n", "\n", " def forward(self, X):\n", " for module in self.children():\n", " X = module(X)\n", " return X" ] }, { "cell_type": "markdown", "id": "3c6a2da4", "metadata": { "origin_pos": 28, "tab": [ "pytorch" ] }, "source": [ "In the `__init__` method, we add every module\n", "by calling the `add_module` method. These modules can be accessed by the `children` method later.\n", "In this way, the system knows the added modules,\n", "and it will properly initialize each module's parameters.\n" ] },
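{ "cell_type": "markdown", "id": "3c6a2da4-check", "metadata": {}, "source": [ "A minimal sketch (an illustrative check that builds a throwaway instance from small `nn.Linear` layers) shows both points:\n", "the registered modules appear, with their string names, among the children,\n", "and their parameters are collected automatically.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "09b7f913-check", "metadata": { "tab": [ "pytorch" ] }, "outputs": [], "source": [ "# Modules registered via add_module are tracked by the parent module:\n", "# they are listed among its children and their parameters are collected\n", "tiny = MySequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))\n", "print([name for name, _ in tiny.named_children()])\n", "print(sum(p.numel() for p in tiny.parameters()))  # (4*3 + 3) + (3*1 + 1) = 19" ] },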
{ "cell_type": "markdown", "id": "008229d2", "metadata": { "origin_pos": 29 }, "source": [ "When our `MySequential`'s forward propagation method is invoked,\n", "each added module is executed\n", "in the order in which it was added.\n", "We can now reimplement an MLP\n", "using our `MySequential` class.\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "2b8d5d6a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.323023Z", "iopub.status.busy": "2023-08-18T19:31:40.322454Z", "iopub.status.idle": "2023-08-18T19:31:40.330187Z", "shell.execute_reply": "2023-08-18T19:31:40.329340Z" }, "origin_pos": 31, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "torch.Size([2, 10])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = MySequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))\n", "net(X).shape" ] }, { "cell_type": "markdown", "id": "e98fc5fe", "metadata": { "origin_pos": 34 }, "source": [ "Note that this use of `MySequential`\n", "is identical to the code we previously wrote\n", "for the `Sequential` class\n", "(as described in :numref:`sec_mlp`).\n", "\n", "\n", "## [**Executing Code in the Forward Propagation Method**]\n", "\n", "The `Sequential` class makes model construction easy,\n", "allowing us to assemble new architectures\n", "without having to define our own class.\n", "However, not all architectures are simple daisy chains.\n", "When greater flexibility is required,\n", "we will want to define our own blocks.\n", "For example, we might want to execute\n", "Python's control flow within the forward propagation method.\n", "Moreover, we might want to perform\n", "arbitrary mathematical operations,\n", "not simply relying on predefined neural network layers.\n", "\n", "You may have noticed that until now,\n", "all of the operations in our networks\n", "have acted upon our network's activations\n", "and its parameters.\n", "Sometimes, however, we might want to\n", "incorporate terms\n", "that are neither the result of previous layers\n", "nor updatable parameters.\n", "We call these *constant parameters*.\n", "Say, for example, that we want a layer\n", "that calculates the function\n", "$f(\\mathbf{x},\\mathbf{w}) = c \\cdot \\mathbf{w}^\\top \\mathbf{x}$,\n", "where $\\mathbf{x}$ is the input, $\\mathbf{w}$ is our parameter,\n", "and $c$ is some specified constant\n", "that is not updated during optimization.\n", "So we implement a `FixedHiddenMLP` class as follows.\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "9f8721f0", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.334075Z", "iopub.status.busy": "2023-08-18T19:31:40.333497Z", "iopub.status.idle": "2023-08-18T19:31:40.340281Z", "shell.execute_reply": "2023-08-18T19:31:40.339397Z" }, "origin_pos": 36, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class FixedHiddenMLP(nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " # Random weight parameters that will not compute gradients and\n", " # therefore remain constant during training\n", " self.rand_weight = torch.rand((20, 20))\n", " self.linear = nn.LazyLinear(20)\n", "\n", " def forward(self, X):\n", " X = self.linear(X)\n", " X = F.relu(X @ self.rand_weight + 1)\n", " # Reuse the fully connected layer. 
This is equivalent to sharing\n", " # parameters with two fully connected layers\n", " X = self.linear(X)\n", " # Control flow\n", " while X.abs().sum() > 1:\n", " X /= 2\n", " return X.sum()" ] }, { "cell_type": "markdown", "id": "77e65b0b", "metadata": { "origin_pos": 39 }, "source": [ "In this model,\n", "we implement a hidden layer whose weights\n", "(`self.rand_weight`) are initialized randomly\n", "at instantiation and are thereafter constant.\n", "This weight is not a model parameter\n", "and thus it is never updated by backpropagation.\n", "The network then passes the output of this \"fixed\" layer\n", "through a fully connected layer.\n", "\n", "Note that before returning the output,\n", "our model did something unusual.\n", "We ran a while-loop, testing\n", "the condition that its $\\ell_1$ norm is larger than $1$,\n", "and dividing our output vector by $2$\n", "until the norm no longer exceeded $1$.\n", "Finally, we returned the sum of the entries in `X`.\n", "To our knowledge, no standard neural network\n", "performs this operation.\n", "Note that this particular operation may not be useful\n", "in any real-world task.\n", "Our point is only to show you how to integrate\n", "arbitrary code into the flow of your\n", "neural network computations.\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "ede5347f", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.344398Z", "iopub.status.busy": "2023-08-18T19:31:40.343674Z", "iopub.status.idle": "2023-08-18T19:31:40.355810Z", "shell.execute_reply": "2023-08-18T19:31:40.353856Z" }, "origin_pos": 40, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor(-0.3836, grad_fn=<SumBackward0>)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = FixedHiddenMLP()\n", "net(X)" ] },
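{ "cell_type": "markdown", "id": "77e65b0b-note", "metadata": {}, "source": [ "We can also check that `self.rand_weight` really is invisible to the parameter machinery:\n", "only the fully connected layer's weight and bias show up below.\n", "As a side note (a sketch of common PyTorch practice, not something the code above relies on),\n", "a constant tensor that should travel with the module, for instance moving to a GPU with `.to()` or being saved in the `state_dict`,\n", "is typically registered as a *buffer* via `register_buffer` rather than stored as a plain attribute.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ede5347f-note", "metadata": { "tab": [ "pytorch" ] }, "outputs": [], "source": [ "# rand_weight is a plain tensor attribute, so it is not a parameter\n", "print([name for name, _ in net.named_parameters()])\n", "\n", "# Sketch: registering the constant as a buffer ties it to the module's\n", "# device placement and state_dict without making it trainable\n", "class BufferedHiddenMLP(nn.Module):\n", "    def __init__(self):\n", "        super().__init__()\n", "        self.register_buffer('rand_weight', torch.rand((20, 20)))\n", "        self.linear = nn.LazyLinear(20)\n", "\n", "    def forward(self, X):\n", "        X = self.linear(X)\n", "        return F.relu(X @ self.rand_weight + 1).sum()" ] },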
{ "cell_type": "markdown", "id": "e343c1a3", "metadata": { "origin_pos": 42 }, "source": [ "We can [**mix and match various\n", "ways of assembling modules together.**]\n", "In the following example, we nest modules\n", "in some creative ways.\n" ] }, { "cell_type": "code", "execution_count": 9, "id": "c0c3d190", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:31:40.359588Z", "iopub.status.busy": "2023-08-18T19:31:40.359258Z", "iopub.status.idle": "2023-08-18T19:31:40.372492Z", "shell.execute_reply": "2023-08-18T19:31:40.371497Z" }, "origin_pos": 44, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor(0.0679, grad_fn=<SumBackward0>)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "class NestMLP(nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " self.net = nn.Sequential(nn.LazyLinear(64), nn.ReLU(),\n", " nn.LazyLinear(32), nn.ReLU())\n", " self.linear = nn.LazyLinear(16)\n", "\n", " def forward(self, X):\n", " return self.linear(self.net(X))\n", "\n", "chimera = nn.Sequential(NestMLP(), nn.LazyLinear(20), FixedHiddenMLP())\n", "chimera(X)" ] }, { "cell_type": "markdown", "id": "48005bbf", "metadata": { "origin_pos": 47 }, "source": [ "## Summary\n", "\n", "Individual layers can be modules.\n", "A module can comprise many layers.\n", "A module can likewise comprise many other modules.\n", "\n", "A module can contain code.\n", "Modules take care of lots of housekeeping, including parameter initialization and backpropagation.\n", "Sequential concatenations of layers and modules are handled by the `Sequential` module.\n", "\n", "\n", "## Exercises\n", "\n", "1. What kinds of problems will occur if you change `MySequential` to store modules in a Python list?\n", "1. Implement a module that takes two modules as arguments, say `net1` and `net2`, and returns the concatenated output of both networks in the forward propagation. This is also called a *parallel module*.\n", "1. Assume that you want to concatenate multiple instances of the same network. Implement a factory function that generates multiple instances of the same module and use them to build a larger network.\n" ] }, { "cell_type": "markdown", "id": "d3116594", "metadata": { "origin_pos": 49, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/55)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }