{ "cells": [ { "cell_type": "markdown", "id": "0de623d7", "metadata": { "origin_pos": 1 }, "source": [ "# Data Manipulation\n", ":label:`sec_ndarray`\n", "\n", "In order to get anything done, \n", "we need some way to store and manipulate data.\n", "Generally, there are two important things \n", "we need to do with data: \n", "(i) acquire them; \n", "and (ii) process them once they are inside the computer. \n", "There is no point in acquiring data \n", "without some way to store it, \n", "so to start, let's get our hands dirty\n", "with $n$-dimensional arrays, \n", "which we also call *tensors*.\n", "If you already know the NumPy \n", "scientific computing package, \n", "this will be a breeze.\n", "For all modern deep learning frameworks,\n", "the *tensor class* (`ndarray` in MXNet, \n", "`Tensor` in PyTorch and TensorFlow) \n", "resembles NumPy's `ndarray`,\n", "with a few killer features added.\n", "First, the tensor class\n", "supports automatic differentiation.\n", "Second, it leverages GPUs\n", "to accelerate numerical computation,\n", "whereas NumPy only runs on CPUs.\n", "These properties make neural networks\n", "both easy to code and fast to run.\n", "\n", "\n", "\n", "## Getting Started\n" ] }, { "cell_type": "markdown", "id": "084dc517", "metadata": { "origin_pos": 3, "tab": [ "pytorch" ] }, "source": [ "(**To start, we import the PyTorch library.\n", "Note that the package name is `torch`.**)\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "01fa8e58", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:55.152236Z", "iopub.status.busy": "2023-08-18T19:32:55.151500Z", "iopub.status.idle": "2023-08-18T19:32:57.051589Z", "shell.execute_reply": "2023-08-18T19:32:57.050409Z" }, "origin_pos": 6, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "import torch" ] }, { "cell_type": "markdown", "id": "8d828de8", "metadata": { "origin_pos": 9 }, "source": [ "[**A tensor represents a (possibly multidimensional) array of numerical values.**]\n", "In the one-dimensional case, i.e., when only one axis is needed for the data,\n", "a tensor is called a *vector*.\n", "With two axes, a tensor is called a *matrix*.\n", "With $k > 2$ axes, we drop the specialized names\n", "and just refer to the object as a $k^\\textrm{th}$-*order tensor*.\n" ] }, { "cell_type": "markdown", "id": "1a471639", "metadata": { "origin_pos": 11, "tab": [ "pytorch" ] }, "source": [ "PyTorch provides a variety of functions \n", "for creating new tensors \n", "prepopulated with values. 
\n", "For example, by invoking `arange(n)`,\n", "we can create a vector of evenly spaced values,\n", "starting at 0 (included) \n", "and ending at `n` (not included).\n", "By default, the interval size is $1$.\n", "Unless otherwise specified, \n", "new tensors are stored in main memory \n", "and designated for CPU-based computation.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "b6aa30a9", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.056039Z", "iopub.status.busy": "2023-08-18T19:32:57.055276Z", "iopub.status.idle": "2023-08-18T19:32:57.089028Z", "shell.execute_reply": "2023-08-18T19:32:57.088195Z" }, "origin_pos": 14, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.])" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = torch.arange(12, dtype=torch.float32)\n", "x" ] }, { "cell_type": "markdown", "id": "1a12b5d8", "metadata": { "origin_pos": 18, "tab": [ "pytorch" ] }, "source": [ "Each of these values is called\n", "an *element* of the tensor.\n", "The tensor `x` contains 12 elements.\n", "We can inspect the total number of elements \n", "in a tensor via its `numel` method.\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "640cadaf", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.093138Z", "iopub.status.busy": "2023-08-18T19:32:57.092473Z", "iopub.status.idle": "2023-08-18T19:32:57.098450Z", "shell.execute_reply": "2023-08-18T19:32:57.097452Z" }, "origin_pos": 21, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "12" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.numel()" ] }, { "cell_type": "markdown", "id": "d50c7483", "metadata": { "origin_pos": 23 }, "source": [ "(**We can access a tensor's *shape***) \n", "(the length along each axis)\n", "by inspecting its `shape` attribute.\n", "Because we are dealing with a vector here,\n", "the `shape` contains just a single element\n", "and is identical to the size.\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "6e0a9616", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.102194Z", "iopub.status.busy": "2023-08-18T19:32:57.101575Z", "iopub.status.idle": "2023-08-18T19:32:57.107424Z", "shell.execute_reply": "2023-08-18T19:32:57.106501Z" }, "origin_pos": 24, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "torch.Size([12])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.shape" ] }, { "cell_type": "markdown", "id": "5c60413a", "metadata": { "origin_pos": 25 }, "source": [ "We can [**change the shape of a tensor\n", "without altering its size or values**],\n", "by invoking `reshape`.\n", "For example, we can transform \n", "our vector `x` whose shape is (12,) \n", "to a matrix `X` with shape (3, 4).\n", "This new tensor retains all elements\n", "but reconfigures them into a matrix.\n", "Notice that the elements of our vector\n", "are laid out one row at a time and thus\n", "`x[3] == X[0, 3]`.\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "6092207c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.111467Z", "iopub.status.busy": "2023-08-18T19:32:57.110749Z", "iopub.status.idle": "2023-08-18T19:32:57.117759Z", "shell.execute_reply": "2023-08-18T19:32:57.116917Z" }, "origin_pos": 26, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[ 
0., 1., 2., 3.],\n", " [ 4., 5., 6., 7.],\n", " [ 8., 9., 10., 11.]])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X = x.reshape(3, 4)\n", "X" ] }, { "cell_type": "markdown", "id": "2d2e1706", "metadata": { "origin_pos": 28 }, "source": [ "Note that specifying every shape component\n", "to `reshape` is redundant.\n", "Because we already know our tensor's size,\n", "we can work out one component of the shape given the rest.\n", "For example, given a tensor of size $n$\n", "and target shape ($h$, $w$),\n", "we know that $w = n/h$.\n", "We can place a `-1` for the shape component\n", "that should be inferred automatically.\n", "In our case, instead of calling `x.reshape(3, 4)`,\n", "we could have equivalently called `x.reshape(-1, 4)` or `x.reshape(3, -1)`.\n" ] }
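, { "cell_type": "markdown", "id": "reshape-infer-md", "metadata": { "origin_pos": 28, "tab": [ "pytorch" ] }, "source": [ "As a quick sanity check (a minimal sketch added for illustration),\n", "we can confirm that both `-1` forms\n", "produce the same matrix as `x.reshape(3, 4)`.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "reshape-infer-code", "metadata": { "tab": [ "pytorch" ] }, "outputs": [], "source": [ "# Sketch: `-1` asks `reshape` to infer the missing shape component.\n", "torch.equal(x.reshape(-1, 4), X), torch.equal(x.reshape(3, -1), X)" ] }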
, { "cell_type": "markdown", "id": "2d2e1706b", "metadata": { "origin_pos": 28 }, "source": [ "Practitioners often need to work with tensors\n", "initialized to contain all 0s or 1s.\n", "[**We can construct a tensor with all elements set to 0**] (~~or one~~)\n", "and a shape of (2, 3, 4) via the `zeros` function.\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "383cafca", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.122018Z", "iopub.status.busy": "2023-08-18T19:32:57.121194Z", "iopub.status.idle": "2023-08-18T19:32:57.128294Z", "shell.execute_reply": "2023-08-18T19:32:57.127285Z" }, "origin_pos": 30, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[[0., 0., 0., 0.],\n", " [0., 0., 0., 0.],\n", " [0., 0., 0., 0.]],\n", "\n", " [[0., 0., 0., 0.],\n", " [0., 0., 0., 0.],\n", " [0., 0., 0., 0.]]])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "torch.zeros((2, 3, 4))" ] }, { "cell_type": "markdown", "id": "1e967d02", "metadata": { "origin_pos": 33 }, "source": [ "Similarly, we can create a tensor \n", "with all 1s by invoking `ones`.\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "0ea249d4", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.132534Z", "iopub.status.busy": "2023-08-18T19:32:57.131716Z", "iopub.status.idle": "2023-08-18T19:32:57.139029Z", "shell.execute_reply": "2023-08-18T19:32:57.138135Z" }, "origin_pos": 35, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[[1., 1., 1., 1.],\n", " [1., 1., 1., 1.],\n", " [1., 1., 1., 1.]],\n", "\n", " [[1., 1., 1., 1.],\n", " [1., 1., 1., 1.],\n", " [1., 1., 1., 1.]]])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "torch.ones((2, 3, 4))" ] }, { "cell_type": "markdown", "id": "0615f2d6", "metadata": { "origin_pos": 38 }, "source": [ "We often wish to \n", "[**sample each element randomly (and independently)**] \n", "from a given probability distribution.\n", "For example, the parameters of neural networks\n", "are often initialized randomly.\n", "The following snippet creates a tensor \n", "with elements drawn from \n", "a standard Gaussian (normal) distribution\n", "with mean 0 and standard deviation 1.\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "2254595d", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.143051Z", "iopub.status.busy": "2023-08-18T19:32:57.142388Z", "iopub.status.idle": "2023-08-18T19:32:57.149695Z", "shell.execute_reply": "2023-08-18T19:32:57.148813Z" }, "origin_pos": 40, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[ 0.1351, -0.9099, -0.2028, 2.1937],\n", " [-0.3200, -0.7545, 0.8086, -1.8730],\n", " [ 0.3929, 0.4931, 0.9114, -0.7072]])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "torch.randn(3, 4)" ] }, { "cell_type": "markdown", "id": "d35eda39", "metadata": { "origin_pos": 43 }, "source": [ "Finally, we can construct tensors by\n", "[**supplying the exact values for each element**],\n", "using (possibly nested) Python lists \n", "containing numerical literals.\n", "Here, we construct a matrix with a list of lists,\n", "where the outermost list corresponds to axis 0,\n", "and the inner lists correspond to axis 1.\n" ] }, { "cell_type": "code", "execution_count": 9, "id": "b26863d8", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.153567Z", "iopub.status.busy": "2023-08-18T19:32:57.153222Z", "iopub.status.idle": "2023-08-18T19:32:57.160436Z", "shell.execute_reply": "2023-08-18T19:32:57.159548Z" }, "origin_pos": 45, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[2, 1, 4, 3],\n", " [1, 2, 3, 4],\n", " [4, 3, 2, 1]])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "torch.tensor([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])" ] }, { "cell_type": "markdown", "id": "5b589cdb", "metadata": { "origin_pos": 48 }, "source": [ "## Indexing and Slicing\n", "\n", "As with Python lists,\n", "we can access tensor elements \n", "by indexing (starting with 0).\n", "To access an element based on its position\n", "relative to the end of the list,\n", "we can use negative indexing.\n", "Finally, we can access whole ranges of indices \n", "via slicing (e.g., `X[start:stop]`), \n", "where the returned value includes \n", "the first index (`start`) *but not the last* (`stop`).\n", "Note that when only one index (or slice)\n", "is specified for a $k^\textrm{th}$-order tensor,\n", "it is applied along axis 0.\n", "Thus, in the following code,\n", "[**`[-1]` selects the last row and `[1:3]`\n", "selects the second and third rows**].\n" ] }, { "cell_type": "code", "execution_count": 10, "id": "d9049a53", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.164537Z", "iopub.status.busy": "2023-08-18T19:32:57.163812Z", "iopub.status.idle": "2023-08-18T19:32:57.171699Z", "shell.execute_reply": "2023-08-18T19:32:57.170451Z" }, "origin_pos": 49, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(tensor([ 8., 9., 10., 11.]),\n", " tensor([[ 4., 5., 6., 7.],\n", " [ 8., 9., 10., 11.]]))" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X[-1], X[1:3]" ] }, { "cell_type": "markdown", "id": "5450673b", "metadata": { "origin_pos": 50, "tab": [ "pytorch" ] }, "source": [ "Beyond reading them, (**we can also *write* elements of a matrix by specifying indices.**)\n" ] }, { "cell_type": "code", "execution_count": 11, "id": "9246619c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.176047Z", "iopub.status.busy": "2023-08-18T19:32:57.175685Z", "iopub.status.idle": "2023-08-18T19:32:57.182893Z", "shell.execute_reply": "2023-08-18T19:32:57.181890Z" }, "origin_pos": 52, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[ 0., 1., 2., 3.],\n", " [ 4., 5., 17., 7.],\n", " [ 8., 9., 10., 11.]])" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X[1, 2] = 17\n", "X" ] }, { "cell_type": "markdown", "id": "31f06903", "metadata": { "origin_pos": 55 }, "source": [ "If we want [**to 
assign multiple elements the same value,\n", "we apply the indexing on the left-hand side \n", "of the assignment operation.**]\n", "For instance, `[:2, :]` accesses \n", "the first and second rows,\n", "where `:` takes all the elements along axis 1 (column).\n", "While we discussed indexing for matrices,\n", "this also works for vectors\n", "and for tensors of more than two dimensions.\n" ] }, { "cell_type": "code", "execution_count": 12, "id": "0532f024", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.186970Z", "iopub.status.busy": "2023-08-18T19:32:57.186270Z", "iopub.status.idle": "2023-08-18T19:32:57.193303Z", "shell.execute_reply": "2023-08-18T19:32:57.192338Z" }, "origin_pos": 56, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[12., 12., 12., 12.],\n", " [12., 12., 12., 12.],\n", " [ 8., 9., 10., 11.]])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X[:2, :] = 12\n", "X" ] }, { "cell_type": "markdown", "id": "02cdce97", "metadata": { "origin_pos": 59 }, "source": [ "## Operations\n", "\n", "Now that we know how to construct tensors\n", "and how to read from and write to their elements,\n", "we can begin to manipulate them\n", "with various mathematical operations.\n", "Among the most useful of these \n", "are the *elementwise* operations.\n", "These apply a standard scalar operation\n", "to each element of a tensor.\n", "For functions that take two tensors as inputs,\n", "elementwise operations apply some standard binary operator\n", "on each pair of corresponding elements.\n", "We can create an elementwise function \n", "from any function that maps \n", "from a scalar to a scalar.\n", "\n", "In mathematical notation, we denote such\n", "*unary* scalar operators (taking one input)\n", "by the signature \n", "$f: \\mathbb{R} \\rightarrow \\mathbb{R}$.\n", "This just means that the function maps\n", "from any real number onto some other real number.\n", "Most standard operators, including unary ones like $e^x$, can be applied elementwise.\n" ] }, { "cell_type": "code", "execution_count": 13, "id": "6dd6724c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.197301Z", "iopub.status.busy": "2023-08-18T19:32:57.196599Z", "iopub.status.idle": "2023-08-18T19:32:57.206136Z", "shell.execute_reply": "2023-08-18T19:32:57.205188Z" }, "origin_pos": 61, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([162754.7969, 162754.7969, 162754.7969, 162754.7969, 162754.7969,\n", " 162754.7969, 162754.7969, 162754.7969, 2980.9580, 8103.0840,\n", " 22026.4648, 59874.1406])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "torch.exp(x)" ] }, { "cell_type": "markdown", "id": "b70f353f", "metadata": { "origin_pos": 64 }, "source": [ "Likewise, we denote *binary* scalar operators,\n", "which map pairs of real numbers\n", "to a (single) real number\n", "via the signature \n", "$f: \\mathbb{R}, \\mathbb{R} \\rightarrow \\mathbb{R}$.\n", "Given any two vectors $\\mathbf{u}$ \n", "and $\\mathbf{v}$ *of the same shape*,\n", "and a binary operator $f$, we can produce a vector\n", "$\\mathbf{c} = F(\\mathbf{u},\\mathbf{v})$\n", "by setting $c_i \\gets f(u_i, v_i)$ for all $i$,\n", "where $c_i, u_i$, and $v_i$ are the $i^\\textrm{th}$ elements\n", "of vectors $\\mathbf{c}, \\mathbf{u}$, and $\\mathbf{v}$.\n", "Here, we produced the vector-valued\n", "$F: \\mathbb{R}^d, \\mathbb{R}^d \\rightarrow \\mathbb{R}^d$\n", "by *lifting* 
the scalar function\n", "to an elementwise vector operation.\n", "The common standard arithmetic operators\n", "for addition (`+`), subtraction (`-`), \n", "multiplication (`*`), division (`/`), \n", "and exponentiation (`**`)\n", "have all been *lifted* to elementwise operations\n", "for any two tensors with the same (arbitrary) shape.\n" ] }, { "cell_type": "code", "execution_count": 14, "id": "89bc996d", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.210417Z", "iopub.status.busy": "2023-08-18T19:32:57.209741Z", "iopub.status.idle": "2023-08-18T19:32:57.219298Z", "shell.execute_reply": "2023-08-18T19:32:57.218318Z" }, "origin_pos": 66, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(tensor([ 3., 4., 6., 10.]),\n", " tensor([-1., 0., 2., 6.]),\n", " tensor([ 2., 4., 8., 16.]),\n", " tensor([0.5000, 1.0000, 2.0000, 4.0000]),\n", " tensor([ 1., 4., 16., 64.]))" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = torch.tensor([1.0, 2, 4, 8])\n", "y = torch.tensor([2, 2, 2, 2])\n", "x + y, x - y, x * y, x / y, x ** y" ] }, { "cell_type": "markdown", "id": "04ae1d38", "metadata": { "origin_pos": 69 }, "source": [ "In addition to elementwise computations,\n", "we can also perform linear algebraic operations,\n", "such as dot products and matrix multiplications.\n", "We will elaborate on these\n", "in :numref:`sec_linear-algebra`.\n", "\n", "We can also [***concatenate* multiple tensors,**]\n", "stacking them end-to-end to form a larger one.\n", "We just need to provide a list of tensors\n", "and tell the system along which axis to concatenate.\n", "The example below shows what happens when we concatenate\n", "two matrices along rows (axis 0)\n", "versus along columns (axis 1).\n", "We can see that the first output's axis-0 length ($6$)\n", "is the sum of the two input tensors' axis-0 lengths ($3 + 3$);\n", "while the second output's axis-1 length ($8$)\n", "is the sum of the two input tensors' axis-1 lengths ($4 + 4$).\n" ] }, { "cell_type": "code", "execution_count": 15, "id": "43aa9012", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.223534Z", "iopub.status.busy": "2023-08-18T19:32:57.222711Z", "iopub.status.idle": "2023-08-18T19:32:57.233166Z", "shell.execute_reply": "2023-08-18T19:32:57.232145Z" }, "origin_pos": 71, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(tensor([[ 0., 1., 2., 3.],\n", " [ 4., 5., 6., 7.],\n", " [ 8., 9., 10., 11.],\n", " [ 2., 1., 4., 3.],\n", " [ 1., 2., 3., 4.],\n", " [ 4., 3., 2., 1.]]),\n", " tensor([[ 0., 1., 2., 3., 2., 1., 4., 3.],\n", " [ 4., 5., 6., 7., 1., 2., 3., 4.],\n", " [ 8., 9., 10., 11., 4., 3., 2., 1.]]))" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X = torch.arange(12, dtype=torch.float32).reshape((3,4))\n", "Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])\n", "torch.cat((X, Y), dim=0), torch.cat((X, Y), dim=1)" ] }, { "cell_type": "markdown", "id": "346adeed", "metadata": { "origin_pos": 74 }, "source": [ "Sometimes, we want to \n", "[**construct a binary tensor via *logical statements*.**]\n", "Take `X == Y` as an example.\n", "For each position `i, j`, if `X[i, j]` and `Y[i, j]` are equal, \n", "then the corresponding entry in the result takes value `True` (i.e., 1);\n", "otherwise it takes value `False` (i.e., 0).\n" ] }, { "cell_type": "code", "execution_count": 16, "id": "91d39e58", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.237276Z", 
"iopub.status.busy": "2023-08-18T19:32:57.236485Z", "iopub.status.idle": "2023-08-18T19:32:57.243133Z", "shell.execute_reply": "2023-08-18T19:32:57.242117Z" }, "origin_pos": 75, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[False, True, False, True],\n", " [False, False, False, False],\n", " [False, False, False, False]])" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X == Y" ] }, { "cell_type": "markdown", "id": "00448db5", "metadata": { "origin_pos": 76 }, "source": [ "[**Summing all the elements in the tensor**] yields a tensor with only one element.\n" ] }, { "cell_type": "code", "execution_count": 17, "id": "080b0125", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.247142Z", "iopub.status.busy": "2023-08-18T19:32:57.246480Z", "iopub.status.idle": "2023-08-18T19:32:57.253117Z", "shell.execute_reply": "2023-08-18T19:32:57.252212Z" }, "origin_pos": 77, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor(66.)" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X.sum()" ] }, { "cell_type": "markdown", "id": "e6a78360", "metadata": { "origin_pos": 79 }, "source": [ "## Broadcasting\n", ":label:`subsec_broadcasting`\n", "\n", "By now, you know how to perform \n", "elementwise binary operations\n", "on two tensors of the same shape. \n", "Under certain conditions,\n", "even when shapes differ, \n", "we can still [**perform elementwise binary operations\n", "by invoking the *broadcasting mechanism*.**]\n", "Broadcasting works according to \n", "the following two-step procedure:\n", "(i) expand one or both arrays\n", "by copying elements along axes with length 1\n", "so that after this transformation,\n", "the two tensors have the same shape;\n", "(ii) perform an elementwise operation\n", "on the resulting arrays.\n" ] }, { "cell_type": "code", "execution_count": 18, "id": "be37d2de", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.256932Z", "iopub.status.busy": "2023-08-18T19:32:57.256264Z", "iopub.status.idle": "2023-08-18T19:32:57.263823Z", "shell.execute_reply": "2023-08-18T19:32:57.262881Z" }, "origin_pos": 81, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(tensor([[0],\n", " [1],\n", " [2]]),\n", " tensor([[0, 1]]))" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = torch.arange(3).reshape((3, 1))\n", "b = torch.arange(2).reshape((1, 2))\n", "a, b" ] }, { "cell_type": "markdown", "id": "6c7e8410", "metadata": { "origin_pos": 84 }, "source": [ "Since `a` and `b` are $3\\times1$ \n", "and $1\\times2$ matrices, respectively,\n", "their shapes do not match up.\n", "Broadcasting produces a larger $3\\times2$ matrix \n", "by replicating matrix `a` along the columns\n", "and matrix `b` along the rows\n", "before adding them elementwise.\n" ] }, { "cell_type": "code", "execution_count": 19, "id": "9f62e827", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.267856Z", "iopub.status.busy": "2023-08-18T19:32:57.267172Z", "iopub.status.idle": "2023-08-18T19:32:57.273497Z", "shell.execute_reply": "2023-08-18T19:32:57.272587Z" }, "origin_pos": 85, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[0, 1],\n", " [1, 2],\n", " [2, 3]])" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a + b" ] }, { "cell_type": "markdown", "id": "c5d68609", "metadata": { 
"origin_pos": 86 }, "source": [ "## Saving Memory\n", "\n", "[**Running operations can cause new memory to be\n", "allocated to host results.**]\n", "For example, if we write `Y = X + Y`,\n", "we dereference the tensor that `Y` used to point to\n", "and instead point `Y` at the newly allocated memory.\n", "We can demonstrate this issue with Python's `id()` function,\n", "which gives us the exact address \n", "of the referenced object in memory.\n", "Note that after we run `Y = Y + X`,\n", "`id(Y)` points to a different location.\n", "That is because Python first evaluates `Y + X`,\n", "allocating new memory for the result \n", "and then points `Y` to this new location in memory.\n" ] }, { "cell_type": "code", "execution_count": 20, "id": "754a7433", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.277697Z", "iopub.status.busy": "2023-08-18T19:32:57.277047Z", "iopub.status.idle": "2023-08-18T19:32:57.283549Z", "shell.execute_reply": "2023-08-18T19:32:57.282613Z" }, "origin_pos": 87, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "before = id(Y)\n", "Y = Y + X\n", "id(Y) == before" ] }, { "cell_type": "markdown", "id": "322d26f5", "metadata": { "origin_pos": 88 }, "source": [ "This might be undesirable for two reasons.\n", "First, we do not want to run around\n", "allocating memory unnecessarily all the time.\n", "In machine learning, we often have\n", "hundreds of megabytes of parameters\n", "and update all of them multiple times per second.\n", "Whenever possible, we want to perform these updates *in place*.\n", "Second, we might point at the \n", "same parameters from multiple variables.\n", "If we do not update in place, \n", "we must be careful to update all of these references,\n", "lest we spring a memory leak \n", "or inadvertently refer to stale parameters.\n" ] }, { "cell_type": "markdown", "id": "82880947", "metadata": { "origin_pos": 89, "tab": [ "pytorch" ] }, "source": [ "Fortunately, (**performing in-place operations**) is easy.\n", "We can assign the result of an operation\n", "to a previously allocated array `Y`\n", "by using slice notation: `Y[:] = `.\n", "To illustrate this concept, \n", "we overwrite the values of tensor `Z`,\n", "after initializing it, using `zeros_like`,\n", "to have the same shape as `Y`.\n" ] }, { "cell_type": "code", "execution_count": 21, "id": "c4d62609", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.287695Z", "iopub.status.busy": "2023-08-18T19:32:57.286964Z", "iopub.status.idle": "2023-08-18T19:32:57.293078Z", "shell.execute_reply": "2023-08-18T19:32:57.292048Z" }, "origin_pos": 92, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "id(Z): 140381179266448\n", "id(Z): 140381179266448\n" ] } ], "source": [ "Z = torch.zeros_like(Y)\n", "print('id(Z):', id(Z))\n", "Z[:] = X + Y\n", "print('id(Z):', id(Z))" ] }, { "cell_type": "markdown", "id": "d745b125", "metadata": { "origin_pos": 95, "tab": [ "pytorch" ] }, "source": [ "[**If the value of `X` is not reused in subsequent computations,\n", "we can also use `X[:] = X + Y` or `X += Y`\n", "to reduce the memory overhead of the operation.**]\n" ] }, { "cell_type": "code", "execution_count": 22, "id": "b8c13447", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.296911Z", "iopub.status.busy": "2023-08-18T19:32:57.296361Z", "iopub.status.idle": "2023-08-18T19:32:57.302754Z", 
"shell.execute_reply": "2023-08-18T19:32:57.301805Z" }, "origin_pos": 97, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "before = id(X)\n", "X += Y\n", "id(X) == before" ] }, { "cell_type": "markdown", "id": "b5f887dd", "metadata": { "origin_pos": 99 }, "source": [ "## Conversion to Other Python Objects\n" ] }, { "cell_type": "markdown", "id": "cd057d04", "metadata": { "origin_pos": 101, "tab": [ "pytorch" ] }, "source": [ "[**Converting to a NumPy tensor (`ndarray`)**], or vice versa, is easy.\n", "The torch tensor and NumPy array \n", "will share their underlying memory, \n", "and changing one through an in-place operation \n", "will also change the other.\n" ] }, { "cell_type": "code", "execution_count": 23, "id": "576963aa", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.306812Z", "iopub.status.busy": "2023-08-18T19:32:57.306088Z", "iopub.status.idle": "2023-08-18T19:32:57.312356Z", "shell.execute_reply": "2023-08-18T19:32:57.311478Z" }, "origin_pos": 103, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(numpy.ndarray, torch.Tensor)" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = X.numpy()\n", "B = torch.from_numpy(A)\n", "type(A), type(B)" ] }, { "cell_type": "markdown", "id": "b2def017", "metadata": { "origin_pos": 106 }, "source": [ "To (**convert a size-1 tensor to a Python scalar**),\n", "we can invoke the `item` function or Python's built-in functions.\n" ] }, { "cell_type": "code", "execution_count": 24, "id": "388c5252", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:32:57.316471Z", "iopub.status.busy": "2023-08-18T19:32:57.315825Z", "iopub.status.idle": "2023-08-18T19:32:57.322867Z", "shell.execute_reply": "2023-08-18T19:32:57.322007Z" }, "origin_pos": 108, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(tensor([3.5000]), 3.5, 3.5, 3)" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = torch.tensor([3.5])\n", "a, a.item(), float(a), int(a)" ] }, { "cell_type": "markdown", "id": "9373077d", "metadata": { "origin_pos": 111 }, "source": [ "## Summary\n", "\n", "The tensor class is the main interface for storing and manipulating data in deep learning libraries.\n", "Tensors provide a variety of functionalities including construction routines; indexing and slicing; basic mathematics operations; broadcasting; memory-efficient assignment; and conversion to and from other Python objects.\n", "\n", "\n", "## Exercises\n", "\n", "1. Run the code in this section. Change the conditional statement `X == Y` to `X < Y` or `X > Y`, and then see what kind of tensor you can get.\n", "1. Replace the two tensors that operate by element in the broadcasting mechanism with other shapes, e.g., 3-dimensional tensors. 
Is the result the same as expected?\n" ] }, { "cell_type": "markdown", "id": "d2776415", "metadata": { "origin_pos": 113, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/27)\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.23" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }