{ "cells": [ { "cell_type": "markdown", "id": "25fee3b7", "metadata": { "origin_pos": 1 }, "source": [ "# GPUs\n", ":label:`sec_use_gpu`\n", "\n", "In :numref:`tab_intro_decade`, we illustrated the rapid growth\n", "of computation over the past two decades.\n", "In a nutshell, GPU performance has increased\n", "by a factor of 1000 every decade since 2000.\n", "This offers great opportunities but it also suggests\n", "that there was significant demand for such performance.\n", "\n", "\n", "In this section, we begin to discuss how to harness\n", "this computational performance for your research.\n", "First by using a single GPU and at a later point,\n", "how to use multiple GPUs and multiple servers (with multiple GPUs).\n", "\n", "Specifically, we will discuss how\n", "to use a single NVIDIA GPU for calculations.\n", "First, make sure you have at least one NVIDIA GPU installed.\n", "Then, download the [NVIDIA driver and CUDA](https://developer.nvidia.com/cuda-downloads)\n", "and follow the prompts to set the appropriate path.\n", "Once these preparations are complete,\n", "the `nvidia-smi` command can be used\n", "to (**view the graphics card information**).\n" ] }, { "cell_type": "markdown", "id": "92058ee6", "metadata": { "origin_pos": 3, "tab": [ "pytorch" ] }, "source": [ "In PyTorch, every array has a device; we often refer it as a *context*.\n", "So far, by default, all variables\n", "and associated computation\n", "have been assigned to the CPU.\n", "Typically, other contexts might be various GPUs.\n", "Things can get even hairier when\n", "we deploy jobs across multiple servers.\n", "By assigning arrays to contexts intelligently,\n", "we can minimize the time spent\n", "transferring data between devices.\n", "For example, when training neural networks on a server with a GPU,\n", "we typically prefer for the model's parameters to live on the GPU.\n" ] }, { "cell_type": "markdown", "id": "49bda574", "metadata": { "origin_pos": 4 }, "source": [ "To run the programs in this section,\n", "you need at least two GPUs.\n", "Note that this might be extravagant for most desktop computers\n", "but it is easily available in the cloud, e.g.,\n", "by using the AWS EC2 multi-GPU instances.\n", "Almost all other sections do *not* require multiple GPUs, but here we simply wish to illustrate data flow between different devices.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "a4ef6e6b", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:36:58.270913Z", "iopub.status.busy": "2023-08-18T19:36:58.270055Z", "iopub.status.idle": "2023-08-18T19:37:01.897059Z", "shell.execute_reply": "2023-08-18T19:37:01.896067Z" }, "origin_pos": 6, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "import torch\n", "from torch import nn\n", "from d2l import torch as d2l" ] }, { "cell_type": "markdown", "id": "0006dfe3", "metadata": { "origin_pos": 9 }, "source": [ "## [**Computing Devices**]\n", "\n", "We can specify devices, such as CPUs and GPUs,\n", "for storage and calculation.\n", "By default, tensors are created in the main memory\n", "and then the CPU is used for calculations.\n" ] }, { "cell_type": "markdown", "id": "dff8c64e", "metadata": { "origin_pos": 11, "tab": [ "pytorch" ] }, "source": [ "In PyTorch, the CPU and GPU can be indicated by `torch.device('cpu')` and `torch.device('cuda')`.\n", "It should be noted that the `cpu` device\n", "means all physical CPUs and memory.\n", "This means that PyTorch's calculations\n", "will try to use all CPU cores.\n", "However, a `gpu` device only represents one card\n", "and the corresponding memory.\n", "If there are multiple GPUs, we use `torch.device(f'cuda:{i}')`\n", "to represent the $i^\\textrm{th}$ GPU ($i$ starts at 0).\n", "Also, `gpu:0` and `gpu` are equivalent.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "d996a07b", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:01.901957Z", "iopub.status.busy": "2023-08-18T19:37:01.901006Z", "iopub.status.idle": "2023-08-18T19:37:01.911076Z", "shell.execute_reply": "2023-08-18T19:37:01.909836Z" }, "origin_pos": 12, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(device(type='cpu'),\n", " device(type='cuda', index=0),\n", " device(type='cuda', index=1))" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def cpu(): #@save\n", " \"\"\"Get the CPU device.\"\"\"\n", " return torch.device('cpu')\n", "\n", "def gpu(i=0): #@save\n", " \"\"\"Get a GPU device.\"\"\"\n", " return torch.device(f'cuda:{i}')\n", "\n", "cpu(), gpu(), gpu(1)" ] }, { "cell_type": "markdown", "id": "0a643379", "metadata": { "origin_pos": 14 }, "source": [ "We can (**query the number of available GPUs.**)\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "b20d4266", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:01.915209Z", "iopub.status.busy": "2023-08-18T19:37:01.914386Z", "iopub.status.idle": "2023-08-18T19:37:01.922363Z", "shell.execute_reply": "2023-08-18T19:37:01.921100Z" }, "origin_pos": 15, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "2" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def num_gpus(): #@save\n", " \"\"\"Get the number of available GPUs.\"\"\"\n", " return torch.cuda.device_count()\n", "\n", "num_gpus()" ] }, { "cell_type": "markdown", "id": "ab10cfc5", "metadata": { "origin_pos": 17 }, "source": [ "Now we [**define two convenient functions that allow us\n", "to run code even if the requested GPUs do not exist.**]\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "6ac547f6", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:01.926431Z", "iopub.status.busy": "2023-08-18T19:37:01.925574Z", "iopub.status.idle": "2023-08-18T19:37:01.935019Z", "shell.execute_reply": "2023-08-18T19:37:01.933960Z" }, "origin_pos": 18, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "(device(type='cuda', index=0),\n", " device(type='cpu'),\n", " [device(type='cuda', index=0), device(type='cuda', index=1)])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def try_gpu(i=0): #@save\n", " \"\"\"Return gpu(i) if exists, otherwise return cpu().\"\"\"\n", " if num_gpus() >= i + 1:\n", " return gpu(i)\n", " return cpu()\n", "\n", "def try_all_gpus(): #@save\n", " \"\"\"Return all available GPUs, or [cpu(),] if no GPU exists.\"\"\"\n", " return [gpu(i) for i in range(num_gpus())]\n", "\n", "try_gpu(), try_gpu(10), try_all_gpus()" ] }, { "cell_type": "markdown", "id": "73d23836", "metadata": { "origin_pos": 19 }, "source": [ "## Tensors and GPUs\n" ] }, { "cell_type": "markdown", "id": "b04367f7", "metadata": { "origin_pos": 20, "tab": [ "pytorch" ] }, "source": [ "By default, tensors are created on the CPU.\n", "We can [**query the device where the tensor is located.**]\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "a3e90ced", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:01.939959Z", "iopub.status.busy": "2023-08-18T19:37:01.938949Z", "iopub.status.idle": "2023-08-18T19:37:01.950067Z", "shell.execute_reply": "2023-08-18T19:37:01.949195Z" }, "origin_pos": 24, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "device(type='cpu')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = torch.tensor([1, 2, 3])\n", "x.device" ] }, { "cell_type": "markdown", "id": "c538315e", "metadata": { "origin_pos": 27 }, "source": [ "It is important to note that whenever we want\n", "to operate on multiple terms,\n", "they need to be on the same device.\n", "For instance, if we sum two tensors,\n", "we need to make sure that both arguments\n", "live on the same device---otherwise the framework\n", "would not know where to store the result\n", "or even how to decide where to perform the computation.\n", "\n", "### Storage on the GPU\n", "\n", "There are several ways to [**store a tensor on the GPU.**]\n", "For example, we can specify a storage device when creating a tensor.\n", "Next, we create the tensor variable `X` on the first `gpu`.\n", "The tensor created on a GPU only consumes the memory of this GPU.\n", "We can use the `nvidia-smi` command to view GPU memory usage.\n", "In general, we need to make sure that we do not create data that exceeds the GPU memory limit.\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "13913886", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:01.953772Z", "iopub.status.busy": "2023-08-18T19:37:01.953191Z", "iopub.status.idle": "2023-08-18T19:37:02.420258Z", "shell.execute_reply": "2023-08-18T19:37:02.419290Z" }, "origin_pos": 29, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[1., 1., 1.],\n", " [1., 1., 1.]], device='cuda:0')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X = torch.ones(2, 3, device=try_gpu())\n", "X" ] }, { "cell_type": "markdown", "id": "32ea65fc", "metadata": { "origin_pos": 32 }, "source": [ "Assuming that you have at least two GPUs, the following code will (**create a random tensor, `Y`, on the second GPU.**)\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "6f4c7aff", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.424924Z", "iopub.status.busy": "2023-08-18T19:37:02.424008Z", "iopub.status.idle": "2023-08-18T19:37:02.688334Z", "shell.execute_reply": "2023-08-18T19:37:02.687371Z" }, "origin_pos": 34, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[0.0022, 0.5723, 0.2890],\n", " [0.1456, 0.3537, 0.7359]], device='cuda:1')" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Y = torch.rand(2, 3, device=try_gpu(1))\n", "Y" ] }, { "cell_type": "markdown", "id": "f3bf740b", "metadata": { "origin_pos": 37 }, "source": [ "### Copying\n", "\n", "[**If we want to compute `X + Y`,\n", "we need to decide where to perform this operation.**]\n", "For instance, as shown in :numref:`fig_copyto`,\n", "we can transfer `X` to the second GPU\n", "and perform the operation there.\n", "*Do not* simply add `X` and `Y`,\n", "since this will result in an exception.\n", "The runtime engine would not know what to do:\n", "it cannot find data on the same device and it fails.\n", "Since `Y` lives on the second GPU,\n", "we need to move `X` there before we can add the two.\n", "\n", "![Copy data to perform an operation on the same device.](../img/copyto.svg)\n", ":label:`fig_copyto`\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "3560f0b5", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.693634Z", "iopub.status.busy": "2023-08-18T19:37:02.693201Z", "iopub.status.idle": "2023-08-18T19:37:02.701839Z", "shell.execute_reply": "2023-08-18T19:37:02.701004Z" }, "origin_pos": 39, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[1., 1., 1.],\n", " [1., 1., 1.]], device='cuda:0')\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.]], device='cuda:1')\n" ] } ], "source": [ "Z = X.cuda(1)\n", "print(X)\n", "print(Z)" ] }, { "cell_type": "markdown", "id": "5cc2252c", "metadata": { "origin_pos": 42 }, "source": [ "Now that [**the data (both `Z` and `Y`) are on the same GPU), we can add them up.**]\n" ] }, { "cell_type": "code", "execution_count": 9, "id": "2cfea6e5", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.707070Z", "iopub.status.busy": "2023-08-18T19:37:02.705679Z", "iopub.status.idle": "2023-08-18T19:37:02.735588Z", "shell.execute_reply": "2023-08-18T19:37:02.734193Z" }, "origin_pos": 43, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[1.0022, 1.5723, 1.2890],\n", " [1.1456, 1.3537, 1.7359]], device='cuda:1')" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Y + Z" ] }, { "cell_type": "markdown", "id": "4657c339", "metadata": { "origin_pos": 45, "tab": [ "pytorch" ] }, "source": [ "But what if your variable `Z` already lived on your second GPU?\n", "What happens if we still call `Z.cuda(1)`?\n", "It will return `Z` instead of making a copy and allocating new memory.\n" ] }, { "cell_type": "code", "execution_count": 10, "id": "0450cb7c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.743585Z", "iopub.status.busy": "2023-08-18T19:37:02.743275Z", "iopub.status.idle": "2023-08-18T19:37:02.750645Z", "shell.execute_reply": "2023-08-18T19:37:02.748215Z" }, "origin_pos": 49, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Z.cuda(1) is Z" ] }, { "cell_type": "markdown", "id": "5658048c", "metadata": { "origin_pos": 52 }, "source": [ "### Side Notes\n", "\n", "People use GPUs to do machine learning\n", "because they expect them to be fast.\n", "But transferring variables between devices is slow: much slower than computation.\n", "So we want you to be 100% certain\n", "that you want to do something slow before we let you do it.\n", "If the deep learning framework just did the copy automatically\n", "without crashing then you might not realize\n", "that you had written some slow code.\n", "\n", "Transferring data is not only slow, it also makes parallelization a lot more difficult,\n", "since we have to wait for data to be sent (or rather to be received)\n", "before we can proceed with more operations.\n", "This is why copy operations should be taken with great care.\n", "As a rule of thumb, many small operations\n", "are much worse than one big operation.\n", "Moreover, several operations at a time\n", "are much better than many single operations interspersed in the code\n", "unless you know what you are doing.\n", "This is the case since such operations can block if one device\n", "has to wait for the other before it can do something else.\n", "It is a bit like ordering your coffee in a queue\n", "rather than pre-ordering it by phone\n", "and finding out that it is ready when you are.\n", "\n", "Last, when we print tensors or convert tensors to the NumPy format,\n", "if the data is not in the main memory,\n", "the framework will copy it to the main memory first,\n", "resulting in additional transmission overhead.\n", "Even worse, it is now subject to the dreaded global interpreter lock\n", "that makes everything wait for Python to complete.\n", "\n", "\n", "## [**Neural Networks and GPUs**]\n", "\n", "Similarly, a neural network model can specify devices.\n", "The following code puts the model parameters on the GPU.\n" ] }, { "cell_type": "code", "execution_count": 11, "id": "8bcc281a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.756785Z", "iopub.status.busy": "2023-08-18T19:37:02.756022Z", "iopub.status.idle": "2023-08-18T19:37:02.763247Z", "shell.execute_reply": "2023-08-18T19:37:02.762013Z" }, "origin_pos": 54, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "net = nn.Sequential(nn.LazyLinear(1))\n", "net = net.to(device=try_gpu())" ] }, { "cell_type": "markdown", "id": "4fb2c254", "metadata": { "origin_pos": 57 }, "source": [ "We will see many more examples of\n", "how to run models on GPUs in the following chapters,\n", "simply because the models will become somewhat more computationally intensive.\n", "\n", "For example, when the input is a tensor on the GPU, the model will calculate the result on the same GPU.\n" ] }, { "cell_type": "code", "execution_count": 12, "id": "351af69d", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.768539Z", "iopub.status.busy": "2023-08-18T19:37:02.767413Z", "iopub.status.idle": "2023-08-18T19:37:02.809950Z", "shell.execute_reply": "2023-08-18T19:37:02.807298Z" }, "origin_pos": 58, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[0.7802],\n", " [0.7802]], device='cuda:0', grad_fn=)" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net(X)" ] }, { "cell_type": "markdown", "id": "1ad9b55b", "metadata": { "origin_pos": 60 }, "source": [ "Let's (**confirm that the model parameters are stored on the same GPU.**)\n" ] }, { "cell_type": "code", "execution_count": 13, "id": "6fdbd2c3", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.816317Z", "iopub.status.busy": "2023-08-18T19:37:02.815749Z", "iopub.status.idle": "2023-08-18T19:37:02.822467Z", "shell.execute_reply": "2023-08-18T19:37:02.821657Z" }, "origin_pos": 62, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "device(type='cuda', index=0)" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net[0].weight.data.device" ] }, { "cell_type": "markdown", "id": "eb5940ac", "metadata": { "origin_pos": 65 }, "source": [ "Let the trainer support GPU.\n" ] }, { "cell_type": "code", "execution_count": 14, "id": "1283ae3a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:37:02.826029Z", "iopub.status.busy": "2023-08-18T19:37:02.825482Z", "iopub.status.idle": "2023-08-18T19:37:02.832065Z", "shell.execute_reply": "2023-08-18T19:37:02.831156Z" }, "origin_pos": 67, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "@d2l.add_to_class(d2l.Trainer) #@save\n", "def __init__(self, max_epochs, num_gpus=0, gradient_clip_val=0):\n", " self.save_hyperparameters()\n", " self.gpus = [d2l.gpu(i) for i in range(min(num_gpus, d2l.num_gpus()))]\n", "\n", "@d2l.add_to_class(d2l.Trainer) #@save\n", "def prepare_batch(self, batch):\n", " if self.gpus:\n", " batch = [a.to(self.gpus[0]) for a in batch]\n", " return batch\n", "\n", "@d2l.add_to_class(d2l.Trainer) #@save\n", "def prepare_model(self, model):\n", " model.trainer = self\n", " model.board.xlim = [0, self.max_epochs]\n", " if self.gpus:\n", " model.to(self.gpus[0])\n", " self.model = model" ] }, { "cell_type": "markdown", "id": "4f33c768", "metadata": { "origin_pos": 69 }, "source": [ "In short, as long as all data and parameters are on the same device, we can learn models efficiently. In the following chapters we will see several such examples.\n", "\n", "## Summary\n", "\n", "We can specify devices for storage and calculation, such as the CPU or GPU.\n", " By default, data is created in the main memory\n", " and then uses the CPU for calculations.\n", "The deep learning framework requires all input data for calculation\n", " to be on the same device,\n", " be it CPU or the same GPU.\n", "You can lose significant performance by moving data without care.\n", " A typical mistake is as follows: computing the loss\n", " for every minibatch on the GPU and reporting it back\n", " to the user on the command line (or logging it in a NumPy `ndarray`)\n", " will trigger a global interpreter lock which stalls all GPUs.\n", " It is much better to allocate memory\n", " for logging inside the GPU and only move larger logs.\n", "\n", "## Exercises\n", "\n", "1. Try a larger computation task, such as the multiplication of large matrices,\n", " and see the difference in speed between the CPU and GPU.\n", " What about a task with a small number of calculations?\n", "1. How should we read and write model parameters on the GPU?\n", "1. Measure the time it takes to compute 1000\n", " matrix--matrix multiplications of $100 \\times 100$ matrices\n", " and log the Frobenius norm of the output matrix one result at a time. Compare it with keeping a log on the GPU and transferring only the final result.\n", "1. Measure how much time it takes to perform two matrix--matrix multiplications\n", " on two GPUs at the same time. Compare it with computing in in sequence\n", " on one GPU. Hint: you should see almost linear scaling.\n" ] }, { "cell_type": "markdown", "id": "b3cfc42b", "metadata": { "origin_pos": 71, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/63)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }