{ "cells": [ { "cell_type": "markdown", "id": "2c780759", "metadata": { "origin_pos": 0 }, "source": [ "# Adadelta\n", ":label:`sec_adadelta`\n", "\n", "Adadelta is yet another variant of AdaGrad (:numref:`sec_adagrad`). The main difference lies in the fact that it decreases the amount by which the learning rate is adaptive to coordinates. Moreover, traditionally it referred to as not having a learning rate since it uses the amount of change itself as calibration for future change. The algorithm was proposed in :citet:`Zeiler.2012`. It is fairly straightforward, given the discussion of previous algorithms so far.\n", "\n", "## The Algorithm\n", "\n", "In a nutshell, Adadelta uses two state variables, $\\mathbf{s}_t$ to store a leaky average of the second moment of the gradient and $\\Delta\\mathbf{x}_t$ to store a leaky average of the second moment of the change of parameters in the model itself. Note that we use the original notation and naming of the authors for compatibility with other publications and implementations (there is no other real reason why one should use different Greek variables to indicate a parameter serving the same purpose in momentum, Adagrad, RMSProp, and Adadelta).\n", "\n", "Here are the technical details of Adadelta. Given the parameter du jour is $\\rho$, we obtain the following leaky updates similarly to :numref:`sec_rmsprop`:\n", "\n", "$$\\begin{aligned}\n", " \\mathbf{s}_t & = \\rho \\mathbf{s}_{t-1} + (1 - \\rho) \\mathbf{g}_t^2.\n", "\\end{aligned}$$\n", "\n", "The difference to :numref:`sec_rmsprop` is that we perform updates with the rescaled gradient $\\mathbf{g}_t'$, i.e.,\n", "\n", "$$\\begin{aligned}\n", " \\mathbf{x}_t & = \\mathbf{x}_{t-1} - \\mathbf{g}_t'. \\\\\n", "\\end{aligned}$$\n", "\n", "So what is the rescaled gradient $\\mathbf{g}_t'$? We can calculate it as follows:\n", "\n", "$$\\begin{aligned}\n", " \\mathbf{g}_t' & = \\frac{\\sqrt{\\Delta\\mathbf{x}_{t-1} + \\epsilon}}{\\sqrt{{\\mathbf{s}_t + \\epsilon}}} \\odot \\mathbf{g}_t, \\\\\n", "\\end{aligned}$$\n", "\n", "where $\\Delta \\mathbf{x}_{t-1}$ is the leaky average of the squared rescaled gradients $\\mathbf{g}_t'$. We initialize $\\Delta \\mathbf{x}_{0}$ to be $0$ and update it at each step with $\\mathbf{g}_t'$, i.e.,\n", "\n", "$$\\begin{aligned}\n", " \\Delta \\mathbf{x}_t & = \\rho \\Delta\\mathbf{x}_{t-1} + (1 - \\rho) {\\mathbf{g}_t'}^2,\n", "\\end{aligned}$$\n", "\n", "and $\\epsilon$ (a small value such as $10^{-5}$) is added to maintain numerical stability.\n", "\n", "\n", "\n", "## Implementation\n", "\n", "Adadelta needs to maintain two state variables for each variable, $\\mathbf{s}_t$ and $\\Delta\\mathbf{x}_t$. 
{ "cell_type": "markdown", "metadata": { "origin_pos": 1 }, "source": [
"## Implementation\n",
"\n",
"Adadelta needs to maintain two state variables for each parameter, $\\mathbf{s}_t$ and $\\Delta\\mathbf{x}_t$. This yields the following implementation.\n" ] },
{ "cell_type": "code", "execution_count": 1, "id": "47913d6a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:25:51.129461Z", "iopub.status.busy": "2023-08-18T19:25:51.128827Z", "iopub.status.idle": "2023-08-18T19:25:53.909384Z", "shell.execute_reply": "2023-08-18T19:25:53.908480Z" }, "origin_pos": 2, "tab": [ "pytorch" ] }, "outputs": [], "source": [
"%matplotlib inline\n",
"import torch\n",
"from d2l import torch as d2l\n",
"\n",
"\n",
"def init_adadelta_states(feature_dim):\n",
"    # One (s, delta) pair per parameter, all initialized to zero\n",
"    s_w, s_b = torch.zeros((feature_dim, 1)), torch.zeros(1)\n",
"    delta_w, delta_b = torch.zeros((feature_dim, 1)), torch.zeros(1)\n",
"    return ((s_w, delta_w), (s_b, delta_b))\n",
"\n",
"def adadelta(params, states, hyperparams):\n",
"    rho, eps = hyperparams['rho'], 1e-5\n",
"    for p, (s, delta) in zip(params, states):\n",
"        with torch.no_grad():\n",
"            # In-place updates via [:]\n",
"            s[:] = rho * s + (1 - rho) * torch.square(p.grad)\n",
"            # Rescaled gradient g'\n",
"            g = (torch.sqrt(delta + eps) / torch.sqrt(s + eps)) * p.grad\n",
"            p[:] -= g\n",
"            # Leaky average of the squared rescaled gradient\n",
"            delta[:] = rho * delta + (1 - rho) * g * g\n",
"        p.grad.data.zero_()" ] },
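{ "cell_type": "markdown", "metadata": {}, "source": [
"As a quick smoke test (a hypothetical check, not part of the original recipe), we can apply `adadelta` once to dummy parameters with made-up gradients and verify that the parameters move and the state variables pick up the squared statistics.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"# Hypothetical smoke test with feature_dim = 3 and made-up gradients\n",
"w = torch.ones((3, 1), requires_grad=True)\n",
"b = torch.zeros(1, requires_grad=True)\n",
"w.grad = torch.full((3, 1), 0.1)\n",
"b.grad = torch.ones(1)\n",
"states = init_adadelta_states(3)\n",
"adadelta([w, b], states, {'rho': 0.9})\n",
"print(w.detach().flatten(), b.detach())" ] },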
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "data_iter, feature_dim = d2l.get_data_ch11(batch_size=10)\n", "d2l.train_ch11(adadelta, init_adadelta_states(feature_dim),\n", " {'rho': 0.9}, data_iter, feature_dim);" ] }, { "cell_type": "markdown", "id": "cbc1262f", "metadata": { "origin_pos": 6 }, "source": [ "For a concise implementation we simply use the Adadelta algorithm from high-level APIs. This yields the following one-liner for a much more compact invocation.\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "f0409fab", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:25:57.656890Z", "iopub.status.busy": "2023-08-18T19:25:57.656308Z", "iopub.status.idle": "2023-08-18T19:26:04.378742Z", "shell.execute_reply": "2023-08-18T19:26:04.377488Z" }, "origin_pos": 8, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "loss: 0.243, 0.119 sec/epoch\n" ] }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", " \n", " 2023-08-18T19:26:04.337362\n", " image/svg+xml\n", " \n", " \n", " Matplotlib v3.7.2, https://matplotlib.org/\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "trainer = torch.optim.Adadelta\n", "d2l.train_concise_ch11(trainer, {'rho': 0.9}, data_iter)" ] }, { "cell_type": "markdown", "id": "4012d40c", "metadata": { "origin_pos": 10 }, "source": [ "## Summary\n", "\n", "* Adadelta has no learning rate parameter. Instead, it uses the rate of change in the parameters itself to adapt the learning rate.\n", "* Adadelta requires two state variables to store the second moments of gradient and the change in parameters.\n", "* Adadelta uses leaky averages to keep a running estimate of the appropriate statistics.\n", "\n", "## Exercises\n", "\n", "1. Adjust the value of $\\rho$. What happens?\n", "1. Show how to implement the algorithm without the use of $\\mathbf{g}_t'$. Why might this be a good idea?\n", "1. Is Adadelta really learning rate free? Could you find optimization problems that break Adadelta?\n", "1. Compare Adadelta to Adagrad and RMS prop to discuss their convergence behavior.\n" ] }, { "cell_type": "markdown", "id": "5e628909", "metadata": { "origin_pos": 12, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/1076)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }