{
"cells": [
{
"cell_type": "markdown",
"id": "21617f68",
"metadata": {
"origin_pos": 0
},
"source": [
"# Recurrent Neural Network Implementation from Scratch\n",
":label:`sec_rnn-scratch`\n",
"\n",
"We are now ready to implement an RNN from scratch.\n",
"In particular, we will train this RNN to function\n",
"as a character-level language model\n",
"(see :numref:`sec_rnn`)\n",
"and train it on a corpus consisting of \n",
"the entire text of H. G. Wells' *The Time Machine*,\n",
"following the data processing steps \n",
"outlined in :numref:`sec_text-sequence`.\n",
"We start by loading the dataset.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "87f65350",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:35.388810Z",
"iopub.status.busy": "2023-08-18T19:43:35.388288Z",
"iopub.status.idle": "2023-08-18T19:43:38.472549Z",
"shell.execute_reply": "2023-08-18T19:43:38.471587Z"
},
"origin_pos": 3,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import math\n",
"import torch\n",
"from torch import nn\n",
"from torch.nn import functional as F\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "da92bb7e",
"metadata": {
"origin_pos": 6
},
"source": [
"## RNN Model\n",
"\n",
"We begin by defining a class \n",
"to implement the RNN model\n",
"(:numref:`subsec_rnn_w_hidden_states`).\n",
"Note that the number of hidden units `num_hiddens` \n",
"is a tunable hyperparameter.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c155ec3b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.476808Z",
"iopub.status.busy": "2023-08-18T19:43:38.476085Z",
"iopub.status.idle": "2023-08-18T19:43:38.482218Z",
"shell.execute_reply": "2023-08-18T19:43:38.481388Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class RNNScratch(d2l.Module): #@save\n",
" \"\"\"The RNN model implemented from scratch.\"\"\"\n",
" def __init__(self, num_inputs, num_hiddens, sigma=0.01):\n",
" super().__init__()\n",
" self.save_hyperparameters()\n",
" self.W_xh = nn.Parameter(\n",
" torch.randn(num_inputs, num_hiddens) * sigma)\n",
" self.W_hh = nn.Parameter(\n",
" torch.randn(num_hiddens, num_hiddens) * sigma)\n",
" self.b_h = nn.Parameter(torch.zeros(num_hiddens))"
]
},
{
"cell_type": "markdown",
"id": "77e6dfc9",
"metadata": {
"origin_pos": 9
},
"source": [
"[**The `forward` method below defines how to compute \n",
"the output and hidden state at any time step,\n",
"given the current input and the state of the model\n",
"at the previous time step.**]\n",
"Note that the RNN model loops through \n",
"the outermost dimension of `inputs`,\n",
"updating the hidden state \n",
"one time step at a time.\n",
"The model here uses a $\\tanh$ activation function (:numref:`subsec_tanh`).\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cff0453b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.485916Z",
"iopub.status.busy": "2023-08-18T19:43:38.485348Z",
"iopub.status.idle": "2023-08-18T19:43:38.491143Z",
"shell.execute_reply": "2023-08-18T19:43:38.490286Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"@d2l.add_to_class(RNNScratch) #@save\n",
"def forward(self, inputs, state=None):\n",
" if state is None:\n",
" # Initial state with shape: (batch_size, num_hiddens)\n",
" state = torch.zeros((inputs.shape[1], self.num_hiddens),\n",
" device=inputs.device)\n",
" else:\n",
" state, = state\n",
" outputs = []\n",
" for X in inputs: # Shape of inputs: (num_steps, batch_size, num_inputs)\n",
" state = torch.tanh(torch.matmul(X, self.W_xh) +\n",
" torch.matmul(state, self.W_hh) + self.b_h)\n",
" outputs.append(state)\n",
" return outputs, state"
]
},
{
"cell_type": "markdown",
"id": "9d08b75f",
"metadata": {
"origin_pos": 12
},
"source": [
"We can feed a minibatch of input sequences into an RNN model as follows.\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b1edf46f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.495541Z",
"iopub.status.busy": "2023-08-18T19:43:38.494859Z",
"iopub.status.idle": "2023-08-18T19:43:38.527994Z",
"shell.execute_reply": "2023-08-18T19:43:38.527013Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"batch_size, num_inputs, num_hiddens, num_steps = 2, 16, 32, 100\n",
"rnn = RNNScratch(num_inputs, num_hiddens)\n",
"X = torch.ones((num_steps, batch_size, num_inputs))\n",
"outputs, state = rnn(X)"
]
},
{
"cell_type": "markdown",
"id": "1aadd2c8",
"metadata": {
"origin_pos": 15
},
"source": [
"Let's check whether the RNN model\n",
"produces results of the correct shapes\n",
"to ensure that the dimensionality \n",
"of the hidden state remains unchanged.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "16aca390",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.532199Z",
"iopub.status.busy": "2023-08-18T19:43:38.531595Z",
"iopub.status.idle": "2023-08-18T19:43:38.536898Z",
"shell.execute_reply": "2023-08-18T19:43:38.536103Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def check_len(a, n): #@save\n",
" \"\"\"Check the length of a list.\"\"\"\n",
" assert len(a) == n, f'list\\'s length {len(a)} != expected length {n}'\n",
"\n",
"def check_shape(a, shape): #@save\n",
" \"\"\"Check the shape of a tensor.\"\"\"\n",
" assert a.shape == shape, \\\n",
" f'tensor\\'s shape {a.shape} != expected shape {shape}'\n",
"\n",
"check_len(outputs, num_steps)\n",
"check_shape(outputs[0], (batch_size, num_hiddens))\n",
"check_shape(state, (batch_size, num_hiddens))"
]
},
{
"cell_type": "markdown",
"id": "3e59a672",
"metadata": {
"origin_pos": 17
},
"source": [
"## RNN-Based Language Model\n",
"\n",
"The following `RNNLMScratch` class defines \n",
"an RNN-based language model,\n",
"where we pass in our RNN \n",
"via the `rnn` argument\n",
"of the `__init__` method.\n",
"When training language models, \n",
"the inputs and outputs are \n",
"from the same vocabulary. \n",
"Hence, they have the same dimension,\n",
"which is equal to the vocabulary size.\n",
"Note that we use perplexity to evaluate the model. \n",
"As discussed in :numref:`subsec_perplexity`, this ensures \n",
"that sequences of different length are comparable.\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5aac6930",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.541305Z",
"iopub.status.busy": "2023-08-18T19:43:38.540715Z",
"iopub.status.idle": "2023-08-18T19:43:38.547938Z",
"shell.execute_reply": "2023-08-18T19:43:38.547045Z"
},
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class RNNLMScratch(d2l.Classifier): #@save\n",
" \"\"\"The RNN-based language model implemented from scratch.\"\"\"\n",
" def __init__(self, rnn, vocab_size, lr=0.01):\n",
" super().__init__()\n",
" self.save_hyperparameters()\n",
" self.init_params()\n",
"\n",
" def init_params(self):\n",
" self.W_hq = nn.Parameter(\n",
" torch.randn(\n",
" self.rnn.num_hiddens, self.vocab_size) * self.rnn.sigma)\n",
" self.b_q = nn.Parameter(torch.zeros(self.vocab_size))\n",
"\n",
" def training_step(self, batch):\n",
" l = self.loss(self(*batch[:-1]), batch[-1])\n",
" self.plot('ppl', torch.exp(l), train=True)\n",
" return l\n",
"\n",
" def validation_step(self, batch):\n",
" l = self.loss(self(*batch[:-1]), batch[-1])\n",
" self.plot('ppl', torch.exp(l), train=False)"
]
},
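{
"cell_type": "markdown",
"id": "a1f0c2d3",
"metadata": {},
"source": [
"The `training_step` above plots the exponential of the averaged cross-entropy loss; this quantity is the perplexity. As a minimal sketch (with a made-up loss value rather than one produced by the model), the relationship looks as follows:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"avg_cross_entropy = torch.tensor(2.3)   # hypothetical average loss in nats per token\n",
"perplexity = torch.exp(avg_cross_entropy)\n",
"print(perplexity)                        # tensor(9.9742): about as uncertain as a\n",
"                                         # uniform choice among roughly 10 tokens\n",
"```\n"
]
},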
{
"cell_type": "markdown",
"id": "546a1ef8",
"metadata": {
"origin_pos": 21
},
"source": [
"### [**One-Hot Encoding**]\n",
"\n",
"Recall that each token is represented \n",
"by a numerical index indicating the\n",
"position in the vocabulary of the \n",
"corresponding word/character/word piece.\n",
"You might be tempted to build a neural network\n",
"with a single input node (at each time step),\n",
"where the index could be fed in as a scalar value.\n",
"This works when we are dealing with numerical inputs \n",
"like price or temperature, where any two values\n",
"sufficiently close together\n",
"should be treated similarly.\n",
"But this does not quite make sense. \n",
"The $45^{\\textrm{th}}$ and $46^{\\textrm{th}}$ words \n",
"in our vocabulary happen to be \"their\" and \"said\",\n",
"whose meanings are not remotely similar.\n",
"\n",
"When dealing with such categorical data,\n",
"the most common strategy is to represent\n",
"each item by a *one-hot encoding*\n",
"(recall from :numref:`subsec_classification-problem`).\n",
"A one-hot encoding is a vector whose length\n",
"is given by the size of the vocabulary $N$,\n",
"where all entries are set to $0$,\n",
"except for the entry corresponding \n",
"to our token, which is set to $1$.\n",
"For example, if the vocabulary had five elements,\n",
"then the one-hot vectors corresponding \n",
"to indices 0 and 2 would be the following.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "acec5aa4",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.551269Z",
"iopub.status.busy": "2023-08-18T19:43:38.550973Z",
"iopub.status.idle": "2023-08-18T19:43:38.558494Z",
"shell.execute_reply": "2023-08-18T19:43:38.557663Z"
},
"origin_pos": 23,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[1, 0, 0, 0, 0],\n",
" [0, 0, 1, 0, 0]])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"F.one_hot(torch.tensor([0, 2]), 5)"
]
},
{
"cell_type": "markdown",
"id": "4eb5bf29",
"metadata": {
"origin_pos": 26
},
"source": [
"(**The minibatches that we sample at each iteration\n",
"will take the shape (batch size, number of time steps).\n",
"Once representing each input as a one-hot vector,\n",
"we can think of each minibatch as a three-dimensional tensor, \n",
"where the length along the third axis \n",
"is given by the vocabulary size (`len(vocab)`).**)\n",
"We often transpose the input so that we will obtain an output \n",
"of shape (number of time steps, batch size, vocabulary size).\n",
"This will allow us to loop more conveniently through the outermost dimension\n",
"for updating hidden states of a minibatch,\n",
"time step by time step\n",
"(e.g., in the above `forward` method).\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fc8a2b64",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.561759Z",
"iopub.status.busy": "2023-08-18T19:43:38.561465Z",
"iopub.status.idle": "2023-08-18T19:43:38.565830Z",
"shell.execute_reply": "2023-08-18T19:43:38.565018Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"@d2l.add_to_class(RNNLMScratch) #@save\n",
"def one_hot(self, X):\n",
" # Output shape: (num_steps, batch_size, vocab_size)\n",
" return F.one_hot(X.T, self.vocab_size).type(torch.float32)"
]
},
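{
"cell_type": "markdown",
"id": "b4e7d9a2",
"metadata": {},
"source": [
"As a quick illustration of this transposition (a standalone sketch with a hypothetical vocabulary size of 28, unrelated to the model above), a minibatch of token indices of shape (batch size, number of time steps) becomes a tensor whose outermost axis indexes time steps:\n",
"\n",
"```python\n",
"import torch\n",
"from torch.nn import functional as F\n",
"\n",
"X_demo = torch.zeros((2, 5), dtype=torch.int64)      # (batch_size, num_steps)\n",
"embs = F.one_hot(X_demo.T, 28).type(torch.float32)   # transpose, then one-hot encode\n",
"embs.shape                                            # torch.Size([5, 2, 28])\n",
"```\n"
]
},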
{
"cell_type": "markdown",
"id": "ac6d947c",
"metadata": {
"origin_pos": 28
},
"source": [
"### Transforming RNN Outputs\n",
"\n",
"The language model uses a fully connected output layer\n",
"to transform RNN outputs into token predictions at each time step.\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "6c649018",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.568961Z",
"iopub.status.busy": "2023-08-18T19:43:38.568671Z",
"iopub.status.idle": "2023-08-18T19:43:38.574093Z",
"shell.execute_reply": "2023-08-18T19:43:38.573259Z"
},
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"@d2l.add_to_class(RNNLMScratch) #@save\n",
"def output_layer(self, rnn_outputs):\n",
" outputs = [torch.matmul(H, self.W_hq) + self.b_q for H in rnn_outputs]\n",
" return torch.stack(outputs, 1)\n",
"\n",
"@d2l.add_to_class(RNNLMScratch) #@save\n",
"def forward(self, X, state=None):\n",
" embs = self.one_hot(X)\n",
" rnn_outputs, _ = self.rnn(embs, state)\n",
" return self.output_layer(rnn_outputs)"
]
},
{
"cell_type": "markdown",
"id": "a6c3585b",
"metadata": {
"origin_pos": 30
},
"source": [
"Let's [**check whether the forward computation\n",
"produces outputs with the correct shape.**]\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "8d50665e",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.578519Z",
"iopub.status.busy": "2023-08-18T19:43:38.577958Z",
"iopub.status.idle": "2023-08-18T19:43:38.591265Z",
"shell.execute_reply": "2023-08-18T19:43:38.590379Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"model = RNNLMScratch(rnn, num_inputs)\n",
"outputs = model(torch.ones((batch_size, num_steps), dtype=torch.int64))\n",
"check_shape(outputs, (batch_size, num_steps, num_inputs))"
]
},
{
"cell_type": "markdown",
"id": "33ec657b",
"metadata": {
"origin_pos": 33
},
"source": [
"## [**Gradient Clipping**]\n",
"\n",
"\n",
"While you are already used to thinking of neural networks\n",
"as \"deep\" in the sense that many layers\n",
"separate the input and output \n",
"even within a single time step,\n",
"the length of the sequence introduces\n",
"a new notion of depth.\n",
"In addition to the passing through the network\n",
"in the input-to-output direction,\n",
"inputs at the first time step\n",
"must pass through a chain of $T$ layers\n",
"along the time steps in order \n",
"to influence the output of the model\n",
"at the final time step.\n",
"Taking the backwards view, in each iteration,\n",
"we backpropagate gradients through time,\n",
"resulting in a chain of matrix-products \n",
"of length $\\mathcal{O}(T)$.\n",
"As mentioned in :numref:`sec_numerical_stability`, \n",
"this can result in numerical instability, \n",
"causing the gradients either to explode or vanish,\n",
"depending on the properties of the weight matrices. \n",
"\n",
"Dealing with vanishing and exploding gradients \n",
"is a fundamental problem when designing RNNs\n",
"and has inspired some of the biggest advances\n",
"in modern neural network architectures.\n",
"In the next chapter, we will talk about\n",
"specialized architectures that were designed\n",
"in hopes of mitigating the vanishing gradient problem.\n",
"However, even modern RNNs often suffer\n",
"from exploding gradients.\n",
"One inelegant but ubiquitous solution\n",
"is to simply clip the gradients \n",
"forcing the resulting \"clipped\" gradients\n",
"to take smaller values. \n",
"\n",
"\n",
"Generally speaking, when optimizing some objective\n",
"by gradient descent, we iteratively update\n",
"the parameter of interest, say a vector $\\mathbf{x}$,\n",
"but pushing it in the direction of the \n",
"negative gradient $\\mathbf{g}$\n",
"(in stochastic gradient descent, \n",
"we calculate this gradient\n",
"on a randomly sampled minibatch).\n",
"For example, with learning rate $\\eta > 0$,\n",
"each update takes the form \n",
"$\\mathbf{x} \\gets \\mathbf{x} - \\eta \\mathbf{g}$.\n",
"Let's further assume that the objective function $f$\n",
"is sufficiently smooth. \n",
"Formally, we say that the objective \n",
"is *Lipschitz continuous* with constant $L$,\n",
"meaning that for any $\\mathbf{x}$ and $\\mathbf{y}$, we have\n",
"\n",
"$$|f(\\mathbf{x}) - f(\\mathbf{y})| \\leq L \\|\\mathbf{x} - \\mathbf{y}\\|.$$\n",
"\n",
"As you can see, when we update the parameter vector by subtracting $\\eta \\mathbf{g}$,\n",
"the change in the value of the objective\n",
"depends on the learning rate,\n",
"the norm of the gradient and $L$ as follows:\n",
"\n",
"$$|f(\\mathbf{x}) - f(\\mathbf{x} - \\eta\\mathbf{g})| \\leq L \\eta\\|\\mathbf{g}\\|.$$\n",
"\n",
"In other words, the objective cannot\n",
"change by more than $L \\eta \\|\\mathbf{g}\\|$. \n",
"Having a small value for this upper bound \n",
"might be viewed as good or bad.\n",
"On the downside, we are limiting the speed\n",
"at which we can reduce the value of the objective.\n",
"On the bright side, this limits by just how much\n",
"we can go wrong in any one gradient step.\n",
"\n",
"\n",
"When we say that gradients explode, \n",
"we mean that $\\|\\mathbf{g}\\|$ \n",
"becomes excessively large.\n",
"In this worst case, we might do so much\n",
"damage in a single gradient step that we\n",
"could undo all of the progress made over\n",
"the course of thousands of training iterations.\n",
"When gradients can be so large,\n",
"neural network training often diverges,\n",
"failing to reduce the value of the objective.\n",
"At other times, training eventually converges\n",
"but is unstable owing to massive spikes in the loss.\n",
"\n",
"\n",
"One way to limit the size of $L \\eta \\|\\mathbf{g}\\|$ \n",
"is to shrink the learning rate $\\eta$ to tiny values.\n",
"This has the advantage that we do not bias the updates.\n",
"But what if we only *rarely* get large gradients?\n",
"This drastic move slows down our progress at all steps,\n",
"just to deal with the rare exploding gradient events.\n",
"A popular alternative is to adopt a *gradient clipping* heuristic\n",
"projecting the gradients $\\mathbf{g}$ onto a ball \n",
"of some given radius $\\theta$ as follows:\n",
"\n",
"(**$$\\mathbf{g} \\leftarrow \\min\\left(1, \\frac{\\theta}{\\|\\mathbf{g}\\|}\\right) \\mathbf{g}.$$**)\n",
"\n",
"This ensures that the gradient norm never exceeds $\\theta$ \n",
"and that the updated gradient is entirely aligned \n",
"with the original direction of $\\mathbf{g}$.\n",
"It also has the desirable side-effect \n",
"of limiting the influence any given minibatch \n",
"(and within it any given sample) \n",
"can exert on the parameter vector. \n",
"This bestows a certain degree of robustness to the model. \n",
"To be clear, it is a hack. \n",
"Gradient clipping means that we are not always\n",
"following the true gradient and it is hard \n",
"to reason analytically about the possible side effects.\n",
"However, it is a very useful hack,\n",
"and is widely adopted in RNN implementations\n",
"in most deep learning frameworks.\n",
"\n",
"\n",
"Below we define a method to clip gradients,\n",
"which is invoked by the `fit_epoch` method of\n",
"the `d2l.Trainer` class (see :numref:`sec_linear_scratch`).\n",
"Note that when computing the gradient norm,\n",
"we are concatenating all model parameters,\n",
"treating them as a single giant parameter vector.\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "7e5bb036",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.596250Z",
"iopub.status.busy": "2023-08-18T19:43:38.595512Z",
"iopub.status.idle": "2023-08-18T19:43:38.600956Z",
"shell.execute_reply": "2023-08-18T19:43:38.600103Z"
},
"origin_pos": 35,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"@d2l.add_to_class(d2l.Trainer) #@save\n",
"def clip_gradients(self, grad_clip_val, model):\n",
" params = [p for p in model.parameters() if p.requires_grad]\n",
" norm = torch.sqrt(sum(torch.sum((p.grad ** 2)) for p in params))\n",
" if norm > grad_clip_val:\n",
" for param in params:\n",
" param.grad[:] *= grad_clip_val / norm"
]
},
{
"cell_type": "markdown",
"id": "f75257ea",
"metadata": {
"origin_pos": 38
},
"source": [
"## Training\n",
"\n",
"Using *The Time Machine* dataset (`data`),\n",
"we train a character-level language model (`model`)\n",
"based on the RNN (`rnn`) implemented from scratch.\n",
"Note that we first calculate the gradients,\n",
"then clip them, and finally \n",
"update the model parameters\n",
"using the clipped gradients.\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "8d4a3c21",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:43:38.605245Z",
"iopub.status.busy": "2023-08-18T19:43:38.604935Z",
"iopub.status.idle": "2023-08-18T19:47:21.800437Z",
"shell.execute_reply": "2023-08-18T19:47:21.799525Z"
},
"origin_pos": 39,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n",
"\n"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"data = d2l.TimeMachine(batch_size=1024, num_steps=32)\n",
"rnn = RNNScratch(num_inputs=len(data.vocab), num_hiddens=32)\n",
"model = RNNLMScratch(rnn, vocab_size=len(data.vocab), lr=1)\n",
"trainer = d2l.Trainer(max_epochs=100, gradient_clip_val=1, num_gpus=1)\n",
"trainer.fit(model, data)"
]
},
{
"cell_type": "markdown",
"id": "b242d710",
"metadata": {
"origin_pos": 40
},
"source": [
"## Decoding\n",
"\n",
"Once a language model has been learned,\n",
"we can use it not only to predict the next token\n",
"but to continue predicting each subsequent one,\n",
"treating the previously predicted token as though\n",
"it were the next in the input. \n",
"Sometimes we will just want to generate text\n",
"as though we were starting at the beginning \n",
"of a document. \n",
"However, it is often useful to condition\n",
"the language model on a user-supplied prefix.\n",
"For example, if we were developing an\n",
"autocomplete feature for a search engine\n",
"or to assist users in writing emails,\n",
"we would want to feed in what they \n",
"had written so far (the prefix), \n",
"and then generate a likely continuation.\n",
"\n",
"\n",
"[**The following `predict` method\n",
"generates a continuation, one character at a time,\n",
"after ingesting a user-provided `prefix`**].\n",
"When looping through the characters in `prefix`,\n",
"we keep passing the hidden state\n",
"to the next time step \n",
"but do not generate any output.\n",
"This is called the *warm-up* period.\n",
"After ingesting the prefix, we are now\n",
"ready to begin emitting the subsequent characters,\n",
"each of which will be fed back into the model \n",
"as the input at the next time step.\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "c193c4b5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:47:21.830686Z",
"iopub.status.busy": "2023-08-18T19:47:21.830348Z",
"iopub.status.idle": "2023-08-18T19:47:21.837501Z",
"shell.execute_reply": "2023-08-18T19:47:21.836643Z"
},
"origin_pos": 41,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"@d2l.add_to_class(RNNLMScratch) #@save\n",
"def predict(self, prefix, num_preds, vocab, device=None):\n",
" state, outputs = None, [vocab[prefix[0]]]\n",
" for i in range(len(prefix) + num_preds - 1):\n",
" X = torch.tensor([[outputs[-1]]], device=device)\n",
" embs = self.one_hot(X)\n",
" rnn_outputs, state = self.rnn(embs, state)\n",
" if i < len(prefix) - 1: # Warm-up period\n",
" outputs.append(vocab[prefix[i + 1]])\n",
" else: # Predict num_preds steps\n",
" Y = self.output_layer(rnn_outputs)\n",
" outputs.append(int(Y.argmax(axis=2).reshape(1)))\n",
" return ''.join([vocab.idx_to_token[i] for i in outputs])"
]
},
{
"cell_type": "markdown",
"id": "0e20a435",
"metadata": {
"origin_pos": 43
},
"source": [
"In the following, we specify the prefix \n",
"and have it generate 20 additional characters.\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8c312327",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T19:47:21.841324Z",
"iopub.status.busy": "2023-08-18T19:47:21.840677Z",
"iopub.status.idle": "2023-08-18T19:47:21.867198Z",
"shell.execute_reply": "2023-08-18T19:47:21.866360Z"
},
"origin_pos": 44,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"'it has in the the the the '"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.predict('it has', 20, data.vocab, d2l.try_gpu())"
]
},
{
"cell_type": "markdown",
"id": "9c09cb6b",
"metadata": {
"origin_pos": 47
},
"source": [
"While implementing the above RNN model from scratch is instructive, it is not convenient.\n",
"In the next section, we will see how to leverage deep learning frameworks to whip up RNNs\n",
"using standard architectures, and to reap performance gains \n",
"by relying on highly optimized library functions.\n",
"\n",
"\n",
"## Summary\n",
"\n",
"We can train RNN-based language models to generate text following the user-provided text prefix. \n",
"A simple RNN language model consists of input encoding, RNN modeling, and output generation.\n",
"During training, gradient clipping can mitigate the problem of exploding gradients but does not address the problem of vanishing gradients. In the experiment, we implemented a simple RNN language model and trained it with gradient clipping on sequences of text, tokenized at the character level. By conditioning on a prefix, we can use a language model to generate likely continuations, which proves useful in many applications, e.g., autocomplete features.\n",
"\n",
"\n",
"## Exercises\n",
"\n",
"1. Does the implemented language model predict the next token based on all the past tokens up to the very first token in *The Time Machine*? \n",
"1. Which hyperparameter controls the length of history used for prediction?\n",
"1. Show that one-hot encoding is equivalent to picking a different embedding for each object.\n",
"1. Adjust the hyperparameters (e.g., number of epochs, number of hidden units, number of time steps in a minibatch, and learning rate) to improve the perplexity. How low can you go while sticking with this simple architecture?\n",
"1. Replace one-hot encoding with learnable embeddings. Does this lead to better performance?\n",
"1. Conduct an experiment to determine how well this language model \n",
" trained on *The Time Machine* works on other books by H. G. Wells,\n",
" e.g., *The War of the Worlds*.\n",
"1. Conduct another experiment to evaluate the perplexity of this model\n",
" on books written by other authors. \n",
"1. Modify the prediction method so as to use sampling \n",
" rather than picking the most likely next character.\n",
" * What happens?\n",
" * Bias the model towards more likely outputs, e.g., \n",
" by sampling from $q(x_t \\mid x_{t-1}, \\ldots, x_1) \\propto P(x_t \\mid x_{t-1}, \\ldots, x_1)^\\alpha$ for $\\alpha > 1$.\n",
"1. Run the code in this section without clipping the gradient. What happens?\n",
"1. Replace the activation function used in this section with ReLU \n",
" and repeat the experiments in this section. Do we still need gradient clipping? Why?\n"
]
},
{
"cell_type": "markdown",
"id": "6043ef6e",
"metadata": {
"origin_pos": 49,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/486)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}