---
execute:
  echo: false
format:
  html:
    code-fold: true
    default-image-extension: png
  pdf:
    default-image-extension: pdf
jupyter: python3
---
# Gradient descent {#sec-gradient-descent}
## Introduction
Building on the groundwork from the previous section on automatic differentiation, we'll dive into how it enables a particular learning procedure called *gradient descent*. We'll explore what that means, then apply it to a few different scenarios to demonstrate its utility, and highlight some more intelligent variants that have arisen as a by-product of the recent deep learning revolution. Importantly, this also sets the stage for the main mechanism by which neural networks are able to train.
Say we have a ball on a hill, and we want to roll it down. How would you go about this task? Well, I imagine you'd probably just give it some initial kick, and then let gravity do the rest. Alright then -- let's take away gravity. What then? Look for the bottom of the hill and try to push it there? But what if you can't see the hill itself? This is the situation we find ourselves in when trying to optimize a workflow. We're able to see what point we're at during optimization, but we don't know where the bottom of the hill is (the minimum of our objective), or what the surrounding landscape looks like (it could be very expensive to scan over the space).
Ah, but here's a trick: if we can see where we are at one point on the hill, we can determine the direction of the slope we're standing on. If the slope points up, we'll want to push the ball in the opposite direction to go down. Then, we're guaranteed to at least get *nearer* the bottom of the hill. But by how much do we push the ball?
If the ball is close to the bottom of the hill, you could imagine that the ground starts getting flatter, which makes the magnitude of the slope smaller. Likewise, if the slope is steep, we know we're probably not at the bottom yet, so we should move a reasonable amount. We can then just move *proportionally to the magnitude of the slope.*
To write this out in equation form: given the horizontal position on the hill $x_i$, and the vertical position $y_i$, we propose to move to a new point $x_{i+1}$ in proportion to the slope at the point that we're on, i.e.
$$
x_{i+1} = x_i - \alpha \frac{\Delta y_i}{\Delta x_i}~,
$$
where $\alpha$ is the constant of proportionality that we're free to choose (also called the **learning rate**, since it modifies the step size we take), and $\Delta$ is a small change that we assume we can calculate. Take note of the minus sign -- this is us trying to move in the *opposite direction to the slope* at $x_i$.
Instead of assuming we can numerically calculate the small change $\Delta$, we actually already have a way to calculate a small change at a point from calculus: the derivative! So if we take $y$ as a function of $x$, then we can replace the deltas with the gradient of $y(x)$ evaluated at our current point $x_i$, leaving us with
$$
x_{i+1} = x_i - \alpha \frac{\partial y(x_i)}{\partial x_i}~.
$$ {#eq-grad-descent}
@eq-grad-descent is the equation for gradient descent, and is probably the most important equation in the whole thesis, since it enables all of the downstream applications I'll talk about later. It's also the mechanism that enables most artificial intelligence systems to learn.
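To make this concrete, here's a minimal sketch of @eq-grad-descent applied to the toy function $y(x) = x^2$, whose slope $2x$ we can write down by hand (a throwaway illustration, not part of the fits shown later):

```python
# toy illustration of gradient descent on y(x) = x^2, where dy/dx = 2x
alpha = 0.1  # learning rate
x = 5.0  # starting position on the "hill"

for _ in range(50):
    slope = 2 * x  # the derivative evaluated at the current point
    x = x - alpha * slope  # step against the slope, proportionally to its magnitude

print(x)  # ends up very close to 0, the minimum of x^2
```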
To reframe this equation one more time through the lens of optimization: given an **objective function** or **loss function** $L$ with parameters $\varphi$ that defines a goal relative to data $d$ (so $L=L(\varphi, d)$), we can use *gradient descent* to update the parameters $\varphi$ such that we minimize the objective evaluated at $d$:
$$
\varphi_{i+1} = \varphi_i - \alpha \frac{\partial L(\varphi_i, d)}{\partial \varphi_i}~.
$$ {#eq-grad-descent-params}
See a pictorial demonstration of this rule in @fig-ball-rolling.
![Gradient descent is more easily digestible when considering it as rolling a ball down a hill, but you need to play the role of gravity.](images/gdesc3){#fig-ball-rolling}
We're now specifying the data $d$ because we want to give meaning to the vertical position on the hill: it's some quantity assessing the quality of our workflow with respect to data from the real world $d$, given that we're using parameters $\varphi_i$. In practice, this may be a small subset of the dataset drawn at random, since memory restrictions on the device we run on prevent us from computing this update over the whole dataset at once.
A key point to highlight: this mechanism only works if $L$ is differentiable with respect to $\varphi$, otherwise we won't be able to calculate its gradient. This restriction may look tame for common objectives, which tend to involve simple algebraic expressions involving the output of a neural network, which is already a differentiable function. However, if we want to add domain knowledge to our loss function, we may end up discovering that not everything we want to calculate has a well-defined gradient. There are various ways to get around this, including the use of surrogate operations that are **relaxed** (jargon for differentiable) versions of their non-differentiable counterparts. We'll look at this in more detail when we study applications of this nature.
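To give a flavour of what a relaxed surrogate can look like, here's a small hypothetical sketch (the functions `hard_cut` and `relaxed_cut` are made up for illustration, not operations used later): a hard selection cut has zero gradient almost everywhere, while a sigmoid-smoothed version of the same cut provides a useful gradient signal.

```python
import jax
import jax.numpy as jnp


def hard_cut(x, threshold=0.0):
    # non-differentiable in any useful sense: the gradient is zero almost everywhere
    return jnp.where(x > threshold, 1.0, 0.0)


def relaxed_cut(x, threshold=0.0, temperature=0.1):
    # sigmoid surrogate: smoothly interpolates between 0 and 1 around the threshold
    return jax.nn.sigmoid((x - threshold) / temperature)


print(jax.grad(hard_cut)(0.3))     # 0.0 -- nothing for gradient descent to use
print(jax.grad(relaxed_cut)(0.3))  # non-zero -- gradients can flow through the cut
```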
Let's look at an example of gradient descent in action.
### Example: maximum likelihood estimation
Say we have some example data drawn from a bi-modal probability distribution that's somewhat normal-looking, like in @fig-bidata. We may want to try and model this with a normal distribution, but how do we choose the parameters? We can fit them to the data using maximum likelihood estimation, discussed further in @sec-mle. The basic idea is that, given a probability distribution $p$ with parameters $\mu$, we want to calculate
$$ \hat{\mu} = \underset{\mu}{\mathrm{argmin}} (-2\ln p(x|\mu))~,$$
given some observed data $x$. In other words, we want to find the value of $\mu$ such that we minimize the negative log-likelihood. We can do this via gradient descent!
Using the update rule in @eq-grad-descent-params with $L$ as the negative log-likelihood and $(\mu, \sigma)$ playing the role of $\varphi$, we can run gradient descent for some number of steps until the result converges within some tolerance. We'll have to pick an initial value for each parameter -- here we use 1 for each. In the implementation, we're using the automatic differentiation framework JAX [@jax] to calculate the gradient of the objective (online viewers can expand the code block above the figure to see the details). This gives us the result in @fig-onenorm, which isn't particularly great in my opinion.
```{python}
#| label: fig-bidata
#| fig-cap: "Example data produced using two normal distributions in unequal amounts. One is centered on 3, and the other on 0, both with unit standard deviation."
import numpy as np
import jax
import jax.scipy as jsc
import jax.numpy as jnp
import copy

num_points = 10000
data = np.concatenate(
    (np.random.normal(size=num_points), np.random.normal(size=num_points // 2) + 3)
)


# want to minimize negative log likelihood
def loss(pars, data):
    mu, sigma = pars
    return jnp.mean(-jsc.stats.norm.logpdf(data, mu, sigma))


# gradient descent
def update_rule(pars, data, lr, loss):
    return pars - lr * jax.grad(loss)(pars, data)


def learning(loss, init_pars, num_steps=1000, tol=1e-6, lr=5e-2):
    pars = copy.deepcopy(init_pars)
    for i in range(num_steps):
        new_pars = update_rule(pars, data, lr, loss)
        if not np.allclose(new_pars, pars, atol=tol):
            pars = new_pars
        else:
            return new_pars
    print("max steps reached")
    return pars


pars = learning(loss, init_pars=jnp.array([1.0, 1.0]))

import matplotlib.pyplot as plt

subplot_settings = dict(figsize=[7, 3], dpi=200, tight_layout=True)
fig, ax = plt.subplots(**subplot_settings)
ax.hist(data, density=True)
ax.set_xlabel("x")
ax.set_ylabel("density");
```
```{python}
#| label: fig-onenorm
#| fig-cap: "A single normal distribution fitted to the data in @fig-bidata using gradient descent."
fig, ax = plt.subplots(**subplot_settings)
ax.hist(data, density=True)
x = np.linspace(jsc.stats.norm.ppf(0.01, *pars), jsc.stats.norm.ppf(0.99, *pars), 100)
ax.plot(
    x,
    jsc.stats.norm.pdf(x, *pars),
    label=rf"Normal($\mu$={pars[0]:.3g}, $\sigma$={pars[1]:.3g})",
)
ax.set_xlabel("x")
ax.set_ylabel("density")
ax.set_title(f"negative log-likelihood = {loss(pars, data):.3g}")
ax.legend();
```
Pretending we didn't know that the data came from a mixture of normal distributions, we can make a more sophisticated model, e.g.
$$
p(x |\mu_1, \sigma_1, \mu_2, \sigma_2) = \frac{1}{2}\mathrm{Normal}(\mu_1, \sigma_1) + \frac{1}{2}\mathrm{Normal}(\mu_2, \sigma_2)~.
$$
We can then simultaneously optimize $\mu_1, \sigma_1, \mu_2, \sigma_2$ in exactly the same way, with no modification to the procedure other than using the new likelihood as the loss function. Doing this yields the distribution in @fig-twonorm. We can see that the shape of the distribution is represented better here, which is also indicated by a lower negative log-likelihood than in the case of the single normal distribution. Since we forced the mixture proportions to be half and half, the second component compensates by widening its normal distribution so that its peak height matches the smaller second mode in the data.
Interestingly, if we use 1 as the initial value for every parameter, we recover the solution from @fig-onenorm, so we have to make sure there's a little variation between the starting values so that the gradients push the different $\mu$ and $\sigma$ values from different positions, allowing the second mode to be discovered. This demonstrates that gradient descent moves towards some *local* minimum of the objective, and won't always converge to something optimal or intuitive.
```{python}
#| label: fig-twonorm
#| fig-cap: "A mixture of two normal distributions fitted to the data in @fig-bidata using gradient descent."
def mixture_pdf(data, pars):
    mu1, sigma1, mu2, sigma2 = pars
    return 0.5 * jsc.stats.norm.pdf(data, mu1, sigma1) + 0.5 * jsc.stats.norm.pdf(
        data, mu2, sigma2
    )


def new_loss(pars, data):
    return jnp.mean(-jnp.log(mixture_pdf(data, pars)))


new_pars = learning(
    new_loss, init_pars=jnp.array([1.0, 1.0, 2.0, 2.0]), lr=1e-1, tol=1e-4
)

fig, ax = plt.subplots(**subplot_settings)
ax.hist(data, density=True)
x = np.linspace(-3, 6, 100)
mu1, sigma1, mu2, sigma2 = new_pars
ax.plot(
    x,
    mixture_pdf(x, new_pars),
    label=rf"0.5*Normal($\mu$={mu1:.2g}, $\sigma$={sigma1:.2g})"
    + rf" + 0.5*Normal($\mu$={mu2:.2g}, $\sigma$={sigma2:.2g})",
)
ax.set_xlabel("x")
ax.set_ylabel("density")
ax.set_title(f"negative log-likelihood = {new_loss(new_pars, data):.3g}")
ax.legend(fontsize="small");
```
## Mini-batching and stochastic gradients
In reality, we're generally not able to update our parameters using gradients computed over all our data at once in one batch, since that data may not fit on the device we're using to run the program. Ideally we would, since calculating the expectation of the gradient (in practice, the empirical mean) across all the available training data comes with some convergence guarantees for arriving at a local minimum. Instead, we often split our data up into **minibatches**: fixed-size partitions of the data, typically randomly selected^[There's a funny issue to do with the notion of randomly selecting here; if we select *with replacement*, then we're sampling i.i.d. uniform random variables, which makes the theory of analyzing the performance much neater. But in practice, we never do this; we just shuffle the data and stream it, making it non-i.i.d., and therefore more difficult to analyze.]. Once we've seen enough batches to cover all the available training data, we say we've completed an **epoch**. This gives rise to the notion of the **batch size** -- the size of each partition -- which inherently decides how many steps an epoch takes (smaller batch size = more steps, and vice-versa).
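As a rough sketch of how this looks in code -- reusing the `data`, `loss`, `update_rule`, and `pars` objects from the example above, with an arbitrary batch size -- one epoch of minibatch gradient descent might be written as:

```python
import numpy as np


def minibatches(data, batch_size, rng):
    # shuffle once per epoch, then stream fixed-size slices of the data
    # (the final slice may be slightly smaller than batch_size)
    indices = rng.permutation(len(data))
    for start in range(0, len(data), batch_size):
        yield data[indices[start : start + batch_size]]


# one epoch: every training point is seen once, in batches of 256
rng = np.random.default_rng(0)
for batch in minibatches(data, batch_size=256, rng=rng):
    pars = update_rule(pars, batch, 5e-2, loss)
```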
An extreme version of splitting up into batches is **stochastic gradient descent**, which just sets the batch size to 1. Gradients computed in this way will have very high variance, as the name suggests, since each parameter update is based on the performance on a single training example. Surprisingly though, this high variance can often help with exploring the loss landscape, and may allow jumps to new local minima that batch gradient descent would miss.
## Speeding up convergence: different optimizer types
To improve upon the rate of convergence of minibatch gradient descent, several variants have been proposed that incorporate additional information into the gradient update. These methods are generally referred to as **optimizers** -- we'll cover just a couple of the more common ones here.
### Momentum
What happens when we're in a ravine during optimization, i.e. a local minimum with two steep slopes on either side in one dimension? Without modification, gradient descent will typically bounce between the two walls of the ravine: we're moving in the downward direction, and with a larger step size since the slope is steep. How might we help this converge more easily? Well, we could of course modify the learning rate in an adaptive fashion to try to dampen this oscillation about a minimum in a ravine. Another approach is to incorporate some amount ($\gamma$) of the *previous update step*, which we can write as a modified update rule through a placeholder variable $v$, defined recursively:
$$
v_{i} = \gamma v_{i-1} + \alpha \frac{\partial L(\varphi_i, d)}{\partial \varphi_i};~~~ \varphi_{i+1} = \varphi_i - v_i~.
$$
To give some intuition as to what's happening here: there's still a standard gradient update in there, but we're also "speeding up" the update by an amount $\gamma v_{i-1}$, meaning that if we made a large update previously, we'll go even further this time. The analogy really is still a ball rolling down a hill, but we're compounding speed just as gravity would in real life. Moreover, if the previous update had the opposite sign for the gradient (travelling in the opposite direction), we'll actually be *slowed* by $\gamma v_{i-1}$, which helps damp the kind of oscillation between two steep slopes described above. This is termed gradient descent with **momentum**; I don't know if this is where it was originally proposed, but many seem to cite [@momentum], despite it opening with "A momentum term is usually used in...".
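A minimal sketch of a single momentum update, written in the same style as the `update_rule` function from the earlier example (the function name and hyperparameter values here are illustrative, not tuned):

```python
import jax
import jax.numpy as jnp


def momentum_step(pars, velocity, data, loss, lr=1e-2, gamma=0.9):
    # v_i = gamma * v_{i-1} + alpha * dL/dphi ;  phi_{i+1} = phi_i - v_i
    grads = jax.grad(loss)(pars, data)
    velocity = gamma * velocity + lr * grads
    return pars - velocity, velocity


# the velocity is carried between steps, starting from jnp.zeros_like(pars):
# pars, velocity = momentum_step(pars, velocity, batch, loss)
```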
### Adagrad
We return to the idea of *adaptive learning rates*. In the multi-dimensional parameter setting (possibly billions of dimensions if we're looking at neural networks), some parameters will be more important than others, i.e. the loss landscape could be flat in some dimensions and convex in others. Moreover, there could be parameters that are locally flat in some region, but have more interesting structure far from that region, or at least structure that is more widely spread out. This could happen if a parameter is responsible for information relating to a sparse feature, for instance. If the learning rate makes the step size small compared to this distance, then we'll never explore that structure well. This gives rise to the notion of adapting the learning rate on a per-parameter basis to aid more efficient exploration of the loss landscape in search of a local minimum.
Adagrad (shorthand for "adaptive gradient") is a method that endeavors to tackle this issue. It scales the learning rate for each parameter so as to increase the step size for parameters that have moved less, and decrease it for those that have moved more. Specifically, it scales the learning rate by the inverse square root of the running sum of squared gradients. Denoting the gradient of one example parameter $\varphi^{(1)}$ at step $i$ as $g^{(1)}_{i}$, we can write the update step for that parameter as
$$
\varphi^{(1)}_{i+1} = \varphi^{(1)}_{i} + \Delta\varphi_i^{(1)};~~~ \Delta\varphi_{i}^{(1)} = - \frac{\alpha}{\sqrt{\sum^{i}_{j=0}\left(g^{(1)}_{j}\right)^2 + \epsilon}} g^{(1)}_{i} ~,
$$
where $\epsilon$ is a very small number just to prevent dividing by zero. We've defined the convenient notation of $\Delta\varphi$, which becomes the subject of modification for the methods hereafter. For $n$ parameters, we denote $g_{i}$ as the vector of all gradients $[g^{(1)}_{i}, \dots, g^{(n)}_{i}]$. Then, if we treat all the operations as *elementwise*, i.e. as one may expect with `numpy` arrays, we can write this again more compactly for all parameters $\varphi$ as
$$
\varphi_{i+1} = \varphi_{i} + \Delta \varphi_i ;~~~ \Delta\varphi_i = - \frac{\alpha}{\sqrt{\sum^{i}_{j=0}g^2_{j} + \epsilon}} g_{i} ~.
$$
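In code, this amounts to carrying a running sum of squared gradients alongside the parameters; a minimal sketch (again illustrative rather than a production implementation):

```python
import jax
import jax.numpy as jnp


def adagrad_step(pars, sum_sq_grads, data, loss, lr=1e-2, eps=1e-8):
    # accumulate the elementwise squared gradients seen so far, and use them to
    # scale the learning rate separately for each parameter
    grads = jax.grad(loss)(pars, data)
    sum_sq_grads = sum_sq_grads + grads**2
    pars = pars - lr * grads / jnp.sqrt(sum_sq_grads + eps)
    return pars, sum_sq_grads


# sum_sq_grads starts at jnp.zeros_like(pars) and only ever grows
```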
One of the main reasons for Adagrad's appeal is that you don't really need to tune the scalar learning rate $\alpha$ from its initial value, since this is done on-the-fly for each parameter. However, an issue arises: the sum $\sum^{i}_{j=0}g^2_{j}$ is a sum of positive numbers, and will grow without bound, which eventually reduces the learning rate to a crawl and stops learning altogether. Avoiding this is the motivation for our next optimizers: Adadelta and RMSProp.
### Adadelta/RMSProp
As mentioned, tackling the growing sum of squared gradients is the purpose behind both Adadelta [@adadelta] and RMSProp [@rmsprop]; we'll start with the former.
Instead of maintaining a sum of the squared gradients $g_j^2$ up to step $i$, we recursively define a *decaying average* $E\left[g^2\right]_i$:
$$
E\left[g^2\right]_i=\gamma E\left[g^2\right]_{i-1}+(1-\gamma) g_i^2~,
$$
where we weight the previous average and the current squared gradient by factors of $\gamma$ and $1-\gamma$ respectively. This is a similar idea to momentum: we're "remembering" the previous steps in some manner that we can tune numerically through $\gamma$. This leads to a $\Delta \varphi_i$ of
$$
\Delta \varphi_i = - \frac{\alpha}{\sqrt{E\left[g^2\right]_i+ \epsilon}} g_{i} ~.
$${#eq-adadelta1}
Coincidentally, this is actually the equation for the *RMSProp* update, developed independently from Adadelta around the same time! For Adadelta, though, we're not quite done yet; the Adadelta authors noted that this update of $\Delta \varphi_i$ -- just like the update in *any* of the methods above -- does not have the same units as the parameters (it's instead dimensionless), which is not an issue in principle, but better behaviour might be had by updating with a quantity of the same scale as the parameters themselves. This led to defining a second decaying average using the definition of @eq-adadelta1 for $\Delta \varphi_i$, which we'll also define recursively:
$$
E\left[\Delta \varphi^2\right]_i=\gamma E\left[\Delta \varphi^2\right]_{i-1}+(1-\gamma) \Delta \varphi_i^2 ~.
$$
This expression is then also put under a square root with the same small $\epsilon$, and we use this in place of the static learning rate $\alpha$, i.e. we replace it with $\sqrt{E\left[\Delta \varphi^2\right]_i + \epsilon}$. At least, we'd like to -- this would lead to a nested recursion relation between $E\left[\Delta \varphi^2\right]_i$ and $\Delta \varphi_i$, so we instead need to approximate it with a value we have access to. The paper does this with the value from the previous iteration: $E\left[\Delta \varphi^2\right]_{i-1}$. This at last gives us the full relation for one Adadelta update:
$$
\Delta \varphi_i = - \frac{\sqrt{E\left[\Delta \varphi^2\right]_{i-1} + \epsilon}}{\sqrt{E\left[g^2\right]_i+ \epsilon}} g_{i} ~.
$${#eq-adadelta}
A curious thing to note in @eq-adadelta is the lack of $\alpha$ -- we've removed the scalar learning rate altogether, and instead only need to specify the decay factor $\gamma$ before undergoing optimization.
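Putting the two decaying averages together, a single Adadelta update (@eq-adadelta) could be sketched as follows, with both pieces of state carried explicitly (names and default values are illustrative):

```python
import jax
import jax.numpy as jnp


def adadelta_step(pars, avg_sq_grad, avg_sq_update, data, loss, gamma=0.95, eps=1e-6):
    grads = jax.grad(loss)(pars, data)
    # decaying average of squared gradients (this denominator alone gives RMSProp)
    avg_sq_grad = gamma * avg_sq_grad + (1 - gamma) * grads**2
    # update scaled by the *previous* decaying average of squared updates
    update = -jnp.sqrt(avg_sq_update + eps) / jnp.sqrt(avg_sq_grad + eps) * grads
    # track the decaying average of squared updates for the next step
    avg_sq_update = gamma * avg_sq_update + (1 - gamma) * update**2
    return pars + update, avg_sq_grad, avg_sq_update


# both running averages start at jnp.zeros_like(pars); note there's no learning rate
```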
### Adam
We've reached **Adam** [@adam]: by far the most common default choice of optimizer for deep learning, and the one frequently used within this thesis. Adam draws upon ideas from all of the above methods; it keeps track of both a decaying average of gradients *and* squared gradients. We can write their forms explicitly:
$$
\begin{aligned}
m_i &=\beta_1 m_{i-1}+\left(1-\beta_1\right) g_i \\
v_i &=\beta_2 v_{i-1}+\left(1-\beta_2\right) g_i^2~,
\end{aligned}
$$
where we've introduced separate decay factors $\beta_1$ and $\beta_2$ for each quantity. If you remember earlier, we were using $E[g]_i$ notation for these types of quantities, and that's not a coincidence -- they're moving averages, and so are Monte Carlo estimations of expectation values (albeit off by decay terms etc). Specifically, $m_i$ and $v_i$ are estimators of the first moment (mean) and second moment (variance) of the gradients $g_i$, hence their variable names, and the name of Adam (adaptive *moment* estimation).
One thing the Adam authors note is that, since they initialize $m_i$ and $v_i$ as vectors of zeros, the resulting estimates of the gradient moments are biased towards zero. The paper includes a brief derivation of the actual value of these moments, and shows that in either case of $m_i$ or $v_i$, you can correct for this bias by dividing through by $(1-\beta^i)$, where we explicitly raise $\beta$ to the power of the iteration number $i$ (the paper uses $t$ instead). This results in the bias-corrected version of the previous formula:
$$
\begin{aligned}
\hat{m}_i &=\frac{m_i}{1-\beta_1^i} \\
\hat{v}_i &=\frac{v_i}{1-\beta_2^i}~.
\end{aligned}
$$
We then use these bias-corrected moment estimates to construct an expression for the Adam update, which looks a lot like that for RMSProp in @eq-adadelta1:
$$
\Delta \varphi_i = - \frac{\alpha}{\sqrt{\hat{v}_i}+ \epsilon} \hat{m}_{i} ~.
$$
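A sketch of one Adam step, combining the moment estimates and bias corrections above (in practice one would reach for an existing implementation, e.g. from the `optax` library in the JAX ecosystem, but the mechanics are just these few lines):

```python
import jax
import jax.numpy as jnp


def adam_step(pars, m, v, i, data, loss, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    grads = jax.grad(loss)(pars, data)
    # decaying averages of the gradient (first moment) and its square (second moment)
    m = beta1 * m + (1 - beta1) * grads
    v = beta2 * v + (1 - beta2) * grads**2
    # bias-correct both estimates (i counts steps starting from 1)
    m_hat = m / (1 - beta1**i)
    v_hat = v / (1 - beta2**i)
    pars = pars - lr * m_hat / (jnp.sqrt(v_hat) + eps)
    return pars, m, v


# m and v start at jnp.zeros_like(pars); beta1=0.9, beta2=0.999, eps=1e-8 are the
# defaults suggested in the Adam paper
```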
As you can see, gradient descent can get pretty complicated! The reason I went through this explicitly was to show that gradient descent has been extremely well-studied during the rise of deep learning, and we often need to go beyond simple update rules in order to effectively learn the properties we want from our parameters in practice. In fact, the main work you'll see in the first application later on initially wouldn't learn at all until we switched to using Adam!