Stochastic Calculus

Sum of Centered Gaussians

Consider the sum of two independent random variables $X$ and $Y$, where each variable is normally distributed with zero mean. Denote the variance of $X$ as $\sigma^{2}_{X}$ and the variance of $Y$ as $\sigma^{2}_{Y}$:

$$X \sim \mathcal{N}(0, \sigma^{2}_{X}) \quad ; \quad Y \sim \mathcal{N}(0, \sigma^{2}_{Y})$$

Recall that the sum of $X$ and $Y$ is normally distributed with variance $\sigma^{2}_{X} + \sigma^{2}_{Y}$:

\begin{equation} \label{eqn-1}\tag{1} Z = X + Y \implies Z \sim \mathcal{N}(0, \sigma^{2}_{X} + \sigma^{2}_{Y}) \end{equation}
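As a quick sanity check of Eq. (1), we can sample the two Gaussians with NumPy and confirm that the sample variance of the sum approaches $\sigma^{2}_{X} + \sigma^{2}_{Y}$. The variances below are illustrative values, not anything prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_x, sigma_y = 1.5, 2.0  # illustrative standard deviations

# Draw many independent samples of X and Y, then sum them.
x = rng.normal(0.0, sigma_x, size=1_000_000)
y = rng.normal(0.0, sigma_y, size=1_000_000)
z = x + y

# The sample variance of Z should approach sigma_x^2 + sigma_y^2.
empirical_var = z.var()
```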

A Continuous Random Walk

Imagine a continuous-time random walk $W(t)$ for which observed differences between $W(t_{i})$ and $W(t_j)$ are always normally distributed with zero mean:

\begin{equation} \label{eqn-2}\tag{2} W(t_j) - W(t_i) \sim \mathcal{N}(0, \sigma^{2}_{ij}) \end{equation}

Assume also that the random walk is uncorrelated with itself when comparing non-overlapping time intervals.

Consistency between Eq. (2) and Eq. (1), i.e.,

$$\forall ~ t_{i} < t_{j} < t_{k}, \quad \sigma^{2}_{ij} + \sigma^{2}_{jk} = \sigma^{2}_{ik}$$

dictates that

$$\sigma^{2}_{ij} \propto |t_j - t_i|$$

Let us choose to define $W$ such that the constant of proportionality is 1, identifying this random walk as a Wiener process:

$$W(t_{j}) - W(t_{i}) \sim \mathcal{N}\big(0, |t_{j} - t_{i}|\big)$$

For positive, infinitesimal differences in time, we may write

\begin{equation} \label{eqn-3}\tag{3} {\rm d}W \sim \mathcal{N}(0, {\rm d} t) \end{equation}
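Eq. (3) gives a direct recipe for simulating a Wiener process: accumulate increments drawn from $\mathcal{N}(0, {\rm d}t)$. A minimal sketch (step counts and time step are illustrative), which also confirms the variance scaling of Eq. (2) across many paths:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20_000, 100, 0.01  # each path spans t in [0, 1]

# Each increment dW ~ N(0, dt), i.e. standard deviation sqrt(dt).
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

# W(1) for each path, taking W(0) = 0.
W_final = dW.sum(axis=1)

# By Eq. (2), Var[W(1) - W(0)] should be close to |1 - 0| = 1.
endpoint_var = W_final.var()
```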

Quadratic Variation

Consider infinitesimal differences in time and recall that, for any random variable $Z$,

$${\rm Var}[Z] \equiv \mathbb{E}[Z^{2}] - \mathbb{E}[Z]^{2}$$

For a Wiener process, whose increments have zero mean, this implies

$${\rm Var}[{\rm d}W] = \mathbb{E}[({\rm d}W)^{2}]$$

We may therefore rewrite Eq. (3) in terms of a quadratic variation:

$$\mathbb{E}[({\rm d}W)^{2}] = {\rm d}t$$

This observation has an important consequence: first-order approximations of a process $X(t)$ depending on $W(t)$ must account for the quadratic variation in $W$, where, in expectation,

\begin{equation} \label{eqn-4}\tag{4} \begin{aligned} {\rm d}X(t) &\approx {\frac{{\rm d} X}{{\rm d} t}}{\rm d}t + {\frac{{\rm d} X}{{\rm d} W}}{\rm d}W(t) + \frac{1}{2} {\frac{{\rm d}^{2} X}{{\rm d} W^{2}}} ({\rm d}W(t))^{2} \\ &= \bigg( {\frac{{\rm d} X}{{\rm d} t}} + \frac{1}{2} {\frac{{\rm d}^{2} X}{{\rm d} W^{2}}} \bigg) {\rm d}t + {\frac{{\rm d} X}{{\rm d} W}}{\rm d}W(t) \end{aligned} \end{equation}
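The identity $\mathbb{E}[({\rm d}W)^{2}] = {\rm d}t$ can also be checked numerically: the realized quadratic variation $\sum (\Delta W)^{2}$ along a single path concentrates on the elapsed time as the step size shrinks. A small sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 2.0, 100_000  # total time and number of increments
dt = T / n_steps

# Increments of a single Wiener path.
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)

# Realized quadratic variation: sum of squared increments,
# which should concentrate on T as dt -> 0.
quad_var = np.sum(dW**2)
```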

Drift-Diffusion Processes

An Itô drift-diffusion process may be represented in terms of differentials in $t$ and $W$ as

$${\rm d}X(t) = \mu_{X}(t){\rm d}t + \sigma_{X}(t) {\rm d}W(t)$$

where $\mu_{X}(t)$ and $\sigma_{X}(t)$ are deterministic (e.g., as functions of the process up to time $t$) and $W(t)$ is a Wiener process as described above. Note that $\mu_{X}(t){\rm d}t$ and $\sigma^{2}_{X}(t){\rm d}t$ are the mean and variance, respectively, of the increment ${\rm d}X$ in this example, not of $X$ itself!
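A drift-diffusion process can be simulated by stepping the differential form directly (the Euler–Maruyama scheme). A sketch for constant coefficients, with illustrative parameter values; in this special case $X(T) \sim \mathcal{N}(x_0 + \mu T,\; \sigma^{2} T)$, which the sample statistics should reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.3, 0.5                     # illustrative constant drift and diffusion
x0, T, n_steps, n_paths = 1.0, 2.0, 200, 50_000
dt = T / n_steps

# Euler-Maruyama: advance each path by mu*dt plus sigma*dW per step.
X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X += mu * dt + sigma * dW

# For constant coefficients, X(T) ~ N(x0 + mu*T, sigma^2 * T).
mean_T, var_T = X.mean(), X.var()
```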

Itô’s Lemma

Twice-differentiable functions applied to drift-diffusion stochastic processes also define drift-diffusion stochastic processes. For example, given a twice-differentiable function $f(t, x)$ where the second argument is given by a stochastic process $X$,

$${\rm d}X(t) = \mu_{X}(t){\rm d}t + \sigma_{X}(t) {\rm d}W(t)$$

it follows that

$${\rm d}f(X) = \mu_{f}(t){\rm d}t + \sigma_{f}(t) {\rm d}W(t)$$

where, Taylor expanding $f$ according to Eq. (4) and applying the chain rule twice,

$$\frac{{\rm d}^{2} f}{{\rm d} W^{2}} = \frac{{\rm d}^{2} f}{{\rm d} X^{2}} \bigg(\frac{{\rm d} X}{{\rm d} W}\bigg)^{2} + \frac{{\rm d} f}{{\rm d} X} \frac{{\rm d}^{2} X}{{\rm d} W^{2}}$$

yields Itô's Lemma:

$$\mu_{f}(t) = {\frac{\partial f}{\partial t}} + {\frac{\partial f}{\partial x}} \mu_{X}(t) + {\frac{1}{2} \frac{\partial^{2} f}{\partial x^{2}}} \sigma^2_{X}(t) \quad ; \quad \sigma_{f}(t) = {\frac{\partial f}{\partial x}} \sigma_{X}(t)$$

Importantly, we note that this transformation differs from the classical chain rule:

$${\rm d}f = \frac{\partial f}{\partial t} {\rm d}t + \frac{\partial f}{\partial x} {\rm d}X + \underbrace{ \frac{1}{2} \frac{\partial^{2} f}{\partial x^{2}} ({\rm d}X)^{2} }_{\text{non-classical term}}$$

Geometric Brownian Motion

Consider a stochastic process $X$ for which the proportional growth $\frac{{\rm d}X}{X}$ is an affine transformation of a Wiener process, i.e.,

\begin{equation} \label{eqn-5}\tag{5} {\rm d}X(t) = X(t) \bigg( \mu{\rm d}t + \sigma {\rm d}W(t) \bigg) \end{equation}

for constant $\mu$ and $\sigma$.

We provide two means of solving for $X(t)$:

By Itô’s Lemma

First, we consider mapping geometric Brownian motion by the function $f = \log x$.

Given

$${\rm d}X = (\mu X) {\rm d}t + (\sigma X) {\rm d}W$$

we may use Itô's Lemma to solve for ${\rm d}f$:

$${\rm d}f = \bigg( \frac{\partial f}{\partial x} (\mu X) + \frac{1}{2}\frac{\partial^{2} f}{\partial x^{2}} (\sigma X)^{2} \bigg) {\rm d}t + \frac{\partial f}{ \partial x} (\sigma X) {\rm d}W$$

where

$$\frac{\partial f}{\partial x}\bigg\rvert_{x{=}X} = \frac{1}{X} \quad ; \quad \frac{\partial^{2} f}{\partial x^{2}}\bigg\rvert_{x{=}X} = -\frac{1}{X^{2}}$$

Therefore,

$${\rm d}f = \bigg( \mu - \frac{1}{2} \sigma^{2} \bigg) {\rm d}t + \sigma {\rm d}W$$

This differential equation has solution

$$f(t) = \bigg(\mu - \frac{1}{2}\sigma^{2}\bigg) t + \sigma W(t) + C$$

for constant $C$ determined by boundary conditions (i.e., $X(0)$). Substituting $X(t) = e^{f(t)}$, we conclude

$$X(t) = X(0) e^{\big(\mu - \frac{1}{2}\sigma^{2}\big) t + \sigma W(t)}$$
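We can check this closed form numerically by integrating Eq. (5) with the Euler–Maruyama scheme along a single Wiener path and comparing against the closed-form solution evaluated on the same path. Parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0 = 0.1, 0.4, 1.0  # illustrative constants
T, n_steps = 1.0, 200_000
dt = T / n_steps

# One Wiener path, shared by both methods.
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)

# Euler-Maruyama discretization of Eq. (5): X -> X * (1 + mu*dt + sigma*dW).
X_em = x0 * np.prod(1.0 + mu * dt + sigma * dW)

# Closed-form solution evaluated at T using the same path, W(T) = sum(dW).
X_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
```

As ${\rm d}t \to 0$ the two agree pathwise; with this step size the relative discrepancy is already small.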

By Quadratic Variation

Another approach is to note that both the definition of geometric Brownian motion (Eq. (5)) and a Taylor expansion of XX (Eq. (4)) must be consistent.

\begin{align*} {\rm d}X(t) &= X(t) \bigg( \mu{\rm d}t + \sigma {\rm d}W(t) \bigg) \\ {\rm d}X(t) &= \bigg( {\frac{{\rm d} X}{{\rm d} t}} + \frac{1}{2} {\frac{{\rm d}^{2} X}{{\rm d} W^{2}}} \bigg) {\rm d}t + {\frac{{\rm d} X}{{\rm d} W}}{\rm d}W(t) \end{align*}

By the independence of ${\rm d}t$ and ${\rm d}W$, this provides two equations:

$$\mu X = {\frac{{\rm d} X}{{\rm d} t}} + \frac{1}{2} {\frac{{\rm d}^{2} X}{{\rm d} W^{2}}} \quad ; \quad \sigma X = {\frac{{\rm d} X}{{\rm d} W}}$$

From the second equation, we note that

$$X \propto e^{\sigma W} \quad \text{ and therefore } \quad {\frac{{\rm d}^{2} X}{{\rm d} W^{2}}} = \sigma^{2} X$$

Substituting into the first equation, it follows that

$${\frac{{\rm d} X}{{\rm d} t}} = \bigg(\mu - \frac{1}{2} \sigma^{2}\bigg) X$$

Again recognizing an exponential as the solution class for $X(t)$, we arrive at the unique solution

$$X(t) = X(0) e^{\big(\mu - \frac{1}{2}\sigma^{2}\big) t + \sigma W(t)}$$

Properties

Noting that $X(t)$ follows a Galton (log-normal) distribution, it follows that

$$\mathbb{E}[X_{t}] = X_{0} e^{\mu t} \quad ; \quad {\rm Var}[X_{t}] = (X_{0} e^{\mu t})^{2} \big(e^{\sigma^{2} t} - 1\big)$$
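These moment formulas are easy to verify by Monte Carlo: sample $W(t) \sim \mathcal{N}(0, t)$ directly, evaluate the closed-form solution, and compare sample statistics against the formulas. Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0, t = 0.1, 0.3, 1.0, 2.0  # illustrative constants
n_paths = 2_000_000

# Sample the terminal Wiener value and apply the closed-form solution.
W_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
X_t = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)

sample_mean, sample_var = X_t.mean(), X_t.var()

# Log-normal moment formulas from the text.
expected_mean = x0 * np.exp(mu * t)
expected_var = expected_mean**2 * (np.exp(sigma**2 * t) - 1.0)
```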

vs Discrete-Time

Imagine investing in a security $X$, the valuation of which (e.g., relative to USD) grows by a ratio $r_{t} \sim \mathcal{N}(\tilde{\mu}, \tilde{\sigma}^{2})$ each month, for constants $\tilde{\mu}$ and $\tilde{\sigma}$.

Should we model the security according to geometric Brownian motion, which applies in the continuous-time limit, or is this inappropriate when changes happen in discrete time?

First, what is the statistical behavior of $X$ after $t$ months when the process evolves discretely? For

$$X_{t} = X_{0} \prod_{s=0}^{t-1}r_{s} \quad ; \quad r_{s} \sim \mathcal{N}(\tilde{\mu}, \tilde{\sigma}^{2}) \quad \text{(independently)}$$

we have

$$\mathbb{E}[X_{t}] = X_{0} \tilde{\mu}^{t} \quad ; \quad {\rm Var}[X_{t}] = (X_{0} \tilde{\mu}^{t})^{2} \bigg(\Big(\frac{\tilde{\sigma}^{2}}{\tilde{\mu}^{2}} + 1\Big)^{t} - 1 \bigg)$$

The statistics of this process agree with those of geometric Brownian motion when we identify

$$\tilde{\mu} = e^{\mu} \quad ; \quad \frac{\tilde{\sigma}^{2}}{\tilde{\mu}^{2}} = e^{\sigma^{2}} - 1$$
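The discrete-time moment formulas above can themselves be checked by simulating the product of independent monthly ratios. The monthly mean and standard deviation below are illustrative (chosen so that negative ratios are vanishingly unlikely):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_d, sigma_d = 1.01, 0.05  # illustrative monthly ratio: r ~ N(1.01, 0.05^2)
x0, t, n_paths = 1.0, 12, 500_000

# Each path multiplies t independent monthly growth ratios.
r = rng.normal(mu_d, sigma_d, size=(n_paths, t))
X_t = x0 * np.prod(r, axis=1)

sample_mean, sample_var = X_t.mean(), X_t.var()

# Discrete-time moment formulas from the text.
expected_mean = x0 * mu_d**t
expected_var = expected_mean**2 * ((sigma_d**2 / mu_d**2 + 1.0)**t - 1.0)
```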