EWMA 101

After receiving several inquiries about the exponentially weighted moving average (EWMA) function in NumXL, we decided to dedicate this issue to exploring this simple function in greater depth.

The main objective of EWMA is to estimate the next-day (or period) volatility of a time series and closely track the volatility as it changes.

Background

Define $\sigma_n$ as the volatility of a market variable on day n, as estimated at the end of day n-1. The variance rate on day n is the square of the volatility, $\sigma_n^2$.

Suppose the value of the market variable at the end of day i is $S_i$. The continuously compounded rate of return during day i (i.e., between the end of day i-1 and the end of day i) is expressed as:

$$r_i = \ln{\frac{S_i}{S_{i-1}}}$$

Next, using the standard approach to estimate $\sigma_n$ from historical data, we’ll use the most recent m-observations to compute an unbiased estimator of the variance:

$$\sigma_n^2=\frac{\sum_{i=1}^m (r_{n-i}-\bar r)^2}{m-1}$$

Where $\bar r$ is the mean of $r_{i}$:

$$\bar r = \frac{\sum_{i=1}^m r_{n-i}}{m}$$

Next, let’s assume $\bar r = 0$ and use the maximum likelihood estimate of the variance rate:

$$\sigma_n^2=\frac{\sum_{i=1}^m r_{n-i}^2}{m}$$

So far, we have applied equal weights to all $r_i^2$, so the definition above is often referred to as the equally-weighted volatility estimate.
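
As a quick illustration (NumXL itself is an Excel add-in, so this Python sketch is purely expository, with hypothetical prices), the equal-weighted estimate can be computed as follows:

```python
import numpy as np

def equal_weighted_variance(prices, m):
    """Equal-weighted variance rate from the last m daily log returns,
    assuming a zero mean return (maximum-likelihood form)."""
    prices = np.asarray(prices, dtype=float)
    r = np.log(prices[1:] / prices[:-1])   # continuously compounded daily returns
    r_recent = r[-m:]                      # most recent m observations
    return np.sum(r_recent ** 2) / m       # sigma_n^2

# Hypothetical closing prices; the volatility estimate is the square root
prices = [100.0, 101.2, 100.7, 102.3, 101.9, 103.0]
print(np.sqrt(equal_weighted_variance(prices, m=5)))
```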

Earlier, we stated our objective was to estimate the current level of volatility $\sigma_n$, so it makes sense to give higher weights to recent data than to older ones. To do so, let’s express the weighted variance estimate as follows:

$$\sigma_n^2=\sum_{i=1}^m \alpha_i \times r_{n-i}^2$$

Where:

  • $\alpha_i$ is the amount of weight given to an observation i-days ago.
  • $\alpha_i\geq 0$
  • $\sum_{i=1}^m\alpha_i=1$

So, to give higher weight to recent observations, we require $\alpha_i \geq \alpha_{i+1}$.
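
Continuing the expository Python sketch above, any weighting scheme that satisfies these constraints plugs in directly; the linearly declining weights below are only a hypothetical example:

```python
import numpy as np

def weighted_variance(returns, weights):
    """Weighted variance estimate sum(alpha_i * r_{n-i}^2); weights[0] applies
    to the most recent return, weights[1] to the one before it, and so on."""
    r = np.asarray(returns, dtype=float)
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)   # alpha_i >= 0, sum to 1
    r_recent = r[::-1][:len(w)]           # r_{n-1}, r_{n-2}, ..., r_{n-m}
    return np.sum(w * r_recent ** 2)

# Example: linearly declining weights over the last four returns
w = np.array([4.0, 3.0, 2.0, 1.0])
w /= w.sum()
print(weighted_variance([0.012, -0.005, 0.016, -0.004, 0.011], w))
```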

Long-run average variance

A possible extension of the idea above is to assume there is a long-run average variance $V_L$, and that it should be given some weight:

$$\sigma_n^2=\gamma V_L+\sum_{i=1}^m \alpha_i \times r_{n-i}^2$$

Where:

  • $\gamma+\sum_{i=1}^m\alpha_i=1$
  • $V_L > 0 $

The model above is known as the ARCH(m) model, proposed by Engle in 1982. Setting $\omega=\gamma V_L$, the model can be rewritten as:

$$\sigma_n^2=\omega+\sum_{i=1}^m \alpha_i \times r_{n-i}^2$$

EWMA

EWMA is a special case of the equation above, in which the weights $\alpha_i$ decrease exponentially as we move back through time:

$$\alpha_{i+1}=\lambda \alpha_i = \lambda^2 \alpha_{i-1} = \cdots = \lambda^{n+1}\alpha_{i-n}$$

Unlike the earlier presentation, the EWMA includes all prior observations, but with exponentially declining weights throughout time.

Next, we impose the constraint that the weights sum to one:

$$\sum_{i=1}^\infty \alpha_i = \alpha_1 \sum_{i=1}^\infty \lambda^{i-1}=\frac{\alpha_1}{1-\lambda}=1$$

For $\left | \lambda \right | < 1$, the value of $\alpha_1=1-\lambda$
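
A quick numeric check of this normalization, using the RiskMetrics value $\lambda=0.94$ discussed later: the weights $\alpha_i=(1-\lambda)\lambda^{i-1}$ decline geometrically and sum to (essentially) one.

```python
import numpy as np

lam = 0.94
i = np.arange(1, 1001)                  # weights for the last 1,000 observations
alpha = (1 - lam) * lam ** (i - 1)      # alpha_i = (1 - lambda) * lambda^(i-1)

print(alpha[:3])     # the most recent observations carry the largest weights
print(alpha.sum())   # ~1.0 (the truncation error, lambda^1000, is negligible)
```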

Now we plug those terms back into the equation. For the $\sigma_{n-1}^2$ estimate:

$$\sigma_{n-1}^2=\sum_{i=1}^{n-2}\alpha_i r_{n-1-i}^2=\alpha_1 r_{n-2}^2+\lambda\alpha_1 r_{n-3}^2+\cdots+\lambda^{n-3}\alpha_1 r_1^2$$

$$\sigma_{n-1}^2=(1-\lambda)(r_{n-2}^2+\lambda r_{n-3}^2+\cdots+\lambda^{n-3} r_1^2)$$

And the $\sigma_n^2$ estimate can be expressed as follows:

$$\sigma_n^2=(1-\lambda)(r_{n-1}^2+\lambda r_{n-2}^2+\cdots+\lambda^{n-2} r_1^2)$$

$$\sigma_n^2=(1-\lambda)r_{n-1}^2+\lambda(1-\lambda)(r_{n-2}^2+\lambda r_{n-3}^2+\cdots+\lambda^{n-3} r_1^2)$$

$$\sigma_n^2=(1-\lambda)r_{n-1}^2+\lambda\sigma_{n-1}^2$$

Now, to understand the equation better:

$$\sigma_n^2=(1-\lambda)r_{n-1}^2+\lambda\sigma_{n-1}^2$$

$$\sigma_n^2=(1-\lambda)r_{n-1}^2+\lambda ((1-\lambda)r_{n-2}^2+\lambda\sigma_{n-2}^2)$$

$$\cdots$$

$$\sigma_n^2=(1-\lambda)(r_{n-1}^2+\lambda r_{n-2}^2+\lambda^2 r_{n-3}^2+\cdots+\lambda^{k-1} r_{n-k}^2)+\lambda^{k}\sigma_{n-k}^2$$

For a large data set, the last term $\lambda^{k}\sigma_{n-k}^2$ is sufficiently small to be ignored.

The EWMA approach has one attractive feature: it requires relatively little stored data. To update our estimate at any point, we only need a prior estimate of the variance rate and the most recent observation value.
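
Here is a minimal Python sketch of that recursion (not the NumXL implementation); it seeds the recursion with $\sigma_1^2=r_1^2$, the same convention described in the FAQ below:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """Recursive EWMA variance estimates.

    returns : array of daily log returns
    Element i of the output is the variance estimated after observing
    returns[i], i.e. the next-period forecast at that point in time.
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r[0] ** 2                    # seed: sigma_1^2 = r_1^2
    for i in range(1, len(r)):
        # sigma_n^2 = (1 - lambda) * r_{n-1}^2 + lambda * sigma_{n-1}^2
        sigma2[i] = (1 - lam) * r[i] ** 2 + lam * sigma2[i - 1]
    return sigma2

# Example with hypothetical prices: daily EWMA volatility estimates
prices = np.array([100.0, 101.2, 100.7, 102.3, 101.9, 103.0])
returns = np.diff(np.log(prices))
print(np.sqrt(ewma_variance(returns, lam=0.94)))
```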

IMPORTANT: The EWMA formula does not assume a long-run average variance level. Thus, the concept of volatility mean reversion is not captured by the EWMA. The ARCH/GARCH models are better suited for this purpose.

Lambda

A secondary objective of EWMA is to track changes in the volatility: for small $\lambda$ values, recent observations affect the estimate promptly, while for $\lambda$ values closer to one, the estimate responds slowly to recent changes in the returns of the underlying variable.

The RiskMetrics database (produced by JP Morgan and made publicly available in 1994) uses the EWMA model with $\lambda=0.94$ for updating its daily volatility estimates. The company found that, across a range of market variables, this value of $\lambda$ gives forecasts of the variance that come closest to the realized variance rate. The realized variance rate on a particular day was calculated as an equally-weighted average of $r_i^2$ over the subsequent 25 days:

$$\sigma_n^2=\frac{\sum_{i=1}^{25} r_{n+i}^2}{25}$$
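
A sketch of that forward-looking realized-variance calculation, under the same assumptions as the snippets above (the 25-day window is the RiskMetrics choice; days without a full forward window are left as NaN):

```python
import numpy as np

def realized_variance(returns, window=25):
    """Realized variance on day n: equal-weighted average of the squared
    returns over the subsequent `window` days."""
    r = np.asarray(returns, dtype=float)
    out = np.full(len(r), np.nan)
    for n in range(len(r) - window):
        out[n] = np.mean(r[n + 1:n + 1 + window] ** 2)
    return out
```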

Similarly, to compute the optimal value of lambda for our data set, we need to calculate the realized volatility at each point. There are several methods, so pick one. Next, calculate the sum of squared errors (SSE) between the EWMA estimate and the realized volatility. Finally, minimize the SSE by varying the lambda value.

Sounds simple? It is. The biggest challenge is to agree on an algorithm to compute the realized volatility. For instance, the folks at RiskMetrics chose the subsequent 25 days to compute the realized variance rate. In your case, you may choose an algorithm that utilizes daily volume, high/low, and/or open-close prices.
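
Putting the pieces together, here is one possible calibration sketch; it reuses the `ewma_variance` and `realized_variance` functions defined above and uses SciPy's bounded scalar minimizer for the one-dimensional search:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sse(lam, returns, realized):
    """Sum of squared errors between the EWMA estimates and realized variance."""
    est = ewma_variance(returns, lam)
    mask = ~np.isnan(realized)            # skip days without a full forward window
    return np.sum((est[mask] - realized[mask]) ** 2)

def optimal_lambda(returns, window=25):
    """Find the lambda that minimizes the SSE against realized variance."""
    realized = realized_variance(returns, window)
    result = minimize_scalar(sse, bounds=(0.01, 0.999), args=(returns, realized),
                             method="bounded")
    return result.x
```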

FAQ

Q 1: Can we use EWMA to estimate (or forecast) volatility more than one step ahead?

The EWMA volatility representation does not assume a long-run average volatility, and thus, for any forecast horizon beyond one-step, the EWMA returns a constant value:

$$\sigma_n^2=(1-\lambda)r_{n-1}^2+\lambda\sigma_{n-1}^2$$

$$E[\sigma_{n+1}^2]=(1-\lambda)E[r_{n}^2]+\lambda \sigma_n^2$$

$$E[\sigma_{n+1}^2]=(1-\lambda)\sigma_n^2+\lambda \sigma_n^2=\sigma_n^2$$

$$E[\sigma_{n+k}^2]=\sigma_n^2, \quad k \geq 1$$


Q 2: What is the initial value of the variance (i.e. $\sigma_1^2$) in the NumXL EWMA function? Can I set a different value?

Currently, we set the initial value to zero, and we set the variance at the end of the first period equal to the square of the return in that period to start the EWMA recursion.

$$\sigma_0^2=0$$

$$\sigma_1^2=r_1^2$$

$$\sigma_2^2=(1-\lambda)r_1^2 + \lambda \sigma_1^2= r_1^2$$

$$\sigma_3^2=(1-\lambda)r_2^2 + \lambda \sigma_2^2$$

$$\cdots$$

$$\sigma_n^2=(1-\lambda)r_{n-1}^2 + \lambda \sigma_{n-1}^2$$

For a large data set, the initial value has very little impact on the later estimates.
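
A quick way to see why: unrolling the recursion shows the initial value enters $\sigma_n^2$ with weight $\lambda^{n-1}$, which for $\lambda=0.94$ decays to a negligible level within a few hundred observations.

```python
lam = 0.94
for n in (50, 100, 250):
    # weight of the initial variance sigma_1^2 inside sigma_n^2
    print(n, lam ** (n - 1))   # approx. 0.048, 0.0022, 2e-7
```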

Going forward, we plan to add an argument that accepts a user-defined initial volatility value.


Q 3: What is EWMA’s relationship to ARCH/GARCH Model?

EWMA is basically a special form of an ARCH(m) model, with the following characteristics:

  1. The ARCH order is equal to the sample data size.
  2. The weights are exponentially declining at rate $\lambda$ throughout time.

Q 4: Does EWMA revert to the mean?

NO. EWMA does not have a term for the long-run variance average; thus, it does not revert to any value.


Q 5: What is the variance estimate for horizon beyond one day (or step) ahead?

As in Q1, the EWMA function returns a constant value equal to the one-step estimate value.


Q 6: I have weekly/monthly/annual data. Which value of $\lambda$ should I use?

You may still use 0.94 as a default value, but if you wish to find the optimal value, you’d need to set up an optimization problem for minimizing the SSE or MSE between EWMA and realized volatility.

See our volatility 101 tutorial in “Tips and Hints” on our website for more details and examples.


Q 7: If my data does not have a zero mean, how can I use the function?

For now, use the DETREND function to remove the mean from the data before you pass it to the EWMA function.

In future NumXL releases, the EWMA will remove the mean automatically on your behalf.
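
Outside Excel, the equivalent pre-processing step is simply subtracting the sample mean from the returns before running the recursion; a hypothetical example, reusing the `ewma_variance` sketch from above:

```python
import numpy as np

returns = np.array([0.012, -0.005, 0.016, -0.004, 0.011])  # hypothetical daily returns
demeaned = returns - returns.mean()                        # remove the sample mean
sigma2 = ewma_variance(demeaned, lam=0.94)                 # recursion from the sketch above
print(np.sqrt(sigma2))
```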

