**AR, MA, and ARMA processes**

February 2018

A **deterministic** dynamical system is one where we can determine the present state uniquely from the past states. [1] In other words, there exists only a single possible trajectory for the evolution of the system. If, however, multiple trajectories are possible, one of which is ultimately realised, the process is **random** or **stochastic**.

In econometrics, **autoregressive (AR)** models are used to represent random processes:

y(t) = Φy(t - 1) + ε(t)

This first-order autoregressive model weights the previous value of the explained variable, plus a random disturbance, to determine the value of the explained variable at the present time.

If the absolute value of the weight Φ is less than one, the value of the explained variable fluctuates around a mean of zero and the model is said to be **stationary**. Otherwise, the model either increases or decreases without bound and is said to be **nonstationary**. [2]
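To see why the unit threshold matters, the AR(1) equation can be back-substituted (a standard derivation, sketched here in LaTeX):

```latex
y(t) = \Phi\, y(t-1) + \varepsilon(t)
     = \Phi^2\, y(t-2) + \Phi\,\varepsilon(t-1) + \varepsilon(t)
     = \Phi^t\, y(0) + \sum_{k=0}^{t-1} \Phi^k\, \varepsilon(t-k)
```

With |Φ| < 1, the weight Φ^k on a shock k periods old decays geometrically, so the influence of the past dies out; with |Φ| > 1, old shocks and the initial value are amplified instead.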

Starting with an initial value of **y = 1** and iterating over **100 periods**, let's see how the model behaves with respect to changes in Φ with the following Octave code:
```
% unifrnd requires the statistics package: pkg load statistics
function graph_ar_process(z, phi, y, n, v)
  % Draw z sample paths of an AR(1) process in a 3x3 grid of subplots.
  % e.g. graph_ar_process(9, 0.1, 1, 100, [])
  for i = 1:z
    ar_process(phi, y, n, [], i)
  endfor
endfunction

function ar_process(phi, y, n, v, i)
  epsilon = unifrnd(-1, 1);   % uniform disturbance on [-1, 1]
  g = phi * y + epsilon;      % y(t) = phi * y(t-1) + epsilon(t)
  n -= 1;
  v = [v g];                  % accumulate the realised path
  if n == 0
    subplot(3, 3, i)
    plot(1:length(v), v, "linewidth", 1.1)
    return                    % break is invalid outside a loop
  endif
  ar_process(phi, g, n, v, i);
endfunction
```

**Φ = 0.1 (stationary)**

With Φ = 0.1, the process hovers around zero, bounded roughly by [-1, 1]:

**Φ = 0.9 (stationary)**

The nearer Φ gets to one, the smoother the process becomes and the wider its range grows:

**Φ = 1**

**Φ = 1.1 (nonstationary)**

Then, after a certain threshold above one, the process quickly becomes unbounded in either direction:
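The stationary/nonstationary split can also be double-checked numerically. Here is a quick sketch in Python rather than Octave, just for an easily runnable check; the `ar1` helper is my own naming, not part of the original code:

```python
import random

def ar1(phi, y0, n, seed=0):
    """Simulate n steps of y(t) = phi*y(t-1) + eps(t), eps ~ U(-1, 1)."""
    rng = random.Random(seed)
    y, path = y0, []
    for _ in range(n):
        y = phi * y + rng.uniform(-1, 1)
        path.append(y)
    return path

# |phi| < 1: since |y(t)| <= |phi|*|y(t-1)| + 1, the path can never
# leave the band |y| <= 1/(1 - |phi|) once inside it.
print(max(abs(y) for y in ar1(0.1, 1, 100)))  # stays near zero
print(max(abs(y) for y in ar1(1.1, 1, 100)))  # explodes
```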

**Moving average (MA) process**

Another way to introduce past randomness is a moving average model, which defines the present state in terms of current and lagged random disturbances:

y(t) = ε(t) + θε(t - 1)

Unlike the AR weight, the θ coefficient does not move the mean of a first-order MA process: both disturbance terms have zero mean, so the process fluctuates around zero for any θ, and a larger θ only widens the range [3]:

**θ = 0.1**

**θ = 5**
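That insensitivity of the mean is easy to verify numerically; another Python sketch, with a hypothetical `ma1` helper of my own naming:

```python
import random

def ma1(theta, n, seed=0):
    """Simulate n steps of y(t) = eps(t) + theta*eps(t-1), eps ~ U(-1, 1)."""
    rng = random.Random(seed)
    eps_prev = rng.uniform(-1, 1)
    path = []
    for _ in range(n):
        eps = rng.uniform(-1, 1)
        path.append(eps + theta * eps_prev)
        eps_prev = eps
    return path

# E[y] = (1 + theta) * E[eps] = 0 whatever theta is; theta only scales
# the spread, since Var(y) = (1 + theta^2) * Var(eps).
for theta in (0.1, 5):
    ys = ma1(theta, 10000)
    print(theta, sum(ys) / len(ys))  # sample mean near zero in both cases
```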

**Autoregressive moving average (ARMA) process**

Finally, as the name suggests, an ARMA process includes the elements of both previous models:

y(t) = Φ_1 y(t - 1) + ... + Φ_p y(t - p) + ε(t) + θ_1 ε(t - 1) + ... + θ_q ε(t - q)

As such, when Φ_i = 0 for all i, the result is an MA model, and when θ_j = 0 for all j, the result is an AR process.

Let's vary Φ and θ in the following variant with a single lag for the explained variable and the disturbance term respectively [4]:

y(t) = Φy(t - 1) + ε(t) + θε(t - 1)

**Φ = 0.1, θ = 0.9**

**Φ = 0.9, θ = 0.1**

**Φ = 1, θ = 1**

**Φ = 1.05, θ = 1.05**
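As with the pure AR case, these settings can be sanity-checked numerically. A Python sketch under the same uniform-disturbance assumption (the `arma11` helper is my own naming): stationarity hinges on the AR coefficient alone, so |Φ| < 1 keeps the process bounded whatever θ is, while Φ above one lets it explode.

```python
import random

def arma11(phi, theta, y0, n, seed=0):
    """Simulate n steps of y(t) = phi*y(t-1) + eps(t) + theta*eps(t-1)."""
    rng = random.Random(seed)
    eps_prev = rng.uniform(-1, 1)
    y, path = y0, []
    for _ in range(n):
        eps = rng.uniform(-1, 1)
        y = phi * y + eps + theta * eps_prev
        eps_prev = eps
        path.append(y)
    return path

# phi = 0.1, theta = 0.9: combined disturbance is at most 1.9 in size,
# so |y| <= 0.1*|y(t-1)| + 1.9 stays within 1.9/0.9 of zero.
print(max(abs(y) for y in arma11(0.1, 0.9, 1, 100)))    # bounded
print(max(abs(y) for y in arma11(1.05, 1.05, 1, 100)))  # diverging
```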

[1] Alligood, Kathleen T. & Sauer, Tim D. & Yorke, James A. (1997). Chaos: An Introduction to Dynamical Systems. New York: Springer. ISBN: 0-387-94677-2.

[2] Harvey, A.C. (1981). The Econometric Analysis of Time Series. Oxford: Philip Allan Publishers Limited. ISBN: 0-86003-025-3.

[3]

```
function moving_average(theta)
  % Draw 16 sample paths of an MA(1) process in a 4x4 grid of subplots.
  % e.g. moving_average(0.1)
  for i = 1:16
    e = unifrnd(-1, 1);      % initial lagged disturbance
    generate_moving_average(e, theta, 100, [], i);
  endfor
endfunction

function generate_moving_average(epsilon_minus, theta, n, v, i)
  epsilon = unifrnd(-1, 1);
  y = epsilon + theta * epsilon_minus;   % y(t) = eps(t) + theta * eps(t-1)
  n -= 1;
  v = [v y];                             % accumulate the realised path
  if n == 0
    subplot(4, 4, i);
    plot(1:length(v), v, "linewidth", 1.1);
    return                               % break is invalid outside a loop
  endif
  generate_moving_average(epsilon, theta, n, v, i);
endfunction
```

[4]

```
function arma_process(phi, y, theta)
  % Draw 9 sample paths of an ARMA(1,1) process in a 3x3 grid of subplots.
  % e.g. arma_process(0.1, 1, 0.9)
  for i = 1:9
    epsilon = unifrnd(-1, 1);   % initial lagged disturbance
    generate_arma(phi, y, epsilon, theta, 100, [], i);
  endfor
endfunction

function generate_arma(phi, y_minus, epsilon_minus, theta, n, v, i)
  epsilon = unifrnd(-1, 1);
  % y(t) = phi * y(t-1) + eps(t) + theta * eps(t-1)
  y = phi * y_minus + epsilon + theta * epsilon_minus;
  n -= 1;
  v = [v y];                    % accumulate the realised path
  if n == 0
    subplot(3, 3, i);
    plot(1:length(v), v, "linewidth", 1.1);
    return                      % break is invalid outside a loop
  endif
  generate_arma(phi, y, epsilon, theta, n, v, i);
endfunction
```