Ergodicity
August 2018
Ergodicity is a property of dynamical systems: a system is ergodic if its expectation value is equal to its time average. That is, if the properties of one trajectory followed over (infinite) time are equal to the average over (infinitely many) trajectories.
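In symbols (a sketch of the usual definition for a stationary process, with X(t) denoting the state of the process at time t):

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T X(t)\,dt \;=\; \mathbb{E}[X(t)]$$

The left-hand side is the time average of a single trajectory; the right-hand side is the average over the ensemble of trajectories.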

A trivial example is a process that stays constant at 1 for all t:



Here, the average state of a single trajectory over infinite time is obviously 1. Equivalently, the average state over an infinite number of trajectories is 1 at any point in time. The two agree, so the process is ergodic.
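A minimal sketch of that comparison in Python (the trajectory and step counts are arbitrary choices for illustration):

```python
import numpy as np

T = 1000  # time steps per trajectory
N = 1000  # trajectories in the ensemble

# A process that stays constant at 1 for all t.
trajectories = np.ones((N, T))

time_average = trajectories[0].mean()          # one trajectory, averaged over time
ensemble_average = trajectories[:, -1].mean()  # all trajectories at a fixed time

print(time_average, ensemble_average)  # 1.0 1.0 -- the two averages agree
```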

What if the system grows linearly, one-to-one with time, so that X(t) = t?



Now the time average diverges: averaging a single trajectory up to time T gives T/2, which grows without bound, while the average over an infinite number of trajectories at time t is simply t. The time average and expectation value never agree, so the system is not ergodic. What's happening here? In essence, as time goes on, new states are discovered and old states are never returned to again.
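Concretely, for X(t) = t the average of a single trajectory up to time T is

$$\frac{1}{T}\int_0^T t\,dt \;=\; \frac{T}{2},$$

which grows without bound as T goes to infinity, while the ensemble average at any fixed time t is simply t. The two quantities never coincide.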

Note that neither process contained any randomness: both were completely determined by their initial conditions. We could of course add noise, drawn from a uniform distribution centred at zero, to both:




Yet the time averages and expectation values would stay unchanged for each process, because noise centred at zero smooths out when averaged over many realisations: the deterministic processes can be thought of as their noisy counterparts with the noise averaged away. The difference is that now some states are revisited in both processes, but in the former, the probability of revisiting any state within the range of variation is one, while in the latter, the probability of revisiting any previous state goes to zero as time goes on.
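A sketch of the two noisy processes (uniform noise on [-1, 1] is an assumed choice; any noise centred at zero behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
t = np.arange(T)
noise = rng.uniform(-1.0, 1.0, size=T)  # zero-mean uniform noise

constant_noisy = 1 + noise  # keeps revisiting every state in [0, 2] forever
linear_noisy = t + noise    # each state is left behind for good as t grows
```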

This is a tell-tale sign of a non-ergodic process: new, never-before-visited states keep popping up. But there is another sign as well: states the process gets stuck in and never recovers from, called absorbing states.



Once the system hits the absorbing barrier, its properties change fundamentally: no other state is ever visited again.

An interesting feature of absorbing states is that if such a barrier exists, and the system can reach it from every state with nonzero probability, then the system will end up in the absorbing state with probability one as t goes to infinity.
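Written as a limit, with x_abs denoting the absorbing state:

$$\lim_{t\to\infty} P\!\left(X_t = x_{\text{abs}}\right) = 1$$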

This turns out to have dramatic consequences when comparing expected values and time averages in systems where absorbing states exist.

If we take a discrete random walk starting at 10, with steps of +1 and -1 taken with equal probability, and set 0 as the absorbing state, we can see that every such walk sooner or later gets stuck at the barrier.
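A sketch of such a walk (the step cap is an arbitrary safeguard so the simulation always terminates; the hitting time is finite with probability one, but it can be very long):

```python
import numpy as np

def absorbed_walk(start=10, max_steps=1_000_000, seed=0):
    """Symmetric +/-1 random walk from `start`; 0 is absorbing."""
    rng = np.random.default_rng(seed)
    x = start
    for step in range(max_steps):
        x += rng.choice((-1, 1))
        if x == 0:
            return step + 1  # steps taken until absorption
    return None  # not absorbed within max_steps (slow, not immortal)

print(absorbed_walk())
```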



In this type of walk it is not even a question of step size or starting value: if the absorbing state exists and the process can reach it, it eventually will, given sufficient time.

These concepts extend to two variables.

When both x and y depend on t, an ergodic process keeps returning to some finite neighbourhood, and neither variable gets stuck at any point.
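As an illustration (a sketch, not the only possible example: a mean-reverting random walk in two dimensions, where the pull towards the origin is an assumed choice):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
xy = np.zeros((T, 2))

for i in range(1, T):
    # The pull back towards (0, 0) keeps the walk in a bounded neighbourhood.
    xy[i] = 0.9 * xy[i - 1] + rng.normal(size=2)

print(xy.min(axis=0), xy.max(axis=0))  # both coordinates stay within a fixed band
```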



The same idea extends to three and higher dimensions as well.

A Markov process is a stochastic process where the probability of entering each possible next state depends only on the current state. This is conveniently represented by a transition matrix, where rows (or, by a different convention, columns) represent the current state, and each element gives the probability of moving to the state indicated by the other index.

Take, for example, the matrix A = [0.4 0.6 ; 0.1 0.9]. Here, each row represents the current state. If we count states from zero, the element A(1,2) = 0.6, in the first row and second column, is the probability of entering state 1 when the process is in state 0.
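A sketch of simulating this chain and checking its long-run behaviour (the step count and seed are arbitrary choices):

```python
import numpy as np

A = np.array([[0.4, 0.6],
              [0.1, 0.9]])  # rows: current state, columns: next state

rng = np.random.default_rng(0)
state = 0
visits = np.zeros(2)

for _ in range(100_000):
    state = rng.choice(2, p=A[state])
    visits[state] += 1

print(visits / visits.sum())
# Approaches the stationary distribution [1/7, 6/7], which solves pi A = pi.
```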