Re: Moving Average Crossover

Here is my 2 cents.

1) I agree with Ken that this is a trend-following strategy and we expect to make some money in the presence of a relatively long trend.

2) I would think of this indicator (and other closely related ones like MACD) as a weighted average of past data instead of an AR(p) model. We can rewrite this kind of indicator in a more general form:

[tex]Y_t = \sum_{i = 1}^{\infty} a_i X_{t-i}[/tex], (*)

where

(a) Y_t is the indicator at time t, which is (t – 1)-adaptive, i.e., computable from the prices observed up to time t – 1;

(b) X_t is the (close?) price at time t, X_t = 0 if t < 0;

(c) [tex]\sum_{i = 1}^{\infty} a_i = 0[/tex].

For instance, the DMAC indicator Haksun mentioned is [tex]Y_t = SMA(p1) - SMA(p2) = \sum_{i = 1}^{\infty} a_i X_{t-i}[/tex], where a_i = (p2 – p1) / (p1 * p2) for i = 1, …, p1; a_i = –(1 / p2) for i = (p1 + 1), …, p2; and a_i = 0 otherwise. We can easily see in this form that we put (equal) positive weights on the most recent prices and (equal) negative weights on the more distant ones, with the sum of all weights equal to zero. Therefore, by choosing appropriate p1 and p2, we bet on the continuation of the most recent trend. Note that we're not fitting or comparing two AR models here, but rather comparing two smoothed processes, as Prof. Lee pointed out. (We could use ArmaModel.armaMean to get the indicators, though.)
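As a quick sanity check, the DMAC weights above can be verified numerically. Here is a minimal pure-Python sketch; the lookbacks p1 = 10, p2 = 30 and the toy price series are my own illustrative choices, not from the discussion:

```python
def dmac_weights(p1, p2):
    """Weights a_i, i = 1..p2, for Y_t = SMA(p1) - SMA(p2), with p1 < p2."""
    w = [(p2 - p1) / (p1 * p2)] * p1   # equal positive weight on recent prices
    w += [-1.0 / p2] * (p2 - p1)       # equal negative weight on older prices
    return w                           # sums to zero by construction

p1, p2 = 10, 30
prices = [100 + 0.1 * t for t in range(60)]   # a toy upward trend, oldest first

a = dmac_weights(p1, p2)
# the indicator at time t uses X_{t-1}, ..., X_{t-p2}: most recent price first
recent_first = prices[::-1]
y = sum(ai * xi for ai, xi in zip(a, recent_first))

sma_diff = sum(prices[-p1:]) / p1 - sum(prices[-p2:]) / p2
assert abs(sum(a)) < 1e-12        # constraint (c): the weights sum to zero
assert abs(y - sma_diff) < 1e-9   # the weighted average equals SMA(p1) - SMA(p2)
```

On this toy uptrend the indicator comes out positive, i.e., the recent average sits above the longer one, which is exactly the "bet on continuation" reading above.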

As a second example, the popular Moving Average Convergence-Divergence (MACD) indicator can be written as:

[tex]Y_t = MACD(\alpha, \beta, \gamma) = \sum_{j = 0}^{\infty} c (1 - c)^j Z_{t-j}[/tex] (the most popular choice is (alpha, beta, gamma) = (12, 26, 9)),

where [tex]Z_t = EMA(\alpha) - EMA(\beta) = \sum_{i = 1}^{\infty} \left[ a (1 - a)^{i-1} - b (1 - b)^{i-1} \right] X_{t-i}[/tex], a = 2 / (alpha + 1), b = 2 / (beta + 1) and c = 2 / (gamma + 1). So the a_i in (*) can be expressed in terms of alpha, beta and gamma.
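To confirm the expansion, here is a sketch that builds the a_i from alpha, beta and gamma (truncating the infinite sums at a length N, which is my assumption) and checks the weighted-average form against the usual recursive EMA computation; the simulated price path is purely illustrative:

```python
import random

def ema_weights(n, N):
    """Truncated weights a (1 - a)^(i-1), i = 1..N, with a = 2 / (n + 1)."""
    a = 2.0 / (n + 1)
    return [a * (1 - a) ** (i - 1) for i in range(1, N + 1)]

def ema_series(series, n):
    """Standard recursive EMA, seeded with the first observation."""
    a = 2.0 / (n + 1)
    e = series[0]
    out = [e]
    for x in series[1:]:
        e = a * x + (1 - a) * e
        out.append(e)
    return out

alpha, beta, gamma = 12, 26, 9
N = 1000                     # truncation of the infinite sums (tail is negligible)
c = 2.0 / (gamma + 1)

# weights of Z_t = EMA(alpha) - EMA(beta) on X_{t-1}, X_{t-2}, ...
wz = [wa - wb for wa, wb in zip(ema_weights(alpha, N), ema_weights(beta, N))]

# Y_t = sum_j c (1 - c)^j Z_{t-j}: convolve the EMA(gamma) kernel with wz
wy = [0.0] * N
for j in range(N):
    cj = c * (1 - c) ** j
    for i, w in enumerate(wz):
        if i + j < N:
            wy[i + j] += cj * w

random.seed(0)
prices = [100.0]
for _ in range(N):
    prices.append(prices[-1] + random.gauss(0, 1))  # toy random-walk prices

# recursive computation of the same quantity
z = [f - s for f, s in zip(ema_series(prices, alpha), ema_series(prices, beta))]
y_recursive = ema_series(z, gamma)[-1]

# weighted-average computation: a_i applied to the most recent price first
y_weighted = sum(w * x for w, x in zip(wy, prices[::-1]))

assert abs(sum(wy)) < 1e-9                 # the a_i sum to zero, as in (*)
assert abs(y_weighted - y_recursive) < 1e-6
```

The convolution step is the point: composing the EMA(gamma) kernel with the EMA(alpha) – EMA(beta) kernel gives one set of weights a_i on past prices, so MACD fits the same general form (*) as DMAC.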

3) Since the predictive power of X_{t-i} diminishes as i increases, we can think of this as a constrained optimization problem: the objective function is the historical PnL (strategy-specific), and the constraints are [tex]\sum_{i = 1}^{p} a_i = 0[/tex] and [tex]|a_i| \le 1[/tex], where p is the chosen lag order. As Haksun pointed out, there's no reason to fix the parameters, and we should do the calibration on a rolling-window basis. Depending on the nature of the strategy, we could also use different tools such as multinomial logit regression, HMM, neural networks, etc. We may of course also incorporate other explanatory variables in our optimization or regression. Haksun, let's discuss this in Beijing this Thursday 🙂
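As one deliberately simplified version of such a rolling calibration, the sketch below grid-searches the DMAC lookbacks (p1, p2) to maximize in-sample PnL on a recent window. The candidate grid, window length and sign-based position rule are all my own assumptions, not anything proposed in the thread:

```python
import random

def dmac(hist, p1, p2):
    """SMA(p1) - SMA(p2) over the most recent prices (oldest first)."""
    return sum(hist[-p1:]) / p1 - sum(hist[-p2:]) / p2

def pnl(window, p1, p2):
    """PnL of 'long if the indicator > 0, else short' over the window."""
    total = 0.0
    for t in range(p2, len(window) - 1):
        pos = 1 if dmac(window[: t + 1], p1, p2) > 0 else -1
        total += pos * (window[t + 1] - window[t])  # next-period price change
    return total

def calibrate(window, grid):
    """Pick the (p1, p2) pair with the best historical PnL on the window."""
    return max(grid, key=lambda pq: pnl(window, *pq))

random.seed(1)
prices = [100.0]
for _ in range(400):
    prices.append(prices[-1] + 0.05 + random.gauss(0, 0.5))  # drifting walk

grid = [(p1, p2) for p1 in (5, 10, 20) for p2 in (40, 60)]
best = calibrate(prices[-250:], grid)   # recalibrate on the latest window
assert best in grid
```

In practice one would re-run `calibrate` as the window rolls forward, and could replace the grid search with a proper constrained optimization over the a_i themselves, subject to the zero-sum and bound constraints above.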