Simulating returns from ARMA(1,0)-GARCH(1,1) model
I want to simulate one-step-ahead forecasts of a stock return process governed by an ARMA(1,0)-GARCH(1,1) model. The returns are of the form:
$ x_t = \mu + \delta x_{t-1} + \sigma_t z_t $
From my GARCH model I can forecast the conditional mean $ \mu + \delta x_{t-1} $ and the conditional standard deviation $ \sigma_t $ . Let’s assume that the distribution of $ z_t $ is Gaussian.
So I am now wondering how to simulate the stock returns using the approach described above. My initial idea is to draw a number of random variables from the standard Gaussian distribution $ N(0,1) $ and then construct my one-step-ahead forecast simulations as:
conditional mean (for time $ t+1 $) + $ N(0,1) $ random variable $\times$ conditional standard deviation (for time $ t+1 $)
Edit: in the Gaussian case this is equivalent to $ x_{t+1} \sim N(\mu + \delta x_{t},\ \sigma_{t+1}^2) $.
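For concreteness, a minimal R sketch of this recipe (the conditional mean and standard deviation values below are illustrative placeholders, not outputs of a real fit) could be:

set.seed(1)
n.sim     <- 1e4      # number of simulated draws
cond.mean <- 0.0005   # assumed: mu + delta * x_t from the fitted ARMA(1,0) part
cond.sd   <- 0.012    # assumed: sigma_{t+1} from the fitted GARCH(1,1) part

z     <- rnorm(n.sim)             # standard normal innovations z_{t+1}
x.sim <- cond.mean + cond.sd * z  # simulated one-step-ahead returns x_{t+1}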
Is this approach for simulating one-step-ahead forecasts of stock returns appropriate? I need these simulations to build asset allocation strategies.
This question has already been answered on Stack Overflow. Since it is relevant to Quant Finance, I have added the R code here. Other users may add code in other programming languages to simulate the ARMA(1,0)-GARCH(1,1) model.
sim.GARCH <- function(horizon = 5, N = 1e4, h0 = 2e-4, mu = 0,
                      omega = 0, alpha1 = 0.027, beta1 = 0.963) {
  # N simulated paths of length 'horizon' from a constant-mean GARCH(1,1)
  ret <- zt <- et <- ht <- matrix(NA, ncol = horizon, nrow = N)
  ht[, 1] <- h0                                  # starting conditional variance
  for (j in 1:horizon) {
    zt[, j]  <- rnorm(N, 0, 1)                   # standard normal innovations
    et[, j]  <- zt[, j] * sqrt(ht[, j])          # innovations scaled by conditional sd
    ret[, j] <- mu + et[, j]                     # simulated returns
    if (j < horizon)                             # GARCH(1,1) variance recursion
      ht[, j + 1] <- omega + alpha1 * et[, j]^2 + beta1 * ht[, j]
  }
  apply(ret, 1, sum)                             # cumulative return per path
}
x <- sim.GARCH(N = 1e5)
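The function above uses a constant conditional mean, i.e. an ARMA(0,0)-GARCH(1,1). A possible sketch that adds the AR(1) term $ \delta x_{t-1} $ from the question (the parameters delta and x0 below are assumptions, not part of the original answer) is:

sim.ARMA.GARCH <- function(horizon = 5, N = 1e4, h0 = 2e-4,
                           mu = 0, delta = 0.05, x0 = 0,
                           omega = 0, alpha1 = 0.027, beta1 = 0.963) {
  # delta (AR(1) coefficient) and x0 (last observed return) are assumed inputs
  ret <- et <- ht <- matrix(NA, ncol = horizon, nrow = N)
  ht[, 1] <- h0
  x.prev  <- rep(x0, N)                          # previous return for the AR term
  for (j in 1:horizon) {
    zt       <- rnorm(N)                         # standard normal innovations
    et[, j]  <- zt * sqrt(ht[, j])               # GARCH innovations
    ret[, j] <- mu + delta * x.prev + et[, j]    # AR(1) conditional mean + shock
    x.prev   <- ret[, j]
    if (j < horizon)
      ht[, j + 1] <- omega + alpha1 * et[, j]^2 + beta1 * ht[, j]
  }
  rowSums(ret)                                   # cumulative return per path
}
x <- sim.ARMA.GARCH(N = 1e5)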