Option pricing

Maximum likelihood problem for GARCH-type models

  • April 6, 2020

I am currently working with the following GARCH process from Heston and Nandi (2000): $$ \begin{align*} r_{t+1} - r_f &= \lambda h_{t+1} - \frac{h_{t+1}}{2} + \sqrt{h_{t+1}}z_{t+1} \\ h_{t+1} &= \omega + \beta h_t + \alpha \left( z_t - \gamma \sqrt{h_t} \right)^2 \end{align*} $$ Given $ z_{t+1} \sim N(0,1) $, we can estimate the model parameters by maximum likelihood. I wrote some Python code to simulate this process and then compute the likelihood at hypothetical parameter values. The density of each observation is given by:

$$ \begin{equation} f(r_{t+1} - r| h_{t+1}) = \frac{1}{\sqrt{2 \pi h_{t+1}}} \exp \left( \frac{-(r_{t+1} - r - \lambda h_{t+1} + \frac{h_{t+1}}{2})^2}{2 h_{t+1}} \right) \end{equation} $$

My question is: how should I carry out the maximization? Clearly, $ \beta + \alpha \gamma^2 < 1 $ ensures that the conditional variance process is covariance stationary. Moreover, $ (\omega + \alpha)/(1 - \beta - \alpha \gamma^2) > 0 $ ensures that the unconditional variance is positive. So, as some may suspect, I cannot guarantee that the maximization algorithm converges to the true parameter values, and I am looking for advice on how to handle this.
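A common workaround (my own sketch, not part of the original question) is to reparameterize: optimize over unconstrained variables $u$ and map them into the feasible region, so every point the optimizer tries satisfies the constraints by construction. The mapping below is one hypothetical choice; the sigmoid trick for $\beta$ assumes $\alpha\gamma^2 < 1$ in the region the optimizer explores.

```python
import numpy as np

def to_constrained(u):
    """Map unconstrained u = (u0, ..., u4) to (omega, alpha, beta, gamma, Lambda)
    so that omega > 0, alpha > 0, and beta + alpha*gamma**2 < 1.

    Caveat: the squashing of beta assumes alpha*gamma**2 < 1; if the optimizer
    wanders to alpha*gamma**2 >= 1, the cap below becomes non-positive.
    """
    omega = np.exp(u[0])                  # positivity via exp
    alpha = np.exp(u[1])
    gamma = u[2]
    Lambda = u[3]
    cap = 1.0 - alpha * gamma ** 2        # room left for beta under stationarity
    beta = cap / (1.0 + np.exp(-u[4]))    # sigmoid squashes beta into (0, cap)
    return np.array([omega, alpha, beta, gamma, Lambda])

def to_unconstrained(param):
    """Inverse map, useful for turning a feasible start into a u-vector."""
    omega, alpha, beta, gamma, Lambda = param
    cap = 1.0 - alpha * gamma ** 2
    s = beta / cap                        # lies in (0, 1) when param is feasible
    return np.array([np.log(omega), np.log(alpha), gamma, Lambda,
                     np.log(s / (1.0 - s))])
```

The optimizer then minimizes `u -> -loglik(to_constrained(u))` with a plain unconstrained method such as BFGS, and the stationarity and positivity constraints can never be violated along the way.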

import numpy as np
from   numpy import sqrt, exp, log
from   matplotlib.pyplot import plot, hist
from   statistics import mean
from scipy.optimize import minimize
#%%

r       = 0.05/252

param   = [-9.765e-07, 2.194e-06, 0.8986, 205.15, 3.930]
omega, alpha, beta, gamma, Lambda = param

sigma2 = (omega+alpha)/(1-beta-alpha*gamma**2)
h0     = sigma2

T = 1000
z = np.random.normal(loc=0, scale=1, size=T)
R = np.zeros(shape=T)
h = h0*np.ones(shape=T)

for tt in range(0,T-1):
  h[tt+1] = omega + beta*h[tt] + alpha*(z[tt] - gamma*sqrt(h[tt]))**2
  R[tt+1] = r + Lambda*h[tt+1] - h[tt+1]/2 + sqrt(h[tt+1])*z[tt+1]

hh = h
Rt = R - r

def TS_Loglik_HN(Rt, h0, param):
  '''
  Author: Stéphane Surprenant, UQAM
  Creation: 02/04/2020

  Description: This function returns the value of the log-likelihood for the
  Heston and Nandi (2000) process under the physical measure.

  INPUTS       DESCRIPTION
  Rt :         (float) Series of (log) returns minus the risk-free rate.
  h0 :         (float) Initial value of the variance (Daily)
  param:       (float) Parameters of the model
               [omega, alpha, beta, gamma, Lambda] = param

  OUTPUTS      DESCRIPTION
  loglik       (float) Log-likelihood value

  Model:
      Rt[tt+1] := R[tt+1] - r
                = Lambda*h[tt+1] - h[tt+1]/2 + sqrt(h[tt+1])*z[tt+1]
      h[tt+1]   = omega + beta*h[tt] + alpha*(z[tt] - gamma*sqrt(h[tt]))**2
  '''
  # Assign parameter values
  omega, alpha, beta, gamma, Lambda = param

  # Initialize matrices
  T    = len(Rt)
  h    = h0*np.ones(shape=T)
  e    = np.zeros(shape=T)

  # Filtering volatility
  for tt in range(0,T-1):
      e[tt]   = (Rt[tt] - Lambda*h[tt] + h[tt]/2)/sqrt(h[tt])
      h[tt+1] = omega + beta*h[tt] + alpha*(e[tt] - gamma*sqrt(h[tt]))**2

  e[T-1]   = (Rt[T-1] - Lambda*h[T-1] + h[T-1]/2)/sqrt(h[T-1])

  # Compute Log-likelihood
  l = -0.5*(log(2*np.pi) + log(h) + e**2)
  loglik = sum(l)

  return(loglik)

# Example:
f = lambda x: -TS_Loglik_HN(Rt, h0, x)
results = minimize(f, param)

If you have the path likelihood, you can try coding that function and optimizing it directly. You may have some issues with the variance part. This looks a lot like parameter inference for SDEs, data assimilation, and so on.

I think you could also write a proper likelihood with priors on all the parameters and run it through some MCMC or MC (Gibbs) routine, which is guaranteed to work for you and amounts to the same thing.

You could also try variational inference methods and optimize toward the MLE of the parameters.

If you write out the likelihood above (in LaTeX), it may be easier to discuss and to spot any stability problems.
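Writing the sample log-likelihood out explicitly (this follows directly from the density above and from the code in the question): $$ \ln \mathcal{L}(\omega, \alpha, \beta, \gamma, \lambda) = -\frac{1}{2} \sum_{t=1}^{T} \left[ \ln(2\pi) + \ln h_t + \frac{\left(R_t - r - \lambda h_t + \frac{h_t}{2}\right)^2}{h_t} \right] $$ where $ z_t = \left(R_t - r - \lambda h_t + \frac{h_t}{2}\right)/\sqrt{h_t} $, the variance is filtered recursively via $ h_{t+1} = \omega + \beta h_t + \alpha \left( z_t - \gamma \sqrt{h_t} \right)^2 $, and $ h_1 $ is initialized at the unconditional variance $ (\omega + \alpha)/(1 - \beta - \alpha \gamma^2) $.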

Update:

So, for a pure MLE approach, you can try optimizing the log-likelihood. If it does not converge, you can try a stability analysis. A quick sanity test is to start near the true values (which you know in this case, since you generated them) and see whether it converges. Computing the Hessian can also provide some insight, but that is basically stability analysis. Another debugging approach is to fit just one parameter at a time, with all the others fixed at (or at least near) their correct values. I would be a little worried about $ h $ getting close to zero, but I haven't fully absorbed the process yet, so maybe it's fine.
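The Hessian check mentioned above can be illustrated on a model where the answer is known. This is my own toy sketch, a Gaussian MLE rather than the GARCH likelihood: a finite-difference Hessian of the negative log-likelihood at the optimum gives approximate standard errors, and a badly conditioned Hessian is a warning sign for the optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data with known parameters: mu = 1, sigma = 2.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)

def nll(p):
    """Negative log-likelihood of N(mu, sigma^2), parameterized by log sigma
    so the optimization is unconstrained."""
    mu, log_sigma = p
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi) + 2 * log_sigma + ((x - mu) / sigma) ** 2)

res = minimize(nll, x0=[0.0, 0.0], method="BFGS")

def num_hessian(f, p, eps=1e-4):
    """Central finite-difference Hessian of f at p."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += eps; pp[j] += eps
            pm = p.copy(); pm[i] += eps; pm[j] -= eps
            mp = p.copy(); mp[i] -= eps; mp[j] += eps
            mm = p.copy(); mm[i] -= eps; mm[j] -= eps
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * eps ** 2)
    return H

H = num_hessian(nll, res.x)
se = np.sqrt(np.diag(np.linalg.inv(H)))   # approximate standard errors
print(res.x, se, np.linalg.cond(H))
```

For the Gaussian the standard error of the mean should come out near $\sigma/\sqrt{n} \approx 0.028$; for the GARCH likelihood, the same routine applied at the optimizer's endpoint will flag flat or ill-conditioned directions.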

When I started messing with the code, either I introduced a bug and then fixed it, or there was a bug in yours. Either way, you may want to add the same checks. Basically I am just checking that I can back out $ h $ and $ z $ (your $ tt $) appropriately.

from statistics import mean

import numpy as np
from numpy import exp, log, sqrt
from pylab import *
from scipy.optimize import minimize

r = 0.05 / 252
param = [9.765e-07, 2.194e-06, 0.8986, 205.15, 3.930]
omega, alpha, beta, gamma, Lambda = param


def get_h0(param):
   omega, alpha, beta, gamma, Lambda = param
   sigma2 = (omega + alpha) / (1 - beta - alpha * gamma ** 2)
   h0 = sigma2
   return h0


h0 = get_h0(param)

def rhs_h(param, h, z):
   omega, alpha, beta, gamma, Lambda = param
   return omega + beta * h + alpha * (z - gamma * sqrt(h)) ** 2

def rhs_R(param, h, z):
   omega, alpha, beta, gamma, Lambda = param
   return Lambda * h - h / 2 + sqrt(h) * z

def get_paths(param):
   omega, alpha, beta, gamma, Lambda = param
   assert omega > 0
   assert alpha > 0
   assert beta > 0
   assert beta + alpha * gamma ** 2 < 1  # covariance stationarity
   np.random.seed(10)
   T = 10
   z = np.random.normal(loc=0, scale=1, size=T)
   R = np.zeros(shape=T - 1)
   h = h0 * np.ones(shape=T)
   for i in range(0, T - 1):
       h[i + 1] = rhs_h(param, h[i], z[i])
       R[i] = r + rhs_R(param, h[i], z[i])
   return R, h, z


def get_h_z_from_R(Rt, h0, param):
   omega, alpha, beta, gamma, Lambda = param
   T = len(Rt)
   h = np.empty(shape=T)
   h[0] = h0
   z = np.zeros(shape=T)
   for i in range(0, T - 1):
       z[i] = (Rt[i] - Lambda * h[i] + h[i] / 2) / sqrt(h[i])
       h[i + 1] = omega + beta * h[i] + alpha * (z[i] - gamma * sqrt(h[i])) ** 2
   z[T - 1] = (Rt[T - 1] - Lambda * h[T - 1] + h[T - 1] / 2) / sqrt(h[T - 1])
   return h, z

R, h, z = get_paths(param)
Rt = R - r

h_check, z_check = get_h_z_from_R(Rt, h0, param)
assert np.allclose(z[:-1], z_check)
assert np.allclose(h[:-1], h_check)
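To close the loop, here is a self-contained sketch of the full pipeline (my own; the SLSQP method, the box bounds, and the starting point are illustrative assumptions, not part of the thread): simulate a path, filter $ h $ out of the excess returns inside the likelihood, and maximize subject to the stationarity constraint, starting near the true values as the sanity test above suggests.

```python
import numpy as np
from scipy.optimize import minimize

r = 0.05 / 252
true_param = [9.765e-07, 2.194e-06, 0.8986, 205.15, 3.930]

def simulate(param, T=1000, seed=3):
    """Simulate excess returns Rt = R - r from the Heston-Nandi process."""
    omega, alpha, beta, gamma, Lambda = param
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(T)
    h = np.empty(T)
    h[0] = (omega + alpha) / (1 - beta - alpha * gamma ** 2)  # unconditional variance
    R = np.empty(T)
    R[0] = r + Lambda * h[0] - h[0] / 2 + np.sqrt(h[0]) * z[0]
    for t in range(T - 1):
        h[t + 1] = omega + beta * h[t] + alpha * (z[t] - gamma * np.sqrt(h[t])) ** 2
        R[t + 1] = r + Lambda * h[t + 1] - h[t + 1] / 2 + np.sqrt(h[t + 1]) * z[t + 1]
    return R - r

def negloglik(param, Rt):
    """Filter h_t from excess returns and return the negative log-likelihood.
    h[0] is initialized at the sample variance so the function stays defined
    even for non-stationary candidate parameters."""
    omega, alpha, beta, gamma, Lambda = param
    T = len(Rt)
    h = np.empty(T)
    h[0] = np.var(Rt)
    ll = 0.0
    for t in range(T):
        z = (Rt[t] - Lambda * h[t] + h[t] / 2) / np.sqrt(h[t])
        ll += np.log(2 * np.pi) + np.log(h[t]) + z * z
        if t < T - 1:
            h[t + 1] = omega + beta * h[t] + alpha * (z - gamma * np.sqrt(h[t])) ** 2
    return 0.5 * ll

Rt = simulate(true_param)
x0 = [1.2e-06, 2.0e-06, 0.90, 200.0, 3.0]          # sanity test: start near the truth
bnds = [(1e-12, 1e-4), (1e-12, 1e-4), (0.0, 0.999), (0.0, 500.0), (-50.0, 50.0)]
cons = [{"type": "ineq", "fun": lambda p: 1 - p[2] - p[1] * p[3] ** 2}]  # stationarity
res = minimize(negloglik, x0, args=(Rt,), method="SLSQP", bounds=bnds, constraints=cons)
print(res.x, res.fun)
```

Even with the constraint handled, the wildly different parameter scales (omega around 1e-6 versus gamma around 200) leave the problem ill-conditioned, so rescaling or log-transforming the parameters before optimizing is usually worth it.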

Quoted from: https://quant.stackexchange.com/questions/53014