Stochastic Processes

How to determine the order of convergence of the Euler-Maruyama method?

  • June 13, 2019

For simplicity, let us consider geometric Brownian motion (GBM).

My questions:

  • 1. How can one show, using GBM, that the Euler-Maruyama method converges?
  • 2. How does one determine the order of convergence?

According to the theory of numerical analysis, **an SDE solver is of order $ p $ if the expected value of the error is of order $ p $ in the time step**.
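
To make this precise (these are the standard definitions, cf. the Kloeden reference in the answer below): for a discrete approximation $ X^{\Delta t} $ of the solution $ X $ ,

$$ \text{strong order } \gamma: \quad E\big[\,|X_T - X_T^{\Delta t}|\,\big] \le C\,\Delta t^{\gamma}, \qquad \text{weak order } \beta: \quad \big|\,E[g(X_T)] - E\big[g(X_T^{\Delta t})\big]\,\big| \le C\,\Delta t^{\beta} $$

for suitable test functions $ g $ . Comparing a sample mean of $ S_{10} $ against $ e $ , as done below, probes the weak error with $ g(x)=x $ .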

Now let $ S_t $ be a GBM with $ S_0=1 $ , $ \mu=0.1 $ and $ \sigma=0.15 $ . Then $ E[S_{10}]=e $ .
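
This follows from the known mean of geometric Brownian motion:

$$ E[S_T] = S_0\,e^{\mu T} \quad\Longrightarrow\quad E[S_{10}] = 1\cdot e^{0.1\cdot 10} = e \approx 2.71828. $$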

When I run the solver with $ 10,000 $ paths for various step sizes $ dt $ , I expect the difference between my sample mean and the true mean, $ e $ , to shrink as $ dt $ becomes smaller. However, when I run these simulations, this is what I get:

[figure: loglog plot of the absolute error against $ dt $ , with a $ 0.05\,dt^{0.5} $ reference line]

This does not suggest that the error converges to zero as **$ dt $ goes to zero**. Why is that?

In case anyone is interested in the code:

import matplotlib.pyplot as plt
import numpy as np

T = 10
mu = 0.1
sigma = 0.15
S0 = 1
ns = 10000


solution = S0*np.exp(mu*T)   # exact mean E[S_T]


dt_ = np.array([0.1,0.05,0.01,0.005])
err = np.zeros(len(dt_))
for j in range(len(dt_)):
    dt = dt_[j]
    Sn = np.zeros(ns)
    for i in range(ns):
        N = int(round(T/dt))
        t = np.linspace(0, T, N + 1)
        dW = np.random.standard_normal(size=N)
        W = np.concatenate(([0.0], np.cumsum(dW)))*np.sqrt(dt)  ### standard Brownian motion with W(0) = 0 ###
        X = (mu - 0.5*sigma**2)*t + sigma*W
        S = S0*np.exp(X)  ### geometric Brownian motion (exact solution along the path) ###
        Sn[i] = S[-1]

    mn = np.mean(Sn)
    print(mn)
    err[j] = abs(mn - solution)

plt.clf()
plt.loglog(dt_, err, 'o-', color="black", label="Error (abs)")
plt.loglog(dt_, 0.05*dt_**0.5, linestyle=":", label="reference slope 0.5")
plt.xlabel("dt", fontsize=20)
plt.ylabel("Error (abs)", fontsize=20)
plt.title("loglog-plot", fontsize=30)
plt.legend()

The code is mostly just copy-pasted from StackOverflow.
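
A remark on this code: the inner loop evaluates the exact GBM solution $ S_t = S_0\exp\big((\mu-\tfrac{1}{2}\sigma^2)t + \sigma W_t\big) $ along the sampled path, so there is no discretization error to measure in the first place; what the plot shows is pure Monte Carlo sampling error, which behaves like $ 1/\sqrt{ns} $ no matter how small $ dt $ is. For comparison, here is a minimal sketch (not from the original post) of an actual Euler-Maruyama discretization of $ dS = \mu S\,dt + \sigma S\,dW $ with the same parameters:

import numpy as np

T, mu, sigma, S0, ns = 10, 0.1, 0.15, 1.0, 10000

for dt in [0.1, 0.05, 0.01, 0.005]:
    N = int(round(T/dt))
    S = np.full(ns, S0)                      # all paths start at S0
    for _ in range(N):
        dW = np.sqrt(dt)*np.random.standard_normal(ns)
        S += mu*S*dt + sigma*S*dW            # one Euler-Maruyama step per path
    weak_err = abs(np.mean(S) - S0*np.exp(mu*T))
    print(dt, weak_err)

Even here, with 10,000 paths the statistical noise (roughly $ 0.01 $ ) will swamp the weak bias for small $ dt $ , so resolving the weak order requires many more paths or variance reduction.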

For GBM you can show it through theory (see Kloeden), or empirically as below. You will observe that the strong convergence order of EM equals 0.5, while Milstein has order 1.0. The latter is not always superior: for example, plain EM beats plain Milstein for Heston. In such cases it is better to use adapted schemes (Giles), especially when dealing with jumps.
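
For reference, for an SDE $ dX = a(X)\,dt + b(X)\,dW $ the two schemes compared below are

$$ X_{n+1}^{\mathrm{EM}} = X_n + a(X_n)\,\Delta t + b(X_n)\,\Delta W_n, \qquad X_{n+1}^{\mathrm{Mil}} = X_{n+1}^{\mathrm{EM}} + \tfrac{1}{2}\,b(X_n)\,b'(X_n)\big(\Delta W_n^2 - \Delta t\big), $$

which is exactly what the update lines for Xem and Xmil in the code implement.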

In the code below you must use the inner loop over deltaW for each p (Higham). This loop is necessary to see the effect of changing the time step on convergence in the strong sense: the coarse-grid Brownian increments have to be built from the same fine-grid path. You do not need this for the weak error, since there we only require the mean of the solution, but for the strong case below this coupling is essential. The code computes the strong error at the endpoint of the m-th sample path for the p-th step size. That is useful, e.g., for pricing some European options, but for strongly path-dependent options you must store arrays with the complete paths, i.e., the solution at every $ t $ (a sketch of this follows below).
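
As a purely illustrative sketch of that last point (hypothetical, not part of the original answer): keep the whole EM path in an array so that a path-dependent payoff, e.g. an arithmetic-average Asian call, can be evaluated:

import numpy as np

r, sigma, S0, K, T = 0.07, 0.1, 80.0, 100.0, 1.0
M, N = 10000, 2**9                  # sample paths, time steps
dt = T/N

S = np.empty((M, N + 1))            # store the full path, not just the endpoint
S[:, 0] = S0
for j in range(N):
    dW = np.random.normal(0.0, np.sqrt(dt), M)
    S[:, j + 1] = S[:, j] + r*S[:, j]*dt + sigma*S[:, j]*dW

payoff = np.maximum(S.mean(axis=1) - K, 0.0)    # arithmetic-average Asian call
print(np.exp(-r*T)*payoff.mean())               # discounted Monte Carlo price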

To see the effect, just copy and paste :)

import numpy as np
import matplotlib.pyplot as plt

def slope(steps, errors):
    # Least-squares fit of log10(error) = log10(C) + q*log10(Dt); q is the empirical order
    steps = steps.reshape(-1, 1)
    A = np.hstack((np.ones((len(steps), 1)), np.log10(steps)))
    rhs = np.log10(errors)
    sol = np.linalg.lstsq(A, rhs, rcond=None)[0]
    q = sol[1]
    resid = np.linalg.norm(np.dot(A, sol) - rhs)
    print('The rate of convergence is: %1.6f and residual = %1.6f' % (q, resid))

def a(X):     # drift coefficient of dX = a(X) dt + b(X) dW
    return r*X

def b(X):     # diffusion coefficient
    return sigma*X

def bd(X):    # derivative b'(X), needed for the Milstein correction
    return sigma*np.ones_like(X)

# Problem definition
M = 10000; T = 1; N = 2**9      # sample paths, horizon, finest number of steps
P = 9                           # number of step sizes: dt, 2*dt, ..., 2**(P-1)*dt
dt = 1.0*T/N
r = 0.07; sigma = 0.1; S0 = 80.0; K = 100

dW = np.random.normal(0.0, np.sqrt(dt), (M, N+1))
dW[:, 0] = 0.0                   # so that W(0) = 0 after the cumulative sum
W = np.cumsum(dW, axis=1)        # fine-grid Brownian paths, shape (M, N+1)
# Exact solution at time T along the same Brownian paths
ones = np.ones(M)
Xexact = S0*np.exp((r - 0.5*sigma**2)*T*ones + sigma*W[:, -1])

#np.random.seed(1234)
Xemerr = np.empty((M, P))
Xmilerr = np.empty((M, P))
DDt = np.ones(P)
for p in range(P):
    R = 2**p; L = N // R; Dt = R*dt     # coarsen the fine grid by a factor R
    DDt[p] = Dt
    Xem = S0*ones
    Xmil = S0*ones
    Wc = W[:, ::R]                      # Brownian path sampled on the coarse grid
    for j in range(L):
        deltaW = Wc[:, j+1] - Wc[:, j]  # coarse increment built from the same fine path
        Xem += Dt*a(Xem) + deltaW*b(Xem)
        Xmil += Dt*a(Xmil) + deltaW*b(Xmil) + 0.5*b(Xmil)*bd(Xmil)*(deltaW**2 - Dt)
    Xemerr[:, p] = np.abs(Xem - Xexact)
    Xmilerr[:, p] = np.abs(Xmil - Xexact)
Xemerr_tot_strong = np.mean(Xemerr, axis=0)
Xmilerr_tot_strong = np.mean(Xmilerr, axis=0)

slope(DDt, Xemerr_tot_strong)
slope(DDt, Xmilerr_tot_strong)

fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(111)
ax.loglog(DDt,Xemerr_tot_strong, 'm-', label='EM')
ax.loglog(DDt,Xemerr_tot_strong,linestyle='None', marker='o',markersize='6', markerfacecolor='c',markeredgewidth='1.7', markeredgecolor='m')
ax.loglog(DDt,Xmilerr_tot_strong, 'r-', label='Milstein')
ax.loglog(DDt,Xmilerr_tot_strong,linestyle='None', marker='o',markersize='6', markerfacecolor='g',markeredgewidth='1.7', markeredgecolor='r')
ax.loglog(DDt,DDt, 'c--')   # slope-1 reference line
ax.grid(True, which='both', ls='--')
ax.legend(loc='best')
ax.set_xlabel(r'$\Delta t$'); ax.set_ylabel(r'$E|X(T) - X_{exact}(T)|$')
ax.set_title('Strong Error for Geometric Brownian Motion', fontsize=16)
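
Running the script prints the two rates fitted by slope; consistent with the discussion above, they should come out close to 0.5 for EM and close to 1.0 for Milstein, with the dashed line in the plot serving as a slope-1 reference.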

Quoted from: https://quant.stackexchange.com/questions/45466