Risk-Neutral-Measure

Can the historical probability be the same as the risk neutral probability measure?

  • January 11, 2019

In particular, let's consider a zero-beta asset $ i $ (in the CAPM sense). Let

  1. $ R_f $ be the risk free rate
  2. $ R_i $ the return on the asset $ i $
  3. $ R_m $ the return on the market portfolio
  4. $ \beta=\frac{Cov(R_i,R_m)}{Var(R_m)} $
  5. $ E_P $ (resp. $ E_Q $) the expectation under the historical probability $ P $ (resp. the risk-neutral probability $ Q $)

By the martingale property, the following identity holds: $ E_Q[R_i]=R_f $

By the CAPM the following holds:

$ E_P[R_i]=R_f+\beta E_P[R_m-R_f]= E_Q[R_i]+\beta E_P[R_m-R_f] $

If we assume $ \beta=0 $, then $ E_P[R_i]=E_Q[R_i] $.

My question is: does $ E_P[R_i]=E_Q[R_i] $ imply $ P=Q $ ?

It seems you’ve got a simple linear algebra question masked by a bunch of superfluous finance theory.

  • Question: Does $ \operatorname{E}_P[R] = \operatorname{E}_Q[R] $ imply $ P = Q $ ?
  • Answer: No

For technical simplicity, let's consider a probability space with three possible outcomes, so that a random variable or a probability measure can be written as a vector in $ \mathbb{R}^3 $.

Counterexample: $$ P = \begin{bmatrix} \frac{1}{3} \\ \frac{1}{3} \\ \frac{1}{3} \end{bmatrix} \quad Q = \begin{bmatrix} \frac{1}{6} \\ \frac{1}{2} \\ \frac{1}{3} \end{bmatrix} \quad \quad R = \begin{bmatrix} 1 \\ 1 \\ \frac{1}{4} \end{bmatrix} $$

You can easily observe that $ \sum_i P_iR_i = \sum_i Q_iR_i = \frac{3}{4} $ while $ P \neq Q $ .
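If you want to check the arithmetic, here is a quick sanity check, a minimal sketch in Python (NumPy assumed):

```python
import numpy as np

# The two measures and the payoff vector from the counterexample above.
P = np.array([1/3, 1/3, 1/3])
Q = np.array([1/6, 1/2, 1/3])
R = np.array([1.0, 1.0, 0.25])

# Both are valid probability measures...
assert np.isclose(P.sum(), 1.0) and np.isclose(Q.sum(), 1.0)

# ...with the same expectation of R, even though P != Q.
print(P @ R, Q @ R)        # 0.75 0.75
print(np.allclose(P, Q))   # False
```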

On the other hand, let $ \mathcal{U} = \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} $ (or any set of vectors that forms a basis). If $ \operatorname{E}_P[R] = \operatorname{E}_Q[R] $ for every $ R \in \mathcal{U} $, then $ P = Q $.
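The linear algebra behind this is a simple rank argument: a single payoff imposes one linear constraint on $ P - Q $, while a basis imposes a full set of them. Here is a minimal sketch in Python (NumPy assumed):

```python
import numpy as np

n = 3  # number of outcomes

# A single payoff R gives one linear condition (P - Q) . R = 0,
# whose solution space is (n - 1)-dimensional: many distinct
# measures share the same expectation of R.
R = np.array([[1.0, 1.0, 0.25]])
print(n - np.linalg.matrix_rank(R))   # 2

# A basis of R^3 gives n independent conditions U (P - Q) = 0,
# whose only solution is P - Q = 0: the measures must coincide.
U = np.eye(3)                          # the standard basis from the text
print(n - np.linalg.matrix_rank(U))   # 0
```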

No, for two reasons. The first is that the probabilities are conditioned on a risk-neutral actor. In some mathematical systems, it is impossible to separate utility and probability. Bruno de Finetti argued that probability does not exist. It is a concept of the mind that helps us understand our world, but it isn’t actually real. The same thing would be true for the economists’ concept of utility. Have you ever calculated your utility of purchasing toothpaste, or did you just experience the decision through your feelings?

The intent of using risk-neutral probabilities is to permit the disentanglement of probability and utility by conditioning the model on a specific utility function. In Frequentist statistics, this type of conditioning is not uncommon. After all, when you condition a model on $ \beta=0 $, you are forcing a probability structure that permits you to falsify the null hypothesis that it is zero. If you change your null, you change your probability structure.

The second issue is that Frequentist probabilities are not really probabilities. At best they are worst-case frequencies: Frequentist probabilities are minimax distributions. That guarantees you an $ \alpha $ level of protection against false positives, but it forces a material distortion in the frequencies. It guarantees that, no matter what parameter you really face, the true probabilities will be no worse than the ones you expected to see.

Bayesian probabilities are true probabilities, and they can be gambled on, but they include subjective information. For example, imagine you had two engineers: one with thirty years of experience across a broad range of projects, and another who has freshly graduated. The new engineer is working on a design and takes physical samples from the ground near a project.

The engineer performs statistical calculations using either a Frequentist model or a Bayesian model with a “flat” prior. The results imply a design. The senior engineer rejects the design, saying it could collapse. The senior engineer recalculates the Bayesian solution using the information he or she has collected over thirty years of experience, plus data from relevant empirical studies. The altered calculations imply a different design.

The strange thing is that both engineers' calculations are valid to them, so their probabilities are also valid. The senior engineer's probability statements contain more information, so they carry less risk of a bad parameter estimate; but had the junior engineer not had access to the senior engineer, the design would have been built according to the original plan, which is a gamble. Due to the lack of information, it may have been a bad gamble; or possibly the ground there is unusual, the sample was representative, and it was a good gamble.

With a large enough sample, the effect of outside knowledge becomes nominal, but for small samples outside knowledge may strongly condition the inference. The somewhat interesting upshot is that if you were a bookie gambling on things such as bridge collapses and you used a Bayesian method, then a con man or some other clever actor couldn't force you into a sure loss, no matter what set of events happened. A Frequentist bookie, by contrast, can in some cases be tricked into a 100% certain loss.

The price you pay for the assurance of being able to gamble is that two economists solving the same problem can get two different answers. Imagine grading students' papers when each has made a different set of assumptions.

The good news is that Bayesian probabilities converge to true probabilities as the sample size becomes large enough.
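To make the engineer story concrete, here is a hedged sketch of the two calculations as a Beta-Binomial model for a failure rate; the model choice, the priors, and all of the sample numbers are hypothetical illustrations, not from the original post:

```python
# Posterior mean of a failure rate under a Beta(a, b) prior after
# observing `failures` failures in `n` ground samples.
def posterior_mean(a, b, failures, n):
    return (a + failures) / (a + b + n)

# Junior engineer: "flat" Beta(1, 1) prior.
# Senior engineer: hypothetical informative prior, roughly equivalent
# to 300 prior observations' worth of experience and empirical studies.
small = dict(failures=1, n=5)
print(posterior_mean(1, 1, **small))     # ~0.286 -> implies one design
print(posterior_mean(3, 297, **small))   # ~0.013 -> implies another

# With a large enough sample, the prior washes out and the two
# posteriors converge toward the same failure rate.
large = dict(failures=2000, n=10000)
print(posterior_mean(1, 1, **large))     # ~0.200
print(posterior_mean(3, 297, **large))   # ~0.194
```

Both engineers apply the same valid machinery; on five samples their answers diverge because of the prior, while on ten thousand samples they nearly agree.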

Source: https://quant.stackexchange.com/questions/43423