Financial Risk Management
Quantitative Analysis: Fundamentals of Probability
Following P. Jorion, Financial Risk Management, Chapter 2
Daniel HERLEMONT
Random Variables

Values and probabilities. Distribution function (cumulative probability). Example: a die with 6 faces.
Random Variables

Distribution function of a random variable X: F(x) = P(X ≤ x), the probability of observing x or less.

If X is discrete:
F(x) = Σ_{xᵢ ≤ x} f(xᵢ)

If X is continuous:
F(x) = ∫_{−∞}^{x} f(u) du

Note that f(x) = dF(x)/dx.
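For the six-sided die mentioned above, the discrete distribution function can be sketched in a few lines of Python (the code is an illustration added here, not part of the original slides):

```python
from fractions import Fraction

# Fair six-sided die: f(x_i) = 1/6 for each face x_i in 1..6.
f = {x: Fraction(1, 6) for x in range(1, 7)}

def F(x):
    """Distribution function F(x) = P(X <= x) = sum of f(x_i) over x_i <= x."""
    return sum((p for xi, p in f.items() if xi <= x), Fraction(0))

print(F(3))  # 1/2 -- three of six faces show 3 or less
print(F(6))  # 1 -- total probability
```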
Random Variables

The probability density function of a random variable X has the following properties:

f(x) ≥ 0

∫_{−∞}^{∞} f(u) du = 1
Probability Density and Cumulative Functions

[Figure: a probability density function and its cumulative distribution function]
Multivariate Distribution Functions

Joint distribution function:
F₁₂(x₁, x₂) = P(X₁ ≤ x₁, X₂ ≤ x₂)

F₁₂(x₁, x₂) = ∫_{−∞}^{x₁} ∫_{−∞}^{x₂} f₁₂(u₁, u₂) du₁ du₂

where f₁₂(u₁, u₂) is the joint density.
Independent Variables

X₁ and X₂ are independent when the joint density and joint distribution factor into their marginals:

f₁₂(u₁, u₂) = f₁(u₁) × f₂(u₂)
F₁₂(u₁, u₂) = F₁(u₁) × F₂(u₂)

Credit exposure in a swap depends on two random variables: default and exposure. If the two variables are independent, one can construct the distribution of the credit loss easily.
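As an illustration of the credit-loss remark, the sketch below simulates the product of an independent default indicator and exposure. The default probability and exposure range are made-up parameters, not figures from the slides:

```python
import random

random.seed(42)
p_default = 0.02   # hypothetical default probability
n = 100_000

losses = []
for _ in range(n):
    default = 1 if random.random() < p_default else 0   # default indicator
    exposure = random.uniform(0, 100)                    # independent exposure
    losses.append(default * exposure)                    # credit loss

# Independence => E[loss] = E[default] * E[exposure] = 0.02 * 50 = 1.0
mean_loss = sum(losses) / n
print(round(mean_loss, 2))   # close to 1.0
```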
Conditioning

Marginal density:
f₁(x₁) = ∫_{−∞}^{∞} f₁₂(x₁, u₂) du₂

Conditional density:
f₁|₂(x₁ | x₂) = f₁₂(x₁, x₂) / f₂(x₂)
Moments

Mean = Average = Expected value:
µ = E(X) = ∫_{−∞}^{∞} x f(x) dx

Variance:
σ² = V(X) = ∫_{−∞}^{∞} (x − E(X))² f(x) dx

σ = Standard Deviation = √Variance
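The moment integrals can be checked numerically. The sketch below uses an exponential density with rate λ = 2 (an illustrative choice, not from the slides; its true mean is 1/λ and variance 1/λ²) and a simple midpoint rule:

```python
import math

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # exponential density, x >= 0

def integrate(g, a, b, n=200_000):
    """Midpoint rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

mu = integrate(lambda x: x * f(x), 0.0, 50.0)               # E(X)
var = integrate(lambda x: (x - mu) ** 2 * f(x), 0.0, 50.0)  # V(X)

print(round(mu, 4), round(var, 4))   # 0.5 0.25
```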
Probabilities

[Figure: a probability distribution with its mean and variance indicated]
Probabilities

[Figure: bar chart of a discrete distribution over outcomes 1–5, with individual probabilities of 10%–30% summing to one: Σᵢ pᵢ = 1]
Probabilities

[Figure: continuous density over the same range; the total probability integrates to one: ∫ dp = 1]
Covariance and correlation:

Cov(X₁, X₂) = E[(X₁ − E[X₁])(X₂ − E[X₂])]

ρ(X₁, X₂) = Cov(X₁, X₂) / (σ₁σ₂)

Skewness (non-symmetry):
γ = (1/σ³) E[(X − E[X])³]

Kurtosis (fat tails):
δ = (1/σ⁴) E[(X − E[X])⁴]
Main Properties

E(a + bX) = a + b E(X)
σ(a + bX) = |b| σ(X)
E(X₁ + X₂) = E(X₁) + E(X₂)
σ²(X₁ + X₂) = σ²(X₁) + σ²(X₂) + 2 Cov(X₁, X₂)
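The variance-of-sum property can be verified on simulated data; with sample moments computed under the same normalization, the identity holds exactly up to floating-point error (the distributions chosen below are illustrative):

```python
import random

random.seed(0)
n = 50_000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.5 * a + random.gauss(0, 1) for a in x1]   # correlated with x1

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

s = [a + b for a, b in zip(x1, x2)]
lhs = cov(s, s)                                       # sigma^2(X1 + X2)
rhs = cov(x1, x1) + cov(x2, x2) + 2 * cov(x1, x2)
print(abs(lhs - rhs) < 1e-9)                          # True
```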
Portfolio of Random Variables

Y = Σᵢ₌₁ᴺ wᵢ Xᵢ = wᵀX

E(Y) = µₚ = wᵀE(X) = wᵀµ = Σᵢ₌₁ᴺ wᵢ µᵢ

σ²(Y) = wᵀΣw = Σᵢ₌₁ᴺ Σⱼ₌₁ᴺ wᵢ σᵢⱼ wⱼ
Portfolio of Random Variables

In matrix form:

σ²(Y) = (w₁, w₂, …, w_N) Σ (w₁, w₂, …, w_N)ᵀ

where Σ is the N×N covariance matrix, with first row (σ₁₁, …, σ₁N) through last row (σ_N1, …, σ_NN).
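A small numerical sketch of the portfolio formulas, with assumed weights, expected returns, and covariances (not values from the slides):

```python
# Assumed two-asset inputs (illustrative only).
w = [0.6, 0.4]                 # portfolio weights
mu = [0.08, 0.12]              # expected returns
Sigma = [[0.040, 0.012],       # covariance matrix entries sigma_ij
         [0.012, 0.090]]

port_mean = sum(wi * mi for wi, mi in zip(w, mu))   # w^T mu
port_var = sum(w[i] * Sigma[i][j] * w[j]            # w^T Sigma w
               for i in range(2) for j in range(2))

print(round(port_mean, 4))   # 0.096
print(round(port_var, 5))    # 0.03456
```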
Product of Random Variables

Credit loss derives from the product of the probability of default and the loss given default.

E(X₁X₂) = E(X₁)E(X₂) + Cov(X₁, X₂)

When X₁ and X₂ are independent:
E(X₁X₂) = E(X₁)E(X₂)
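The decomposition E(X₁X₂) = E(X₁)E(X₂) + Cov(X₁, X₂) can be checked on simulated data; for sample moments it is an exact algebraic identity (the distributions below are illustrative):

```python
import random

random.seed(1)
n = 50_000
x1 = [random.gauss(1.0, 0.5) for _ in range(n)]
x2 = [0.3 * a + random.gauss(2.0, 1.0) for a in x1]   # correlated with x1

def mean(v):
    return sum(v) / len(v)

m1, m2 = mean(x1), mean(x2)
cov12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / n
lhs = mean([a * b for a, b in zip(x1, x2)])           # E(X1 X2)
rhs = m1 * m2 + cov12                                 # E(X1)E(X2) + Cov(X1, X2)
print(abs(lhs - rhs) < 1e-9)                          # True
```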
Transformation of Random Variables

Consider a zero-coupon bond:
V = 100 / (1 + r)ᵀ

If r = 6% and T = 10 years, V = $55.84. We wish to estimate the probability that the bond price falls below $50. This price corresponds to a yield of 7.178%.
Example

The probability of this event can be derived from the distribution of yields. Assume that yield changes are normally distributed with mean zero and volatility 0.8%. The required change is 7.178% − 6% = 1.178%, and the probability of a change that large is 7.06%.
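The bond example can be reproduced end to end with the standard library, using math.erf for the normal CDF:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Zero-coupon bond: V = 100 / (1 + r)**T
r, T = 0.06, 10
V = 100 / (1 + r) ** T
print(round(V, 2))             # 55.84

# Yield r* at which the price hits $50: (1 + r*)**T = 2
r_star = 2 ** (1 / T) - 1
print(round(100 * r_star, 3))  # 7.177 (%)

# P(price < 50) = P(yield change > r* - r), yield changes ~ N(0, 0.008**2)
z = (r_star - r) / 0.008
prob = 1 - norm_cdf(z)
print(round(100 * prob, 2))    # about 7.06 (%)
```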
Quantile

The quantile (loss/profit x with probability c) solves:

F(x) = ∫_{−∞}^{x} f(u) du = c

The 50% quantile is called the median.

Quantiles are very useful in the definition of VaR.
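A quantile sketch using statistics.NormalDist from the standard library; the $1,000,000 daily P&L volatility is an assumed figure, not from the slides:

```python
from statistics import NormalDist

# Assumed daily P&L distribution: N(0, sigma^2) with sigma = $1,000,000.
pnl = NormalDist(mu=0.0, sigma=1_000_000.0)

q01 = pnl.inv_cdf(0.01)    # 1% quantile: the x solving F(x) = 0.01
var99 = -q01               # 99% VaR, reported as a positive loss
median = pnl.inv_cdf(0.5)  # 50% quantile

print(round(var99))        # about 2326348, i.e. 2.33 sigma
print(round(median, 6))    # 0.0
```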
Quantile

[Figure: density with the 1% quantile marked in the left tail, below the mean µ]
VaR

[Figure: VaR illustrated on the distribution of profits and losses]
Uniform Distribution

The uniform distribution is defined over a range of values a ≤ x ≤ b.

f(x) = 1/(b − a), a ≤ x ≤ b

E(X) = (a + b)/2, σ²(X) = (b − a)²/12

F(x) = 0 for x ≤ a; (x − a)/(b − a) for a ≤ x ≤ b; 1 for b ≤ x
Uniform Distribution

[Figure: uniform density of constant height 1/(b − a) between a and b]
Normal Distribution

The normal distribution is defined by its mean and variance:

f(x) = (1/(σ√(2π))) e^{−(x − µ)²/(2σ²)}

E(X) = µ, σ²(X) = σ²

The cumulative distribution of the standard normal is denoted by N(x).
Normal Distribution

About 68% of events lie between −1 and +1 standard deviations, and about 95% between −2 and +2.

[Figure: standard normal density over −3 to +3]
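The 68%/95% figures are rounded; the exact probabilities are easy to check with the standard library:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal

within1 = Z.cdf(1) - Z.cdf(-1)   # P(-1 <= Z <= 1)
within2 = Z.cdf(2) - Z.cdf(-2)   # P(-2 <= Z <= 2)

print(round(within1, 4))  # 0.6827
print(round(within2, 4))  # 0.9545
```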
Normal Distribution

[Figure: standard normal cumulative distribution function over −3 to +3, rising from 0 to 1]
Normal Distribution

- symmetric around the mean
- mean = median
- skewness = 0
- kurtosis = 3
- a linear combination of (jointly) normal variables is normal
Central Limit Theorem

The mean of n independent and identically distributed variables converges to a normal distribution as n increases:

X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ

X̄ → N(µ, σ²/n)
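A quick simulation of the CLT, using means of uniform(0, 1) draws (for which µ = 1/2 and σ² = 1/12); the sample sizes are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(7)
n, trials = 100, 20_000

# Each trial: the mean of n uniform(0, 1) draws.
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

print(round(statistics.fmean(means), 3))     # close to mu = 0.5
print(round(statistics.variance(means), 5))  # close to sigma^2/n = 1/1200
```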
Lognormal Distribution

The normal distribution is often used for rates of return. X is lognormally distributed if ln X is normally distributed. A lognormal variable takes no negative values!

f(x) = (1/(xσ√(2π))) e^{−(ln x − µ)²/(2σ²)}

E(X) = e^{µ + σ²/2}, σ²(X) = e^{2µ + 2σ²} − e^{2µ + σ²}

E(ln X) = µ, σ²(ln X) = σ²
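The moment formula E(X) = e^{µ+σ²/2} and the no-negative-values property can be checked by simulation; the µ and σ below are arbitrary illustrative values:

```python
import math
import random

random.seed(3)
mu, sigma = 0.1, 0.3   # parameters of the underlying normal (illustrative)
n = 200_000

xs = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]

sample_mean = sum(xs) / n
theory = math.exp(mu + sigma ** 2 / 2)
print(round(sample_mean, 3), round(theory, 3))  # both close to 1.156

assert min(xs) > 0   # a lognormal variable takes no negative values
```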
Lognormal Distribution

If r is the expected value of the lognormal variable X, the mean of the associated normal variable is r − 0.5σ².

[Figure: lognormal density over roughly 0 to 3]
Student t Distribution

Arises in hypothesis testing, as it describes the distribution of the ratio of an estimated coefficient to its standard error. k denotes the degrees of freedom.

f(x) = [Γ((k + 1)/2) / (Γ(k/2) √(kπ))] × (1 + x²/k)^{−(k+1)/2}

where Γ(k) = ∫₀^∞ x^{k−1} e^{−x} dx
Student t Distribution

As k increases, the t distribution tends to the normal. It is symmetrical with mean zero and, for k > 2, variance

σ²(x) = k / (k − 2)

The tails of the t distribution are fatter than those of the normal.
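The variance formula k/(k − 2) can be checked by simulating t variables from their standard construction, a standard normal divided by the square root of an independent chi-square over its degrees of freedom (k and the sample size are illustrative choices):

```python
import math
import random

random.seed(5)
k, n = 10, 100_000

def t_draw():
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(k))  # chi-square, k d.o.f.
    return z / math.sqrt(chi2 / k)

ts = [t_draw() for _ in range(n)]
var = sum(t * t for t in ts) / n   # mean is zero, so E[t^2] is the variance
print(round(var, 2))               # close to k/(k-2) = 1.25
```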
Binomial Distribution

A discrete random variable with density function:

f(x) = C(n, x) pˣ (1 − p)ⁿ⁻ˣ, x = 0, 1, …, n

E(X) = pn, σ²(X) = p(1 − p)n

For large n it can be approximated by a normal:

z = (x − pn) / √(p(1 − p)n) ~ N(0, 1)
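The normal approximation can be compared against the exact binomial tail; n, p, and the cutoff below are illustrative values, not from the slides:

```python
import math
from statistics import NormalDist

n, p = 100, 0.3
mean, sd = n * p, math.sqrt(n * p * (1 - p))

def binom_cdf(k):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(k + 1))

exact = binom_cdf(35)                            # P(X <= 35)
approx = NormalDist().cdf((35.5 - mean) / sd)    # with continuity correction

print(round(exact, 3), round(approx, 3))
```

The 0.5 added to the cutoff is a continuity correction, which noticeably improves the approximation for moderate n.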
FRM Exam questions