Optimized localization and hybridization to filter ensemble-based covariances
Benjamin Ménétrier and Tom Auligné
NCAR - Boulder - Colorado
NASA GSFC - 07/21/2015

Acknowledgement: AFWA

Introduction

Context:
• DA often relies on forecast error covariances.
• This matrix can be sampled from an ensemble of forecasts.
• Sampling noise arises because of the limited ensemble size.
• Systematic error arises because of ensemble misspecifications.
• Question: how to tackle both errors?

Usual methods:
• Covariance localization → tapering with a localization matrix
• Covariance hybridization → linear combination with a static covariance matrix

Questions:
1. Can we compute an optimized localization with a method:
   • using data from the ensemble only,
   • affordable for high-dimensional systems?
2. Can localization and hybridization be considered together, and optimized simultaneously?
3. Does hybridization always improve the accuracy of forecast error covariances?
4. Can ensemble systematic error be taken into account?

Outline
• Introduction
• Objectively optimized localization
• Jointly optimized localization and hybridization
• Accounting for systematic error
• Conclusions

Objectively optimized localization

Covariance sampling

An ensemble of $N$ forecasts $\{\widetilde{x}^b_p\}$ is used to sample $\widetilde{B}$:

$$\widetilde{B} = \frac{1}{N-1} \sum_{p=1}^{N} \delta\widetilde{x}^b_p \left(\delta\widetilde{x}^b_p\right)^{\mathrm{T}}$$

where $\delta\widetilde{x}^b_p = \widetilde{x}^b_p - \langle \widetilde{x}^b \rangle$ and $\langle \widetilde{x}^b \rangle = \frac{1}{N} \sum_{p=1}^{N} \widetilde{x}^b_p$.

Asymptotic behavior: if $N \to \infty$, then $\widetilde{B} \to \widetilde{B}^\star$.

In practice, $N < \infty$ $\Rightarrow$ sampling noise $\widetilde{B}^e = \widetilde{B} - \widetilde{B}^\star$.
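A minimal NumPy sketch of this estimator (the function and variable names are ours, not from the talk):

```python
import numpy as np

def sample_covariance(ensemble):
    """Sample covariance B~ from an ensemble of N forecasts.

    ensemble: array of shape (N, n), one forecast of dimension n per row.
    """
    N = ensemble.shape[0]
    # Perturbations about the ensemble mean <x~b>
    dx = ensemble - ensemble.mean(axis=0)
    # B~ = 1/(N-1) * sum_p dx_p dx_p^T
    return dx.T @ dx / (N - 1)
```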

Sampling noise properties

Simple 1D examples (figures in the original slides):
• Heterogeneous variances: the sampling noise amplitude is related to the asymptotic variance.
• Heterogeneous length-scales: the sampling noise length-scale is related to the asymptotic length-scale.

Theory of sampling noise

Covariance of sampled covariances:

$$\mathrm{Cov}\left(\widetilde{B}_{ij}, \widetilde{B}_{kl}\right) = \mathrm{Cov}\left(\widetilde{B}^\star_{ij}, \widetilde{B}^\star_{kl}\right) + \frac{1}{N} E\left[\widetilde{\Xi}^\star_{ijkl}\right] - \frac{1}{N} E\left[\widetilde{B}^\star_{ij} \widetilde{B}^\star_{kl}\right] + \frac{1}{N(N-1)} \left( E\left[\widetilde{B}^\star_{ik} \widetilde{B}^\star_{jl}\right] + E\left[\widetilde{B}^\star_{il} \widetilde{B}^\star_{jk}\right] \right)$$

involving:
• the ensemble size $N$,
• the asymptotic covariance $\widetilde{B}^\star$,
• the asymptotic fourth-order centered moment $\widetilde{\Xi}^\star$.

Expectation of the sample fourth-order centered moment $\widetilde{\Xi}_{ijkl}$:

$$E\left[\widetilde{\Xi}_{ijkl}\right] = \frac{(N-1)(N^2-3N+3)}{N^3} E\left[\widetilde{\Xi}^\star_{ijkl}\right] + \frac{(N-1)(2N-3)}{N^3} \left( E\left[\widetilde{B}^\star_{ij} \widetilde{B}^\star_{kl}\right] + E\left[\widetilde{B}^\star_{ik} \widetilde{B}^\star_{jl}\right] + E\left[\widetilde{B}^\star_{il} \widetilde{B}^\star_{jk}\right] \right)$$
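A sketch of the corresponding sample estimator; the $1/N$ normalization is our reading of the convention implied by the expectation identity above:

```python
import numpy as np

def sample_fourth_moment(ensemble, i, j, k, l):
    """Sample fourth-order centered moment Xi~_ijkl.

    Assumes Xi~_ijkl = 1/N * sum_p dx_p,i dx_p,j dx_p,k dx_p,l,
    with perturbations dx taken about the ensemble mean.
    """
    N = ensemble.shape[0]
    dx = ensemble - ensemble.mean(axis=0)
    return np.sum(dx[:, i] * dx[:, j] * dx[:, k] * dx[:, l]) / N
```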

Covariance localization

Localization = Schur product with a localization matrix $L$:
• Covariance matrix: $\widehat{B} = L \circ \widetilde{B}$
• Increment: $\delta\widetilde{x} = \frac{1}{\sqrt{N-1}} \sum_{p=1}^{N} \delta\widetilde{x}^b_p \circ \left(L^{1/2} v^\alpha_p\right)$
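A sketch of distance-based localization with the Gaspari-Cohn taper, a standard choice (not necessarily the one used in the talk); 'dist' and 'length_scale' are illustrative inputs:

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper; compactly supported, zero for r >= 2."""
    r = np.abs(np.asarray(r, dtype=float))
    taper = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    x = r[inner]
    taper[inner] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5.0/3.0)*x**2 + 1.0
    x = r[outer]
    taper[outer] = (x**5)/12.0 - 0.5*x**4 + 0.625*x**3 + (5.0/3.0)*x**2 - 5.0*x + 4.0 - (2.0/3.0)/x
    return taper

def localize(B_sample, dist, length_scale):
    """Schur (element-wise) product B_hat = L o B~ with a distance-based L."""
    L = gaspari_cohn(dist / length_scale)
    return L * B_sample
```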


Localization optimization An "optimal" localization should minimize the expected quadratic error: i h 2 e? e − B k e=E k L ◦ B |{z} | {z } e Localized B

e Asymptotic B

Light assumptions: ee = B e −B e ? is not correlated • The unbiased sampling noise B e ?. with the asymptotic sample covariance matrix B

e ? and • The two random processes generating the asymptotic B the sample distribution are independent. Use of the covariance sampling theory... and a lot of calculus!     e e B e E Ξ E B (N − 1)2 N N −1 ijij ii jj Lij = −  2 +  2 e e N(N − 3) (N − 2)(N − 3) E B N(N − 2)(N − 3) E B ij

ij

9

This formula for the optimal localization $L$ involves:
• the ensemble size $N$,
• the sample covariance $\widetilde{B}$,
• the sample fourth-order centered moment $\widetilde{\Xi}$.

An ergodicity assumption is required to estimate the statistical expectations $E$ in practice:
• whole-domain average,
• local average,
• scale-dependent average.

→ This assumption is independent of the basic theory.

Theory and results published recently in Ménétrier et al. 2015 (Monthly Weather Review).
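A sketch of the whole-domain averaging option: expectations are replaced by averages over many index pairs assumed statistically equivalent. The pair-selection strategy and all names are our illustration, not the authors' code:

```python
import numpy as np

def optimal_localization(ensemble, pairs):
    """Evaluate the optimal localization weight from ensemble moments.

    pairs: list of (i, j) index pairs assumed statistically equivalent
    (e.g., grid-point pairs sharing the same separation distance).
    """
    N = ensemble.shape[0]
    dx = ensemble - ensemble.mean(axis=0)
    cov = lambda a, b: np.sum(dx[:, a] * dx[:, b]) / (N - 1)
    xi = lambda a, b: np.sum((dx[:, a] * dx[:, b])**2) / N
    E_B2 = np.mean([cov(i, j)**2 for i, j in pairs])           # E[B~_ij^2]
    E_Bii_Bjj = np.mean([cov(i, i) * cov(j, j) for i, j in pairs])
    E_Xi = np.mean([xi(i, j) for i, j in pairs])               # E[Xi~_ijij]
    return ((N - 1)**2 / (N * (N - 3))
            - N / ((N - 2) * (N - 3)) * E_Xi / E_B2
            + (N - 1) / (N * (N - 2) * (N - 3)) * E_Bii_Bjj / E_B2)
```

Binning the pairs by separation distance would yield the localization as a function of distance.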

Illustration of localization

Data:
• Ensemble: mature WRF-ARW EnKF over the CONUS domain
• Field: temperature at level 10 (∼ 1 km height)

(Figures in the original slides: diagnosed localization for this field.)

Jointly optimized localization and hybridization

From localization to hybridization

Localization by $L$ (Schur product):
• Covariance matrix: $\widehat{B} = L \circ \widetilde{B}$
• Increment: $\delta\widetilde{x} = \frac{1}{\sqrt{N-1}} \sum_{p=1}^{N} \delta\widetilde{x}^b_p \circ \left(L^{1/2} v^\alpha_p\right)$

Localization by $L$ + hybridization with a static $B$:
• Increment: $\delta x = \beta^e \, \delta\widetilde{x} + \beta^c \, B^{1/2} v^c$
• Covariance matrix: $\widehat{B}^h = \underbrace{(\beta^e)^2 L}_{\text{Gain } L^h} \circ \, \widetilde{B} + \underbrace{(\beta^c)^2 B}_{\text{Offset}}$

Localization + hybridization = linear filtering of $\widetilde{B}$.
$L^h$ and $\beta^c$ have to be optimized together.
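In matrix form, the hybrid covariance is a gain applied element-wise to the sample covariance plus a static offset; a one-line sketch (names are ours):

```python
import numpy as np

def hybrid_covariance(B_sample, B_static, L, beta_e, beta_c):
    """B^h = (beta_e^2 * L) o B~ + beta_c^2 * B (element-wise gain + offset)."""
    return (beta_e**2 * L) * B_sample + beta_c**2 * B_static
```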

Joint optimization of localization and hybridization An "optimal" hybridization should minimize the expected quadratic error:   e + (β c )2 B − B e? e h = E k Lh ◦ B k2 | {z } |{z} e Localized / hybridized B

e Asymptotic B

Same assumptions as before. Result of the minimization: a linear system in Lh and (β c )2   e E B ij h Lij = Lij −  2  B ij (β c )2 e E Bij h e  B 1 − L E Bij ∑ ij ij ij (β c )2 = 2 ∑ij B ij where Lij is the localization optimized alone.

14
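Because the system is linear, substituting the first equation into the second yields a closed-form scalar solution for $(\beta^c)^2$; a sketch over flattened per-pair arrays (names are ours):

```python
import numpy as np

def solve_hybrid_weights(L, E_B, E_B2, B_static):
    """Jointly optimal gain L^h and static weight (beta_c)^2.

    L: localization optimized alone, per pair ij
    E_B, E_B2: estimates of E[B~_ij] and E[B~_ij^2], per pair ij
    B_static: static covariance B_ij, per pair ij
    """
    # Substitute L^h = L - beta_c2 * B * E_B / E_B2 into the beta_c2
    # equation and solve the resulting scalar equation for beta_c2.
    num = np.sum((1.0 - L) * E_B * B_static)
    den = np.sum(B_static**2) - np.sum(B_static**2 * E_B**2 / E_B2)
    beta_c2 = num / den
    Lh = L - beta_c2 * B_static * E_B / E_B2
    return Lh, beta_c2
```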

Hybridization benefits

Comparison of:
• $\widehat{B} = L \circ \widetilde{B}$, with an optimal $L$ minimizing $e$,
• $\widehat{B}^h = L^h \circ \widetilde{B} + (\beta^c)^2 B$, with optimal $L^h$ and $\beta^c$ minimizing $e^h$.

We can show that:

$$e^h - e = -\left[(\beta^c)^2\right]^2 \sum_{ij} B_{ij}^2 \frac{\mathrm{Var}\left(\widetilde{B}_{ij}\right)}{E\left[\widetilde{B}_{ij}^2\right]} \le 0$$

With optimal parameters, whatever the static $B$:
localization + hybridization is better than localization alone.

Experimental setup

• WRF-ARW model, large domain, 25 km resolution, 40 levels
• Initial conditions randomized from a homogeneous static B
• Reference and test ensembles (1000 / 100 members)
• Forecast ranges: 12, 24, 36 and 48 h

(Figures in the original slides: standard deviation (K) and correlation functions of temperature at level 7 (∼ 1 km above ground), 48 h-range forecasts.)

Localization and hybridization

• Optimization of the horizontal localization $L^h_{\mathrm{hor}}$ and of the hybridization coefficient $\beta^c$ at each vertical level.
• Static $B$ = horizontal average of $\widetilde{B}$.
• Impact of the hybridization:
  • $\widetilde{B}^\star$ is estimated with the reference ensemble,
  • expected quadratic errors $e$ and $e^h$ are computed.

(Figures in the original slides: localization length-scales and hybridization coefficients for zonal wind.)

Error reduction from $e$ to $e^h$ for 25 members:
  Zonal wind: 4.5 %
  Meridional wind: 4.2 %
  Temperature: 3.9 %
  Specific humidity: 1.7 %

→ Hybridization with $B$ improves the accuracy of the forecast error covariance matrix.

Accounting for systematic error

An imperfect ensemble

Sample covariance matrix decomposition:

$$\widetilde{B} = \underbrace{\widetilde{B}^\star}_{\text{Asymptotic}} + \underbrace{\widetilde{B} - \widetilde{B}^\star}_{\text{Sampling error } \widetilde{B}^e} = \underbrace{B^t}_{\text{"Truth"}} + \underbrace{\widetilde{B}^\star - B^t}_{\text{Systematic error } \widetilde{B}^{\mathrm{sys}}} + \underbrace{\widetilde{B} - \widetilde{B}^\star}_{\text{Sampling error } \widetilde{B}^e}$$

Systematic error comes from ensemble misspecifications:
• oversimplified observation and model error models,
• misrepresentation of some uncertainty sources (e.g., SST),
• inconsistencies of the ensemble update scheme,
• ...

Theory extension

Expected quadratic error to minimize:

$$e^h = E\left[ \left\| \widehat{B}^h - \widetilde{B}^\star \right\|^2 \right] \quad\to\quad e^t = E\left[ \left\| \widehat{B}^h - B^t \right\|^2 \right]$$

Linear system to solve:

$$L^h_{ij} = L_{ij} - (\beta^c)^2 B_{ij} \frac{E\left[\widetilde{B}_{ij}\right]}{E\left[\widetilde{B}_{ij}^2\right]} - \frac{E\left[\widetilde{B}^{\mathrm{sys}}_{ij} \widetilde{B}_{ij}\right]}{E\left[\widetilde{B}_{ij}^2\right]}$$

$$(\beta^c)^2 = \frac{\sum_{ij} \left(1 - L^h_{ij}\right) E\left[\widetilde{B}_{ij}\right] B_{ij}}{\sum_{ij} B_{ij}^2} - \frac{\sum_{ij} E\left[\widetilde{B}^{\mathrm{sys}}_{ij}\right] B_{ij}}{\sum_{ij} B_{ij}^2}$$

We just have to precompute $E[\widetilde{B}^{\mathrm{sys}}_{ij} \widetilde{B}_{ij}]$ and $E[\widetilde{B}^{\mathrm{sys}}_{ij}]$...

Systematic error modeling

There is a problem:
• Sampling noise has known statistical properties...
• But the truth $B^t$ is unknown, and so is the systematic error!
• However, a "target" $B^t$ can be modeled.

Consider multiplicative inflation, dealing with the ensemble spread issue only:

$$\delta\widetilde{x}^b_p \leftarrow \alpha \circ \delta\widetilde{x}^b_p$$

where $\alpha$ is a vectorial inflation factor. This model relies on the implicit assumption that:

$$B^t_{ij} \simeq \alpha_i \alpha_j \widetilde{B}^\star_{ij}$$

Model of systematic error

In this case, we can obtain:

$$E\left[\widetilde{B}^{\mathrm{sys}}_{ij}\right] = (1 - \alpha_i \alpha_j) E\left[\widetilde{B}_{ij}\right]$$

$$E\left[\widetilde{B}^{\mathrm{sys}}_{ij} \widetilde{B}_{ij}\right] = (1 - \alpha_i \alpha_j) L_{ij} E\left[\widetilde{B}_{ij}^2\right]$$

Linear system to solve:

$$L^h_{ij} = \alpha_i \alpha_j L_{ij} - (\beta^c)^2 B_{ij} \frac{E\left[\widetilde{B}_{ij}\right]}{E\left[\widetilde{B}_{ij}^2\right]}$$

$$(\beta^c)^2 = \frac{\sum_{ij} \left(\alpha_i \alpha_j - L^h_{ij}\right) E\left[\widetilde{B}_{ij}\right] B_{ij}}{\sum_{ij} B_{ij}^2}$$

The resulting localized-hybridized covariance $\widehat{B}_{ij} = L^h_{ij} \widetilde{B}_{ij} + (\beta^c)^2 B_{ij}$ is different from an inflation of the original hybrid covariance by $\alpha_i \alpha_j$.
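The inflation-aware system solves by the same substitution as before; a sketch (the per-pair inflation products and names are our illustration):

```python
import numpy as np

def solve_hybrid_weights_inflated(L, E_B, E_B2, B_static, alpha_prod):
    """Hybrid weights under the multiplicative-inflation model of systematic error.

    alpha_prod: products alpha_i * alpha_j per pair ij, encoding the
    target model B^t_ij ~ alpha_i alpha_j B*_ij.
    """
    # Substitute L^h = alpha_prod * L - beta_c2 * B * E_B / E_B2 into
    # the beta_c2 equation and solve the resulting scalar equation.
    num = np.sum(alpha_prod * (1.0 - L) * E_B * B_static)
    den = np.sum(B_static**2) - np.sum(B_static**2 * E_B**2 / E_B2)
    beta_c2 = num / den
    Lh = alpha_prod * L - beta_c2 * B_static * E_B / E_B2
    return Lh, beta_c2
```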

Conclusions

1. Localization can be optimized with a method:
   • based on properties of the ensemble only,
   • affordable for high-dimensional systems,
   • tackling the sampling noise issue only.
2. Localization and hybridization are two joint aspects of the linear filtering of sample covariances and can be optimized simultaneously.
3. If done optimally, hybridization always improves the accuracy of forecast error covariances.

Ménétrier, B. and T. Auligné: Optimized Localization and Hybridization to Filter Ensemble-Based Covariances. Monthly Weather Review, 2015, accepted.

Perspectives

Already done in the paper:
• Extension to vectorial hybridization weights:

$$\delta x = \beta^e \circ \delta\widetilde{x} + \beta^c \circ \delta x^c$$

→ Requires the solution of a nonlinear system $A(L^h, \beta^c) = 0$, performed by a bound-constrained minimization.
• Heterogeneous optimization: local averages over subdomains
• 3D optimization: joint computation of horizontal and vertical localizations, and hybridization coefficients

New preliminary results:
• Extension of the theory to account for systematic errors in $\widetilde{B}^\star$

To be done: tests in a cycled quasi-operational configuration

Thank you for your attention! Any questions?