Optimized localization and hybridization to filter ensemble-based covariances
Benjamin Ménétrier and Tom Auligné, NCAR - Boulder - Colorado
Roanoke - 06/04/2015
Acknowledgement: AFWA
Introduction
Context:
• DA often relies on forecast error covariances.
• This matrix can be sampled from an ensemble of forecasts.
• Sampling noise arises because of the limited ensemble size.
• Question: how to filter this sampling noise?
Usual methods:
• Covariance localization → tapering with a localization matrix
• Covariance hybridization → linear combination with a static covariance matrix
Introduction
Questions:
1. Can localization and hybridization be considered together?
2. Is it possible to optimize localization and hybridization coefficients objectively and simultaneously? The method should:
   • use data from the ensemble only,
   • be affordable for high-dimensional systems.
3. Does hybridization always improve the accuracy of forecast error covariances?
Outline
Introduction
Linear filtering of sample covariances
Joint optimization of localization and hybridization
Results
Conclusions
Linear filtering of sample covariances
An ensemble of N forecasts \{\tilde{x}^b_p\} is used to sample \tilde{B}:
\tilde{B} = \frac{1}{N-1} \sum_{p=1}^{N} \delta\tilde{x}^b_p (\delta\tilde{x}^b_p)^T, \quad \text{where} \quad \delta\tilde{x}^b_p = \tilde{x}^b_p - \langle \tilde{x}^b \rangle \quad \text{and} \quad \langle \tilde{x}^b \rangle = \frac{1}{N} \sum_{p=1}^{N} \tilde{x}^b_p
Asymptotic behavior: if N \to \infty, then \tilde{B} \to \tilde{B}^\star.
In practice, N < \infty \Rightarrow sampling noise \tilde{B}^e = \tilde{B} - \tilde{B}^\star.
Theory of sampling error: for any ensemble size N and any probability distribution, E[(\tilde{B}^\star_{ij})^2] can be written exactly as a linear combination of the sample-estimable expectations E[\tilde{B}_{ij}^2], E[\tilde{B}_{ii}\tilde{B}_{jj}] and E[\tilde{\Xi}_{ijij}], with coefficients that depend only on N (Ménétrier et al. 2015, MWR).
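A minimal NumPy sketch of this sample covariance computation; the ensemble array, its shape and the variable names are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def sample_covariance(ens):
    """Unbiased sample covariance B_tilde from an ensemble.

    ens: array of shape (N, n) holding N forecasts of an n-dimensional state.
    Returns the (n, n) matrix 1/(N-1) * sum_p dx_p dx_p^T.
    """
    N = ens.shape[0]
    dx = ens - ens.mean(axis=0)     # perturbations about the ensemble mean
    return dx.T @ dx / (N - 1)      # unbiased normalization

# Hypothetical example: N = 25 members of an n = 100 state drawn at random
rng = np.random.default_rng(0)
ens = rng.standard_normal((25, 100))
B_tilde = sample_covariance(ens)
print(B_tilde.shape)                # (100, 100)
```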
Linear filtering of sample covariances
Localization by L (Schur product):
• Covariance matrix: \hat{B} = L \circ \tilde{B}
• Increment: \delta x^e = \frac{1}{\sqrt{N-1}} \sum_{p=1}^{N} \delta\tilde{x}^b_p \circ (L^{1/2} v^\alpha_p)
Localization by L + hybridization with B:
• Covariance matrix: \hat{B}^h = \underbrace{(\beta^e)^2 L}_{\text{gain } L^h} \circ \tilde{B} + \underbrace{(\beta^c)^2 B}_{\text{offset}}
• Increment: \delta x = \beta^e\, \delta x^e + \beta^c\, B^{1/2} v^c
Localization + hybridization = linear filtering of \tilde{B}
L^h and \beta^c have to be optimized together
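For illustration, a minimal sketch of the two covariance models above, assuming a localization matrix L and a static matrix B are already available (all array names and placeholder values are hypothetical):

```python
import numpy as np

def localized_covariance(B_tilde, L):
    """Schur (element-wise) product localization: B_hat = L o B_tilde."""
    return L * B_tilde

def hybrid_covariance(B_tilde, L, B_static, beta_e2, beta_c2):
    """Localized + hybridized covariance:
    B_hat_h = beta_e2 * (L o B_tilde) + beta_c2 * B_static,
    i.e. a gain L^h = beta_e2 * L applied to B_tilde, plus a static offset."""
    return beta_e2 * (L * B_tilde) + beta_c2 * B_static

# Hypothetical usage with placeholder matrices:
n = 50
B_tilde = np.eye(n)        # stand-in for the sample covariance
L = np.ones((n, n))        # stand-in for a localization matrix
B_static = np.eye(n)       # stand-in for the static covariance
B_hat_h = hybrid_covariance(B_tilde, L, B_static, beta_e2=0.8, beta_c2=0.2)
```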
Joint optimization: step 1
Step 1: optimizing the localization only, without hybridization
Goal: to minimize the expected quadratic error
e = E\big[ \| \underbrace{L \circ \tilde{B}}_{\text{localized } \tilde{B}} - \underbrace{\tilde{B}^\star}_{\text{asymptotic } \tilde{B}} \|^2 \big] \quad (1)
Light assumptions:
• The unbiased sampling noise \tilde{B}^e = \tilde{B} - \tilde{B}^\star is not correlated with the asymptotic sample covariance matrix \tilde{B}^\star.
• The two random processes generating the asymptotic \tilde{B}^\star and the sample distribution are independent.
An explicit formula for the optimal localization L is given in Ménétrier et al. 2015 (Monthly Weather Review).
Joint optimization: step 1
This formula of the optimal localization L involves:
• the ensemble size N
• the sample covariance \tilde{B}
• the sample fourth-order centered moment \tilde{\Xi}
Minimizing (1) under the assumptions above gives, element by element,
L_{ij} = \frac{E[\tilde{B}_{ij}\tilde{B}^\star_{ij}]}{E[\tilde{B}_{ij}^2]},
which the sampling-error theory turns into an explicit function of N, E[\tilde{B}_{ij}^2], E[\tilde{B}_{ii}\tilde{B}_{jj}] and E[\tilde{\Xi}_{ijij}] only (closed form in Ménétrier et al. 2015, MWR).
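The fourth-order ingredient of this formula can be accumulated directly from the ensemble perturbations. A minimal sketch follows, assuming a 1/N normalization for the sample fourth-order moment; the talk does not state the convention, so the prefactor is an assumption:

```python
import numpy as np

def fourth_order_moment_diag(ens):
    """Sample fourth-order centered moment Xi_ijij = mean_p[dx_i^2 * dx_j^2].

    ens: (N, n) ensemble; returns an (n, n) array of Xi_ijij values.
    The 1/N normalization is an assumption; the published formula may use another.
    """
    N = ens.shape[0]
    dx2 = (ens - ens.mean(axis=0)) ** 2   # squared perturbations, shape (N, n)
    return dx2.T @ dx2 / N                # (1/N) * sum_p dx_p,i^2 * dx_p,j^2
```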
Joint optimization: step 2
Step 2: optimizing localization and hybridization together
Goal: to minimize the expected quadratic error
e^h = E\big[ \| \underbrace{L^h \circ \tilde{B} + (\beta^c)^2 B}_{\text{localized / hybridized } \tilde{B}} - \underbrace{\tilde{B}^\star}_{\text{asymptotic } \tilde{B}} \|^2 \big]
Same assumptions as before.
Result of the minimization: a linear system in L^h and (\beta^c)^2:
L^h_{ij} = L_{ij} - (\beta^c)^2 \frac{B_{ij}\, E[\tilde{B}_{ij}]}{E[\tilde{B}_{ij}^2]} \quad (2a)
(\beta^c)^2 = \frac{\sum_{ij} B_{ij} (1 - L^h_{ij})\, E[\tilde{B}_{ij}]}{\sum_{ij} B_{ij}^2} \quad (2b)
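Because (2a) and (2b) are linear, substituting (2a) into (2b) gives a scalar equation for (β^c)^2, and L^h then follows from back-substitution. The sketch below assumes the element-wise statistics E[B̃_ij] and E[B̃_ij^2] and the localization L of step 1 are already estimated (variable names are hypothetical):

```python
import numpy as np

def solve_hybrid_weights(L, B_static, EB, EB2):
    """Solve the linear system (2a)-(2b) for (beta_c)^2 and L^h.

    L        : optimal localization without hybridization, shape (n, n)
    B_static : static covariance B, shape (n, n)
    EB, EB2  : estimates of E[B_tilde_ij] and E[B_tilde_ij^2], shape (n, n)
    """
    var_B = EB2 - EB ** 2                       # Var(B_tilde_ij)
    # Substituting (2a) into (2b):
    # beta_c2 * sum(B^2 * Var/EB2) = sum(B * (1 - L) * EB)
    num = np.sum(B_static * (1.0 - L) * EB)
    den = np.sum(B_static ** 2 * var_B / EB2)
    beta_c2 = num / den
    Lh = L - beta_c2 * B_static * EB / EB2      # back-substitute into (2a)
    return beta_c2, Lh
```

In practice one would also clip (β^c)^2 to non-negative values; the vector-weight extension mentioned in the Perspectives leads to a nonlinear system that requires a bound-constrained iterative solver instead.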
Hybridization benefits
Comparison of:
• \hat{B} = L \circ \tilde{B}, with an optimal L minimizing e
• \hat{B}^h = L^h \circ \tilde{B} + (\beta^c)^2 B, with optimal L^h and \beta^c minimizing e^h
We can show that:
e^h - e = -\big((\beta^c)^2\big)^2 \sum_{ij} \frac{B_{ij}^2\, \mathrm{Var}(\tilde{B}_{ij})}{E[\tilde{B}_{ij}^2]} \le 0 \quad (3)
With optimal parameters, whatever the static B:
localization + hybridization is better than localization alone
Practical implementation
An ergodicity assumption is required to estimate the statistical expectations E in practice:
• whole-domain average,
• local average,
• scale-dependent average,
• etc.
→ This assumption is independent from the earlier theory.
Localization L^h and hybridization coefficient β^c can be computed:
• from the ensemble at each assimilation window,
• climatologically from an archive of ensembles.
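As an illustration of how the expectations E[·] can be estimated in practice, here is a minimal sketch where the expectation is approximated by an average over an archive of K ensembles, i.e. the climatological option listed above; array shapes and names are assumptions:

```python
import numpy as np

def expectations_by_archive_average(ens_archive):
    """Estimate E[B_tilde_ij] and E[B_tilde_ij^2] from an archive of ensembles.

    ens_archive: (K, N, n) array of K ensembles (e.g. different dates), each
    with N members and n grid points. Averaging over the K samples plays the
    role of the statistical expectation; a single ensemble with horizontal
    averaging over statistically equivalent points could be used instead.
    """
    n = ens_archive.shape[2]
    EB = np.zeros((n, n))
    EB2 = np.zeros((n, n))
    for ens in ens_archive:
        dx = ens - ens.mean(axis=0)
        B = dx.T @ dx / (ens.shape[0] - 1)
        EB += B
        EB2 += B ** 2
    K = ens_archive.shape[0]
    return EB / K, EB2 / K
```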
Experimental setup
• WRF-ARW model, large domain, 25-km resolution, 40 levels
• Initial conditions randomized from a homogeneous static B
• Reference and test ensembles (1000 / 100 members)
• Forecast ranges: 12, 24, 36 and 48 h
Illustration: temperature at level 7 (~1 km above ground), 48-h forecasts
[Figures: standard deviation (K); correlation functions]
Localization and hybridization
• Optimization of the horizontal localization L^h_hor and of the hybridization coefficient β^c at each vertical level.
• Static B = horizontal average of \tilde{B}
• Localization length-scale: [figure]
• Hybridization coefficients for zonal wind: [figure]
• Impact of the hybridization:
  • \tilde{B}^\star is estimated with the reference ensemble
  • Expected quadratic errors e and e^h are computed
  • Error reduction from e to e^h for 25 members:
    Zonal wind: 4.5 %   Meridional wind: 4.2 %   Temperature: 3.9 %   Specific humidity: 1.7 %
→ Hybridization with B improves the accuracy of the forecast error covariance matrix
Conclusions
1. Localization and hybridization are two joint aspects of the linear filtering of sample covariances.
2. We have developed a new objective method to optimize localization and hybridization coefficients together:
   • Based on properties of the ensemble only
   • Affordable for high-dimensional systems
   • Tackling the sampling noise issue only
3. If done optimally, hybridization always improves the accuracy of forecast error covariances.
Ménétrier, B. and T. Auligné: Optimized Localization and Hybridization to Filter Ensemble-Based Covariances. Monthly Weather Review, 2015, accepted.
Perspectives
Already done in the paper:
• Extension to vectorial hybridization weights: δx = β^e ∘ δx^e + β^c ∘ δx^c
  → Requires the solution of a nonlinear system A(L^h, β^c) = 0, performed by a bound-constrained minimization.
• Heterogeneous optimization: local averages over subdomains
• 3D optimization: joint computation of horizontal and vertical localizations, and hybridization coefficients
To be done:
• Tests in a cycled quasi-operational configuration
• Extension of the theory to account for systematic errors in \tilde{B} (theory is ready, tests are underway...)
Thank you for your attention! Any questions?