Reference material for the MVA Master 2 class
Maureen Clerc, Théo Papadopoulo
(section on fMRI analysis adapted from Bertrand Thirion's PhD thesis)
February 5, 2009

Contents

1 Introduction

I Magneto-electroencephalography models

2 Electromagnetic propagation
  2.1 Maxwell equations
    2.1.1 Current density
    2.1.2 Maxwell-Gauss equation
    2.1.3 Maxwell-Ampere equation
    2.1.4 Maxwell-Faraday equation
    2.1.5 Maxwell-Gauss equation for magnetism
    2.1.6 Summary
  2.2 Quasistatic approximation
    2.2.1 Poisson equation
    2.2.2 Biot and Savart law
  2.3 Neural current sources
    2.3.1 Action potentials and postsynaptic potentials
    2.3.2 Estimates of dipole strengths

3 Geometric modeling of the head
  3.1 Magnetic Resonance Imaging
    3.1.1 Basic principle of NMR
    3.1.2 MRI scanning
  3.2 Segmentation of Magnetic Resonance Images (MRI)
    3.2.1 Region labelling
    3.2.2 Segmentation

4 Forward problem computation
  4.1 Introduction
  4.2 Simple geometries
    4.2.1 Current dipole in infinite homogeneous medium
    4.2.2 Silent sources
  4.3 Semi-realistic model
    4.3.1 Magnetic field computation
    4.3.2 Electric potential computation
  4.4 Realistic model
    4.4.1 A variational formulation of the forward problem
    4.4.2 Discretization of the FEM forward problem
    4.4.3 Solving the FEM forward problem

II Analysis of functional imaging data

5 Functional Magnetic Resonance Imaging
  5.1 Origins of the fMRI signal: the BOLD effect
  5.2 Image Acquisition and Experimental Design
    5.2.1 Fast MRI sequences
  5.3 Data Preprocessing
    5.3.1 Registration
    5.3.2 Smoothing
    5.3.3 Removing global effects
    5.3.4 Selecting voxels of interest
    5.3.5 Detrending
    5.3.6 Temporal registration or slice timing
  5.4 Generalized Linear Model

6 Localizing cortical activity from MEEG
  6.1 Pseudoinverse solution
  6.2 Dipole fitting
  6.3 Scanning methods
    6.3.1 MUltiple SIgnal Classification (MUSIC)
    6.3.2 Beamforming methods
  6.4 Estimating distributed activity: imaging approach
    6.4.1 Tikhonov regularization
    6.4.2 Selecting the regularization parameter: the L-curve

III

A Useful mathematical formulae and lemmas
  A.1 Differential operators in R³
    A.1.1 Conversion from volume to surface integrals
    A.1.2 The Green function for the Laplacian in R³
  A.2 The Poincaré and Hölder inequalities
  A.3 Integral equalities
  A.4 Minimization under constraints: the Lagrange multiplier approach
  A.5 Singular Value Decomposition
    A.5.1 Moore-Penrose pseudoinverse
    A.5.2 SVD and least-squares problems
Chapter 1

Introduction

The study of human bioelectricity was initiated with the discovery of electrocardiography at the turn of the twentieth century, followed by electroencephalography (EEG) in the 1920s, magnetocardiography in the 1960s, and magnetoencephalography (MEG) in the 1970s. Biomagnetic and bioelectric fields have the same biological origin: the displacement of charges within active cells called neurons [14].

Nowadays, EEG is relatively inexpensive and commonly used to detect and qualify neural activities (epilepsy detection and characterisation, neural disorder qualification, Brain Computer Interfaces, . . . ). MEG is, comparatively, much more expensive: SQUIDs operate in very challenging conditions (at liquid helium temperature), and a specially shielded room must be used to separate the signal of interest from the ambient noise. However, as it offers a vision complementary to that of EEG and is less sensitive to the head structure, it also bears great hopes, and more and more MEG machines are being installed throughout the world.

There are several scales at which bioelectricity can be described and measured: the microscopic scale, with microelectrodes placed inside or in very close vicinity to neurons, and the mesoscopic scale, with intracortical recording of local field potentials (i.e. the electric potential within the cortex), below a square millimeter. Non-invasive measurements of the electric potential via EEG, or of the magnetic field via MEG, are taken on the scalp; the spatial extent of brain activity to which these measurements can be related has not yet been elucidated, but lies between a square millimeter and a square centimeter.

Given the size of the head, and the time scale of interest (the millisecond), the quasistatic approximation can be applied to the Maxwell equations [11].
The electromagnetic field is thus related to the electric sources by two linear equations: the Poisson equation for the electric potential, and the Biot-Savart equation for the magnetic field. MEG and EEG can be measured simultaneously and reveal complementary properties of the electrical fields. The two techniques have a temporal resolution of about a millisecond, which is the typical granularity of the measurable electrical phenomena that arise in the brain. This high temporal resolution is what makes MEG and EEG attractive for the functional study of the brain. The spatial resolution, on the contrary, is rather poor, as only a few hundred data points can be acquired simultaneously (about 300-400 for MEG and up to 256 for EEG). MEG and EEG are somewhat complementary with fMRI and


SPECT, in that those provide a very good spatial resolution but a rather poor temporal one (about a second for fMRI and a minute for SPECT). Contrary to fMRI, which "only" measures a haemodynamic response linked to the metabolic demand, MEG and EEG measure a direct consequence of the electrical activity of the brain: it is generally accepted that the measured MEG and EEG signals correspond to variations of the post-synaptic potentials of the pyramidal cells in the cortex. Pyramidal neurons compose approximately 80% of the neurons of the cortex, and at least about 50,000 such neurons must be simultaneously active to generate a measurable signal.

Reconstructing the cortical sources from the electromagnetic field measured on EEG and MEG sensors requires solving the inverse problem of MEG and EEG (denoted collectively as MEEG for short). It is a difficult problem because it is ill-posed, and this has led to a large body of literature, concerning both theoretical [3, 5] and computational aspects [2, 6, 8].

There are two main domains of application for EEG and MEG: clinical applications and the study of cognition. Clinical research in neurophysiology aims at understanding the mechanisms leading to disorders of the brain and the central nervous system, in order to improve diagnosis and eventually propose new therapies. The clinical domains in which EEG and MEG are most routinely used include epilepsy, schizophrenia, depression, and attention deficit disorders. Clinicians are especially interested in the time courses of the measured signals: their experience in the visual analysis of superimposed sensor time courses allows them to detect abnormal patterns. The source localization performed by clinicians is generally limited to simple dipole scanning methods.

EEG and MEG rely on passive measurements, with no applied electromagnetic field.
In contrast, active techniques using bioelectric stimulation are currently being developed and tested on patients, to treat disorders such as Parkinson's disease, chronic pain, dystonia and depression. Implanted intracortical or extradural electrodes can deliver an electrical stimulation to specific brain areas (Deep Brain Stimulation, or DBS). Less invasively, Transcranial Magnetic Stimulation (TMS) uses a time-varying magnetic field to induce a current within the cortex. All of these techniques can be studied with the same equations, models, and numerical tools as those used for MEEG. Moreover, in order to understand the physiological mechanisms triggered by these stimulations, simultaneous TMS/EEG and DBS/MEEG recordings can be performed and analyzed [17].

In cognitive neuroscience, much of our knowledge of the brain has been acquired from intracortical recordings in cat and monkey brains, as well as peroperative recordings on human brains. Although the advent of functional Magnetic Resonance Imaging (fMRI) in the 1980s has opened a unique perspective on the localization of human cognitive brain function, timing issues remain difficult to resolve. Because of their high temporal resolution, EEG and MEG are very useful for analyzing oscillatory activity and the timing of activations between different brain regions. And because of its strictly non-invasive nature, MEEG is also well suited to the study of the development of human brain function, from infancy to adulthood.

This class material is divided into two parts. The first, on models for MEG and EEG, explains in detail the geometrical, physiological and numerical models used in this field. The second part deals with the analysis of functional imaging data, coming from fMRI or from MEG/EEG.

Part I

Magneto-electroencephalography models


Chapter 2

Electromagnetic propagation

Neuronal currents generate magnetic and electric fields according to principles stated in Maxwell's equations. The neural current distribution can be described as the primary current, and viewed as the "battery" in a resistive circuit. The postsynaptic currents in the cortical pyramidal cells are the main primary currents giving rise to measurable MEEG signals.

2.1 Maxwell equations

2.1.1 Current density

The current density J represents the current crossing a unit surface normal to J. Its unit is $\mathrm{A \cdot m^{-2}}$. The total intensity crossing an oriented surface S is

\[ I = \int_S \mathbf{J} \cdot \mathbf{n} \, ds . \]

The electric charge conservation principle attributes the variation of charge inside a closed surface exclusively to exchanges with the outside medium. Let ρ denote the volumic charge density. For a closed surface, the orientation convention is for the normal vector to point outward. The charge conservation principle implies that, if Ω is a volume with boundary ∂Ω,

\[ \frac{d}{dt} \int_\Omega \rho \, dr = - \int_{\partial\Omega} \mathbf{J} \cdot \mathbf{n} \, ds \tag{2.1} \]

For a fixed volume Ω,

\[ \frac{d}{dt} \int_\Omega \rho \, dr = \int_\Omega \frac{\partial \rho}{\partial t} \, dr \tag{2.2} \]

The Green identity implies that

\[ \int_{\partial\Omega} \mathbf{J} \cdot \mathbf{n} \, ds = \int_\Omega \nabla \cdot \mathbf{J} \, dr \tag{2.3} \]

and replacing (2.2) and (2.3) in (2.1),

\[ \int_\Omega \frac{\partial \rho}{\partial t} \, dr = - \int_\Omega \nabla \cdot \mathbf{J} \, dr . \]

As this is true for any fixed volume Ω, we obtain the local charge conservation equation:

\[ \nabla \cdot \mathbf{J} = - \frac{\partial \rho}{\partial t} \tag{2.4} \]

2.1.2 Maxwell-Gauss equation

The electric field generated at a position M by a single charge $q_i$ at position $P_i$ is equal to

\[ \frac{1}{4\pi\varepsilon_0} \, q_i \, \frac{\overrightarrow{P_i M}}{\|\overrightarrow{P_i M}\|^3} \]

where $\varepsilon_0$ is the electrical permittivity of the vacuum. The flux of the electric field across a surface S is defined by $\psi = \int_S \mathbf{E} \cdot \mathbf{n} \, ds$. The electric flux on S due to a charge q positioned at P (coordinate p) is hence equal to

\[ \psi = \frac{q}{4\pi\varepsilon_0} \int_S \frac{\mathbf{r} - \mathbf{p}}{\|\mathbf{r} - \mathbf{p}\|^3} \cdot \mathbf{n} \, ds = \frac{q}{4\pi\varepsilon_0} \, \Omega \]

where Ω is the solid angle under which S is seen from position P. For a closed surface S, Ω = 0 if P is outside S, and Ω = 4π if P is inside S. The electric flux generated on a surface S by a set of charges $q_i$ is, by summation, equal to

\[ \psi = \frac{1}{4\pi\varepsilon_0} \sum_i q_i \Omega_i \]

where $\Omega_i$ is equal to 0 (resp. 4π) if the corresponding charge is outside (resp. inside) S. This result leads to the Gauss theorem:

\[ \int_{\partial\Omega} \mathbf{E} \cdot \mathbf{n} \, ds = \frac{Q_{int}}{\varepsilon_0} = \int_\Omega \frac{\rho}{\varepsilon_0} \, dr \]

where ρ is the (volumic) charge density. Using the Green identity, the above relation becomes

\[ \int_\Omega \nabla \cdot \mathbf{E} \, dr = \int_\Omega \frac{\rho}{\varepsilon_0} \, dr , \]

which provides, in its local version, the Maxwell-Gauss equation:

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} . \tag{2.5} \]

2.1.3 Maxwell-Ampere equation

Ampere's law is first established in a time-invariant setting. Given a closed loop enclosing an open surface S, the magnetic field integrated along ∂S is proportional to the current I crossing S:

\[ \oint_{\partial S} \mathbf{B} \cdot d\mathbf{l} = \mu_0 I . \]

The coefficient $\mu_0$ is the magnetic permeability of the vacuum ($\varepsilon_0$ and $\mu_0$ satisfy the relation $\varepsilon_0 \mu_0 c^2 = 1$). Using the Green (Stokes) theorem and introducing the current density J, the above relation becomes

\[ \int_S \nabla \times \mathbf{B} \cdot \mathbf{n} \, ds = \mu_0 \int_S \mathbf{J} \cdot \mathbf{n} \, ds . \]

As this must hold for any open surface S, it implies the local relationship

\[ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} . \tag{2.6} \]

As a consequence, the current density must be divergence-free: $\nabla \cdot \mathbf{J} = 0$. We have seen in 2.1.1 that the charge conservation principle implies

\[ \nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t} . \]

Using the Maxwell-Gauss equation (2.5),

\[ \nabla \cdot \mathbf{J} = -\nabla \cdot \left( \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) . \]

In the time-variant case, the quantity which is divergence-free is therefore no longer the current density, but

\[ \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} . \]

Ampere's law (2.6) must be adapted to account for the additional term $\varepsilon_0 \partial \mathbf{E} / \partial t$, sometimes called the "displacement current". This leads to the Maxwell-Ampere equation:

\[ \nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \tag{2.7} \]

2.1.4 Maxwell-Faraday equation

The Maxwell-Faraday equation is a structural relationship between the electric and magnetic fields. Consider the electromotive force

\[ e(t) = \oint_{\partial S} \mathbf{E} \cdot d\mathbf{l} \]

induced by a magnetic field B on a wire loop enclosing an open surface S. The law of induction states that

\[ e(t) = -\frac{d\phi}{dt} \quad \text{where} \quad \phi(t) = \int_S \mathbf{B} \cdot \mathbf{n} \, ds . \]

The Green (Stokes) theorem provides the local form of the induction law, called the Maxwell-Faraday equation:

\[ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \tag{2.8} \]


2.1.5 Maxwell-Gauss equation for magnetism

The last Maxwell equation is a conservation equation for the magnetic field, which basically states the absence of magnetic monopoles. In its local form, it is written as

\[ \nabla \cdot \mathbf{B} = 0 . \tag{2.9} \]

Its integral form is

\[ \int_{\partial\Omega} \mathbf{B} \cdot \mathbf{n} \, ds = 0 . \]

2.1.6 Summary

The local and integral forms of the Maxwell equations are summarized in the following table:

Name                        Differential form                 Integral form
Gauss's law                 ∇·E = ρ/ε₀                        ∫_{∂Ω} E·n ds = ∫_Ω ρ/ε₀ dr
Gauss's law for magnetism   ∇·B = 0                           ∫_{∂Ω} B·n ds = 0
Faraday's law               ∇×E = −∂B/∂t                      ∮_{∂S} E·dl = −∫_S (∂B/∂t)·n ds
Ampère's circuital law      ∇×B = µ₀(J + ε₀ ∂E/∂t)            ∮_{∂S} B·dl = µ₀ ∫_S (J + ε₀ ∂E/∂t)·n ds

2.2 Quasistatic approximation

The Maxwell equations were presented above in a general, time-varying setting. For EEG and MEG modeling, the spatial scale, the frequencies, and the medium properties make it possible to neglect the inductive, capacitive and displacement effects, and to effectively omit the time derivatives in (2.7) and (2.8). Omitting the time derivative in (2.7) is called quasi-stationarity, while omitting the time derivatives in both (2.7) and (2.8) is called the quasistatic regime. This considerably simplifies the resulting system, because the magnetic and electric fields become uncoupled and can be solved for separately.

Let us briefly justify the quasistatic assumption. We note that, in a magnetic medium free of charges or current generators, the volumic current density J is the sum of a volumic ohmic current and a polarization current:

\[ \mathbf{J} = \sigma \mathbf{E} + \frac{\partial \mathbf{P}}{\partial t} \tag{2.10} \]

where $\mathbf{P} = (\varepsilon - \varepsilon_0) \mathbf{E}$ is the polarization vector, ε the permittivity of the medium, and σ the conductivity. Let us examine the Maxwell-Ampere equation (2.7). Using (2.10), we can express its right-hand side as a function of E only:

\[ \nabla \times \mathbf{B} = \mu_0 \left( \sigma \mathbf{E} + \varepsilon \frac{\partial \mathbf{E}}{\partial t} \right) \]


Let us further assume that the electrical field can be modelled as a planar waveform at frequency f, $\mathbf{E}(\mathbf{r}, t) = \mathbf{E}_0 e^{i(\omega t - \mathbf{k} \cdot \mathbf{r})}$, with ω = 2πf and the added condition $\mathbf{E}_0 \cdot \mathbf{k} = 0$. Using this model, we have

\[ \frac{\partial \mathbf{E}}{\partial t} = i\omega \mathbf{E} \]

and

\[ \left\| \varepsilon \frac{\partial \mathbf{E}}{\partial t} \right\| = \omega \varepsilon \|\mathbf{E}\| , \qquad \| \sigma \mathbf{E} \| = \sigma \|\mathbf{E}\| . \]

The term $\varepsilon \, \partial \mathbf{E} / \partial t$ is therefore negligible compared to $\sigma \mathbf{E}$, at a frequency f, if

\[ \frac{2\pi f \varepsilon}{\sigma} \ll 1 . \]

For the brain, $\sigma = 0.3 \; \Omega^{-1}\mathrm{m}^{-1}$, the permittivity ε is of the order of $10^5 \varepsilon_0 \approx 8.85 \cdot 10^{-7}$ F/m, and the frequencies of interest are typically lower than f = 100 Hz. With these values, $2\pi f \varepsilon / \sigma$ is of the order of $2 \cdot 10^{-3}$. Therefore, the time derivative in (2.7) can be neglected. The Maxwell-Ampere equation becomes time-invariant:

\[ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} , \tag{2.11} \]

and the current density is consequently divergence-free:

\[ \nabla \cdot \mathbf{J} = 0 . \tag{2.12} \]
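As a quick numerical sanity check (a sketch using the tissue values quoted above, which are the only inputs), the ratio $2\pi f \varepsilon / \sigma$ can be evaluated directly:

```python
import math

sigma = 0.3          # brain conductivity, in S/m (= Ohm^-1 m^-1)
eps0 = 8.85e-12      # vacuum permittivity, F/m
eps = 1e5 * eps0     # brain permittivity, of the order of 1e5 * eps0
f = 100.0            # highest frequency of interest, Hz

ratio = 2 * math.pi * f * eps / sigma
print(f"2*pi*f*eps/sigma = {ratio:.1e}")  # ~1.9e-03, well below 1
```

The displacement term is three orders of magnitude below the ohmic one, which justifies dropping it.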

To show that the time derivative can be neglected in (2.8), we take the curl of this equation. The left-hand side becomes

\[ \nabla \times (\nabla \times \mathbf{E}) = \nabla (\nabla \cdot \mathbf{E}) - \Delta \mathbf{E} . \]

But with our choice for E, we have

\[ \nabla \cdot \mathbf{E} = \mathbf{E}_0 \cdot \nabla e^{i(\omega t - \mathbf{k} \cdot \mathbf{r})} = -i \, \mathbf{E}_0 \cdot \mathbf{k} \, e^{i(\omega t - \mathbf{k} \cdot \mathbf{r})} = 0 . \]

Thus

\[ \nabla \times (\nabla \times \mathbf{E}) = -\Delta \mathbf{E} = \|\mathbf{k}\|^2 \, \mathbf{E} . \]

The right-hand side becomes

\[ -\frac{\partial}{\partial t} \nabla \times \mathbf{B} = -\mu_0 \frac{\partial}{\partial t} \left( \sigma \mathbf{E} + \varepsilon \frac{\partial \mathbf{E}}{\partial t} \right) = -\mu_0 \, i\omega (\sigma + i\varepsilon\omega) \mathbf{E} . \]

Consequently, we get $\|\mathbf{k}\|^2 = |\mu_0 \, i\omega (\sigma + i\varepsilon\omega)|$, and the corresponding wavelength is

\[ \lambda = \frac{1}{\|\mathbf{k}\|} = \frac{1}{\sqrt{|\mu_0 \, i\omega (\sigma + i\varepsilon\omega)|}} . \]

For the head, this quantity is equal to 65 m, much larger than the head diameter. This means that the time derivative can be neglected in (2.8), leading to the time-invariant Maxwell-Faraday equation

\[ \nabla \times \mathbf{E} = 0 . \tag{2.13} \]
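The 65 m figure can likewise be recovered numerically (a sketch with the same tissue values as above; only the formula for λ comes from the text):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
sigma = 0.3                # brain conductivity, S/m
eps = 1e5 * 8.85e-12       # brain permittivity, F/m
omega = 2 * math.pi * 100  # rad/s, for f = 100 Hz

# |k|^2 = |mu0 * i*omega * (sigma + i*eps*omega)|
k2 = abs(1j * mu0 * omega * (sigma + 1j * eps * omega))
lam = 1 / math.sqrt(k2)
print(f"lambda = {lam:.0f} m")  # ~65 m, much larger than the head diameter
```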

2.2.1 Poisson equation

A consequence of the time-invariant Maxwell-Faraday equation (2.13) is that the electric field E derives from a potential, which we call the electric potential and denote V:

\[ \mathbf{E} = -\nabla V . \]

It is useful to divide the current density J into two components: the passive ohmic (or return) current σE and the remaining primary current $\mathbf{J}^p$:

\[ \mathbf{J} = -\sigma \nabla V + \mathbf{J}^p . \tag{2.14} \]

Although this equation holds at different scales, it is not possible to include all the microscopic conductivity details in models of MEEG activity; σ therefore refers to a macroscopic conductivity, at a scale of at least 1 mm. The division of neuronal currents into primary and volume currents is physiologically meaningful. For instance, chemical transmitters in a synapse give rise to primary current mainly inside the postsynaptic cell, whereas the volume current flows passively in the medium, with a distribution depending on the conductivity profile. By finding the primary current, we can locate the active brain regions. The current density is divergence-free, and using the decomposition (2.14) shows that the electric potential and the primary current are related by a simple equation, called a Poisson equation:

\[ \nabla \cdot (\sigma \nabla V) = \nabla \cdot \mathbf{J}^p . \tag{2.15} \]

2.2.2 Biot and Savart law

We derive in this section the Biot and Savart law, relating the magnetic field to the current density. Recall the time-invariant Maxwell-Ampere equation $\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$, and take its curl:

\[ \nabla \times \nabla \times \mathbf{B} = \mu_0 \, \nabla \times \mathbf{J} . \tag{2.16} \]

The left-hand side can be rewritten

\[ \nabla \times \nabla \times \mathbf{B} = -\Delta \mathbf{B} + \nabla (\nabla \cdot \mathbf{B}) , \]

where the Laplacian acts coordinatewise on the vector field B. Since $\nabla \cdot \mathbf{B} = 0$ (the Maxwell equation expressing the absence of magnetic charges), (2.16) rewrites as

\[ -\Delta \mathbf{B} = \mu_0 \, \nabla \times \mathbf{J} . \]

Recalling that a fundamental solution of the Laplacian in R³ is $-\frac{1}{4\pi\|\mathbf{r}\|}$ (see appendix A.1.2), in the sense that

\[ \Delta \left( -\frac{1}{4\pi \|\mathbf{r}\|} \right) = \delta_0 , \]

this implies that

\[ \mathbf{B} = \frac{\mu_0}{4\pi} \int \nabla' \times \mathbf{J}(\mathbf{r}') \, \frac{1}{\|\mathbf{r} - \mathbf{r}'\|} \, d\mathbf{r}' + \mathbf{B}_H \]

where $\mathbf{B}_H$ is a harmonic function, i.e. such that $\Delta \mathbf{B}_H = 0$. With the condition that B vanishes at infinity, the harmonic term can be discarded, and an integration by parts leads to the Biot and Savart law:

\[ \mathbf{B} = \frac{\mu_0}{4\pi} \int \mathbf{J}(\mathbf{r}') \times \frac{\mathbf{r} - \mathbf{r}'}{\|\mathbf{r} - \mathbf{r}'\|^3} \, d\mathbf{r}' . \tag{2.17} \]

Replacing the total current J by its decomposition (2.14), the Biot-Savart law becomes

\[ \mathbf{B} = \mathbf{B}_0 - \frac{\mu_0}{4\pi} \int \sigma \nabla V \times \frac{\mathbf{r} - \mathbf{r}'}{\|\mathbf{r} - \mathbf{r}'\|^3} \, d\mathbf{r}' , \tag{2.18} \]

where $\mathbf{B}_0$ is the contribution to the magnetic field coming from the primary current:

\[ \mathbf{B}_0 = \frac{\mu_0}{4\pi} \int \mathbf{J}^p \times \frac{\mathbf{r} - \mathbf{r}'}{\|\mathbf{r} - \mathbf{r}'\|^3} \, d\mathbf{r}' . \]

Note: if the medium is infinite, with homogeneous conductivity σ, a simplification occurs in the Biot-Savart equation, since $\nabla \times \mathbf{J}(\mathbf{r}') = \nabla \times (\mathbf{J}^p - \sigma \nabla V)$, and in a homogeneous medium $\nabla \times (\sigma \nabla V) = \sigma \, \nabla \times \nabla V = 0$. The magnetic field hence becomes independent of the ohmic contribution:

\[ \mathbf{B} = \mathbf{B}_0 = \frac{\mu_0}{4\pi} \int \mathbf{J}^p \times \frac{\mathbf{r} - \mathbf{r}'}{\|\mathbf{r} - \mathbf{r}'\|^3} \, d\mathbf{r}' . \tag{2.19} \]

2.3 Neural current sources

2.3.1 Action potentials and postsynaptic potentials

Electric signals propagate within the brain along nerve fibers (axons) as a series of action potentials (APs). The corresponding primary current can be approximated by a pair of opposite current dipoles, one at the depolarization and one at the repolarization front (figure), and this quadrupolar source moves along the axon as the activation propagates. The separation of the two dipoles depends on the duration of the AP and on the conduction velocity of the fiber. For a cortical axon with a conduction speed of 5 m/s, the opposite dipoles would be about 5mm apart. In synapses, the chemical transmitter molecules change the ion permeabilities of the postsynaptic membrane and a postsynaptic potential (PSP) and current are generated. In contrast to the currents associated with an action potential, the postsynaptic current can be adequately described by a single current dipole oriented along the dendrite. The magnetic field of a current dipole falls off with distance


Figure 2.1: The organisation of the cortex.

more slowly (in order 1/r²) than the field associated with the quadrupolar AP currents (in order 1/r³). Furthermore, temporal summation of currents flowing in neighboring fibers is more effective for synaptic currents, which last up to tens of milliseconds, than for the roughly 1 ms-long action potentials. Thus the electromagnetic signals observed outside and on the surface of the head seem to be largely due to synaptic current flow. In special cases, currents related to action potentials might also contribute significantly to cortical MEG and EEG signals, e.g. high-frequency somatosensory responses. The pyramidal cells are the principal type of neuron in the cortex, with their apical dendrites oriented parallel to each other and perpendicular to the cortical surface. Since neurons guide the current flow, the resultant direction of the electrical current flowing in the dendrites is also perpendicular to the cortical sheet of gray matter.

2.3.2 Estimates of dipole strengths

Each PSP may contribute as little as a 20 fAm current dipole, probably too small to measure in MEEG. The current dipole moments required to explain the measured MEEG fields outside the head are on the order of 10 nAm [11, 10]. This would correspond to about a million synapses simultaneously active during a typical evoked response. Although such synchronous activity only forms a part of the total activity, it can be functionally highly important. For example, invasive recordings from monkeys have shown surprisingly large temporal overlap of neuronal firing in many visual cortical areas (Schmolesky et al, 1998). Epileptic discharges are also typically associated with strong current densities due to highly synchronous activity.
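A rough order-of-magnitude check of this estimate (the 20 fAm and 10 nAm figures are taken from the text above):

```python
psp_dipole = 20e-15   # Am, contribution of a single PSP to the current dipole
measured = 10e-9      # Am, typical dipole moment needed to explain MEEG fields

n_synapses = measured / psp_dipole
print(f"{n_synapses:.0e} simultaneously active synapses")  # 5e+05, i.e. of the order of a million
```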


Chapter 3

Geometric modeling of the head

3.1 Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) was developed from knowledge gained in the study of nuclear magnetic resonance. The acronym NMR (Nuclear Magnetic Resonance) is still often used to describe the technique.

3.1.1 Basic principle of NMR

MRI relies (most often) on the relaxation properties of excited hydrogen nuclei in water. Since hydrogen nuclei have a single proton, they have a spin, i.e. an intrinsic angular momentum. It can be associated with a magnetic dipole moment: each hydrogen nucleus behaves as a tiny magnet, with its north/south axis parallel to the spin axis. The sum of the moments of a sample of molecules is zero in the absence of a magnetic field. When an object to be imaged is placed in a powerful, uniform magnetic field B0, the spins of the nuclei within the tissue precess around the direction of B0, and the resulting magnetic moment of the sample is oriented in the direction of B0. The frequency ν0 (Larmor frequency) of the precession is linearly related to the field by the gyromagnetic ratio γ, whose value depends on the nature of the nuclei:

\[ \nu_0 = \gamma |\mathbf{B}_0| \tag{3.1} \]

Besides the precession of the nuclei, a second phenomenon is important to us: the relaxation of the nuclei. In the presence of a constant field B0, the spin axes of the nuclei slowly tend to align with B0. The Radio Frequency pulse (RF pulse) technique consists in applying, in addition to B0, a transient field pulse B1, orthogonal to B0, rotating at the resonance frequency ν0 of the nuclei, and several orders of magnitude smaller. When such an RF pulse is applied, the resulting moments M of the nuclei are flipped (usually by 30 or 90 degrees, according to the duration of the pulse). After the pulse, M precesses around B0 and finally realigns with B0: the transient transverse moment MT, also called the Free Induction Decay (FID), cancels


with a time constant T2, while the longitudinal moment ML reaches its equilibrium with a time constant T1. The values of ML and MT are measured using coils. T1 and T2 depend on the environment, so their local values can be used to discriminate between tissues, proton density being a third discriminating signature. An illustration of the phenomenon is given in figure 3.1. Measuring ML (resp. MT) leads to T1- (resp. T2-) weighted images. A subtle but important variant of the T2 technique is called T2* imaging.

Figure 3.1: The basic physics of the NMR experiment: in a magnetic field B0, an equilibrium magnetization M0 forms due to the alignment of nuclear dipoles (left). An RF pulse tips over M0, creating a longitudinal component ML and a transverse component MT (middle). MT precesses around the direction of B0, generating a detectable MR signal. Over time MT decays to zero with a relaxation time T2, and ML returns to M0 with a relaxation time T1 (right). This picture is taken from [4].

In order to obtain an image, RF pulses are applied repeatedly: such a repetition is called a pulse sequence. Many kinds of pulse sequences are possible, and the sensitivity of the MR images to the different parameters can be adjusted by tuning the repetition time (TR) between consecutive pulses and the time between the RF pulse and the measurement, TE (time to echo). Typical sequences may be of several kinds:

• The Gradient Echo Pulse Sequence simply consists of the repetition of the FID described previously. It is described by the value of the flip angle α and the repetition time TR.

• The Spin Echo Pulse Sequence consists in applying a first 90-degree pulse, then, after a time TE/2, a 180-degree pulse in the transverse plane. The effect of this second pulse is to refocus the signal, whose phase has been quickly dispersed by local field inhomogeneities. Thus, an echo of the signal appears at time TE and is measured. This echo can be repeated many times to sample the T2 decay.

• The Inversion Recovery Pulse Sequence begins with a 180-degree pulse, followed after a delay TI by a 90-degree pulse. It enhances the T1 weighting of the image.
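The quantities above can be made concrete with a small numerical sketch. The values used here are assumptions, not figures from the text: a proton gyromagnetic ratio of about 42.58 MHz/T, a 3 T field, and T1 ≈ 0.9 s, T2 ≈ 0.1 s as typical orders of magnitude for brain tissue:

```python
import math

gamma_H = 42.58e6   # Hz/T, proton gyromagnetic ratio (assumed standard value)

# Larmor frequency, equation (3.1), at a typical clinical field strength
B0 = 3.0            # tesla
nu0 = gamma_H * B0
print(f"nu0 = {nu0 / 1e6:.1f} MHz")  # ~127.7 MHz

# Relaxation after an RF pulse (figure 3.1): longitudinal recovery, transverse decay
def M_L(t, M0=1.0, T1=0.9):
    """Longitudinal moment: M_L(t) = M0 * (1 - exp(-t/T1))."""
    return M0 * (1.0 - math.exp(-t / T1))

def M_T(t, M0=1.0, T2=0.1):
    """Transverse moment (FID envelope): M_T(t) = M0 * exp(-t/T2)."""
    return M0 * math.exp(-t / T2)

for t in (0.05, 0.1, 0.5, 1.0):
    print(f"t = {t:4.2f} s : M_L = {M_L(t):.2f}, M_T = {M_T(t):.3f}")
```

The contrast between tissues comes precisely from these curves: sampling early emphasizes T2 differences (MT still decaying), sampling late emphasizes T1 differences (ML still recovering).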

3.1.2 MRI scanning

To selectively image different voxels (volume elements) of a subject, magnetic gradients are applied. Because of the relation (3.1), the spatial variation of the magnetic


field magnitude induces a Larmor frequency variation which can be used to localize the piece of material that generated it. Gradients are typically applied during the RF pulses, during the recording of the generated signal, and between these two time instants, to encode a slice selection and a position within the slice with a frequency and a phase (see figure 3.2). The same coils are used for the transmission of the RF pulses and the reception of the signal. Since a coil typically encompasses the body (here, the head of the subject), it measures a sum of the signals from each tissue in the head.

Figure 3.2: A basic imaging pulse sequence. During the RF excitation pulse a gradient in z is applied (slice selection), and during read-out of the signal a gradient in x is applied (frequency encoding). Between these gradient pulses, a gradient pulse in y is applied, and the amplitude of this pulse is stepped through a different value each time the pulse is repeated (phase encoding). Typically 128 or 256 phase-encoding steps (repeats of the pulse sequence) are required to collect sufficient information to reconstruct an image. This figure is taken from [4].

More precisely, the sequence of events that occurs is:

• The magnetic field B0 is added with a gradient in the z direction. The selection of a particular frequency at the receiver is then equivalent to the selection of a slice (a plane with a thickness of typically 1 to 10 mm) along the z direction. This procedure is called slice selection.

• Then, within each slice (the plane spanned by the remaining directions x and y), two gradients are applied during the relaxation:

  – In the x direction, a negative gradient is applied after the RF pulse, and a positive one during acquisition, which creates a gradient echo during data acquisition, halfway through the second gradient pulse. The effect is that the precession frequency varies along the x axis, so that a Fourier transform of the signal gives its amplitude along this axis. This procedure is called frequency encoding.

  – In the remaining y direction, a gradient field is applied for a short interval between the RF pulse and data acquisition. After cancellation of this field, the precession is at the uniform frequency, but with a phase shift determined by the position along y. Repeating the frequency encoding many times with different phase shifts creates information on the y position. This procedure is known as phase encoding.

For a repetition time TR of about 1 s, acquiring a single 256 × 256 slice (which needs 256 phase-encoding steps, hence 256 RF pulses) would require 256 s (4 min 16 s). Acquiring a full volume in such a way is impractical. Fortunately, there is a lot of "dead time" within each TR interval, which can be used to acquire several slices in parallel, leading to much more reasonable whole-head acquisition times. The number of slices that can be acquired in parallel, as well as the exact gradient patterns to be applied, can vary. These parameters can be tuned to optimize various aspects (acquisition time, contrast, resolution, . . . ). Designing pulse sequences for a given effect is a complicated task, which is achieved by specialists. After acquisition of the data in the frequency space, also known as k-space, the data is mapped into 3D space by Fourier transform.
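The acquisition-time arithmetic above can be sketched as follows (the slice count per TR is a hypothetical illustration, not a figure from the text):

```python
TR = 1.0          # s, repetition time
n_phase = 256     # phase-encoding steps, one RF pulse each

t_slice = n_phase * TR
print(f"one slice: {t_slice:.0f} s = {int(t_slice // 60)} min {int(t_slice % 60)} s")  # 256 s = 4 min 16 s

# Interleaving slices in the "dead time" of each TR amortizes the cost: if, say,
# 20 slices fit in one TR interval (hypothetical number), the whole stack still
# takes only n_phase * TR seconds instead of 20 times as long.
n_slices = 20
t_sequential = n_slices * n_phase * TR
t_interleaved = n_phase * TR
print(f"{n_slices} slices: {t_sequential:.0f} s sequentially vs ~{t_interleaved:.0f} s interleaved")
```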

3.2 Segmentation of Magnetic Resonance Images (MRI)

3.2.1 Region labelling

3.2.2 Segmentation

Chapter 4

Forward problem computation

4.1 Introduction

The forward problem of magneto-electroencephalography aims at computing the electromagnetic field produced by a known primary current, in a known geometry. This chapter is organized in order of increasing model complexity. Section 4.2 presents the forward problem in simple geometrical settings for which the calculations can be done by hand. Section 4.3 considers more general nested surface models, which we call “semi-realistic”, in which subject-dependent surfaces are designed to match the main tissue interfaces in the head. Computations in the semi-realistic setting are performed using Boundary Element Methods (BEM). Finally, Section 4.4 presents the most sophisticated model, which we call “realistic”: it models the tissue conductivity voxel-wise, does not require defining interfaces between tissues of homogeneous conductivity, and allows for tensor-valued conductivity.

4.2 Simple geometries

4.2.1 Current dipole in infinite homogeneous medium

Electrical potential A current dipole with moment q and position p is represented by Jp(r) = q δp(r). The potential created by such a dipole follows the Poisson equation

∇ · (σ∇V) = q · ∇δ(r − p) .

As the medium is infinite with constant conductivity σ,

σ∆V = q · ∇δ(r − p) .

As detailed in Appendix A.1.2, in three dimensions, the fundamental solution to the Laplacian is −1/(4π‖r‖), in the sense that

∆( −1/(4π‖r‖) ) = δ0 .

Hence,

V(r) = − (1/(4πσ)) ∫ q · ∇′δ(r′ − p) (1/‖r − r′‖) dr′ = (1/(4πσ)) q · (r − p)/‖r − p‖³ .

Magnetic field In an infinite, homogeneous domain, only the primary current contributes to the magnetic field, as was shown in the derivation of 2.19. Therefore, for a dipolar source,

B = (µ0/(4π)) q × (r − p)/‖r − p‖³ .
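The two closed-form expressions above are easy to evaluate numerically. A minimal sketch follows; the dipole moment, position and conductivity values are arbitrary illustrative choices, not parameters from the text:

```python
# Potential and magnetic field of a current dipole in an infinite
# homogeneous medium, evaluated directly from the two formulas above.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def dipole_V(r, q, p, sigma):
    """V(r) = q.(r-p) / (4 pi sigma |r-p|^3)."""
    d = r - p
    return q @ d / (4.0 * np.pi * sigma * np.linalg.norm(d) ** 3)

def dipole_B(r, q, p):
    """B(r) = mu0/(4 pi) q x (r-p) / |r-p|^3."""
    d = r - p
    return MU0 / (4.0 * np.pi) * np.cross(q, d) / np.linalg.norm(d) ** 3

q = np.array([0.0, 0.0, 1e-8])   # dipole moment (A.m), illustrative
p = np.zeros(3)                   # dipole location
sigma = 0.33                      # conductivity (S/m), illustrative

r = np.array([0.0, 0.0, 0.05])    # observation point on the dipole axis
# On the dipole axis, B vanishes (q is parallel to r - p) ...
assert np.allclose(dipole_B(r, q, p), 0.0)
# ... while the potential falls off as 1/|r-p|^2.
assert np.isclose(dipole_V(r, q, p, sigma),
                  1e-8 / (4 * np.pi * sigma * 0.05**2))
```

Conversely, at a point in the equatorial plane of the dipole, the potential is zero and the field is not, illustrating the complementary geometry of the two measurements.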

4.2.2 Silent sources

Helmholtz, in 1853, was the first to point out the existence of sources which are electromagnetically silent, i.e. produce a null electromagnetic field. First note that a solenoidal source, such that ∇ · Jp = 0, is electrically silent, since the source term of the Poisson equation vanishes in this case. Next, we exhibit an electromagnetically silent source, in the form of a primary current Jp, supported on a surface S, and such that Jp = q n, where n is the normal vector to S and q is a constant. We prove that Jp is electromagnetically silent if the medium is infinite and homogeneous. We will extend this result to more general domains in the course of this chapter. In an infinite homogeneous medium the potential V can be written as an integral over the support of the primary current:

V(r) = − (1/(4πσ)) ∫ ∇′ · Jp(r′) (1/‖r − r′‖) dr′ = (1/(4πσ)) ∫ Jp(r′) · (r − r′)/‖r − r′‖³ dr′ .

For the particular case of Jp = q n δS,

V(r) = (1/(4πσ)) ∫_S q n · (r − r′)/‖r − r′‖³ ds = (q/(4πσ)) ∫_S (r − r′)/‖r − r′‖³ · n ds .

The integral on the right represents the solid angle of S viewed from r, and vanishes if r is exterior to S. The magnetic field can be represented by the Biot-Savart equation, yielding, if ΩS denotes the volume contained inside S:

B(r) = (µ0 q/(4π)) ∫_S n × ∇′ (1/‖r − r′‖) ds′ = (µ0 q/(4π)) ∫_ΩS ∇′ × ∇′ (1/‖r − r′‖) dr′ = 0 .
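The solid-angle argument can be checked numerically: for the unit sphere, the integral ∫_S (r − r′) · n/‖r − r′‖³ ds vanishes for an exterior point and equals (in magnitude) 4π for an interior one. The quadrature below is a simple illustrative spherical grid, not a method from the text:

```python
# Numerical check of the silent-source argument on the unit sphere:
# the integral of (r - r').n / |r - r'|^3 over S is the (signed) solid
# angle of S seen from r.
import numpy as np

def solid_angle_integral(r, n_theta=200, n_phi=400):
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # points r' and outward normals n' on the unit sphere (they coincide)
    pts = np.stack([np.sin(T) * np.cos(P),
                    np.sin(T) * np.sin(P),
                    np.cos(T)], axis=-1)
    w = np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)  # area elements
    d = r - pts
    integrand = np.einsum("ijk,ijk->ij", d, pts) / np.linalg.norm(d, axis=-1) ** 3
    return np.sum(w * integrand)

outside = solid_angle_integral(np.array([0.0, 0.0, 2.0]))
inside = solid_angle_integral(np.array([0.2, 0.1, 0.0]))
assert abs(outside) < 1e-3             # exterior point: the source is silent
assert abs(inside + 4 * np.pi) < 1e-2  # interior point: -4*pi (n outward)
```

The sign of the interior value simply reflects the orientation convention (outward normal).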


Figure 4.1: The head is modeled as a set of nested regions Ω1, . . . , ΩN+1 with constant isotropic conductivities σ1, . . . , σN+1, separated by interfaces S1, . . . , SN. Arrows indicate the normal directions (outward).

4.3 Semi-realistic model

In an infinite, homogeneous domain, the electric potential and the magnetic field decay at the same rate. However, when measured on the scalp, the two fields have very different spatial properties: the magnetic field appears more “focal”, and the electric potential more “diffuse”. The main reason for this qualitative difference is that the magnetic field is less sensitive than the electric potential to conductivity differences in the tissues of the head. The electric potential, in particular, is subject to diffusion because of the low conductivity of the skull. In this section, we will consider a piecewise-constant conductivity, organised in layers, as depicted in Figure 4.1.

4.3.1 Magnetic field computation

Section 2.2.2 has established the Biot and Savart law, decomposing the magnetic field into a primary current contribution and an ohmic contribution:

B = B0 − (µ0/(4π)) ∫ σ∇V × (r − r′)/‖r − r′‖³ dr′ ,

where

B0 = (µ0/(4π)) ∫ Jp × (r − r′)/‖r − r′‖³ dr′ .

With the piecewise-constant conductivity model, the ohmic term can be decomposed as a sum over volumes of constant conductivity:

∫ σ∇V × (r − r′)/‖r − r′‖³ dr′ = ∑_i σi ∫_Ωi ∇V × (r − r′)/‖r − r′‖³ dr′ = ∑_i σi Ii   (4.1)

In the above identity, note that the conductivities must not only be assumed constant in each domain Ωi, but also isotropic, in order to take σi out of the integral over Ωi. The volume integral Ii can be expressed as a surface integral on ∂Ωi = Si−1 ∪ Si. With this in view, we use the Stokes formula, and the identity

∇ × (V ∇g) = ∇V × ∇g .


Thus

Ii = ∫_Ωi ∇′ × ( V(r′) (r − r′)/‖r − r′‖³ ) dr′ = ∫_∂Ωi n × V(r′) (r − r′)/‖r − r′‖³ ds
   = ∫_Si n × V(r′) (r − r′)/‖r − r′‖³ ds − ∫_Si−1 n × V(r′) (r − r′)/‖r − r′‖³ ds .

This expression is then inserted in (4.1) and, recalling that σN+1 = 0,

B(r) = B0(r) + (µ0/(4π)) ∑_{i=1}^{N} (σi − σi+1) ∫_Si V(r′) (r − r′)/‖r − r′‖³ × n ds   (4.2)

In the case where the surfaces Si are spherical and concentric, the above expression shows that the radial component of the magnetic field is independent of the conductivity profile:

B(r) · r = B0(r) · r .

This results from the identity ((r − r′) × r′) · r = 0. In a spherical geometry, the independence from the conductivity profile can be extended to the three components of the magnetic field, if the source is a single dipole. Indeed, outside Ω, since σ = 0 and Jp = 0, J = 0 and hence ∇ × B = 0. The magnetic field thus derives from a “potential” which we denote U:

B = −∇U .

The potential U is only defined up to a constant, but since B → 0 at infinity, we adjust the constant so that U → 0 at infinity. Suppose the dipolar source to be located at r0, with dipolar moment q. Denote er = r/‖r‖ the unit radial vector. For r outside Ω,

U(r) = − ∫_0^∞ ∇U(r + t er) · er dt
     = ∫_0^∞ B(r + t er) · er dt = ∫_0^∞ B0(r + t er) · er dt
     = (µ0/(4π)) q × (r − r0) · er ∫_0^∞ dt/‖r + t er − r0‖³ .

The above expression shows that B is independent of σ. Moreover, for a radial dipole, q × (r − r0) · er = 0, hence U(r) = 0 and B(r) vanishes outside Ω.
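The two algebraic identities used above can be verified numerically in a few lines (random vectors are an illustrative choice):

```python
# Two quick checks of the spherical-geometry claims above:
# (i)  ((r - r') x r') . r = 0, which makes the radial component of B
#      independent of the conductivity profile, and
# (ii) for a radial dipole (q parallel to r0), q x (r - r0) . e_r = 0 at
#      every position r, so U, and hence B, vanish outside the head.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    r = rng.normal(size=3)
    rp = rng.normal(size=3)
    # identity (i)
    assert abs(np.cross(r - rp, rp) @ r) < 1e-10

    r0 = rng.normal(size=3)      # dipole position
    q_radial = 2.5 * r0          # radial dipole: q parallel to r0
    e_r = r / np.linalg.norm(r)
    # identity (ii): the integrand of U is identically zero
    assert abs(np.cross(q_radial, r - r0) @ e_r) < 1e-9
```

This is the well-known MEG blind spot: purely radial sources in a spherically symmetric head produce no magnetic field outside the head.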

4.3.2 Electric potential computation

The geometrical setting is again that of Figure 4.1. In each domain Ωi, the potential follows a Poisson equation

σi ∆V = fi


where fi is the restriction of ∇ · Jp to Ωi. At the interface Si between Ωi and Ωi+1, the following jump conditions hold:

[V]_Si = 0   (4.3)
[σ∂n V]_Si = 0 .   (4.4)

We define the jump of a function f : R³ → R at interface Sj as [f]_Sj = f⁻_Sj − f⁺_Sj, the functions f⁻ and f⁺ on Sj being respectively the interior and exterior limits of f: for r ∈ Sj,

f^±_Sj(r) = lim_{α→0^±} f(r + αn) .

Note that these quantities depend on the orientation of n. Using the same type of technique as for the magnetic field, one can show that the values of the potential (and of the normal current flow) on the surfaces Si are related by integral operators.

Green formula We recall the Green formula

∫_Ω (u ∆v − v ∆u) dr′ = ∫_∂Ω (u ∂n′v − v ∂n′u) ds(r′) .

Consider v = −1/(4π‖r − r′‖) = −G(r − r′) and a harmonic function u. The left-hand side integral I(r) = ∫_Ω (u ∆v − v ∆u) dr′ takes different values according to the position of r with respect to Ω, as summarized below:

r ∈ Ω : I(r) = u(r)
r ∈ R³\Ω : I(r) = 0
r ∈ ∂Ω : I(r) = u⁻(r)/2

The first two lines of the above table are trivial to prove, and the third relies on solid angle computations (refer to [16] for the proof). Thus, seen from inside,

I(r) = ∫_∂Ω ( −u⁻(r′) ∂n′G(r − r′) + G(r − r′) ∂n′u⁻(r′) ) ds(r′)   (4.5)

The same treatment can be applied to the volume Ω′ = R³\Ω; seen from Ω, this yields

J(r) = ∫_∂Ω ( u⁺(r′) ∂n′G(r − r′) − G(r − r′) ∂n′u⁺(r′) ) ds(r′)   (4.6)

with the integral term J(r) equal to

r ∈ R³\Ω : J(r) = u(r)
r ∈ Ω : J(r) = 0
r ∈ ∂Ω : J(r) = u⁺(r)/2

Summing (4.5) and (4.6), for r ∈ Ω,

u(r) = − ∫_∂Ω [u] ∂n′G(r − r′) ds(r′) + ∫_∂Ω [∂n′u] G(r − r′) ds(r′)

and, for r ∈ ∂Ω,

(u⁺(r) + u⁻(r))/2 = − ∫_∂Ω [u] ∂n′G(r − r′) ds(r′) + ∫_∂Ω [∂n′u] G(r − r′) ds(r′)

To simplify notation, we introduce two integral operators, called the “double-layer” and “single-layer” operators, which map a scalar function f on ∂Ω to another scalar function on ∂Ω:

Df(r) = ∫_∂Ω ∂n′G(r − r′) f(r′) ds(r′)
Sf(r) = ∫_∂Ω G(r − r′) f(r′) ds(r′) .

The two above relations become, for r ∈ Ω,

u(r) = −D[u] + S[∂n u]

and, for r ∈ ∂Ω,

u∓(r) = (± I/2 − D)[u] + S[∂n u] .

This also holds when Ω = Ω1 ∪ Ω2 ∪ . . . ∪ ΩN, with ∂Ω = S1 ∪ S2 ∪ . . . ∪ SN. In this case, for r ∈ Si,

(u⁻(r) + u⁺(r))/2 = ∑_{j=1}^{N} ( −Dij [u]_Sj + Sij [∂n u]_Sj )   (4.7)

Geselowitz formula

Supposing the primary current Jp to be restricted to one volume Ωi, consider V∞ such that σi ∆V∞ = ∇ · Jp holds in all of R³. Across each surface Sj, the potential V∞ and its normal derivative ∂n V∞ are continuous. Consider the function u = σ V − σi V∞; it is harmonic in each Ωj, and therefore satisfies (4.7). Since [u]_Sj = (σj − σj+1) Vj and [∂n u]_Sj = 0, we obtain, on each surface Sj,

((σj + σj+1)/2) Vj + ∑_{k=1}^{N} (σk − σk+1) Djk Vk = σi V∞ ,   (4.8)

a formula which was established in 1967 by Geselowitz.

4.4 Realistic model

For even more realistic models, the piecewise-constancy assumption on the conductivity made in the previous section needs to be relaxed. Indeed, the head is known to have strongly inhomogeneous or anisotropic conductivities, in at least two domains:


• the skull is a non-homogeneous material. It is a porous material with marrow insertions and all kinds of holes filled with air or various liquids (e.g. the sinuses). Also, its shape is extremely complex and difficult to extract from MRI images, so that it is often “guessed” from its relative position with respect to the other interfaces. In practice, researchers have found that its conductivity plays a fundamental role in EEG, and that it is best modelled (in the absence of more direct measurements) with distinct radial and tangential conductivities.

• the white matter is even less homogeneous: it is made of an entanglement of fibers connecting different pieces of the cortex. It thus has a strongly anisotropic behavior. While the importance of taking this anisotropy into account for MEG/EEG reconstruction has been less investigated, it is certainly interesting to evaluate its effects and, fortunately, there exists a way of measuring it (contrary to the case of the skull). Diffusion MRI is able to measure the diffusion of water molecules in various directions. Intuitively, water flows more easily along the direction of the fibers in the white matter than across them. This anisotropy of the diffusion of water can be used to model an anisotropic conductivity, as currents are certainly better conducted along the fibers than across them.

Dealing with such anisotropies with a BEM-like method is impossible most of the time (it would be possible to deal with radial and tangential anisotropies for a spherical head, but not much more). Thus, this problem needs to be tackled using directly the Maxwell equations in the quasistatic case. So we start again with the Poisson equation

∇ · (σ∇V) = f = ∇ · Jp .

To obtain a unique solution, this equation needs to be supplemented with a boundary condition. To do so, we hypothesize that no current flows out of the head (which is mostly true, except at the spinal column, which is “far” from most EEG/MEG measurements).
We thus have to solve the following problem:

∇ · (σ∇V) = ∇ · Jp   in Ω
σ ∂V/∂n = σ∇V · n = 0   on S = ∂Ω.   (4.9)

This problem will be solved using a Finite Element approach. We will first show that the PDE 4.9 can be formulated as a variational problem (section 4.4.1), which is then discretized to obtain a linear system (section 4.4.2), and then solved (section 4.4.3).

Anisotropic model Note that in the above formulation, σ can be taken either as a simple scalar function of r, or as a function that associates a 3D symmetric positive definite matrix to each point of space. This matrix is a


tensorial description of the anisotropic conductivity (the eigenvalues represent the conductivity along the corresponding eigenvectors). Denoting by Σ this matrix, the anisotropic system becomes:

∇ · (Σ∇V) = ∇ · Jp   in Ω
Σ ∂V/∂n = Σ∇V · n = 0   on S = ∂Ω.   (4.10)

For simplicity, we will develop the scalar model hereafter, but most of the results can be trivially adapted to the anisotropic case. Notationally, almost nothing changes, except that σ(r)∇V(r) · ∇w(r) is replaced by ∇V(r)ᵀ Σ ∇w(r), and σ(r)‖∇φ(r)‖² is replaced by ∇φ(r)ᵀ Σ ∇φ(r).

4.4.1 A variational formulation of the forward problem

Let us first define some functional spaces that will be needed hereafter.

H¹(Ω) = { w ∈ L²(Ω), ∇w ∈ L²(Ω)³ } .
H²(Ω) = { w ∈ L²(Ω), ∇w ∈ H¹(Ω)³ } .

These spaces simply provide functions that can be plugged directly into the equations that will be used (with all integrals and differentiations well defined). We first show that the following three problems are equivalent:

➀ V ∈ H²(Ω) is solution of:

∇ · (σ∇V) = f   in Ω
σ ∂V/∂n = σ∇V · n = g   on S = ∂Ω.

➁ V ∈ H¹(Ω) is such that

∀w ∈ H¹(Ω)   ∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds = 0 .

➂ V = arg min_{φ∈H¹(Ω)} E(φ) with:

E(φ) = (1/2) ∫_Ω σ(r)‖∇φ(r)‖² dr + ∫_Ω f(r)φ(r) dr − ∫_S g(r)φ(r) ds .


Notice that the PDE in ➀ is exactly the same as 4.9: we have just renamed f = ∇ · Jp and allowed for a more general Neumann boundary condition g. This makes the presentation slightly more general, and shows that the basic method would remain the same even if we were able to model the currents in the neck. The functions f and g are supposed to be square integrable, that is f ∈ L²(Ω) and g ∈ L²(S).

Theorem 4.1. Problems ➀, ➁ and ➂ are equivalent.

Proof. Problem ➁ will be used as a pivot. The proof is thus in two parts: equivalence of ➀ and ➁, and equivalence of ➁ and ➂.

➀ =⇒ ➁ Using the formula ∇ · (σw∇V) = σ∇V · ∇w + w ∇ · (σ∇V) and integrating it over the domain Ω, we have:

∫_Ω σ(r)∇V(r) · ∇w(r) dr = ∫_Ω ∇ · (σ(r)w(r)∇V(r)) dr − ∫_Ω w(r) ∇ · (σ(r)∇V(r)) dr .

In the right-hand side, ∇ · (σ∇V) can be replaced by f because of ➀, and the Green theorem transforms the first term into a surface integral, giving:

∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S w(r)σ(r)∇V(r) · n ds = 0 .

Replacing σ(r)∇V(r) · n by its value on S, given by the boundary condition of ➀, yields the result:

∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds = 0 .

➁ =⇒ ➀ If ➁ is true for any w ∈ H¹(Ω), it is also true for w ∈ D(Ω), the space of C∞ functions with compact support in Ω. The dual of D(Ω) is the space of distributions over Ω, D′(Ω). If ∇ · (σ∇V) ∈ L²(Ω), then ∇ · (σ∇V) − f ∈ L²(Ω), since f ∈ L²(Ω) by hypothesis. Denoting by < ·, · > the duality bracket between L²(Ω) and D′(Ω), Eq. ➁ can be written as < ∇ · (σ∇V) − f, w > = 0. From a standard result in functional analysis [Brezis 88], ∇ · (σ∇V) − f is then zero almost everywhere.

➂ =⇒ ➁ If ➂ is true, then for all w ∈ H¹(Ω) and for any real number λ, we have:

E(V) ≤ E(V + λw) .   (4.11)

This is true because ∀V ∈ H²(Ω), ∀w ∈ H¹(Ω), ∀λ ∈ R, V + λw ∈ H¹(Ω).


By definition of E:

E(V + λw) = (1/2) ∫_Ω σ(r)‖∇(V + λw)(r)‖² dr + ∫_Ω f(r)(V + λw)(r) dr − ∫_S g(r)(V + λw)(r) ds   (4.12)

= (1/2) ∫_Ω σ(r)‖∇V(r)‖² dr + ∫_Ω f(r)V(r) dr − ∫_S g(r)V(r) ds
+ λ ( ∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds )
+ (λ²/2) ∫_Ω σ(r)‖∇w(r)‖² dr

E(V + λw) = E(V) + λ ( ∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds ) + (λ²/2) ∫_Ω σ(r)‖∇w(r)‖² dr   (4.13)

For λ sufficiently small and positive, Eq. 4.11 implies:

∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds ≥ 0 .

For λ sufficiently small and negative, Eq. 4.11 implies:

∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds ≤ 0 .

Thus:

∫_Ω σ(r)∇V(r) · ∇w(r) dr + ∫_Ω f(r)w(r) dr − ∫_S g(r)w(r) ds = 0 .

➁ =⇒ ➂ From Eq. 4.13 with ➁, denoting by φ the value V + λw, it is clear that E(V) is the minimum value of E(φ). This is true since, when w spans H¹(Ω) and λ spans R, φ = V + λw spans H¹(Ω).

4.4.2 Discretization of the FEM forward problem

General discrete framework The FEM forward problem is implemented using the variational formulation ➂. The continuous functional spaces are approximated using the Galerkin method (see section ??), yielding a discrete problem. The 3D space Ω is tessellated with bounded cells (e.g. tetrahedra or hexahedra) (Ci), i = 1 . . . NC. This tessellation Ωh also introduces a set of points (Vi), i =


1 . . . NV (the vertices of the cells), and the space of continuous functions over Ω is approximated by a vectorial space using basis functions (wⁱ), i = 1 . . . NV, defined at each vertex:

H¹h(Ωh) = { φh, ∃(φ1, . . . , φNV) ∈ R^NV, φh(r) = ∑_{i=1}^{NV} φi wⁱ(r) } ,

The boundary of the tessellation, Sh, also defines a tessellation of S, the boundary of Ω. Without loss of generality, we assume that the vertices of the tessellation that lie on its boundary are (Vi), i = 1 . . . NS, with NS < NV:

H¹h(Sh) = { φh, ∃(φ1, . . . , φNS) ∈ R^NS, φh(r) = ∑_{i=1}^{NS} φi wⁱ_Sh(r) } ,

where wⁱ_Sh is the restriction to Sh of the function wⁱ. The discretization of the criterion E in ➂ is obtained by using the discretized versions of all the involved functions φ, f and g. For σ, we will use a different discretization scheme, where σ is given by a constant σi over the cell Ci.

E(φh) = (1/2) ∫_Ω σ(r) ‖∇( ∑_{i=1}^{NV} φi wⁱ(r) )‖² dr + ∫_Ω f(r) ( ∑_{i=1}^{NV} φi wⁱ(r) ) dr − ∫_S g(r) ( ∑_{i=1}^{NV} φi wⁱ_Sh(r) ) ds = Eh(Φ) ,

where Φ = (φ1, . . . , φNV) ∈ R^NV.

Eh(Φ) = (1/2) ∑_{i,j=1}^{NV} φi φj ∫_Ω σ(r)∇wⁱ(r) · ∇wʲ(r) dr + ∑_{i=1}^{NV} φi ∫_Ω f(r)wⁱ(r) dr − ∑_{i=1}^{NS} φi ∫_S wⁱ_Sh(r)g(r) ds

The minimization of E(φ) then becomes a simple problem of minimizing, in finite dimension, the quadratic criterion Eh(Φ). Denoting by Aij the second order derivative ∂²Eh/∂φi∂φj of this criterion, we have:

Aij = ∫_Ω σ(r)∇wⁱ(r) · ∇wʲ(r) dr .

Note that the matrix A is naturally symmetric. Usually the basis functions have a very local support, so that A is also very sparse. We also introduce the vector B:

Bi = ∫_Ω f(r)wⁱ(r) dr − ∫_S wⁱ_Sh(r)g(r) ds   for i ≤ NS
Bi = ∫_Ω f(r)wⁱ(r) dr   otherwise .

The criterion Eh(Φ) can be written as (1/2) ΦᵀAΦ + B · Φ, and optimality is obtained when AΦ + B = 0.

ΦᵀAΦ = ∑_{i,j=1}^{NV} φi φj Aij = ∑_{i,j=1}^{NV} φi φj ∫_Ω σ(r)∇wⁱ(r) · ∇wʲ(r) dr = ∫_Ω σ(r)‖∇φh(r)‖² dr   (4.14)


This proves that the matrix A is positive, because σ > 0 over Ω. Note, however, that the matrix is not definite. Indeed, ΦᵀAΦ is zero iff ∇φh(r) = 0 almost everywhere on Ω. This is natural, as the original equation is insensitive to the addition to V of a constant function over Ω. On our discretized spaces, this happens for Φ = Cst 1 (this is the case whenever the constant function over Ωh belongs to the space H¹h(Ωh), which holds for the standard basis functions P1 or Q1 used for tetrahedral or hexahedral cells respectively). Similarly to Eq. 4.14, we can prove that ΦᵀAΨ = ∫_Ω σ(r)∇φh(r) · ∇ψh(r) dr. Applying this result to Ψ = 1 proves that the kernel of the matrix A is spanned by the constant vector 1. Rewriting this result for each line of the matrix gives:

∀i = 1 . . . NV   ∑_{j=1}^{NV} Aij = 0 .   (4.15)
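The structure of A described above can be illustrated on a toy problem. The sketch below uses 1D P1 elements rather than the 3D tetrahedra of the text, and arbitrary cell conductivities; each cell contributes (σi/h)·[[1, −1], [−1, 1]] to the stiffness matrix:

```python
# 1D illustration of the stiffness matrix A: symmetric, positive
# semidefinite, with row sums zero (Eq. 4.15) and kernel spanned by the
# constant vector 1. This is a sketch, not the 3D FEM of the text.
import numpy as np

n_cells = 8
h = 1.0 / n_cells
sigma = np.linspace(0.5, 2.0, n_cells)  # one conductivity per cell (illustrative)

A = np.zeros((n_cells + 1, n_cells + 1))
for i in range(n_cells):
    # local stiffness of a P1 element with constant sigma on the cell
    k = sigma[i] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    A[i:i + 2, i:i + 2] += k  # local-to-global assembly

assert np.allclose(A, A.T)                         # symmetric
assert np.allclose(A.sum(axis=1), 0.0)             # Eq. 4.15: row sums vanish
assert np.allclose(A @ np.ones(n_cells + 1), 0.0)  # kernel contains 1
eigvals = np.linalg.eigvalsh(A)
assert abs(eigvals[0]) < 1e-10 and eigvals[1] > 0  # positive, one zero mode
```

Pinning one degree of freedom (or any other gauge choice) restores definiteness, which is how the singular system AΦ + B = 0 is solved in practice.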

This result can be used to reduce the amount of memory used to store the matrix A. Indeed, Eq. 4.15 can be rewritten as:

Aii = − ∑_{j≠i} Aij .   (4.16)

This can be used to rewrite conveniently the part of the criterion Eh(Φ) containing A:

C(Φ) = (1/2) ΦᵀAΦ = (1/2) ∑_{i,j=1}^{NV} φi φj Aij = (1/2) ∑_{i=1}^{NV} Aii φi² + (1/2) ∑_{i≠j} Aij φi φj .

Replacing Aii by its value given by Eq. 4.16 yields:

C(Φ) = (1/2) ∑_{i=1}^{NV} ∑_{j≠i} ( −Aij φi² + Aij φi φj )
     = (1/2) ∑_{i=1}^{NV} ∑_{j≠i} Aij (φj − φi) φi
     = (1/2) ∑_{i<j} Aij [ (φj − φi) φi + (φi − φj) φj ]
     = − (1/2) ∑_{i<j} Aij (φi − φj)² .

Lemma A.1. Hölder inequality: for u, v ∈ L²(Ω),

∫_Ω |uv| dr ≤ sqrt( ∫_Ω u² dr ) sqrt( ∫_Ω v² dr ) .

Proof. For all λ > 0, |uv| ≤ (1/(2λ)) u² + (λ/2) v², hence

∫_Ω |uv| dr ≤ (1/2) ( (1/λ) ∫_Ω u² dr + λ ∫_Ω v² dr ) ≤ min_{λ>0} (1/2) ( (1/λ) ∫_Ω u² dr + λ ∫_Ω v² dr ) .

Since the minimum of λA + (1/λ)B for λ > 0 is obtained for λ = sqrt(B/A), we get:

∫_Ω |uv| dr ≤ sqrt( ∫_Ω u² dr ) sqrt( ∫_Ω v² dr ) .

Remark A.1. Applying the Hölder inequality for v = 1, we obtain:

( ∫_Ω |u| dr )² ≤ vol(Ω) ∫_Ω u² dr .   (A.5)
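Both inequalities can be checked on discretized integrals (the functions below are arbitrary illustrative choices, and the integrals are approximated by Riemann sums):

```python
# Discrete check of the Hölder (Cauchy-Schwarz) inequality and of
# Remark A.1 on [0, 1], with integrals approximated by Riemann sums.
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
u = np.cos(3 * x) + x**2      # any square-integrable functions
v = np.exp(-x)

lhs = np.sum(np.abs(u * v)) * dx
rhs = np.sqrt(np.sum(u**2) * dx) * np.sqrt(np.sum(v**2) * dx)
assert lhs <= rhs + 1e-12     # Hölder / Cauchy-Schwarz

# Remark A.1, with vol(Omega) computed as the discrete measure of [0, 1]
vol = np.sum(np.ones_like(u)) * dx
assert (np.sum(np.abs(u)) * dx) ** 2 <= vol * np.sum(u**2) * dx + 1e-12
```

The discrete sums satisfy the same inequalities exactly, since they are just the Cauchy-Schwarz inequality on weighted vectors.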

Lemma A.2. Poincaré inequality: If Ω is bounded, then there is a constant C(Ω) > 0 such that

∀w ∈ H¹₀(Ω)   ∫_Ω w²(r) dr ≤ C(Ω) ∫_Ω ‖∇w(r)‖² dr .

Proof. The proof is established here only in the 1D case, for Ω = [a, b]. Since w ∈ H¹₀(Ω), w(a) = w(b) = 0. We have:

|w(x)| = |w(x) − w(a)| = | ∫_a^x w′(r) dr | ≤ ∫_a^x |w′(r)| dr ≤ ∫_a^b |w′(r)| dr .

Integrating the square of the previous inequality yields:

∫_a^b w²(r) dr ≤ (b − a) ( ∫_a^b |w′(r)| dr )² .

Using Eq. A.5 for the right-hand side, we get:

∫_a^b w²(r) dr ≤ (b − a)² ∫_a^b w′(r)² dr .
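The 1D inequality with C = (b − a)² can be verified numerically for a sample function vanishing at both endpoints (the function and interval are illustrative; the derivative is approximated by finite differences):

```python
# Discrete check of the 1D Poincare inequality with C = (b - a)^2.
import numpy as np

a, b = 0.0, 2.0
x = np.linspace(a, b, 2001)
dx = x[1] - x[0]
w = np.sin(np.pi * (x - a) / (b - a)) * (1 + 0.3 * x)  # w(a) = w(b) = 0
wp = np.gradient(w, dx)                                 # finite-difference w'

lhs = np.sum(w**2) * dx
rhs = (b - a) ** 2 * np.sum(wp**2) * dx
assert lhs <= rhs
```

For this function the two sides differ by roughly an order of magnitude, illustrating that (b − a)² is a valid but not sharp constant.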


APPENDIX A. USEFUL MATHEMATIC FORMULAE AND LEMMA

A.3 Integral equalities

On a face f defined by the vertices Vk, k = 1..d + 1, given a basis function wⁱ for some index i, the integral Ai = ∫_f wⁱ(r) dr is zero (because wⁱ = 0 over f) if the index i does not correspond to one of the vertices defining f. Otherwise, without loss of generality, we assume that i corresponds to the vertex V1. Parameterizing by the affine basis defined by the vertices Vk, we have r = (1 − ∑_{j=1}^{d} λj) Vd+1 + ∑_{j=1}^{d} λj Vj, where λ = (λj, j = 1 . . . d) is the vector of affine parameters. The point r is in the domain delimited by f iff all the coefficients in the previous formula are between 0 and 1. Furthermore, dr = |V1 . . . Vd+1| dλ (the determinant is written with homogeneous coordinates for the vectors Vk).

Ai = ∫_f ( |r V2 . . . Vd+1| / |V1 . . . Vd+1| ) dr
   = |V1 . . . Vd+1| ∫_0^1 ∫_0^{1−λ1} . . . ∫_0^{1−∑_{i=1}^{d−1} λi} λ1 dλd . . . dλ1
   = |V1 . . . Vd+1| (1/(d − 1)!) ∫_0^1 λ1 (1 − λ1)^{d−1} dλ1
   = |V1 . . . Vd+1| / (d + 1)!
   = Volume(f) / (d + 1) ,

where integrating out the last p variables produces the factor (1/p!)(1 − ∑_{i=1}^{d−p} λi)^p. Similarly:

Aᵏi = ∫_f ( |r V2 . . . Vd+1| / |V1 . . . Vd+1| )ᵏ dr
    = |V1 . . . Vd+1| ∫_0^1 ∫_0^{1−λ1} . . . ∫_0^{1−∑_{i=1}^{d−1} λi} λ1ᵏ dλd . . . dλ1
    = |V1 . . . Vd+1| (1/(d − 1)!) ∫_0^1 λ1ᵏ (1 − λ1)^{d−1} dλ1
    = |V1 . . . Vd+1| (1/(d − 1)!) ( k!(d − 1)! / (d + k)! )
    = ( k! d! / (d + k)! ) Volume(f) ,

and

Bᵏˡi = ∫_f ( |r V2 . . . Vd+1| / |V1 . . . Vd+1| )ᵏ ( |V1 r V3 . . . Vd+1| / |V1 . . . Vd+1| )ˡ dr
     = |V1 . . . Vd+1| ∫_0^1 ∫_0^{1−λ1} . . . ∫_0^{1−∑_{i=1}^{d−1} λi} λ1ᵏ λ2ˡ dλd . . . dλ1
     = |V1 . . . Vd+1| (1/(d − 2)!) ∫_0^1 ∫_0^{1−λ1} λ1ᵏ λ2ˡ (1 − λ1 − λ2)^{d−2} dλ2 dλ1
     = |V1 . . . Vd+1| / (d + 2)!   for k = l = 1 .
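The first result can be checked by Monte Carlo integration for d = 2, where it says that the integral of one barycentric (P1) basis function over a triangle equals Area/3. The triangle and sample size below are illustrative choices:

```python
# Monte Carlo check, for d = 2, of the result Ai = Volume(f)/(d+1):
# the integral of the barycentric coordinate of one vertex over a
# triangle equals Area/3.
import numpy as np

rng = np.random.default_rng(1)
V1, V2, V3 = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.5, 1.5])
e1, e2 = V2 - V1, V3 - V1
area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])

# uniform sampling of the triangle via folded unit-square coordinates
n = 200_000
r1, r2 = rng.random(n), rng.random(n)
flip = r1 + r2 > 1
r1[flip], r2[flip] = 1 - r1[flip], 1 - r2[flip]
lambda1 = 1 - r1 - r2          # barycentric coordinate of V1, i.e. w^1(r)

estimate = area * lambda1.mean()   # MC estimate of the integral of w^1
assert abs(estimate - area / 3) < 5e-3
```

The same sampling scheme, with `lambda1**2` or `lambda1 * r1` as integrand, reproduces the Aᵏi and Bᵏˡi formulas.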

A.4 Minimization under constraints: the Lagrange multiplier approach

Suppose one wants to solve a constrained minimisation problem such as:

x = argmin_{x, f(x)=0} C(x) ,   (A.6)

where x can represent a single or vectorial variable, C(x) is the criterion to be minimized, and f(x) = 0 represents a constraint on the solution x (again, this constraint can be either scalar or vectorial). For simplicity, only the scalar version of the problem is developed hereafter. The Lagrange multiplier approach states that problem A.6 can be expressed equivalently as the unconstrained problem:

x = argmin_{x,λ} C(x) − λf(x) .   (A.7)

The normal equations associated to problem A.7 are:

C′(x) − λf′(x) = 0 ,
f(x) = 0 ,

which clearly shows that the constraint f(x) = 0 is taken into account in the solution of the minimization problem. λ is called the Lagrangian parameter. In the vectorial case, as many Lagrangian parameters as constraints must be introduced, and the term λf(x) is replaced by a scalar product.
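As a worked example (the criterion and constraint are illustrative choices), take C(x) = ‖x‖² with the linear constraint aᵀx = 1. The normal equations 2x − λa = 0, aᵀx = 1 are then linear, and can be solved directly:

```python
# Lagrange multipliers for: minimize |x|^2 subject to a.x = 1.
# Normal equations: 2x - lam*a = 0, a.x = 1, assembled as one linear system.
import numpy as np

a = np.array([1.0, 2.0, 2.0])

K = np.zeros((4, 4))
K[:3, :3] = 2 * np.eye(3)   # C'(x) = 2x
K[:3, 3] = -a               # -lam * f'(x), with f'(x) = a
K[3, :3] = a                # the constraint a.x = 1
rhs = np.array([0.0, 0.0, 0.0, 1.0])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:3], sol[3]

assert np.isclose(a @ x, 1.0)             # constraint satisfied
assert np.allclose(x, a / (a @ a))        # known closed-form minimizer
assert np.allclose(2 * x - lam * a, 0.0)  # stationarity
```

The closed-form solution x = a/‖a‖² is the minimum-norm point of the constraint plane, which is exactly the structure exploited in the least-squares application of section A.5.2.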

A.5 Singular Value Decomposition

A basic theorem of linear algebra states that any real M × N matrix A with M ≥ N can be written as the product of an M × N column-orthogonal matrix U, an N × N diagonal matrix D with non-negative diagonal elements (known as the singular values), and the transpose of an N × N orthogonal matrix V [9]. In other words,

A = UDVᵀ = ∑_{i=1}^{N} di Ui Viᵀ ,   (A.8)


where di refers to the i-th diagonal element of D, and Mi designates the i-th column of a matrix M (applied here to the matrices U and V). The singular values are the square roots of the eigenvalues of the matrix AAᵀ (or AᵀA, since these matrices share the same non-zero eigenvalues), while the columns of U and V (the singular vectors) correspond to the eigenvectors of AAᵀ and AᵀA respectively. As defined in Eq. (A.8), the SVD is not unique since:

• It is invariant to arbitrary permutations of the singular values and their corresponding left and right singular vectors. Sorting the singular values (usually by decreasing magnitude) solves this problem, unless there exist equal singular values.

• Simultaneous changes in the signs of the vectors Ui and Vi do not have any impact on the left-hand side of Eq. (A.8). In practice, this has no impact on most numerical computations involving the SVD.

In the case where M < N, the above theorem can be applied to Aᵀ, yielding basically the same result, with the matrices D and U being M × M and the matrix V being M × N. If A is considered as a linear operator from a vector space E_N of dimension N to a vector space E_M of dimension M, then the SVD can be interpreted as choosing specific orthogonal bases for E_M (given by U, possibly completed if not square) and E_N (given by V, possibly completed if not square), such that A is diagonal (given by D) when expressed in the coordinate frames associated with those bases. If A has null singular values (i.e. D has null diagonal elements), then A is singular. Its rank R is exactly equal to the number of non-null singular values. From Eq. (A.8) it is then possible to obtain a reduced form of the SVD, in which UR, DR and VR are respectively M × R, R × R and N × R matrices, yielding the general formula:

A = UR DR VRᵀ = ∑_{i=1}^{R} dRi URi VRiᵀ .

The matrices UR, DR, VR are obtained by taking the columns of the matrices U, D, V corresponding to the non-null elements di. Thus, UR and VR provide respectively orthogonal bases of the image of A and of the orthogonal complement of the kernel of A. Standard libraries such as LAPACK provide efficient ways of computing the SVD of a matrix A without having to rely on the matrices AAᵀ and AᵀA (which is an advantage both for the numerical stability of the result and for the computational burden); [9] describes the algorithms for computing such a decomposition. Usually the singular values are ordered by decreasing magnitude, to remove the permutation ambiguity described above.
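The conventions above (M ≥ N, column-orthogonal U, sorted non-negative singular values, link with the eigenvalues of AᵀA) can be illustrated with numpy's SVD on a random matrix:

```python
# Illustration of Eq. (A.8) with numpy's reduced SVD (here M = 5 >= N = 3).
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3))

U, d, Vt = np.linalg.svd(A, full_matrices=False)  # A = U diag(d) Vt
assert U.shape == (5, 3) and d.shape == (3,) and Vt.shape == (3, 3)
assert np.allclose(A, U @ np.diag(d) @ Vt)
assert np.allclose(U.T @ U, np.eye(3))            # U column-orthogonal
assert np.all(d[:-1] >= d[1:]) and np.all(d >= 0) # sorted, non-negative

# singular values = square roots of the eigenvalues of A^T A
assert np.allclose(np.sort(d**2), np.sort(np.linalg.eigvalsh(A.T @ A)))
```

Note that `np.linalg.svd` returns Vᵀ directly (here `Vt`), so the columns Vi of the text are the rows of `Vt`.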

A.5.1 Moore-Penrose pseudoinverse

If the matrix A is square and invertible, its inverse is easily obtained as A⁻¹ = VD⁻¹Uᵀ. When some singular values are null, D⁻¹ does not exist, but it is still possible to define D† as the diagonal matrix such that:

d†i = 1/di   if di ≠ 0
d†i = 0   if di = 0   (A.9)

The N × M matrix A† defined as A† = VD†Uᵀ exists whatever the matrix A (even for non-square and non-invertible matrices) and is called the Moore-Penrose pseudoinverse of the matrix A. From the considerations about the reduced SVD, it can be seen that the pseudoinverse basically behaves as a regular inverse between the sub-spaces defined by UR and VR, and has the same kernel as the original matrix A. Actually, the Moore-Penrose pseudoinverse can be defined as the unique N × M matrix A† that satisfies the following relations:

A A† A = A   (A.10)
A† A A† = A†   (A.11)
(A A†)* = A A†   (A.12)
(A† A)* = A† A   (A.13)

where A* is the conjugate transpose of A. Equation (A.10) simply states that even if AA† is not the identity, its restriction to the image of A (defined by its column vectors) is the identity. Equation (A.11) states that A† is a weak inverse of A for the multiplicative semigroup. Equations (A.12) and (A.13) state respectively that AA† and A†A are Hermitian matrices. Another property of interest for us is:

A† = lim_{λ→0} (AᵀA + λI)⁻¹ Aᵀ = lim_{λ→0} Aᵀ (AAᵀ + λI)⁻¹ .

Various other properties of the Moore-Penrose pseudoinverse, and some proofs of the above claims, can be found at http://en.wikipedia.org/wiki/Moore-Penrose_pseudoinverse.
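The four defining relations, and the regularized-inverse limit, can be checked on a rank-deficient example (the matrix below is an arbitrary illustrative choice):

```python
# Check of the Moore-Penrose relations (A.10)-(A.13) on a singular matrix,
# using numpy's SVD-based pinv. For real matrices, * reduces to transpose.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # rank-deficient: row 2 = 2 * row 1
              [0.0, 1.0, 1.0]])
Ap = np.linalg.pinv(A)

assert np.allclose(A @ Ap @ A, A)        # (A.10)
assert np.allclose(Ap @ A @ Ap, Ap)      # (A.11)
assert np.allclose((A @ Ap).T, A @ Ap)   # (A.12)
assert np.allclose((Ap @ A).T, Ap @ A)   # (A.13)

# regularized-inverse limit: (A^T A + lam I)^-1 A^T -> pinv(A)
lam = 1e-8
reg = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T)
assert np.allclose(reg, Ap, atol=1e-4)
```

The limit property is exactly the connection with Tikhonov regularization used for ill-posed inverse problems.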

A.5.2 SVD and least-squares problems

The Singular Value Decomposition is an important decomposition for least-squares methods because of the orthogonality properties of the matrices U and V. Indeed, suppose one has to solve the problem:

x = argmin_{x, ‖x‖=1} ‖Ax‖² = argmin_x ‖Ax‖²/‖x‖² .   (A.14)

Lagrange multipliers (see section A.4) are used to solve this problem. The above minimisation is thus equivalent to solving the problem:

x = argmin_{x,λ} ‖Ax‖² − λ (‖x‖² − 1) .


Writing the normal equations of this problem yields:

(AᵀA − λI) x = 0
‖x‖² = 1

Thus the solution x is an eigenvector of AᵀA. In such a case, the value of the criterion is precisely the corresponding eigenvalue. Since the singular values and right singular vectors of A are precisely the square roots of the eigenvalues, and the eigenvectors, of AᵀA, the above problem is minimized by the right singular vector corresponding to the smallest singular value of A. Indeed, introducing the SVD of A yields¹:

x = argmin_{x, ‖x‖=1} ‖Ax‖²
  = argmin_{x, ‖x‖=1} ‖UDVᵀx‖²
  = argmin_{x, ‖x‖=1} ‖DVᵀx‖²   since U is an orthogonal matrix
  = argmin_{x′, ‖x′‖=1} ‖Dx′‖²   with x′ = Vᵀx, and since Vᵀ is an orthogonal matrix

The last two transforms are valid since orthogonal transforms (corresponding to orthogonal matrices) preserve the norm, which means that ‖Uz‖ = ‖z‖ and that Vᵀ maps the unit sphere to itself. Assuming that the smallest singular value has index l, the solution to this last problem is x′ = el (the vector with all components zero, except at position l where the coordinate is 1). Consequently, the solution is x = Vx′ = Vel = Vl. The solution of the problem is thus given by Vl, the l-th column vector of V, where the index l corresponds to the index of the smallest singular value dl.

¹ Here, the SVD of an M × N matrix A is written in such a way that the matrices U, V and D are respectively of sizes M × M, N × N and M × N. This can always be done by completing U and V with some additional orthogonal columns, and D with zero columns or lines (depending on whether M < N or M > N).
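The derivation above translates into two lines of numpy (the matrix is an arbitrary illustrative choice):

```python
# Solving problem (A.14) with the SVD: the minimizer is the right
# singular vector associated with the smallest singular value.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 4))

U, d, Vt = np.linalg.svd(A)
x = Vt[-1]     # last row of Vt = right singular vector of the smallest d

assert np.isclose(np.linalg.norm(x), 1.0)
# criterion value at the solution = smallest singular value squared
assert np.isclose(np.linalg.norm(A @ x) ** 2, d[-1] ** 2)
# no other unit vector from a random sample does better
for _ in range(200):
    y = rng.normal(size=4)
    y /= np.linalg.norm(y)
    assert np.linalg.norm(A @ y) >= np.linalg.norm(A @ x) - 1e-12
```

This construction (taking the right singular vector of the smallest singular value of a data matrix) is the standard total-least-squares recipe for homogeneous problems.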

Bibliography

[1] A. El Badia and T. Ha-Duong. An inverse source problem in potential analysis. Inverse Problems, 16:651–663, 2000.
[2] Sylvain Baillet and Line Garnero. A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem. IEEE Transactions on Biomedical Engineering, 44(5):374–385, May 1997.
[3] L. Baratchart, J. Leblond, F. Mandrea, and E.B. Saff. How can the meromorphic approximation help to solve some 2D inverse problems for the Laplacian? Inverse Problems, 15:79–90, 1999.
[4] Richard B. Buxton. Introduction to Functional Magnetic Resonance Imaging. Cambridge University Press, 2002.
[5] M. Chafik, A. El Badia, and T. Ha-Duong. On some inverse EEG problems. In M. Tanaka and G. S. Dulikravich, editors, Inverse Problems in Engineering Mechanics II, pages 537–544. Elsevier Science Ltd, 2000.
[6] A.M. Dale and M.I. Sereno. Improved localization of cortical activity by combining EEG and MEG with MRI cortical surface reconstruction: A linear approach. Journal of Cognitive Neuroscience, 5(2):162–176, 1993.
[7] V. Della-Maggiore, W. Chau, P.R. Peres-Neto, and A.R. McIntosh. An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data. NeuroImage, 17(1):19–28, 2002.
[8] O. Faugeras, F. Clément, R. Deriche, R. Keriven, T. Papadopoulo, J. Roberts, T. Viéville, F. Devernay, J. Gomes, G. Hermosillo, P. Kornprobst, and D. Lingrand. The inverse EEG and MEG problems: The adjoint space approach I: The continuous case. Technical Report 3673, INRIA, May 1999.
[9] G.H. Golub and C.F. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, second edition, 1989.
[10] Matti Hämäläinen and Riitta Hari. Magnetoencephalographic (MEG) characterization of dynamic brain activation: Basic principles and methods of data collection and source analysis. Chapter 10, pages 227–253. Academic Press, 2nd edition, 2002.


[11] Matti Hämäläinen, Riitta Hari, Risto J. Ilmoniemi, Jukka Knuutila, and Olli V. Lounasmaa. Magnetoencephalography: Theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65(2):413–497, April 1993.

[12] Per Christian Hansen. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM Monographs on Mathematical Modeling and Computation. SIAM, Philadelphia, 1998.

[13] William James. The Principles of Psychology. Harvard: Cambridge, MA, 1890.

[14] Jaakko Malmivuo and Robert Plonsey. Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields. Oxford University Press, 1995.

[15] John C. Mosher, Paul S. Lewis, and Richard M. Leahy. Multiple dipole modeling and localization from spatio-temporal MEG data. IEEE Transactions on Biomedical Engineering, 39(6):541–553, 1992.

[16] Jean-Claude Nédélec. Acoustic and Electromagnetic Equations. Springer Verlag, 2001.

[17] G. Thut, J.R. Ives, F. Kampmann, M. Pastor, and A. Pascual-Leone. A device and protocol for combining TMS and online recordings of EEG/evoked potentials. J. Neurosci. Methods, 2005.

Index

action potentials, 15, 39
Biot and Savart law, 14
boundary elements, 25
conductivity, 12
current
    ohmic, 14
    primary, 14
    return, 14
current density, 9, 12
DBS, 6
Deep Brain Stimulation, 6
EEG, 6
Electroencephalography, 6
finite elements, 28
fMRI, 6
forward model
    realistic, 28
    semi-realistic, 25
forward problem, 23
functional Magnetic Resonance Imaging, 6
gain matrix, 53
Generalized Linear Model, 44
Green function for Laplacian, 58
harmonic function, 15
ill-posedness, 45
inverse problem
    beamforming, 51
    dipole fitting, 47
    imaging approach, 53
    minimum norm solution, 46
    MUSIC, 49
    scanning methods, 48, 51
    source localization, 45
    Tikhonov regularization, 54
    uniqueness, 45
Lagrange multiplier, 61
Lagrangian, 61
Laplacian
    fundamental solution, 58
    Green function, 58
lead-field matrix, 46
Magnetoencephalography, 6
Maxwell equations, 9
Maxwell-Ampere equation, 10
Maxwell-Faraday equation, 11
Maxwell-Gauss equation, 10
MEG, 6
Moore-Penrose pseudoinverse, 47, 62
MRI, 19
    functional, 39
MUSIC method, 49
permittivity, 12
PET, 6
Poisson equation, 14
Positron Emitted Tomography, 6
postsynaptic potentials, 15, 39
preprocessing of fMRI data, 42
quasistatic, 12
regularization
    L-curve, 54
    Tikhonov, 54
silent sources, 24
Singular Value Decomposition (SVD), 47, 61
source models, 15
TMS, 6
Transcranial Magnetic Stimulation, 6