HFI ANSWER TO ESA'S AO

THIS IS THE COLOR COVER PAGE FINAL COMPLETE VERSION 1998, FEBRUARY 15TH

High Frequency Instrument for the Planck Mission
A proposal to the European Space Agency

Cover page

PRINCIPAL INVESTIGATOR

PROJECT MANAGER

Institut d'Astrophysique Spatiale Bât 121 - Universite Paris Sud F-91405, Orsay, France tel: 33 1 69 85 86 65 fax: 33 1 69 85 86 75 e-mail: [email protected]

Institut d'Astrophysique Spatiale Bât 121 - Universite Paris Sud F-91405, Orsay, France tel: 33 1 69 85 85 83 fax: 33 1 69 85 86 75 e-mail: [email protected]

Jean-Loup Puget

Jacques Charra

SCIENCE CONSORTIUM COORDINATOR

SURVEY SCIENTIST George Efstathiou

Francois R. Bouchet

INSTRUMENT SCIENTIST

DATA PROCESSING CENTER MANAGER

Institut d'Astrophysique Spatiale Bât 121 - Universite Paris Sud F-91405, Orsay, France tel: 33 1 69 85 85 77 fax: 33 1 69 85 86 75 e-mail: [email protected]

Institut d'Astrophysique Spatiale Bât 121 - Universite Paris Sud F-91405, Orsay, France tel: 33 1 69 85 85 68 fax: 33 1 69 85 86 75 e-mail: [email protected]

Institute of Astronomy Madingley Road CB3 0HA Cambridge, United Kingdom tel: 44 12 23 33 75 30 fax: 44 12 23 33 99 10 e-mail: [email protected]

Institut d'Astrophysique de Paris 98bis Bd Arago F-75014 Paris, France tel: 33 1 44 32 80 95 fax: 33 1 44 32 80 01 e-mail:[email protected] Richard Gispert

Jean-Michel Lamarre

CO-INVESTIGATORS Peter Ade

Jamie Bock

Kevin Bennett

Thomas Bradshaw

Queen Mary and Westfield College Physics Department, Mile End Road E1 4NS London, United Kingdom tel: 44 1 71 975 50 32 fax: 44 1 71 980 09 86 e-mail: [email protected] Space Science Department European Space Research and Technology Centre Astrophysics Division, Keplerlaan 1 - P.O. Box 299 NL-2200 AG Noordwijk tel: 31 71 565 35 59 fax: 31 71 565 46 90 e-mail: [email protected] Alain Benoit

Centre de Recherche sur les Tres Basses Temperatures 25, avenue des Martyrs - BP 166 F-38042 Grenoble Cedex 9, France tel: 33 4 76 88 90 72 fax: 33 4 76 87 50 60 e-mail: [email protected] Paolo de Bernardis

University La Sapienza Dipartimento di Fisica G. Marconi Gruppo di Cosmologia Sperimentale Piazzale A. Moro 2 - I-00185 Roma, Italy tel: 39 64 99 14 271 fax: 39 64 95 76 97 e-mail: [email protected]

Jet Propulsion Laboratory Mail 424-49 - Caltech CA 91125 Pasadena, U.S.A. tel: 1 818 354 07 15 fax: 1 818 584 99 29 e-mail: [email protected] Rutherford Appleton Laboratory Chilton - Didcot OX11 0QX Oxfordshire, United Kingdom tel: 44 12 35 44 61 49 fax: 44 12 35 44 68 63 e-mail: [email protected] Sarah Church

California Institute of Technology Observational Cosmology - Mail Stop 59-33 1200 E. California Blvd CA 91125 Pasadena, U.S.A tel: 1 626 395 2018 fax: 1 626 584 9929 e-mail: [email protected] Francois Couchot

IN2P3, Laboratoire de l'Accelerateur Lineaire B^at 200 - Universite Paris XI F-91405 Orsay, France tel: 33 1 64 46 89 40 fax: 33 1 64 46 83 20 e-mail: [email protected]

Integral Science Data Center Chemin d'Ecogia 16 1290 Versoix, Switzerland tel: 41 22 950 91 01 fax: 41 22 950 91 33 e-mail: [email protected]

Danish Space Research Institute Juliane Mariesvej 30 2100 Copenhagen, Denmark tel: 45 42 88 22 77 fax: 45 42 93 02 83 e-mail: [email protected]

Roger Emery

Francois Pajot

Martin Giard

Isabelle Ristorcelli

Yannick Giraud-Heraud

Michael Rowan-Robinson

Matthew Griffin

Tim Sumner

Rutherford Appleton Laboratory Chilton - Didcot OX11 0QX Oxfordshire, United Kingdom tel: 44 12 35 44 67 11 fax: 44 12 35 44 66 67 e-mail: [email protected]

Institut d'Astrophysique Spatiale Bât 121, Universite Paris Sud F-91405, Orsay, France tel: 33 1 69 85 85 67 fax: 33 1 69 85 86 75 e-mail: [email protected]

Centre d'Etudes Spatiales des Rayonnements 9, avenue du Colonel Roche - BP 4341 F-31029 Toulouse Cedex, France tel: 33 5 61 55 66 48 fax: 33 5 61 55 67 01 e-mail: [email protected]

Centre d'Etude Spatiale des Rayonnements 9, avenue du Colonel Roche - BP 4341 F-31029 Toulouse Cedex, France tel: 33 5 61 55 65 51 fax: 33 5 61 55 67 01 e-mail: [email protected]

Physique Corpusculaire et Cosmologie College de France 11, place Marcelin Berthelot F-75231 Paris Cedex 05, France tel: 33 1 44 27 15 49 fax: 33 1 43 54 69 89 e-mail: [email protected]

Imperial College Astrophysics Group, Blackett Laboratory Prince Consort Rd SW7 2BZ London, United Kingdom tel: 44 171 594 75 30 fax: 44 171 594 77 77 e-mail: [email protected]

Queen Mary and Westfield College Physics Department - Mile End Road E1 4NS London, United Kingdom tel: 44 171 975 50 68 fax: 44 181 980 09 86 e-mail: m.j.gri[email protected]

Imperial College Astrophysics Group, Blackett Laboratory Prince Consort Rd SW7 2BZ London, United Kingdom tel: 44 171 594 75 52 fax: 44 171 594 77 77 e-mail: [email protected]

Shaul Hanany

Racah Institute of Physics The Hebrew University Jerusalem 91904, Israel tel: 972 2 658 4550 fax: 972 2 658 4437 e-mail: [email protected]

Rashid Sunyaev

Max-Planck-Institut fur Astrophysik Karl Schwarzschildstr.1 - Postfach 1523 D-85740 Garching, Germany tel: 49 89 32 99 32 44 fax: 49 89 32 99 32 35 e-mail: [email protected]

Andrew Lange

California Institute of Technology Observational Cosmology - MailStop 59-33 1201 E.California CA 91125 Pasadena, U.S.A. tel: 1 626 395 68 87 fax: 1 626 584 99 29 e-mail: [email protected]

Laurent Vigroux

Anthony Lasenby

Simon White

Commissariat a l'Energie Atomique, DAPNIA Orme des Merisiers 91191 Gif sur Yvette, France tel: 33 1 69 08 39 12 fax: 33 1 69 08 65 77 e-mail: [email protected] Max-Planck-Institut fur Astrophysik Karl Schwarzschildstr.1 - Postfach 1523 D-85740 Garching, Germany tel: 49 89 32 99 32 11 fax: 49 89 3299 32 35 e-mail: [email protected]

Mullard Radio Astronomy Observatory Cavendish Laboratory Madingley Road, Cambridge CB3 0HE, United Kingdom tel: 44 223 33 72 93 fax: 44 223 35 45 99 e-mail: [email protected] Anthony Murphy

Experimental Physics National University of Ireland Co. Kildare - Maynooth, Ireland tel: 353 1 708 37 71 fax: 353 1 628 92 77 e-mail: [email protected]


SCIENTIFIC ASSOCIATES

Nabila Aghanim (Institut d'Astrophysique Spatiale, Orsay) Andreas Albrecht (Imperial College, London) Reza Ansari (IN2P3, Laboratoire de l'Accelerateur Lineaire, Orsay) Monique Arnaud (DAPNIA-SAp, Commissariat a l'Energie Atomique, Saclay) Eric Aubourg (DAPNIA-SPP, Commissariat a l'Energie Atomique, Saclay) Jean Ballet (DAPNIA-SAp, Commissariat a l'Energie Atomique, Saclay) Anthony Banday (Max Planck Institut fur Astrophysik, Garching) Pierre Bareyre (IN2P3, Laboratoire de Physique Corpusculaire, College de France, Paris) Matthias Bartelmann (Max Planck Institut fur Astrophysik, Garching) James G. Bartlett (Observatoire de Strasbourg) Charles Beichman (Infrared Processing and Analysis Center, Pasadena) Jean-Philippe Bernard (Institut d'Astrophysique Spatiale, Orsay) Francis Bernardeau (Service de Physique Theorique, Commissariat a l'Energie Atomique, Saclay) Alain Blanchard (Observatoire de Strasbourg) J. Richard Bond (Canadian Institute for Theoretical Astrophysics, Toronto) John Carlstrom (The University of Chicago) Catherine Cesarsky (Direction des Sciences de la Matiere, Commissariat a l'Energie Atomique, Saclay) Stephane Colombi (Institut d'Astrophysique de Paris) Jacques Delabrouille (Institut d'Astrophysique Spatiale, Orsay) Francois-Xavier Desert (Observatoire de Grenoble) Mark Dragovan (The University of Chicago) Daniel Egret (Observatoire de Strasbourg) David Elbaz (DAPNIA-SAp, Commissariat a l'Energie Atomique, Saclay) Ken Ganga (Infrared Processing and Analysis Center, Pasadena) Walter Gear (Royal Observatory, Edinburgh) Krzysztof M. Gorski (Theoretical Astrophysics Center, Copenhagen) Bruno Guiderdoni (Institut d'Astrophysique de Paris) Jacques Haissinski (IN2P3, Laboratoire de l'Accelerateur Lineaire, Orsay) Alan Heavens (Institute for Astronomy, Edinburgh) George Helou (Infrared Processing and Analysis Center, Pasadena) Richard Hills (Mullard Radio Astronomy Observatory, Cambridge) Eric Hivon (Theoretical Astrophysics Center, Copenhagen) Michael Hobson (University of Cambridge) Aled Jones (Mullard Radio Astronomy Observatory, Cambridge) Jean Kaplan (Laboratoire de Physique Corpusculaire, College de France, Paris) Lloyd Knox (Canadian Institute for Theoretical Astrophysics, Toronto) Andrew Lawrence (Royal Observatory, Edinburgh) Bruno Maffei (Queen Mary and Westfield College, London) Christophe Magneville (DAPNIA-SPP, Commissariat a l'Energie Atomique, Saclay) Robert Mann (Imperial College, London) Pierre de Marcillac (Institut d'Astrophysique Spatiale, Orsay) Silvia Masi (Universita la Sapienza, Rome) Stephan Meyer (The University of Chicago) Alain Omont (Institut d'Astrophysique de Paris) Chris Paine (Jet Propulsion Laboratory, Pasadena) David Polarski (Universite de Tours) Paul Richards (University of California, Berkeley) Uros Seljak (Center for Astrophysics, Cambridge, MA) Stephen Serjeant (Imperial College, London) Guy Serra (Centre d'Etude Spatiale des Rayonnements, Toulouse) Joseph Silk (University of California, Berkeley) Naoshi Sugiyama (University of Kyoto) Jean-Francois Sygnet (Institut d'Astrophysique de Paris) Max Tegmark (Institute for Advanced Study, Princeton) Romain Teyssier (DAPNIA-SAp, Commissariat a l'Energie Atomique, Saclay) Jean-Pierre Torre (Service d'Aeronomie, Verrieres le Buisson) Neil Turok (University of Cambridge) F. Van Leeuwen (University of Cambridge) Martin Ward (University of Leicester) Dominique Yvon (DAPNIA-SPP, Commissariat a l'Energie Atomique, Saclay)


High Frequency Instrument for the Planck Mission
A proposal to the European Space Agency

Executive Summary

Principal Investigator: Jean-Loup Puget (Institut d'Astrophysique Spatiale, Orsay)
Survey Scientist: George Efstathiou (Institute of Astronomy, Cambridge)
Consortium Science Coordinator: Francois R. Bouchet (Institut d'Astrophysique de Paris)
Instrument Scientist: Jean-Michel Lamarre (Institut d'Astrophysique Spatiale, Orsay)
Data Processing Center Manager: Richard Gispert (Institut d'Astrophysique Spatiale, Orsay)
Project Manager: Jacques Charra (Institut d'Astrophysique Spatiale, Orsay)

Co-Investigators: Peter Ade (Queen Mary and Westfield College, London) Kevin Bennett (Space Science Department, European Space Research and Technology Centre, Noordwijk) Alain Benoit (Centre de Recherche sur les Tres Basses Temperatures, Grenoble) Paolo de Bernardis (Universita La Sapienza, Roma) Jamie Bock (Jet Propulsion Laboratory, Pasadena) Thomas Bradshaw (Rutherford Appleton Laboratory, Oxfordshire) Sarah Church (California Institute of Technology, Pasadena) Francois Couchot (IN2P3, Laboratoire de l'Accelerateur Lineaire, Orsay) Thierry Courvoisier (Integral Science Data Center, Versoix) Roger Emery (Rutherford Appleton Laboratory, Oxfordshire) Martin Giard (Centre d'Etude Spatiale des Rayonnements, Toulouse) Yannick Giraud-Heraud (Physique Corpusculaire et Cosmologie, College de France, Paris) Matthew Griffin (Queen Mary and Westfield College, London) Shaul Hanany (Racah Institute of Physics, Jerusalem) Andrew Lange (California Institute of Technology, Pasadena) Anthony Lasenby (Mullard Radio Astronomy Observatory, Cambridge) Anthony Murphy (Experimental Physics, Maynooth) Hans Ulrik Norgaard-Nielsen (Danish Space Research Institute, Copenhagen) Francois Pajot (Institut d'Astrophysique Spatiale, Orsay) Isabelle Ristorcelli (Centre d'Etude Spatiale des Rayonnements, Toulouse) Michael Rowan-Robinson (Imperial College, London) Tim Sumner (Imperial College, London) Rashid Sunyaev (Max-Planck-Institut fur Astrophysik, Garching) Laurent Vigroux (Service d'Astrophysique, Commissariat a l'Energie Atomique, DAPNIA, Saclay) Simon White (Max-Planck-Institut fur Astrophysik, Garching)

The Planck Surveyor High Frequency Instrument (HFI) will utilise 100 mK bolometers to measure the anisotropies of the Cosmic Microwave Background (CMB) at all scales larger than 6 arcminutes to an unprecedented accuracy of ΔT/T = 2×10⁻⁶. These measurements will be limited only by the fundamental limits set by CMB photon noise and astrophysical foregrounds. Furthermore, the very high sensitivity of the Planck HFI will allow precise measurements of the polarization of the CMB. The Planck CMB measurements will enable cosmologists to test models for the origin and structure of the Universe (quantum fluctuations or topological defects) and to constrain the 10-20 key cosmological parameters defining our Universe to an accuracy of order a percent or better in most scenarios. Since Hubble's discovery of the expansion of the Universe, much effort has been devoted to establishing the geometrical and kinematic characteristics of our Universe (Hubble constant, deceleration parameter, cosmological constant, curvature, ...). The precise constraints on these and other parameters from Planck (and particularly Planck HFI) will far surpass the accuracy of conventional astronomical techniques, heralding a new era in cosmology.

The HFI will cover the frequency range from 100 GHz to 860 GHz. This wide frequency coverage has been chosen so that the HFI can measure the two main foregrounds at these frequencies, Galactic dust emission and distant infrared galaxies. For these, the spectra are steeply rising with frequency, allowing a measurement of these foregrounds with an angular resolution as good as the one achieved on the CMB measurements. These foregrounds are now much better understood as a result of all-sky surveys constructed with the DIRBE and FIRAS instruments aboard COBE, recent ISO observations and ground-based follow-up observations.

The HFI frequencies have been carefully chosen to optimise the detection of clusters of galaxies via the Sunyaev-Zeldovich (S-Z) effect. This effect arises from the Compton interaction of CMB photons with the hot gaseous atmospheres of clusters of galaxies. The S-Z effect is expected to be the dominant secondary distortion of the CMB, but can be separated very accurately from the primordial CMB anisotropies via its unique spectral signature. The HFI should detect many thousands of S-Z clusters of galaxies, probing redshifts z ∼ 1. The HFI will also detect many thousands of infrared galaxies. The production of complete near all-sky catalogues of galaxy clusters and infrared galaxies with the HFI is an important scientific goal of the Planck mission.

Detailed models of the millimeter and submillimeter sky, including polarization, have been constructed to analyse the performance of the HFI. With the HFI design proposed here, we have demonstrated that the primordial CMB anisotropies can be recovered with an accuracy that is limited only by the precision with which we can subtract foregrounds. The most sensitive channel for both the intensity and the polarization anisotropies is at 217 GHz and has an angular resolution of 5.5 arcminutes. The full frequency range of the HFI instrument provides excellent monitoring and subtraction of foregrounds and the S-Z effect, and offers some redundancy against partial instrument failures. With our present understanding of foregrounds, the accuracy of the primordial CMB reconstruction is improved by combining HFI and LFI data. The addition of LFI data enables more accurate removal of the low frequency foregrounds (free-free and synchrotron emission). The full Planck payload (HFI and LFI) offers very broad frequency coverage with relatively uniform sensitivity. This provides essential cross-checks of systematic errors (e.g. thermal variations, 1/f noise and stray light) and of any unexpected behaviour of the foregrounds. The cosmological results from Planck will thus be as free as possible from systematic errors and any a priori hypotheses concerning foregrounds.

A large number of scientists are involved in the HFI consortium (91 co-investigators and scientific associates). We propose that all Planck scientists (project scientist, HFI, LFI and telescope teams) form a single science collaboration under an international coordination group which will organize scientific projects during the proprietary period. The membership of the Planck collaboration will be revised regularly to allow new scientists to join the project. Teams to work on specific scientific topics, and their associated data rights, will be determined by the international coordination group following Planck collaboration workshops and will be submitted to the Science Team for final approval.

The Planck Surveyor sky scanning strategy can be chosen to optimize the redundancy in the data by moving the spin axis by up to 10° from the antisolar direction. An optimized scanning strategy is essential for detecting, controlling and removing systematic effects which might affect the data. A full pipeline including modules to test and remove all identified systematic effects will be ready and fully tested one year before launch. During the commissioning and verification phase several predefined sky scanning patterns will be tested to choose the one to be used for the first sky survey.

The HFI Data Processing Centre will be decentralized, involving a number of institutions under the coordination of IAS (Orsay).
The Integrated Data and Information System (IDIS) is central to this concept and is common to the HFI and LFI consortia. IDIS will provide an environment within which all data, software and documentation can be stored, accessed and cross-referenced according to project-wide standards. It will be developed by MPA (Garching), SSD (Noordwijk) and OAT (Trieste), with MPA taking explicit responsibility for HFI/IDIS operation. Four main sites will each take responsibility for one of the main levels of processing:

- For the Level 1 processing, a common centre for both Planck instruments will be established in Geneva at the Integral Science Data Centre. This centre will be responsible for the extraction from raw telemetry of scientific and housekeeping data for both instruments, and for the insertion of these data into IDIS for further processing by other centres. It will perform Real Time Assessment of the basic instrument parameters, of instrument health, of the quality of the raw telemetry, and of the status of the survey with respect to the mission operations plan. The centre will transmit a report to the Project Scientist who will, on behalf of the Science Team, direct the Mission Operations Centre to retrieve any data lost through telemetry drop-out or similar problems and modify the sky scanning plan as required. The Level 1 Centre will also distribute the relevant data to the two instrument teams on a daily basis for further assessment of instrument health and data quality. Finally, the centre will be responsible for warning the Project Scientist and the PIs of anomalous instrument behaviour.

- Level 2 will process ground test and calibration data together with Level 1 flight data to reconstruct a time-ordered sequence of scientific, housekeeping and pointing information for each detector. These Time-Ordered Data (TOD) will be corrected for systematic effects, and used to infer a model of the instrument and to construct maps of the sky at each frequency. Obtaining these three elements is iterative in nature. Several versions of the 1-D timelines and of the 2-D maps will be generated and stored within IDIS, with rigorously controlled access rights. Many groups in the consortium will contribute to this work through studies of systematics and by developing algorithms to test and correct for them. Data and software objects for the data processing pipeline will be accessed, together with the relevant documentation, through IDIS. The Level 2 pipeline will be developed and tested by POSDAC (Orsay) with a strong involvement from LPAC (London) and CPAC (Cambridge).

- Level 3 processing uses the Level 2 sky maps at each Planck frequency to construct full sky maps of the various physical components of diffuse microwave emission (primary CMB emission, Galactic synchrotron, free-free and dust emission) together with catalogs of various discrete sources (stars, infra-red galaxies, clusters of galaxies). Iteration with Level 2 will be required to control low-level systematic effects. The products of this processing stage will be made available through IDIS for scientific exploitation by specific groups within the Planck Science Collaboration as authorised by the Science Team. Such exploitation tasks will include statistical characterisation of the CMB fluctuations by a variety of techniques, estimation of cosmological parameters, analysis of the various discrete source catalogs, construction of physical models for the Galactic emission components, as well as many others. As in Level 2, many groups will play a role in Level 3 processing, with CPAC (Cambridge) taking responsibility for implementing and testing the Level 3 pipeline, with strong involvement from POSDAC and LPAC.

- Level 4 processing will (as with Level 1) be common to both instruments and entails preparation of the Final Products of the mission, and their public distribution. These Final Products will consist of data, software and documentation, as determined by the Science Team. The data products will include calibrated and corrected TOD, sky maps at individual frequencies, maps of the various emission components and source catalogs, and will be supplied with full documentation. Software for accessing, manipulating and visualising these data products will also be provided. MPA (Garching) will be the responsible Centre for Level 4 for both instruments, with strong involvement from SSD (Noordwijk), especially with respect to integration of the final data from IDIS into FINDAS. Public access to the Final Products will be possible either through FINDAS or directly through IDIS.
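As an illustration of the Level 2 map-making step described above (turning Time-Ordered Data into frequency maps), the core operation can be sketched with a minimal binned map-maker. This is a simplified sketch under assumed white noise and known pointing, not the HFI pipeline: it solves the least-squares normal equations (PᵀP) m = Pᵀd, where the pointing matrix P assigns each time sample to one sky pixel.

```python
import numpy as np

def binned_map(pointings, tod, npix):
    """Least-squares map estimate for the model d = P m + n (white noise).

    pointings : pixel index observed at each time sample
    tod       : time-ordered data, same length as pointings
    npix      : number of sky pixels

    With one nonzero entry per row of P, (P^T P) is diagonal (the hit
    counts), so the solution reduces to the mean of the samples per pixel.
    """
    hits = np.bincount(pointings, minlength=npix)               # diagonal of P^T P
    psum = np.bincount(pointings, weights=tod, minlength=npix)  # P^T d
    m = np.zeros(npix)
    seen = hits > 0
    m[seen] = psum[seen] / hits[seen]
    return m

# Toy redundant scan of a 12-pixel sky with white detector noise.
rng = np.random.default_rng(0)
sky = rng.standard_normal(12)
pointings = np.tile(np.arange(12), 200)                 # 200 hits per pixel
tod = sky[pointings] + 0.5 * rng.standard_normal(pointings.size)
recovered = binned_map(pointings, tod, 12)
```

The per-pixel error shrinks as 1/sqrt(hits); the real pipeline must additionally remove low-frequency drifts (the map-striping problem of Section 2.2.2) before such binning is unbiased.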

To fulfill these goals, we propose a novel instrument concept based on very sensitive, stable bolometers cooled to 100 mK. Over the last few years, bolometers have been developed with such low noise characteristics that, when placed behind the low emissivity, passively cooled (50 K) Planck telescope, photon noise from the CMB and the telescope (which are comparable in the lowest frequency channels) will dominate over the noise from the entire detector chain. The best performance today has been obtained with so-called spider-web bolometers developed at Caltech/JPL. These are the baseline detectors for the project. Detectors with performance close to the baseline are available in Europe. The total power readout electronics is essential to the proposed design because it will provide measurements of the CMB anisotropies at all useful spatial frequencies with uniform sensitivity. The excellent sensitivity provided by the HFI (ΔT/T = 1.7×10⁻⁶) can be obtained by cooling the bolometers to 100 mK using a space-qualified dilution refrigerator. These two critical elements have been applied very successfully in the DIABOLO experiment (CRTBT, CESR and IAS) behind the 30 meter IRAM telescope. The cooling system will include a 4 K Joule-Thomson cooler from RAL and an 18 K sorption cooler developed by JPL and shared by both focal plane instruments. The HFI will use concentrating optics and filters developed by QMW and Maynooth. The QMW group has contributed to many successful submillimeter experiments, including the cold optics for the SCUBA instrument on the JCMT. At high frequencies, sampling in the cross-scan direction is obtained from detectors suitably displaced with respect to each other in the cross-scan direction.

The relatively low level of polarization expected for the CMB requires that polarization measurements are carried out in the most sensitive CMB channels (143 GHz and 217 GHz). Three detectors with polarisers at 60° from each other (or four at 45°, depending on frequency) are placed on the same scanning path to allow direct differential polarization measurements. Polarisation measurements are also carried out at 545 GHz, where Galactic dust emission is expected to dominate. Polarisation measurements are an important and feasible science goal for the HFI.

Table 1 gives a summary of the main parameters of each frequency channel, including its sensitivity expressed in terms of the main cosmological and astrophysical signals. Simulations incorporating a detailed model of the CMB sky show that a final sensitivity ΔT/T < 2×10⁻⁶ on the primordial CMB anisotropies can be obtained after removal of systematics and foregrounds through the use of all the channels.

Central frequency (GHz)                   100    143    217    353    545    857
Beam Full Width Half Maximum (arcmin)    10.7    8.0    5.5    5.0    5.0    5.0
Number of unpolarised detectors             4      3      4      6      0      6
ΔT/T sensitivity, intensity (μK/K)        1.7    2.0    4.3   14.4    147   6670
Number of polarised detectors               0      9      8      0      8      0
ΔT/T sensitivity, U and Q (μK/K)            -    3.7    8.9      -    208      -
Total flux sensitivity per pixel (mJy)    8.7   11.5   11.5   19.4     38     43
y_SZ per FOV (×10⁶)                      1.11   1.88    547   6.44     26    600

Table 1: HFI Sensitivities

A critical trade-off for such measurements must be made between the angular resolution and the straylight received by the detectors. The highest angular resolution would require full illumination of the telescope from the detector feed horns, whereas very low side lobes require under-illumination of the telescope. A simulation has shown that the distribution of Galactic emission is such that the full beam pattern can be reconstructed from the data themselves, taking advantage of the redundancies. The baseline design (and resulting angular resolution) is conservative and requires that the total power outside the main beam decrease from 2% at 100 GHz to 0.7% at 350 GHz. With this design, the degradation of the sensitivity caused by the signal from the far side lobes is negligible. The HFI beam characteristics will be refined further during phase B, when the telescope size and its baffling concept will be frozen. This HFI optimisation will have no direct system impact as it will affect only the low frequency channels, with no additional requirements on the pointing accuracy or the data rate. The baseline share of responsibilities for the instrument can be summarized as follows:

- bolometers: California Institute of Technology
- FET box: University La Sapienza
- cold optics: Queen Mary and Westfield College
- 20 K sorption cooler: Jet Propulsion Laboratory, with contribution from Jerusalem University
- 4 K cooler: Rutherford Appleton Laboratory
- dilution cooler (0.1 K): CRTBT (Grenoble) and IAS, with Air Liquide as sub-contractor

- mechanical and thermal design: IAS
- integration, final tests and calibration: IAS
- general on-board electronics and EGSE: IN2P3, CESR

The main requirements on the spacecraft and operations are:

- passive cooling of the focal plane unit environment to less than 50 K

- telescope temperature below 60 K, with a temperature stability better than 200 μK over one minute, measured by the spacecraft, for a spin axis up to 10° from the antisolar direction
- high level of cleanliness for the mirrors
- spinning at a rate not faster than 1 rpm
- optical axis at an angle from the spin axis of at most 85°; a smaller angle (80°) would be preferable to improve the data redundancy
- depointing of the spin axis by 4 arcminutes or more, at a frequency of at most one per hour, with a depointing accuracy of 1 arcminute
- data rate: 35 kbit/s minimum with no loss at emission; 2 Gbytes of data storage on board
- no solar illumination of the telescope at any phase of the mission (including the coast phase)
- absolute time available on board, provided to the experiment with an accuracy of 1 ms

The Planck payload places some difficult requirements on the mission. The passive cooling of a large telescope and instrument system to rather low temperatures (50-60 K) is difficult. Furthermore, the integration of this payload will also be difficult because of the many connections between the 20 K Focal Plane Unit and the 300 K level: piping for two sorption coolers, one 4 K cooler and the dilution cooler, waveguides of the LFI, and electrical harness for both instruments. The integration of this large system can only be done by industry, but the instrument teams must be closely associated with the payload development at all stages. The HFI team thus proposes to participate in the design of the payload module, the definition of the testing and integration plans, and the integration itself. The HFI consortium proposes to make available a test facility and operating team for the optical tests of the Planck telescope (this facility was developed to test the PRONAOS 2 meter balloon-borne submillimeter telescope and the ODIN satellite telescope). This activity, if accepted, must be part of the overall payload test and verification plan, and the conditions of this activity will have to be agreed. Because of the different geometry, the integration will be significantly more difficult for a merged Planck-FIRST mission than for a stand-alone Planck mission. We also stress the criticality of the temperature achievable by passive cooling for the HFI cryogenic chain. This temperature is significantly lower in the case of a stand-alone Planck mission.

Funding, as well as the staffing plan required for all the contributors, have been detailed and submitted to the relevant agencies. No major problem has been identified in the preliminary discussions, and formal approval of the resources for the development phase will be obtained before the end of 1998.
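As an illustration of the polarimetry scheme described earlier in this summary (three detectors with polarisers rotated 60° from each other on the same scan path), the Stokes parameters follow from a direct linear inversion of the three measurements. This is an idealised sketch (perfect polarisers, identical beams and calibration assumed), not the flight analysis:

```python
import numpy as np

# A perfect polariser at angle psi measures m = (I + Q cos 2psi + U sin 2psi) / 2.
angles = np.radians([0.0, 60.0, 120.0])
A = 0.5 * np.column_stack([np.ones(3), np.cos(2 * angles), np.sin(2 * angles)])

# Toy sky sample with a few percent polarisation (illustrative values only).
I_true, Q_true, U_true = 1.0, 0.02, -0.01
m = A @ np.array([I_true, Q_true, U_true])

# Three angles spaced by 60 deg make A square and well conditioned,
# so I, Q, U follow from a single linear solve.
I, Q, U = np.linalg.solve(A, m)
```

With four polarisers at 45° the same recovery is done by least squares, and the extra measurement provides a consistency check on the calibration.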


Contents 1 SCIENTIFIC CASE

1 SCIENTIFIC CASE
  1.1 Introduction
  1.2 Anisotropies in the CMB
    1.2.1 Statistical Description of the Anisotropies
    1.2.2 Maps of the CMB
    1.2.3 Observations of the CMB
  1.3 Testing Theoretical Models
    1.3.1 Estimating Cosmological Parameters
    1.3.2 The Case for Measuring Polarisation
    1.3.3 Topological Defects and Other Theories
  1.4 The Sunyaev-Zeldovich Effect
  1.5 Extragalactic Infra-Red Point Sources and Background
  1.6 Galactic Studies

2 FROM TIMELINES TO SCIENTIFIC PRODUCTS
  2.1 General considerations
  2.2 From timelines to maps
    2.2.1 Methods for map reconstruction and simulations
    2.2.2 Low-frequency drifts and map striping
    2.2.3 The measurement of polarisation
    2.2.4 The far side lobe problem
  2.3 The component separation problem
    2.3.1 The microwave sky
    2.3.2 Expected performances of the instrument
  2.4 Numerical simulations
    2.4.1 Numerical simulations of the observations
    2.4.2 Analysing observations
  2.5 Discussion & conclusions

3 INSTRUMENT DESCRIPTION
  3.1 Introduction
    3.1.1 Instrument layout
    3.1.2 Sensitivity
  3.2 Focal Plane Optics
    3.2.1 Architecture
    3.2.2 Pixel layout
    3.2.3 Coupling to the telescope - horn requirements
    3.2.4 Wavelength selection, filters
  3.3 Bolometric detectors
  3.4 Electronics
    3.4.1 General Electronics
    3.4.2 Readout electronics for 0.1K bolometers and thermometers
    3.4.3 Science Data

  3.5 Cryogenics
    3.5.1 Introduction
    3.5.2 The 0.1K cooler
    3.5.3 The 4K Mechanical Cooler
    3.5.4 Models
    3.5.5 Technical performance
    3.5.6 18K-20K Sorption cooler

4 INFORMATION MANAGEMENT AND DATA PROCESSING PLANS
  4.1 Introduction
  4.2 Data Processing Tasks
    4.2.1 Level 1 (Telemetry Processing)
    4.2.2 Level 2 (Data Reduction and Calibration)
    4.2.3 Level 3 (Component Separation and Optimisation)
    4.2.4 Level 4 (Generation of final products)
  4.3 Organisation of the pipeline
    4.3.1 Data Flow
    4.3.2 Management of Overall DPC Structure
  4.4 Integrated Data and Information System (IDIS)
    4.4.1 IDIS Philosophy
    4.4.2 Development
    4.4.3 IDIS Organisation
  4.5 HFI-specific implementation
    4.5.1 HFI-DPC General philosophy and implementation
    4.5.2 DPC Development
    4.5.3 DPC Operations
    4.5.4 Post-operations phase
    4.5.5 Organisation structure
    4.5.6 Local implementation of the HFI major Data Processing Sites

5 TEST AND CALIBRATION PLAN
  5.1 Test plan
  5.2 Performance tests and calibrations plan
    5.2.1 Ground calibrations
    5.2.2 Test and calibration facility
    5.2.3 In-flight calibration
  5.3 Calibrations parameters
  5.4 Test and calibrations facility
    5.4.1 Thermal environment and interfaces
    5.4.2 Optical fluxes
    5.4.3 Control and data
    5.4.4 Optical setup
    5.4.5 Cryostat
    5.4.6 Test and Calibration facility development plan

6 SYSTEM LEVEL ASSEMBLY, INTEGRATION AND VERIFICATION
  6.1 The HFI-LFI focal plane units interface
  6.2 Contribution of the HFI Consortium to the Payload Module (in reply to section 1.3.2 of the AO)
  6.3 Proposal for a contribution to the Planck telescope alignment
  6.4 Assembly, Integration and Verification


7 FLIGHT OPERATIONS
  7.1 HFI Flight Operation philosophy
  7.2 HFI Modes description
  7.3 Data continuity

8 QUALIFICATION AND EXPERIENCE OF THE PI TEAM
  8.1 Principal Investigator
  8.2 Co-Investigators
  8.3 Key technical personnel
  8.4 Institutes unique capabilities and relevant experience

9 ORGANISATION AND MANAGEMENT STRUCTURE
  9.1 General Management Structure
  9.2 Instrument Design and Procurement organisation
    9.2.1 Instrument System Design and Analysis
    9.2.2 Instrument Development and Procurement Control
  9.3 Data Processing Center organisation
  9.4 Operation after launch organisation
  9.5 Science organisation
  9.6 Relationship with LFI and Telescope Team
  9.7 Communications, publicity agreement


BIBLIOGRAPHY


APPENDICES

A From timelines to maps
  A.1 Model of the measurement
  A.2 Map making

B Polarisation measurements and striping

C Iterative sidelobe correction

D Separation of components
  D.1 Physical model
  D.2 The component separation problem
  D.3 "Straight" minimisation of residuals
  D.4 Wiener filtering
  D.5 Maximum Entropy Method

E A model of the microwave sky
  E.1 Galactic emissions
  E.2 The polarised emission from dust in our Galaxy
  E.3 Infrared sources & their background
  E.4 Clusters of galaxies

F Merit of the component separation using Wiener filtering
  F.1 The unpolarised case
  F.2 Generalisation to the polarised case
    F.2.1 Unbiased estimators and covariance of power spectra
    F.2.2 Errors on cosmological parameters: Fisher matrix

G List of Acronyms

List of Figures

1.1 Scalar and tensor CMB power spectra C_ℓ for CDM
1.2 Simulated maps of the CMB
1.3 A compilation of experimental measurements of the CMB anisotropies
1.4 Cosmological parameters and C(ℓ) accuracy for Planck
1.5 Dependence of the polarisation power spectrum on Ω_b and H
1.6 Simulated CMB maps of topological defect models
1.7 Simulated maps of the thermal and kinetic Sunyaev-Zeldovich effect
2.1 From timelines to scientific products
2.2 Raw and destriped noise maps
2.3 Location of intersections between sky scans
2.4 Sidelobe signal reprojected on the sky
2.5 Far sidelobe recovery at 350 GHz
2.6 Angular power spectra of the various foregrounds
2.7 rms contributions of the various components in the Planck channels
2.8 Contour levels of the different foregrounds in the frequency-space (ν, ℓ) plane
2.9 Expected performance of the component separation using Wiener filtering
2.10 Expected errors on the mode amplitudes using Wiener filtering
2.11 HFI performance in case of global variations across channels of the detector noise
2.12 Expected CMB residuals in case of various hardware failures
2.13 Expected errors on the CMB polarisation spectra using Wiener filtering
2.14 Simulated maps of observations by Planck
2.15 Recovered maps of ΔT by Wiener filtering HFI observations
2.16 Accuracy of recovered maps of ΔT by Wiener filtering HFI observations
2.17 Recovered maps of y_SZ using MEM on HFI and Planck observations
3.1 Schematic layout of the HFI
3.2 Architecture of the HFI focal plane unit
3.3 View of the entrance horns
3.4 Schematic of optical layout for a single HFI pixel
3.5 Far-field beam pattern for HFI 143 GHz feed
3.6 Plot of prototype 143 GHz channel spectral response
3.7 Prototype spider bolometer CSK18
3.8 General Electronics Architecture
3.9 Electrical links
3.10 Architecture of the bolometer / 0.1K thermistor readout electronics
3.11 Noise power spectrum of a 3 kΩ test resistor
3.12 Noise power spectrum of the "Yogi" testbed
3.13 The 4K box and its extension to the JFET box
3.14 Schematic implementation of the FET box
3.15 Scheme of the cooling system
3.16 View of the demonstration model of the 0.1K cooler
3.17 0.1K stage diagram
3.18 The major components of the 4K system
3.19 Theoretical cooling power of the 4K system
3.20 Ladder diagram
3.21 Schematic of a 20 K Planck cooler
3.22 AutoCad drawing of the compressor element
4.1 Data flow among sites forming the Planck data processing distributed structure
4.2 Diagram illustrating interactions and commonalities between the HFI and LFI DPCs
4.3 IDIS structure
4.4 IDIS management structure
5.1 Background power falling on single-mode detectors
5.2 HFI calibration setup
9.1 HFI management organigram
9.2 HFI Engineering Group
9.3 HFI Local Managers

APPENDICES
E.1 The cosmic sandwich
E.2 rms fluctuations in our Galactic model at the 7 degree scale
E.3 Redshifting a template galaxy spectrum
E.4 IR galaxies number counts prediction
E.5 Frequency correlations of the galaxies templates
E.6 Power spectra of the SZ thermal effect for different cosmological models
F.1 CMB Wiener matrices for the HFI and the full Planck mission
F.2 HFI Wiener matrices
F.3 Comparisons of quality factors for the LFI, HFI, and Planck

List of Tables

1 HFI Sensitivities
1.1 1σ errors in estimates of cosmological parameters
2.1 Galaxies number counts with Planck HFI
2.2 1σ errors in estimates of cosmological parameters using the polarisation information
3.1 HFI Sensitivities
3.2 Parameters defining the horns
3.3 Detail of the filtering scheme
3.4 Requirements on detectors
3.5 Data rate breakdown
3.6 Helium storage quantities
3.7 J-T Compressors mass budget
3.8 Heat loads on each of the stages from the 4K system
3.9 Main Modelling Assumptions
3.10 Total instrument cooling and 20 K sorption cooler input power
4.1 Technical staff during the development phase
4.2 Technical staff during the operations and post-operations phases
4.3 Scientific support during the development phase
4.4 Scientific support during the operations and post-operations phases
5.1 Test and calibration parameters

APPENDICES
E.1 Expected IR galaxies number counts along timelines
F.1 Summary of experimental characteristics used for comparing experiments
F.2 1σ errors in estimates of cosmological parameters using the polarisation information

Chapter 1

SCIENTIFIC CASE

1.1 Introduction

The Planck mission (formerly COBRAS/SAMBA) was selected by ESA in 1996 as Medium Mission 3 of the Horizon 2000 plan. A comprehensive scientific case for the mission was presented in the Phase A report and is as valid today as it was when the Phase A report was written. Indeed, recent developments in both theory and observation have demonstrated a compelling scientific case for Planck. Rather than reproducing the Phase A scientific case in its entirety, we therefore emphasise new developments and especially those aspects of the scientific case that depend critically on the HFI instrument. The strength of the scientific case for Planck is recognised both in Europe (via the M3 selection) and in the USA¹.

The detection of CMB anisotropies in 1992 by NASA's Cosmic Background Explorer (COBE) has opened up an entirely new way of studying cosmology to high precision. The next decade will see an enormous experimental effort dedicated to mapping the CMB at increased sensitivities and angular resolution. This will include NASA's MAP satellite and the much more powerful ESA Planck. The aims of Planck are to obtain definitive images of the CMB fluctuations and to subtract the primordial signal to high accuracy from contaminating astrophysical sources of emission (e.g. Galactic foregrounds and extragalactic point sources). This can be achieved by an experiment which combines high angular resolution, high sensitivity (ΔT/T ≈ 2×10⁻⁶), wide frequency coverage and excellent control of systematic errors (e.g. stray-light and thermal variations). This combination of requirements cannot be met either by ground-based or balloon-borne observations, but demands a space mission such as Planck. The CMB maps from Planck will provide by far the cleanest information available on the nature and fundamental mechanism of structure formation involving new physics beyond the Standard Model. Very little is known about the Universe prior to the epoch of nucleosynthesis.
For example, we do not yet know whether the large-scale uniformity of the Universe owes its origin to an early period of rapid expansion known as inflation. Neither do we know the origin of the primordial irregularities required to form galaxies and other structure in the Universe: e.g. are galaxies and clusters the product of quantum fluctuations generated during an inflationary phase, or of topological defects, such as cosmic strings, created at an ultra-high energy phase transition? Observations of the CMB anisotropies are one of the very few ways of testing physics at ultra-high energies and are of crucial importance in the further development of fundamental theories such as supersymmetry, superstrings and quantum gravity. In this respect, Planck is complementary to the next generation of accelerator experiments planned with the LHC at CERN, in which Europe has the leading role.

One of the most important goals of the Planck mission is a precise determination of the fundamental cosmological parameters that define our Universe. These include the densities of baryonic, cold and hot dark matter, the values of the cosmological constant Λ, the Hubble constant H_0, and the neutrino content of the Universe. Observational cosmology has been struggling for more than 50 years to constrain these parameters, but the remaining uncertainties are still very large, e.g. factors of several in the densities of baryonic and dark matter. Planck is capable of determining these densities to a precision of better than a percent (see § 1.3). Planck is also capable of detecting entirely new forms of matter, for example `quintessence' (Caldwell, Dave, & Steinhardt, 1997), and, more generally, of establishing the relationship between primordial irregularities and the large-scale structure (the chains, filaments and voids) observed in galaxy surveys.

To achieve the scientific goals of Planck, it is critical that the primordial anisotropy signal be separated to high accuracy from contaminating extragalactic and Galactic foregrounds. Planck has therefore been designed with two detector arrays: a low-frequency instrument (LFI) using HEMT detectors sampling the frequency range 30-100 GHz, and a high-frequency instrument (HFI) using bolometers sampling the frequency range 100-860 GHz. Detailed simulations (described in the Phase A report and in § 2.4) have shown that with the wide frequency coverage and high sensitivities of the Planck instruments, it is possible to use the spectral signatures of Galactic and extragalactic foregrounds to recover the primordial signal to an accuracy of ΔT/T < 2×10⁻⁶ over at least 50% of the sky. In addition, in this proposal we present a design for the Planck HFI that is capable of measuring CMB polarisation to high precision, opening up a new area of scientific analysis (see § 1.3.2). Planck will provide true (i.e. high signal-to-noise per resolution element) all-sky maps of the background radiation at 9 frequency bands that encompass all of the major sources of microwave emission. These maps will constitute a resource that is comparable to the IRAS and COBE-DIRBE maps at shorter wavelengths.

¹ The US National Academy of Sciences mid-decadal review "A New Science Strategy for Space Astronomy and Astrophysics" (1997) places the determination of the geometry and content of the universe by measurement of microwave background anisotropies as its highest scientific priority, above all other areas of space astronomy, including the detection of planets, X-ray and gamma-ray astronomy.
Thus Planck will generate a large amount of science over a wide range of astronomy, including:

- The detection of 10 000 or more rich clusters of galaxies via the spectral signature of the Sunyaev-Zeldovich effect (Zeldovich & Sunyaev, 1969). These data can be used to study, for example: (i) the cosmological evolution of clusters and intra-cluster gas out to redshifts z ≈ 1, especially in conjunction with X-ray measurements from satellites such as XMM; (ii) large-scale clustering in the Universe; (iii) peculiar velocities of clusters.
- Planck will produce catalogues of tens of thousands of point sources. We expect to produce point source catalogues of starburst and active galaxies, radio galaxies, quasars, blazars and inverted-spectrum radio sources.
- Planck will produce a wealth of information on fluctuations of the far-infrared background.
- Maps of the Galaxy from radio to far-infrared wavelengths suitable for studies of the properties of Galactic dust, cold cloud and cirrus morphologies, the distribution of cosmic rays, maps of the Galactic magnetic field, etc.

The HFI is critical for the Planck mission for a number of reasons: (i) all of the Planck measurements at frequencies > 100 GHz require the HFI instrument and are inaccessible either to the Planck LFI or to NASA's MAP satellite; (ii) the HFI will provide high-resolution and high signal-to-noise measurements of the primordial anisotropies at the optimal frequency range 100-300 GHz for the detection of primordial anisotropies against contaminating foregrounds (Bouchet, Gispert, & Puget, 1995; Tegmark & Efstathiou, 1996); (iii) the 6 frequencies of the HFI are essential for the accurate subtraction of Galactic emission; (iv) the HFI frequency bands have been chosen to detect the spectral signature of the Sunyaev-Zeldovich (SZ) effect, and almost all of the Planck SZ science, which is of great interest to a wide scientific community, requires data from the HFI; (v) the HFI will produce a unique complete sample of infra-red galaxies which can be used to study galaxy evolution and large-scale structure in the Universe.
Furthermore, our consortium has a very strong record in the bolometer and cryogenic technology required for the HFI, drawing on expertise with IRAS, ISO, FIRST and a number of ground-based and balloon-borne bolometer arrays. There are, therefore, compelling scientific and technological reasons to include a powerful HFI on Planck.

1.2 Anisotropies in the CMB

1.2.1 Statistical Description of the Anisotropies

The temperature pattern on the celestial sphere can be expanded in spherical harmonics,

    ΔT/T = Σ_{ℓm} a_{ℓm} Y_{ℓm}(θ, φ),                    (1.1)

and the power spectrum of the temperature fluctuations, C_ℓ, is defined by the mean square value of the coefficients a_{ℓm},

    C_ℓ = ⟨ |a_{ℓm}|² ⟩.                                   (1.2)

If the fluctuations in the early universe obey Gaussian statistics, as expected in most theories of the early Universe (see § 1.3), each of the coefficients a_{ℓm} is independent and so the power spectrum C_ℓ provides a complete statistical description of the temperature anisotropies. The temperature power spectrum, C_ℓ, is thus of fundamental importance in studies of the microwave background anisotropies. The temperature power spectrum can be estimated directly from observations by performing a spherical harmonic analysis, as has been done with the COBE data (Górski et al., 1994; Bond, 1995; Tegmark, 1996; Tegmark & Hamilton, 1997).
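The estimator implicit in equation (1.2) can be illustrated numerically. The sketch below is ours, not part of the proposal's pipeline: it draws Gaussian coefficients a_{ℓm} with a known C_ℓ, averages |a_{ℓm}|² over the 2ℓ+1 values of m, and prints the cosmic-variance scatter √(2/(2ℓ+1)) that limits how well any experiment can determine C_ℓ at a single multipole.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_alm(cl, ell):
    """Draw the 2*ell+1 Gaussian harmonic coefficients a_lm with variance C_ell."""
    return rng.normal(0.0, np.sqrt(cl), size=2 * ell + 1)

def estimate_cl(alm):
    """Unbiased estimator: C_ell_hat = sum_m |a_lm|^2 / (2*ell + 1)."""
    return np.sum(np.abs(alm) ** 2) / len(alm)

ell = 200
cl_true = 1.0e-10                      # arbitrary input power for this sketch
cl_hat = estimate_cl(simulate_alm(cl_true, ell))

# A single C_ell estimate scatters by sqrt(2/(2*ell+1)) about the true value:
# this "cosmic variance" dominates at low ell, where few m-modes are available.
frac_sigma = np.sqrt(2.0 / (2 * ell + 1))
print(cl_hat / cl_true, frac_sigma)
```

Averaging the estimator over many realisations (or, for the real sky, over bands of neighbouring multipoles) beats this scatter down, which is the basis of the band-power estimates discussed in § 1.2.3.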

Figure 1.1: The power spectrum C_ℓ of the microwave background anisotropies plotted against multipole ℓ for an inflationary cold dark matter cosmology. The angular scale corresponding to a given multipole is indicated by the scale at the top of the figure. The curve labelled `scalar' shows the contribution to the temperature power spectrum from small density (scalar) fluctuations in the early universe. The curve labelled `tensor' shows the contribution to the temperature power spectrum from gravitational waves generated during inflation. The relative amplitude of these two contributions depends on the specific details of the model of inflation, as described in § 1.3. The bars show the range of multipoles (angular scales) probed by COBE and by Planck.

Figure 1.1 shows a calculation of the temperature power spectrum for a cold dark matter (CDM) dominated universe with Ω_0 = 1. These curves assume a scale-invariant initial fluctuation spectrum, as expected in the simplest models of inflation, a baryon density Ω_b = 0.05 and a Hubble constant h = 0.5. The curve labelled `scalar' shows the power spectrum from density (scalar mode) perturbations; these are the small irregularities in the early universe that grow under the action of gravity to form the structure in the Universe that we see today. The curve labelled `tensor' shows the power spectrum arising from gravitational waves (tensor modes) generated during inflation (e.g. Starobinsky, 1985; Davis et al., 1992; Crittenden et al., 1993). Notice the large differences in shape between the two curves, which can be utilised to test models of inflation (see § 1.3).

The multipole ℓ tells us about anisotropies on an angular scale θ ∼ 1/ℓ, as indicated by the scale at the top of the figure. Thus COBE, which has an angular resolution of FWHM ≈ 7°, samples only low multipoles ℓ ≲ 20 (shown by the shaded bar in the upper panel). In contrast, the high angular resolution of Planck will allow measurements of multipoles up to ℓ ≳ 2500 (see figure 2.9), sampling the full multipole range of the theoretical predictions. The temperature anisotropies in this class of theoretical model arise from two distinct physical processes:

- Potential Fluctuations in the Early Universe: On angular scales ≳ 1°, the temperature anisotropies measure fluctuations in the gravitational potential along different lines of sight, ΔT/T = δφ/(3c²). This is often called the Sachs-Wolfe effect (Sachs & Wolfe, 1967) and is of particular importance because it links temperature anisotropies directly with potential fluctuations generated in the early Universe.
- Sound Waves Prior to Recombination: In the standard hot Big Bang model the Universe is highly ionised until a redshift z_R ≈ 1000, the so-called recombination epoch, when protons and electrons combine to make hydrogen atoms. Prior to this epoch, photons are tightly coupled to the electrons by Thomson scattering, but once recombination is complete the Universe becomes transparent to radiation and photons propagate towards us along geodesics, almost unimpeded by the matter. Maps of the microwave background radiation therefore provide us with a picture of irregularities at the `last scattering surface' at z_R ≈ 1000, when the Universe was about 300 000 years old. Small-scale fluctuations in the matter-radiation fluid at this epoch are causally connected and oscillate like sound waves. The peaks in the scalar radiation power spectrum in figure 1.1 are a consequence of these oscillations. The prominent `Doppler peak' at θ ≈ 1° (ℓ ≈ 200) indicates the maximum distance that a sound wave can travel by the time of recombination. Accurate measurements of the temperature anisotropies on small angular scales thus provide information on the sound speed at the time of recombination, and hence on the matter content of the Universe. Furthermore, the relation between physical distances and angular separations on the sky provides information on the geometry of the Universe. The positions and heights of the peaks in the temperature power spectrum can therefore be used to derive fundamental parameters that define our Universe such as the spatial curvature, Hubble constant H_0 and baryon density Ω_b.
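The multipole-angle correspondence θ ∼ 1/ℓ used throughout this discussion can be made concrete with a crude conversion, ℓ ≈ 180°/θ (the exact prefactor depends on convention). The helper below is a hypothetical illustration of ours, not a tool from the proposal:

```python
def lmax_from_fwhm(fwhm_deg):
    """Rough highest multipole resolved by a beam of the given FWHM,
    using the crude small-angle correspondence ell ~ 180 deg / theta."""
    return int(round(180.0 / fwhm_deg))

print(lmax_from_fwhm(7.0))     # a COBE-like 7 deg beam: only a few tens
print(lmax_from_fwhm(5 / 60))  # a ~5 arcmin beam: a few thousand
```

This is why a 7° beam confines COBE to ℓ ≲ 20 or so, while arcminute-class resolution is needed to reach the ℓ ≳ 2500 regime targeted by Planck.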
One of the principal goals of Planck is to determine these fundamental parameters to unprecedented precisions of better that a percent. (See x 1.3 for more details).

1.2.2 Maps of the CMB

Some of the key physical points of the previous Section can be illustrated by constructing simulated maps of the CMB anisotropies using the power spectra of figure 1.1. Examples are shown in figure 1.2. The picture to the left in the upper panel shows a simulated map of the sky at the sensitivity and angular resolution of the COBE satellite (ΔT/T ≈ 3.4×10⁻⁵ at 53 GHz and FWHM = 7°). Detailed analyses of the COBE data have shown that they are statistically indistinguishable from the simulated map in figure 1.2. The picture to the upper right shows how the sky would appear at the much higher angular resolution and sensitivity of Planck. The individual hot and cold spots seen in the Planck simulation have angular sizes of ≈ 1°, characteristic of the primary Doppler peak in figure 1.1. The physical sizes of these fluctuations are comparable to those of clusters and superclusters of galaxies observed in the present Universe. Thus with Planck we will be able to resolve primordial fluctuations that are the precursors of non-linear structures in the present Universe. Evidently, we expect to see a wealth of fine-scale structure in the microwave sky. The primary goal of Planck is to map these structures with high precision, free of foreground contamination, so enabling us to address the scientific questions described in this proposal. As a consequence of the anisotropy of Thomson scattering, the CMB anisotropies are predicted to be linearly polarised at about the 5% level, with a characteristic angular scale of ≈ 1° (Bond & Efstathiou, 1987; Seljak, 1997). The lower panel of figure 1.2 shows simulated maps of the polarisation pattern (direction of polarisation is plotted on the lower left and the amplitude is plotted on the lower right). Recent theoretical work, summarised in § 1.3.2, has shown that measurements of the polarisation

Figure 1.2: Simulated maps of the CMB anisotropies (upper panels, CMB fluctuations in μK) and of the CMB polarisation (lower panels).

pattern can yield unique tests of inflationary models of the early universe and of the recent thermal history of the intergalactic medium. Furthermore, measurement of the polarisation pattern would provide an important (and largely independent) consistency check of cosmological parameters derived from the total anisotropy signal. For these compelling reasons, we have designed the HFI to measure three Stokes parameters at three frequencies. The analyses of § 2.2.3 and § 2.3.2 indicate that this design should allow proper removal of the polarisation foreground for the HFI and lead to clean and usable CMB polarisation power spectra.
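For reference, the degree and orientation of linear polarisation follow from the Stokes parameters Q and U in the standard way. The sketch below is a minimal illustration; the angle convention (measured from the +Q axis) is an assumption for this example only:

```python
import math

def linear_polarisation(Q, U):
    """Polarised amplitude P and orientation angle psi (radians)
    from the Stokes parameters Q and U of linear polarisation.
    The factor 1/2 reflects that polarisation is a spin-2 quantity:
    psi is defined modulo pi, not 2*pi."""
    P = math.hypot(Q, U)
    psi = 0.5 * math.atan2(U, Q)
    return P, psi

# A 5% polarised signal entirely in +Q has orientation angle 0.
P, psi = linear_polarisation(0.05, 0.0)
```

Measuring three polariser orientations per frequency, as the HFI does, over-determines the pair (Q, U) and so provides an internal consistency check.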

1.2.3 Observations of the CMB

In 1992, the NASA COBE satellite detected small (ΔT/T ≃ 1 × 10⁻⁵) temperature irregularities in the microwave background radiation on angular scales ≳ 7° (Smoot et al. 1992). The COBE results provided the first convincing detection of primordial temperature anisotropies, which had been searched for earnestly by experimenters since the discovery of the microwave background by Penzias and Wilson in 1965. COBE has provided low resolution maps of the sky at three frequencies, 30, 53 and 90 GHz, with relatively poor sensitivity after four years of observation. The COBE

measurements are consistent with ground based measurements from Tenerife at frequencies 10, 15 and 32.5 GHz (Hancock et al., 1997) and with the balloon-borne Far-Infrared Survey (Ganga et al., 1993). Thus at large angular scales (≳ 5°) there is strong evidence that the temperature irregularities are independent of frequency over the range 10–170 GHz, as expected if they are of primordial rather than of Galactic origin.

The COBE discovery has stimulated a concerted effort to detect primordial anisotropies on smaller angular scales ∼ 1° (e.g. Fischer et al., 1992; Gaier et al., 1992; Schuster et al., 1993; Meinhold et al., 1993; Wollack et al., 1993; Cheng et al., 1994; De Bernardis et al., 1994; Tanaka et al., 1996; Netterfield et al., 1997; Tucker et al., 1997; Platt et al., 1997). Figure 1.3 shows a recent compilation of data points plotted as `band-power' estimates of the power spectrum C_ℓ. The two theoretical curves in Figure 1.3.a show how such observations can probe the geometry of the Universe. Both curves assume a scale-invariant CDM dominated universe², but the solid line shows a critical density universe (Ω₀ = 1) and the dotted line shows an open universe with a density parameter similar to that suggested by dynamical studies of clusters and groups of galaxies (Ω₀ = 0.3). The present observations are suggestive of a spatially flat universe, but cannot yet convincingly exclude a low density model (see e.g. Lineweaver & Barbosa, 1998). Figures 1.3.b and 1.3.c show how the temperature power spectrum of a critical density universe depends on the baryon density and Hubble constant. The structure of the Doppler peaks at multipoles ℓ ≳ 100 differs from model to model and can be used to identify the parameters of our Universe, but the differences are small in comparison to the observational errors. These simple examples show that the CMB power spectrum provides a `cosmic fingerprint' that can be used to identify the parameters of our Universe.
However, disentangling the variations caused by different physical parameters presents a formidable problem and requires an experiment with the precision, angular resolution and control of systematic errors provided by Planck. This is described in detail in the next subsection, which also compares Planck with the NASA MAP satellite.

1.3 Testing Theoretical Models

1.3.1 Estimating Cosmological Parameters

A complete theory of cosmology and structure formation requires the specification of a number of cosmological parameters. These include the density parameters of the various matter components, Ω_j, where j = b, cdm, hdm, γ, ν, … refers to baryons, cold dark matter, hot dark matter, photons, neutrinos, etc. We must also specify the geometry of the Universe, determined by the cosmological constant, Λ, and the spatial curvature (which we parameterise by Ω_k = 1 − Σ_j Ω_j − Ω_Λ, where Ω_Λ = Λ/3H₀²). In addition, we need to specify the fluctuations present in the early Universe.

² In both cases, the cosmological constant is assumed to be zero, the baryon density is fixed at Ω_b = 0.05 and the Hubble constant is fixed at H₀ = 50 km s⁻¹ Mpc⁻¹.


Figure 1.3: The points in the Figures show a compilation of experimental measurements of the CMB anisotropies on various angular scales (see e.g. Lineweaver & Barbosa (1998), for a full list of references). The theoretical curves in fig. 1.3.a show predictions for a critical density and a low density CDM Universe. Figures 1.3.b and 1.3.c show how the CMB anisotropies in a spatially flat universe depend on the baryon density Ω_b and Hubble constant h = H₀/100 km s⁻¹ Mpc⁻¹. In 1.3.b, we show curves for several values of Ω_b with the Hubble constant fixed at h = 0.5. In 1.3.c, we show curves for several values of H₀ with the baryon density fixed at Ω_b = 0.05.

These fluctuations are often represented by a scalar amplitude Q_s and spectral index n_s and a tensor amplitude Q_t and spectral index n_t. Typically, therefore, a theoretical model is defined by 10 or more parameters, many of which are poorly constrained at present. COBE has had an enormous impact on cosmology, yet it provided crude constraints on only two of these parameters³. A key scientific goal of Planck is to determine these parameters to the theoretical limits permitted by observations of the CMB. In fact, detailed calculations (Bond, Efstathiou, & Tegmark, 1997; Zaldarriaga, Spergel, & Seljak, 1997; Efstathiou & Bond, 1998) have shown that Planck can determine many of these parameters to a precision of better than a percent. Such accurate determinations would truly revolutionise cosmology. For example:

• The ability to measure small deviations from a precise scale invariant spectrum of fluctuations (n_s = 1, n_t = 0) will provide tight constraints on the form of an inflationary potential and hence on fundamental physics at energies ≳ 10¹⁵ GeV.

• The detection of a tensor (gravitational wave) component would distinguish between an inflationary origin of fluctuations and alternative models such as cosmic defects.
• Accurate measurements of the baryonic density can be compared directly with the predictions of primordial nucleosynthesis, providing a further probe of new physics (e.g. massive neutrinos, singlet neutrinos, etc.).

• CMB observations can set tight constraints on the geometry of the Universe, the cosmological constant Λ, and the Hubble constant H₀, that are free of the many systematic uncertainties of conventional astronomical techniques.

• Measurements of the various density components will set tight limits on the nature of the dark matter in our present Universe.

The unique ability of Planck HFI to distinguish between theoretical models with very similar cosmological parameters is illustrated in figure 1.4. In these two figures, we compare CMB power spectra for spatially flat adiabatic CDM models with different parameters and compare the ability of MAP and Planck to differentiate between them. Figure 1.4.a compares two models with very different baryon densities⁴, Δω_b/ω_b = 24%, and figure 1.4.b compares two models with nearly identical baryon densities but different cold dark matter densities and Hubble constants, Δω_c/ω_c = 18% and Δh/h = 22%. Planck can easily differentiate between these models at high significance levels. For example, the models in fig. 1.4.b are difficult to distinguish by MAP, even in its upgraded specification

³ Q² = (Q_s² + Q_t²) and n_s.

⁴ We use the notation ω_i ≡ Ω_i h² to denote physical densities.

(Δχ² = 26 for 7 degrees of freedom, shown in the middle panel), whereas Planck yields a highly significant result (Δχ² = 381). Fig. 1.4.a shows that Planck is particularly sensitive to small variations in the baryon density. Table 1.1 lists the 1σ errors in several cosmological parameters for a spatially flat universe expected from these satellites. Investigations of cosmological parameter estimation from the CMB (e.g. Efstathiou & Bond, 1998) have significantly strengthened the conclusions of the Phase A report, showing that an experiment with the sensitivity and angular resolution of Planck can produce precise estimates of fundamental cosmological parameters at the theoretical limit allowed by measurements of the CMB anisotropies.
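The Δχ² figures quoted above come from comparing two candidate power spectra against the band-power errors of an experiment. The toy calculation below illustrates the mechanism; the band powers and error bars are made-up numbers for illustration, not the actual MAP or Planck specifications:

```python
def delta_chi2(cl_model_a, cl_model_b, sigma):
    """Chi-square difference between two candidate power spectra,
    given the 1-sigma band-power errors of an experiment.
    Larger experiment errors (sigma) wash out the distinction."""
    return sum(((a - b) / s) ** 2
               for a, b, s in zip(cl_model_a, cl_model_b, sigma))

# Two models differing by one unit in each of 7 bands:
model_a = [100, 90, 80, 70, 60, 50, 40]
model_b = [101, 91, 81, 71, 61, 51, 41]
coarse = [5.0] * 7   # poor sensitivity: the models look alike
fine = [0.1] * 7     # high sensitivity: clearly distinguished
# delta_chi2(...) with coarse errors gives 0.28; with fine errors, 700.
```

The same model pair is thus indistinguishable or decisively separated depending only on the experimental errors, which is exactly the situation of figure 1.4.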

Figure 1.4: Simulations of the CMB power spectrum of a cold dark matter model illustrating how Planck can determine cosmological parameters to high precision. The solid curves in the upper panels show the CMB power spectrum for an adiabatic CDM model with baryon density ω_b = Ω_b h² = 0.0125, CDM density ω_c = Ω_c h² = 0.2375, zero cosmological constant, Hubble constant H₀ = 50 km s⁻¹ Mpc⁻¹, scale-invariant spectra, n_s = 1, n_t = 0, and a ratio of r = 0.2 for the tensor to scalar amplitudes. The dashed lines (barely distinguishable from the solid lines) show spatially flat models with the parameters listed above each figure. The differences in these power spectra are plotted on an expanded scale in the lower panels. The points show simulated observations and 1σ errors for the original specifications of MAP, the current improved MAP design (designated MAP+) and for the current design specifications of Planck HFI. These models are marginally distinguishable by MAP but are easily distinguishable by Planck at high significance levels.

Parameter    MAP     MAP+     Planck HFI
Δω_b/ω_b     0.11    0.05     0.0068
Δω_c/ω_c     0.21    0.11     0.0063
ΔΩ_Λ         0.15    0.081    0.0049
ΔQ/Q         0.014   0.0046   0.00139
Δr           0.81    0.67     0.49
Δn_s         0.066   0.032    0.005
Δn_t         0.74    0.72     0.57

Table 1.1: 1σ errors in estimates of cosmological parameters (spatially flat universe).

It is important to emphasise that the examples shown in figure 1.4 and table 1.1 ignore any systematic errors from Galactic foregrounds, extragalactic point sources, sidelobe leakage, etc. As demonstrated in § 2 and in the Phase A report, Planck has been carefully designed to minimise these sources of systematic error. In particular, the wide frequency coverage provided by the Planck HFI and LFI instruments (30–1000 GHz, compared to 22–90 GHz for MAP) allows accurate subtraction of the Galactic and extragalactic foregrounds from the primordial cosmological signal. Furthermore, the LFI and HFI have different beam profiles, noise characteristics
and both sample the 100 GHz frequency range at comparable sensitivities. The Planck instruments have been designed to allow a large number of cross-checks of the data. Such consistency checks are essential for a comprehensive and convincing analysis of cosmological parameters.

1.3.2 The Case for Measuring Polarisation

As mentioned in § 1.2.2, the anisotropic nature of Thomson scattering introduces linear polarisation in the CMB (see e.g. Hu & White, 1997, and figure 1.2). In general, the temperature anisotropies and the polarisation can be specified by three Stokes parameters, I, Q and U (the fourth Stokes parameter V measures circular polarisation and cannot be generated by Thomson scattering). Theoretical work since the Phase A report has demonstrated a strong scientific case for measuring the polarisation anisotropies. Furthermore, recent work suggests that the polarised components of Galactic emission should be small compared to the primordial signal over much of the sky in the HFI frequency range (Prunet et al., 1998, and § 2.3.1). The HFI in this proposal has therefore been designed to measure three orientations of polarisation at three frequencies, allowing the extraction of I, Q and U and monitoring of Galactic polarisation. This can be achieved with negligible impact on the principal scientific goals of the mission. The main scientific reasons to measure polarisation are as follows:

• Differentiating between tensor and scalar modes: Any polarisation pattern can be separated into `electric' (E) and `magnetic' (B) components. Scalar perturbations produce a pure E-mode polarisation pattern, vector perturbations (generated in topological defect models) generate mainly a B-mode polarisation pattern, and tensor modes generate an admixture of E- and B-modes (see e.g. Kamionkowski & Kosowski, 1998; Zaldarriaga & Seljak, 1997). Measuring the polarisation pattern can provide a much more sensitive (and direct) test of the existence of tensor and vector modes than those derivable from the temperature anisotropies alone. The cross-correlation of the polarisation pattern with the temperature also allows a differentiation between scalar and tensor components (Crittenden & Turok, 1995).
• Consistency checks for cosmological parameter estimates: The polarisation and polarisation-temperature power spectra are sensitive to cosmological parameters in a similar way to that described for the temperature power spectrum in the previous subsection. For example, figure 1.5 shows how the polarisation power spectrum C_ℓ^P in scale-invariant, spatially flat, CDM models depends on the parameters Ω_b and H₀. These figures are the polarisation analogues of figures 1.3.b and 1.3.c. The polarisation power spectra can therefore be used in their own right to estimate cosmological parameters. This provides an important consistency check of the experiment. A demonstration that we can recover consistent values of the cosmological parameters from the temperature and polarisation measurements would provide powerful evidence to the astronomical community that the results are free of systematic errors.

Figure 1.5: Dependence of the polarisation power spectrum on the baryon density Ω_b and Hubble constant h = H₀/100 km s⁻¹ Mpc⁻¹. In a), we show curves for several values of Ω_b with the Hubble constant fixed at h = 0.5. In b), we show curves for several values of H₀ with the baryon density fixed at Ω_b = 0.05. A scale-invariant, spatially flat, CDM model has been assumed.

• Reionisation in the intergalactic medium: The absence of a Lyα absorption trough in the spectra of high redshift quasars shows that the intergalactic medium must have been reionised at a redshift z ≳ 5. However, it is not yet known when this reionisation occurred, or by what mechanism. Polarisation measurements of the CMB can set strong limits on the redshift of reionisation (Zaldarriaga, Spergel, & Seljak, 1997), so probing the epoch when the first structures in the Universe are believed to have formed.

• Weak gravitational lensing: Gravitational lensing of the CMB polarisation pattern caused by structures along the line-of-sight should produce a measurable distortion at the Planck sensitivities (Stompor & Efstathiou, 1998). A detection of weak gravitational lensing would constrain the amplitude and spectrum of mass fluctuations in the present Universe.
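The E/B separation invoked in the first bullet above reduces, on a small flat patch of sky, to a rotation of the (Q, U) Fourier modes by twice the azimuth of each wavevector. The sketch below is a minimal NumPy illustration under that flat-sky, periodic-patch assumption (sign and angle conventions are ours; the full-sky analysis uses spin-2 spherical harmonics instead):

```python
import numpy as np

def eb_from_qu(Q, U):
    """Split Stokes Q, U maps into E- and B-mode maps in the
    flat-sky (small patch, periodic) approximation.  Each Fourier
    mode of (Q, U) is rotated by twice the azimuth of its
    wavevector.  Odd map dimensions avoid the ambiguous Nyquist
    modes in this toy version."""
    ny, nx = Q.shape
    lx = np.fft.fftfreq(nx)[None, :]
    ly = np.fft.fftfreq(ny)[:, None]
    phi = np.arctan2(ly, lx)              # azimuth of each Fourier mode
    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
    Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)
    return np.fft.ifft2(Ek).real, np.fft.ifft2(Bk).real
```

A pure scalar (E-only) pattern fed through this split returns a B map consistent with zero, which is the basis of the tensor/vector tests described above.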

1.3.3 Topological Defects and Other Theories

One of the most exciting possibilities offered by Planck is that of searching our present horizon volume for cosmic defects, relics of a grand unification phase transition in the earliest moments of the hot big bang. Since Kibble (1976) realized that such defects were a generic prediction of unified gauge field theories, there has been continued theoretical interest in their cosmological consequences. For some time they have provided the leading alternative scenario for structure formation to inflationary quantum fluctuations. Recent computational breakthroughs (Pen, Seljak, & Turok, 1997; Albrecht, Battye, & Robinson, 1997) have made them less attractive in this respect, since when normalized to COBE the theories predict too little power in the galaxy distribution on 100 Mpc scales. Nevertheless it is still extremely important to search for cosmic defects produced in the early universe, since they provide one of the very few available windows into the process of unified gauge symmetry breaking in the very early universe. Defects can be formed at any energy scale; however, the most likely possibility is that they were formed at the GUT transition, predicted from the high precision LEP measurements to be at 3 × 10¹⁶ GeV. Such defects would give a substantial contribution to the cosmic microwave background anisotropy, with several key features allowing it to be easily distinguished from an inflationary contribution. Cosmic defects would produce a non-Gaussian pattern of temperature anisotropies on the sky (see figure 1.6). Global textures and monopoles would produce distinctive patterns of hot spots on scales of ∼ 1°, which Planck would be able to map to high accuracy. Cosmic strings would produce a more Gaussian pattern of anisotropies, but with a key difference to inflation in that the Doppler peaks produced by strings are smeared out in ℓ space due to the nonlinear character of the string dynamics.
It is also important to bear in mind that there are several other possibilities for cosmic structure formation, some of which have only begun to be explored. For example, isocurvature perturbations, where the total energy density of the universe is smooth but the relative abundance of different species varies spatially, have been partially studied (Peebles, 1987; Efstathiou & Bond, 1986, 1987). Calculations for neutrino isocurvature perturbations are only now being performed (Spergel & Turok, 1998). More generally, there is the possibility that nonlinear but causal physics occurring before recombination could have seeded the cosmic perturbations. A few such models have been proposed (e.g. Turok, 1996), which are able in some respects to mimic inflation but are distinguishable for example by their predictions for the polarization-temperature cross correlation (Spergel & Zaldarriaga, 1997).

Figure 1.6: Simulated CMB maps of topological defect models (cosmic strings, texture and monopoles) compared to the standard (Ω = 1) cold dark matter model. The pictures show 10° × 10° patches of the sky and the temperature scale is in μK.

1.4 The Sunyaev-Zeldovich Effect

In addition to primary CMB anisotropies generated in the early Universe, secondary anisotropies can be generated between the epoch of decoupling and the present epoch. The most important of these is the Sunyaev-Zeldovich effect, in which CMB photons are scattered by free electrons in the hot gaseous component of rich clusters of galaxies. The spectrum of the CMB is modified in a way which depends on both the spatial distribution and temperature of the hot cluster gas, and the relative velocity of the cluster with respect to the rest frame defined by the CMB. The Sunyaev-Zeldovich (SZ) effect has been observed unambiguously in a small number of rich clusters in ground based observations (see e.g. Rephaeli, 1995, for a recent review). However, as described in the Phase A report, the high angular resolution, sensitivity and frequency coverage of Planck HFI should lead to the detection of the SZ effect in several thousand clusters of galaxies (Aghanim et al., 1997a) and to valuable statistical information on cluster peculiar motions (Haehnelt & Tegmark, 1996).

Figure 1.7: Simulations of the thermal and kinetic Sunyaev-Zeldovich effect from rich clusters of galaxies. The pictures show a 10° × 10° patch of the sky at 300 GHz. The temperature scale is in μK. See appendix E.4 for details.

We can distinguish between two distinct SZ effects:

• The thermal Sunyaev-Zeldovich effect: This arises from the frequency shift when CMB photons are scattered by the hot electrons in the intra-cluster gas. The frequency dependence of this effect results in a temperature decrement in the Rayleigh-Jeans region of the CMB spectrum and an excess at high frequencies. The central frequencies of the Planck HFI bands have been chosen carefully to straddle the regions of negative and positive spectral distortion, with one channel centered at 217 GHz where the thermal SZ effect is zero. The frequency coverage of the HFI is essential to separate the frequency dependent SZ signal from the frequency independent primordial anisotropies. In addition, the detailed SZ spectral signature from the HFI can be used to determine the temperatures of nearby clusters (Pointecouteau, Giard, & Barret, 1997).

• The kinetic Sunyaev-Zeldovich effect: This arises because peculiar velocities of the hot intra-cluster gas lead to a Doppler shift of the scattered photons which is proportional to the radial peculiar velocity of the cluster and to the electron density integrated along the line of sight through the cluster. This effect has a blackbody spectrum and so the main source of confusion is with the primordial anisotropies themselves. However, as described in the Phase A report, with the high angular resolution of the HFI and carefully chosen spatial filters, it should be possible to measure cluster peculiar velocities for the rich clusters (y ∼ 10⁻⁴) to an accuracy of ∼ 300 km s⁻¹, and large-scale coherent flows to a precision ≲ 100 km s⁻¹ on the 100 Mpc scale.

The construction of a complete catalogue of rich clusters of galaxies selected via the SZ effect is

an important science goal of Planck. The scientific case is presented in detail in the Phase A report, but briefly the main aims include the following: (i) to study the properties and the evolution of hot gas in clusters of galaxies in conjunction with X-ray observations; (ii) to detect high redshift clusters at z > 0.5, which can be used to test theories of structure formation and to study galaxy evolution in cluster environments; (iii) to constrain H₀ and q₀ by combining SZ measurements with spatially resolved X-ray temperature and flux profiles; (iv) to measure deviations from the Hubble flow and large-scale peculiar motions in the Universe.

There have been two main developments since the Phase A report. First, much improved predictions of the thermal and kinetic effects fully support the calculations of the Phase A report, indicating that Planck should detect more than 10,000 clusters via the SZ effect (see e.g. figure 2.14 in the Phase A report). Second, improved statistical algorithms (based on maximum entropy) have been developed for detecting clusters to fainter flux levels (see § 2.4.2). This work suggests that Planck HFI should surpass the expectations of the Phase A report for SZ studies.
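The thermal SZ null quoted above can be located from the non-relativistic spectral function g(x) = x coth(x/2) − 4 with x = hν/kT_CMB. A sketch, with T_CMB = 2.725 K and h/k ≈ 4.799 × 10⁻¹¹ K/Hz assumed as physical inputs (relativistic corrections for hot clusters are neglected):

```python
import math

T_CMB = 2.725         # K; assumed present-day CMB temperature
H_OVER_K = 4.799e-11  # K/Hz; Planck constant over Boltzmann constant

def thermal_sz_g(nu_ghz):
    """Non-relativistic thermal SZ spectral function g(x), with
    x = h*nu / (k*T_CMB).  Negative (a decrement) in the
    Rayleigh-Jeans region, positive (an increment) at high
    frequency, with a null near 217 GHz."""
    x = H_OVER_K * nu_ghz * 1e9 / T_CMB
    return x * (math.exp(x) + 1.0) / (math.exp(x) - 1.0) - 4.0

# Locate the null by bisection (g is monotonically increasing):
lo, hi = 100.0, 400.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if thermal_sz_g(mid) < 0.0:
        lo = mid
    else:
        hi = mid
null_ghz = 0.5 * (lo + hi)  # falls close to the 217 GHz HFI channel
```

The sign change across the HFI bands (negative at 143 GHz, zero near 217 GHz, positive at 353 GHz) is what allows the frequency dependent thermal SZ signal to be separated from the blackbody primordial anisotropies.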

1.5 Extragalactic Infra-Red Point Sources and Background

High-redshift galaxies that are similar to the powerful sources of thermal dust radiation detected by IRAS at low redshifts will produce a significant amount of their radiation in the HFI observing bands. Recently the first deep images of small fields in the millimeter/submillimeter wavebands, made using ISO at 175 μm (Kawara et al., 1997; Clements et al., 1997; Puget et al., 1998), the JCMT/SCUBA at 850 μm (Smail, Ivison, & Blain, 1997) and the BIMA interferometer at 2.8 mm (Wilner & Wright, 1997), have confirmed a picture in which far-infrared luminous galaxies evolve as strongly as quasars out to redshifts in excess of 2. These results, together with recent detections of the spectral intensity of extragalactic background radiation (Puget et al., 1996; Guiderdoni et al., 1997; Schlegel, Finkbeiner, & Davis, 1997, and the recent announcement at the AAS meeting by the COBE teams), have allowed us to refine the Phase A predictions of the number of infra-red point sources that are likely to be detected by Planck. The models are described in more detail in § E.3 and suggest that Planck HFI will detect more than 10,000 distant far-infrared sources over the whole sky (see Table 2.1). The resulting point source catalogue will be a valuable independent scientific goal of the mission.

As described in the Phase A report, the multi-frequency information from Planck HFI can be used to confirm the existence of an extragalactic infra-red background and to extract the small-amplitude fluctuating component from much weaker unresolved sources. A fluctuation analysis of Planck HFI data will set stringent constraints on source counts to flux levels of a few tens of mJy at λ > 350 μm.

1.6 Galactic Studies

Planck will provide all-sky maps of the emission from Galactic dust at sub-mm and mm wavelengths. At the two shortest wavelengths the Planck maps will have the same resolution as those from IRAS, with an order of magnitude better sensitivity in terms of gas column density, N_H, of a few times 10¹⁸ H cm⁻². At longer wavelengths the detection limit is 1 to 2 × 10¹⁹ H cm⁻². This sensitivity allows the detection of emission from Galactic dust over the whole sky in all of the HFI bands. One can thus expect that Planck HFI will have an even more significant impact on Galactic studies than the successful IRAS mission. In particular, we expect to get detailed information on the size and temperature distribution of cold dust grains and of cold molecular clouds. Planck will provide the first systematic search for cold condensations in the interstellar medium over a large fraction of the Galaxy and will provide a unique way of finding regions forming cold dense cores which have not yet produced stars detectable in the near infrared.

Chapter 2

FROM TIMELINES TO SCIENTIFIC PRODUCTS

[Figure 2.1 is a flowchart of the data processing: ground calibrations and a model of the instrument feed the subtraction of systematics from the time lines; these yield maps of co-added data, the six frequency band calibrated maps of the HFI and the four low frequency calibrated maps from the LFI; combined with data from former experiments, component separation then produces temperature and polarisation maps, cluster catalogues, source catalogues, fine tests of Galactic models, and C(ℓ) analysis and other detailed tests of cosmological models. (RUMBA 1998)]

Figure 2.1: Some important steps of the data processing which leads to scientific products.

Chapter 1 has demonstrated the scientific potential of CMB observations by Planck. Here we address the data analysis issues involved in achieving these science objectives. In particular, we discuss the steps required to generate deliverable products for the community (i.e. calibrated timelines and maps of the sky at each observational frequency, 6 for the HFI), and describe how these maps can be separated into physical components (maps of the primordial CMB and Galactic emissions, catalogues of point sources, galaxy clusters, etc.). We also deduce the expected accuracy of the cosmological parameter determination using the polarisation information, once the effect of foregrounds is taken into account.

Since the Phase A report much work has been done to assess in greater detail the level of quality expected for the deliverable products. For instance, the question of pickup of the Galactic disk signal by the far sidelobes of the instrument has been addressed and shown to be tractable, even if the sidelobes cannot be mapped using the Earth and the Moon during the transfer to L2. The destriping of the maps has been further investigated. The assumptions presented in the Phase A report that were used to generate a model of the sky at Planck frequencies are fully supported by new observational results. More sophisticated modelling and new simulations of the component separation process have strengthened the scientific case without unearthing major stumbling blocks. New work has also shown that dust polarisation could be removed to an acceptable level, leading to the exciting possibility of accurate CMB polarisation measurements by the HFI. This chapter is devoted to the results of the new analyses and is largely self-contained, although some technical details have been relegated to appendices. The analysis described in this Section, although reassuring, is incomplete in places and does not yet do justice to the full complexity of the data expected from Planck.
Furthermore, even conceptually simple analysis problems and their solutions, although tractable in principle, might not be implementable at the full resolution of the data due to the enormous computing power required. But as testified by the number of papers submitted every month on the issue of optimal data analysis of the very large and complex data sets which will be generated by MAP and Planck, work is proceeding very actively world-wide to develop nearly optimal yet still practical algorithms which will be implemented in the future data analysis pipeline. To provide a testing ground for new ideas and algorithms, we have an ongoing activity aimed at increasing the realism of our simulated time-lines. The goal is to produce, three years before launch, simulated time-lines as realistic as possible given the ground knowledge of the hardware, and a working end-to-end pipeline prototype. The remaining three years will then be used for the actual implementation of the pipeline, but work on IDIS, which is the main tool for building the pipeline, will start early after the selection.

2.1 General considerations

Naïvely, one may think that the data analysis problem could simply be solved by a brute force approach, such as maximising the likelihood of a global "theory" (as specified, say, by ≲ 100 values specifying the cosmological model, characteristics of the spectral behaviour of the various components of the microwave sky, some unknown parameters of the instrument, etc.) given the raw data. This would entail "observing" the theory, T, with the instrument, and inferring which theory accounts best for the data D (arranged as a long vector d). This approach is, of course, infeasible given the size of the parameter space that would need to be scanned. Indeed, suppose for definiteness that the probability distribution for the data given the theory, L(D|T), is Gaussian; one would then have to maximise

\mathcal{L}(T|D) \propto \exp\!\left[-\tfrac{1}{2}\,(\mathbf{d}-\langle\mathbf{d}\rangle)^{T}\,\mathbf{C}_{T}^{-1}\,(\mathbf{d}-\langle\mathbf{d}\rangle)\right]\mathcal{L}(T) \qquad (2.1)

where ⟨d⟩ and C_T = ⟨(d − ⟨d⟩)(d − ⟨d⟩)^T⟩ are the mean vector and the covariance matrix of the observed theory, and L(T) stands for the probability of that theory. For Planck the length of the data vector d would be N_d ≳ 10¹¹. One would thus have to explore the theory parameter space with ∼ 100 dimensions, requiring the calculation and inversion of the N_d × N_d matrix C_T at each "point" of the search in the hundred-dimensional space! In addition to being infeasible, such a brute-force approach is not desirable either, since intermediate products are of interest in their own right (calibrated timelines, maps of the sky at different frequencies, maps of physical components like the

CMB or the dust emission, or catalogues of sources, …), and may be thought of as different steps of a nearly lossless compression. Furthermore, many unknowns are functions whose behaviour has to be determined from the data before modelling and parameterising them. Each bolometric detector will respond to the sky brightness in some frequency range (defined by filters) and a spatial range (defined by the horns and the optics), some of which cannot be determined prior to launch with the required precision. The analogue signal from the detector will be amplified, temporally binned, discretised, compressed, and finally transmitted to the ground where it will be stored as an individual time line. The time lines will have to be analysed separately, to take into account the individual nature of the detectors (beam profile, instantaneous response and long term evolution of the bolometer and its electronic chain, etc.), AND jointly, to lower the noise in each, allow the best determination of global quantities like a-posteriori pointing reconstruction and focal plane alignment, as well as enable the analysis of cross-talk or coupling to external conditions. This clearly shows that the data analysis will have to be iterative, moving back and forth between different representations (e.g. time-lines and sky maps), in order to progressively refine a model of the instrument and of the sky, with initial input of prior knowledge such as ground calibrations (and to some degree external constraints like the IRAS or COBE datasets). The information to be extracted from this noisy data must necessarily be over-constrained to permit a separation of systematics from real features. In the following, we give an overview of the problems encountered, of their solutions, and of the expected quality of the outcome as can be judged from our studies. The next section (§ 2.2) is devoted to the issue of map-making in the Planck context.
Section 2.3 reviews the general issue of the joint analysis of maps at several frequencies and different resolutions to produce "optimal" maps of individual emission mechanisms. We describe in § 2.3.1 the main characteristics of our sky model (with a more in-depth discussion in Appendix E), and the expected performance of the instrument, as assessed by semi-analytical analyses (§ 2.3.2). Finally, § 2.4 presents our detailed simulation work on the separation problem. Our main conclusions are presented in § 2.5.

2.2 From timelines to maps

Appendix A describes the general map-making problem, and one possible prescription for its solution which has been widely used in CMB anisotropy measurements. While this solution is (linearly) optimal in theory, it suffers from several drawbacks which make it impractical in its present form for Planck. In the following, we describe recent progress in developing methods for reconstructing maps with little spurious signal, and give estimates of the expected residual levels of uncertainty in the maps due to some of the instrumental effects.

2.2.1 Methods for map reconstruction and simulations

The basic principle of map reconstruction from data streams is to use redundancies in the data to identify and subtract spurious fluctuations in the signal. This is done by comparing data samples taken with the beam pointing at the same sky pixel. The most obvious redundancies, for a given detector, are those induced by the spinning of the satellite with a spin axis fixed between consecutive steps of the spin-axis motion. For one fixed spin-axis position, the expected signal from the sky is periodic, the period being the inverse of the spinning frequency, f_spin. As noise and many systematic effects do not have that periodicity, most of the fluctuations they induce on the time streams can be filtered out simply by combining consecutive scans into one single ring of data for each spin-axis position. The combination of time streams into a set of rings is a non-destructive compression of the useful sky signal by about two orders of magnitude (since ~100 scans are co-added in each ring). Of course, scan-synchronous systematics and noise cannot be suppressed at this stage. This first step in data compression, however, is easy to implement and very efficient in simplifying the map-making problem. An additional advantage of performing this step is that the effect of any "filter" (due to the electronics, the time constants of the bolometers, ...) on the signal can be quantified exactly on rings (Delabrouille, Górski, & Hivon, 1997).
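As an illustration of this phase-binning step, the following minimal sketch (our own illustrative code, not the HFI pipeline; it assumes a perfectly periodic pointing and an integer number of samples per scan) co-adds consecutive scans taken at a fixed spin-axis position into one ring:

```python
import numpy as np

def coadd_scans_to_ring(timeline, samples_per_scan):
    """Phase-bin a time stream taken at a fixed spin-axis position.

    Co-adding the ~100 consecutive scans suppresses any noise that is
    not synchronous with the spin, and compresses the data by the
    number of scans -- a nearly lossless step for the sky signal,
    which is scan-periodic.
    """
    n_scans = len(timeline) // samples_per_scan
    scans = np.asarray(timeline[: n_scans * samples_per_scan])
    scans = scans.reshape(n_scans, samples_per_scan)
    ring = scans.mean(axis=0)                      # scan-periodic sky signal
    noise_rms = scans.std(axis=0) / np.sqrt(n_scans)
    return ring, noise_rms
```

For white noise of unit variance and 100 co-added scans, the noise rms on the ring drops by a factor of 10, while the periodic sky signal is preserved.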

The next step in the map-making process is the identification and subtraction of scan-synchronous spurious fluctuations by reconnection of the rings. The basic idea, again, is to compare the signal at intersections between rings. There are a few thousand such rings per detector, each of a few thousand data samples¹. The main source of complication in reconnecting rings (or in building a map from the time series of detectors scanning the sky in a complicated way) comes from the fact that beams observing the sky with different orientations do not necessarily collect the same astrophysical signal. For polarised detectors, the signal depends on the orientation because the measured signal itself is an orientation-dependent linear combination of the Stokes parameters I, Q, and U (see Appendix B). Asymmetries of the beams also induce orientation dependences. While beam asymmetry can be neglected for first-order map reconstruction, this is not the case for polarisation measurements. We have investigated quantitatively some aspects of this second step by direct numerical simulations, to which we now turn.

2.2.2 Low-frequency drifts and map striping

The first concern is the identification and removal of the effect of low-frequency noise, which generates drifts in the signal. If, as expected, the knee frequency² of the low-frequency noise is well below the spinning frequency of the satellite, the main effect of low-frequency drifts is to displace the average level of each ring by an unknown offset. A first-order correction amounts to estimating and readjusting these offsets. Without this step, a direct reprojection on the sky generates stripes in the maps, as shown in the left panel of figure 2.2.

Simplest destriping For unpolarised measurements, one can assume to first order that the main-beam signal is independent of the orientation. This allows direct offset estimation by comparing signal differences at intersection points between rings, for each detector individually. A least-squares minimisation algorithm has been implemented and tested for destriping Planck maps (Delabrouille, 1998a). For the Planck HFI, the 1/f noise from the electronics has a knee frequency below 0.01 Hz. For such characteristics, this first-order method is very satisfactory, as illustrated by figure 2.2.
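In matrix form, the offset estimation amounts to an ordinary least-squares problem on the ring offsets; the sketch below is illustrative only (real crossing points come with noise weights and many more rings, omitted here):

```python
import numpy as np

def solve_ring_offsets(crossings, n_rings):
    """Least-squares estimate of one unknown offset per ring.

    `crossings` is a list of (ring_i, ring_j, d_ij), where d_ij is the
    measured signal difference between ring i and ring j at a common
    sky pixel.  If the sky signal is orientation-independent, d_ij
    should equal o_i - o_j, the difference of the ring offsets.
    """
    A = np.zeros((len(crossings) + 1, n_rings))
    b = np.zeros(len(crossings) + 1)
    for k, (i, j, d) in enumerate(crossings):
        A[k, i], A[k, j], b[k] = 1.0, -1.0, d
    A[-1, 0] = 1.0   # pin ring 0 to zero: the global level is unobservable
    offsets, *_ = np.linalg.lstsq(A, b, rcond=None)
    return offsets
```

The extra constraint row reflects the fact that only offset differences are measurable; the absolute zero level must come from calibration.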

Figure 2.2: Maps of noise reprojected on the sky before and after simple destriping. Note the change of scale.

This method allows very accurate destriping if the crossing points between the rings are well spread along the rings. If this condition is not satisfied, no method can remove the stripes satisfactorily. This sets constraints on the observing strategy for Planck; in particular, great circles with a spin axis kept in the anti-solar direction should be avoided.

¹ For a one-year mission with a spin axis kept in the anti-solar direction and steps every 4 minutes, there are 5400 "rings" per detector, each of 5164 "samples" taken at 1/2.4 of a 10′ FWHM beam.
² We call knee frequency the frequency below which low-frequency effects are significant, e.g. where one sees a low-frequency up-turn from pure white noise.

Figure 2.3: a) Average number of intersections per degree along the scan, as a function of angle on a Planck ring, for a scan strategy with 8 sinusoidal oscillations per year, a 10° out-of-ecliptic amplitude, and a scan angle of 85°. Intersections are concentrated around the points of closest approach to the ecliptic poles (at angles of 0 and 180 degrees), but there are at least a few points per degree everywhere along the scan. b) Distribution of crossing points between circles on the sky for the same scan strategy. There is more redundancy around the poles, but still some redundancy everywhere. For comparison, a 90° anti-solar scanning would yield crossing points at the North and South ecliptic poles only.

More generally, because some scan-synchronous systematic effects that may generate stripes (e.g. sidelobe pick-up) can only be identified by their orientation dependence, it is necessary that pixels in all regions of the sky be observed in at least two, and preferably a few, different satellite orientations. For this reason, the intersections of rings should be well spread over the whole sky. This, again, puts constraints on the scan strategy. The optimisation of the trajectory of the spin axis and of the scan angle can be done with the set of tools just mentioned (within technical constraints). Figure 2.3 illustrates the distribution of intersection points for one candidate scan strategy.

Improved destriping Our simulations have shown that striping due to uncertainties in the offset of each ring can be corrected with adequate accuracy. Modelling the effect of low-frequency noise on the data by a mere offset of each ring, however, requires the knee frequency of the noise to be smaller than the spinning frequency of the satellite³. If this is not the case, the model of the noise must be refined to include variations along each ring. Simulations have been run to remove pessimistic low-frequency drifts (1/f² noise with f_knee = 0.1 Hz). For such a noise, if only one constant offset per ring is adjusted and subtracted from the data, the rms of the noise is increased by 7.5% over the pure white-noise figure, with stripes clearly visible on the final map. This increase is reduced to 2.9% if a linear drift is adjusted for each circle, and to 2.3% if a second-degree polynomial is used. Since the algorithm minimises the differences between measurements at the crossing points of different circles, the efficiency of the method does not depend on the true signal from the sky, as long as the assumption that the signal from the sky depends only on the direction of pointing (not on the satellite orientation) remains valid. Again, this improved method of destriping requires a good interconnection of the rings. The results of the destriping simulations have already allowed us to put requirements on the scanning strategy, and to investigate which set of parameters satisfactorily describes the instrumental effect of low-frequency noise to make its correction possible. The use of such a set of parameters (possibly optimised for a given shape of the noise spectrum) in a more global map-making scheme (which corrects simultaneously, or by iteration, for several instrumental effects) will be one of our next developments.

³ This modelling also requires that the number N of scans averaged to form one ring be large; more specifically, we need (1 + 1/N) f_knee < f_spin.

2.2.3 The measurement of polarisation

Polarisation measurements pose a specific problem. They are made with polarised receivers which do not directly measure the quantities of interest on the sky (the Stokes parameters I, Q, and U), but instead a linear combination of I, Q, and U. The measurements of several detectors, each of which suffers from low-frequency noise effects, thus need to be combined in order to obtain maps of I, Q, and U. In general, the best estimation of the Stokes parameters from a set of measurements of linear polarisation does not guarantee that the errors are decorrelated. On the other hand, as shown in Appendix B, measurements of the Stokes parameters Q and U, and of the total intensity, can have decorrelated errors, provided the signal intensity is measured with similar noise performance in at least three polarisation orientations separated by equal angles. We call such an ensemble of bolometers a decorrelated configuration. The proposed design of the HFI polarised channels relies on series of decorrelated configurations. Using decorrelated configurations of bolometers, we have implemented a generalised version of the "destriping" algorithm above (Revenu et al., 1998). Instead of dealing with simple differences at crossing points, the method minimises a least-squares system built from differences of appropriate linear combinations of the measurements for all the detectors involved. The results are as good as those obtained in the simple, unpolarised case: the residual striping due to offsets is reduced to a non-detectable level, with no increase of the noise rms per pixel.
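For reference, the relation between the bolometer measurements and the Stokes parameters can be inverted as follows. This sketch assumes the standard linear-polarimeter response m_k = ½(I + Q cos 2ψ_k + U sin 2ψ_k) and ideal calibration; it is an illustration of the principle, not the Appendix B estimator:

```python
import numpy as np

def stokes_from_polarimeters(m, psi):
    """Recover I, Q, U from >= 3 linear-polarimeter measurements.

    Each detector at polariser angle psi_k measures (up to calibration)
        m_k = 0.5 * (I + Q*cos(2*psi_k) + U*sin(2*psi_k)).
    For three (or more) equally spaced angles, the least-squares
    solution has decorrelated errors on I, Q and U.
    """
    psi = np.asarray(psi, dtype=float)
    A = 0.5 * np.stack([np.ones_like(psi),
                        np.cos(2.0 * psi),
                        np.sin(2.0 * psi)], axis=1)
    iqu, *_ = np.linalg.lstsq(A, np.asarray(m, dtype=float), rcond=None)
    return iqu  # array [I, Q, U]
```

With angles 0°, 60°, 120° the normal matrix A^T A is proportional to the identity on (Q, U), which is precisely the "decorrelated configuration" property.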

2.2.4 The far-sidelobe problem

We now describe the progress that has been made in quantifying the effects of far-sidelobe straylight pickup, and in finding solutions for sidelobe signal estimation and correction in the data set. The problem of the sidelobe contributions to the signal is central to the design of Planck, since the angular resolution depends drastically on the amount of spillover radiation that is acceptable. We have demonstrated during the phase A study that the data from the Earth, Sun and Moon acquired during the transfer phase to L2 could be used to map the sidelobe pattern (Delabrouille, 1998b). But the mapping of asymmetric, very distant, very low-level pickup from the sidelobes requires an iterative inversion of the data of the whole mission, including the data gathered during the useful observation time. If no data can be taken during the transfer phase, then not only the very far, very low-level sidelobes, but also intermediate-level sidelobes, cannot be mapped directly using the Earth and Moon, and thus might not be well known at the beginning of scientific observations.

Figure 2.4: Sidelobe signal reprojected on the sky. Dipole and galactic emission are included. A first-order correction has been applied by readjusting the average level of each Planck ring. Note that the equivalent brightness of the reprojected sidelobe signal is smaller than the brightness sensitivity of 0.27 pW/m²/Hz (per 10′ pixel) everywhere except near the galactic plane. Two features of the antenna pattern can be identified from this map: 1) some asymmetry of the near sidelobes, identified by the double image of the galaxy: to first order, one image corresponds to one satellite orientation, and the other to the opposite satellite orientation, six months later; 2) a few rings, which reproject into the centre of the map, suffer particularly from pickup of the emission of the galactic centre detected in the spillover ring around the primary mirror (notice especially the bow-shaped feature).

During the scanning, the motion of the strongly inhomogeneous pattern of Galactic emission through the sidelobes of the antenna pattern generates spurious signals. If the shape and level of the sidelobes are not known, the removal of such signals cannot be done directly. For the "best" cosmology channels (100 GHz - 217 GHz), the cosmological dipole will also generate fake signals from sidelobe pickup, mainly at very low frequencies.

Straylight signal estimates Using COBE and HASLAM measurements of the galactic foregrounds and of the CMB dipole, and a two-dimensional sidelobe model built from cuts in the antenna pattern computed by P. de Maagt at ESTEC, an estimate of the convolution of the sky with the sidelobe pattern has been computed at 100 GHz. The sidelobe level is not the same for all detectors, and the worst of the provided antenna patterns at 100 GHz has been used for this calculation. At 100 GHz, the dominant source of sidelobe pickup is the dipole, but most of its contribution is subtracted out by offset readjustment on the rings. After such readjustment, both the galaxy and the dipole contribute significantly to the residual contamination. The amplitude of this fake contribution, per 10 arcminute pixel, is smaller than the noise rms. If not corrected with better accuracy, however, the sensitivity at large angular scales (at low ℓ values) may be somewhat degraded. Residual sidelobe signals after offset readjustment can be seen in figure 2.4. The offset readjustment is done using the destriping algorithm of section 2.2.2, and corrects for the first-order effect of the dipole. As the physical source of some of the reprojected signal can be identified by looking at the map of figure 2.4, it is clear that some knowledge of the lobe, and a corresponding correction of the sidelobe effects, can probably be obtained from the data with an optimal analysis scheme. Algorithms for the identification and best subtraction of such signals are being investigated. Redundancies in the measurement are necessary for such methods to be efficient. The comparison of the measurements made by different bolometers in the same channel, in particular, will be useful.

Figure 2.5: Far-sidelobe recovery by deconvolution of the signal due to the emission of galactic dust in a model 350 GHz antenna pattern. This illustrates that redundancies are sufficient to use the galactic emission for sidelobe mapping. Actual sidelobe recovery, however, requires an iterative inversion of the data, as explained in Appendix C.

Estimation and correction of the straylight signal Even if the fluctuations due to sidelobe pickup are at a level lower than the sensitivity per 10 arcminute pixel (and thus are not too much of a worry for the high-ℓ measurement of C_ℓ), they will affect the outcome of the experiment at low spatial frequencies. If the antenna pattern is known, straylight correction is possible, because it is then possible to estimate the sidelobe signal quite well using first-order prior knowledge of the sky and the antenna pattern (and then refine by iteration if necessary). A first reprojection of the Planck data will give a very good first-order estimate of the bright features in the sky. They are also known (at lower angular resolution) from FIRAS.

If the antenna pattern is not known, an algorithm which estimates the sidelobe contribution and the antenna pattern iteratively, with increasing accuracy, should be implemented. Such an algorithm has been used for estimating the sidelobe contribution of quasi-point-like bright sources of straylight, such as the Sun and the Earth, in very low sidelobes. A similar algorithm, however, may be implemented to estimate the sidelobe contribution of the galactic plane only if the shape of the lobe can be recovered from the knowledge of the sidelobe contribution, which is not obvious a priori. Using numerical simulations, we have verified that the lobe can be recovered very accurately if the sidelobe signal is known, as illustrated in figure 2.5. This shows that the system is not degenerate, and suggests that an iterative solution can be found. Iterative algorithms for sidelobe correction are currently being tested. They seem to converge quite slowly, but the results are very encouraging (Delabrouille, Gispert, & Puget, 1998). Details on the method can be found in Appendix C.
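The non-degeneracy test of figure 2.5 can be caricatured in one dimension: if the scan is modelled as a circular convolution of a known sky with the unknown lobe, the lobe follows from a regularised deconvolution. This is a toy sketch under strong simplifying assumptions, not the Appendix C algorithm:

```python
import numpy as np

def recover_lobe(sky, sidelobe_signal, eps=1e-3):
    """Toy 1-D recovery of a sidelobe pattern from its signal.

    Models the scan as a circular convolution of the (known) sky with
    the unknown lobe; the lobe is recovered by a regularised Fourier
    deconvolution.  This only illustrates that the problem is not
    degenerate when the sky is sufficiently structured.
    """
    G = np.fft.rfft(sky)
    D = np.fft.rfft(sidelobe_signal)
    H = D * np.conj(G) / (np.abs(G) ** 2 + eps)   # Wiener-like inverse
    return np.fft.irfft(H, n=len(sky))
```

In practice the sky itself is only known approximately, which is why the real scheme alternates between sky estimation and lobe estimation, iterating to convergence.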

2.3 The component separation problem

We now turn to the component separation step. Appendix D gives a self-consistent description of this problem, generalising the scheme already discussed in the phase A report. Of course, to assess the performance of the proposed payload, we need a model of the microwave sky, which we review below, and a specific proposed analysis method. This will allow us to derive a figure of merit for various experimental set-ups, and to address questions of robustness of the proposed instrument design.

2.3.1 The microwave sky

We use an update of the model originally developed for the phase A study. The Galactic model is identical, since new results have confirmed that this model, if anything, overestimates the Galactic contamination of primordial CMB anisotropies. This is justified in Appendix E. The main new areas of modelling concern the contribution from infrared point sources (an area of much recent activity) and the Galactic contamination of the polarisation measurements.

Infrared sources & their background Several models have been developed for predicting galaxy number counts in the infrared (see for instance Blain & Longair, 1993; Pearson & Rowan-Robinson, 1996). In the context of the preparation for the Planck mission, we began a long-term project to extend to the far-infrared range the successful type of physical modelling originally developed for the UV/optical range. The goal was to simulate maps of the extragalactic source contribution in the Planck frequency channels. This effort has recently converged (Guiderdoni et al., 1997, 1998). As detailed in Appendix E.3, the key new observational fact has been the discovery of the Cosmic Infrared Background by Puget et al. (1996), recently confirmed by independent studies (see Schlegel, Finkbeiner, & Davis, 1997; Hauser et al., 1997). However, the lack of observational data left open several possible models of galaxy evolution at far-infrared wavelengths. One of them (hereafter Model E of Guiderdoni et al., 1997) successfully predicted the galaxy number counts at 15, 175, and 850 µm which have been measured since then. Thus, irrespective of the uncertainties in the details of the model, the close match with number-count observations at frequencies similar to those of the HFI suggests that we can predict the contribution of infrared galaxies to Planck measurements with reasonable accuracy. Of course, forthcoming observations will tighten the constraints on the models and might induce small variations in the predicted number counts. Table 2.1 gives our current best estimate, based on this model, of the source number density detectable at the 5σ level, where σ_tot stands for the quadratic sum of the contributions from the detector noise σ_det, the cirrus σ_cir, the CMB σ_CMB, and the unresolved sources themselves, σ_conf.
This was computed iteratively, with only the sources with a flux < 5σ_tot contributing to σ_conf, and using a cirrus level of N_HI = 1.3 × 10²⁰ cm⁻², which corresponds to the best ~10% of the sky. We also give in column (9) the total number of sources detectable over the sky, obtained by repeating this calculation for different N_HI and integrating over the H I distribution derived from the Leiden/Dwingeloo survey. The result of the same calculation at the 3σ_tot level is given in column (10). Note that all
NUMBER COUNTS WITH Planck HFI

 ν     FWHM    σ_ins  σ_cir  σ_CMB  σ_conf  σ_tot  N(>5σ_tot)  N(5σ)     N(3σ)
 GHz   arcmin  mJy    mJy    mJy    mJy     mJy    sr⁻¹        in 4π sr  in 4π sr
 (1)   (2)     (3)    (4)    (5)    (6)     (7)    (8)         (9)       (10)
 857   5       43.3   64     0.1    146     165    954         2980      11200
 545   5       43.8   22     3.4    93      105    515         2370      10700
 353   5       19.4   5.7    17     45      53     398         2560      11500
 217   5.5     11.5   1.7    34     17      40     31          250       1440
 143   8.0     8.3    1.4    57     9.2     58     0.36        4         17
 100   10.7    8.3    0.8    63     3.8     64     0.17        2         4

Table 2.1: Theoretical estimates from model E of Guiderdoni et al. (1997). (1) HFI waveband central frequency in GHz. (2) Beam full width at half maximum in arcmin. (3) 1σ instrumental noise for the 14-month nominal mission. (4) 1σ fluctuations due to cirrus at N_HI = 1.3 × 10²⁰ cm⁻² (level of the cleanest 10% of the sky); the fluctuations have been estimated following Gautier et al. (1992), with P(k) ∝ k⁻²·⁹ and P₀(100 µm) = 1.4 × 10⁻¹² B₀(100 µm)³. (5) 1σ CMB fluctuations for ΔT/T = 10⁻⁵. (6) 1σ confusion limit due to FIR sources in a beam Ω_FWHM, defined by σ_conf = (∫₀^Slim S² (dN/dS) dS)^(1/2); the values σ_conf and S_lim = q σ_tot have been estimated iteratively with q = 5. (7) σ_tot = (σ_ins² + σ_conf² + σ_cir² + σ_CMB²)^(1/2); here σ_cir is for N_HI = 1.3 × 10²⁰ cm⁻². (8) Surface density of FIR sources for S_lim = 5σ_tot. (9) Total number of FIR sources in 4π sr for S_lim = 5σ_tot, with σ_cir consistently computed from the N_HI distribution of the Leiden/Dwingeloo survey. (10) Same as (9) for S_lim = 3σ_tot.
these calculations were done with an instrumental noise derived from a mean integration time, while our scan strategies imply that some high-latitude regions will in fact be surveyed more deeply (with σ_ins up to ~3 times lower). In any case, the HFI will provide the absolute calibration of the relatively bright number counts in the submillimetre range, which any theory of galaxy formation will have to account for. These data will not be provided by any other planned instrument, and will complement the deeper counts which the BOL instrument aboard FIRST will provide. Also of interest is the number of sources which can be detected along scan circles. As shown by table E.1 (p. 17 in the appendix), each bolometer in the 3 highest frequency channels should see about one 5σ source after accumulating data for about 100 rotations along the same scan circle. This will certainly prove useful for a posteriori attitude reconstruction. It is not yet possible to model with the same precision the low-frequency contributions from radio sources, blazars, inverted-spectrum sources, etc. For completeness, we have used the predictions of the best available model (Toffolatti et al., 1997) to model the contribution from unresolved sources at low frequencies.
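The iterative 5σ_tot scheme used for Table 2.1 can be illustrated with a toy counts model. The real calculation uses the Model E differential counts; the power-law slope and amplitude below are purely illustrative placeholders:

```python
import numpy as np

def confusion_limit(sigma_other, counts_amp, slope=1.5, q=5.0, n_iter=50):
    """Iterative estimate of confusion noise and detection limit.

    Assumes illustrative power-law differential counts
        dN/dS = counts_amp * S**(-slope - 1),
    so that sigma_conf^2 = integral_0^{S_lim} S^2 (dN/dS) dS has the
    closed form counts_amp * S_lim**(2 - slope) / (2 - slope).
    The detection limit S_lim = q * sigma_tot and the total noise
        sigma_tot = sqrt(sigma_other^2 + sigma_conf^2)
    depend on each other, hence the fixed-point iteration.
    `sigma_other` collects the instrumental, cirrus and CMB terms in
    quadrature; the beam solid angle is absorbed into counts_amp.
    """
    sigma_tot = sigma_other
    for _ in range(n_iter):
        s_lim = q * sigma_tot
        sigma_conf2 = counts_amp * s_lim ** (2.0 - slope) / (2.0 - slope)
        sigma_tot = np.sqrt(sigma_other ** 2 + sigma_conf2)
    return sigma_tot, q * sigma_tot
```

The iteration converges quickly because adding the confusion term only mildly raises S_lim, which in turn only mildly raises the confusion term.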

Foreground polarisation

The main polarised foreground at the HFI frequencies is the dust emission from our Galaxy. By using a combination of theoretical modelling and data analysis, we have recently succeeded in producing estimates of the expected level of dust polarisation (Prunet et al., 1998). The polarisation level at high Galactic latitudes is typically ~10% of the unpolarised emission (see Appendix E.2 for details). The free-free emission is not polarised. Thus the only other source of polarised emission from our Galaxy is the synchrotron emission. Here the situation is more complex, since we have data on this emission only at low frequencies (~1 GHz), and Faraday rotation has to be taken into account. It is expected that at frequencies of about 100 GHz the synchrotron emission might be 30% to 50% polarised. In our modelling below, we assume a 40% polarisation level. While better knowledge of this component will have to wait for the MAP and LFI measurements, this is not critical for the HFI, since synchrotron emission should be weak at frequencies greater than 100 GHz, and in particular at 217 GHz, where most of the CMB polarisation signal will be derived (the 353 GHz polarised channel

being mostly a dust monitor). Concerning the other sources of emission: i) the polarisation contribution from clusters should be weak, since it requires strong deviations from sphericity; ii) the number of sources detected in the unpolarised case is small enough at frequencies below 217 GHz that their polarised counterparts should be no problem (although this issue may require a more quantitative appraisal). In the following, we only include the dust and synchrotron emissions.

Comparing contributions to the microwave sky

Figure 2.6: Angular power spectra of the various components: a) unpolarised at 100 GHz; b) unpolarised at 217 GHz; c) polarised at 217 GHz. The thick black line corresponds to the unpolarised CMB (temperature component T) in a COBE-normalised tilted CDM model, while the thin black line is for the E polarised component and the dotted black line for the E-T cross-correlation power spectrum. The red, blue and green curves refer respectively to the dust, free-free and synchrotron emissions of the Galaxy, the purple and light blue lines to (respectively) the background of infrared galaxies and radio sources (once 5σ sources are removed), the orange line shows the "on-sky" noise level, and the SZ contribution is in yellow. In c), the dotted red and green lines are for the T-E dust and synchrotron cross-correlation power spectra.

New results of our modelling of the microwave sky are the power spectra of the fluctuations of all the relevant components, as functions of frequency. Figure 2.6 compares them at 100 and 217 GHz. The last element missing from the models is detector noise. Since detector noise is added to the total signal after the sky fluctuations have been observed (and thus after their convolution with the beam), it is convenient to derive a fictitious noise field "on the sky" which, once convolved with the beam pattern and pixelised, will be equivalent to the real one (Knox, 1995). For channel i, the "on-sky" noise power spectrum is thus the real one divided by the square of the beam transform w_i(ℓ). Here we assume that there is no 1/f component left and that the beam profiles are Gaussians of FWHM θ_i, which gives

    C_i(ℓ) ≃ c²_noise exp( ℓ²θ_i² / (8 ln 2) ) ,    (2.2)

with c²_noise = σ_i² Ω_i = σ_i² θ_i², where σ_i stands for the 1σ ΔT sensitivity per field of view (square pixels; see table F.1 for numbers). It is immediately obvious from figure 2.6.b that the 217 GHz HFI channel will be particularly important for measuring the high-ℓ part of the spectrum, since it has high angular resolution, no significant SZ contribution, and modest dust contamination. One should thus directly obtain a signal-to-noise ratio of order unity (for a CDM-like spectrum) at ℓ ≈ 2000! The predicted rms contributions per beam of figure 2.7 are simply obtained by integrating in ℓ the power spectra, multiplied by the beam profile w_i(ℓ). In the unpolarised case (fig. 2.7.a), it is interesting to note the high signal-to-noise available to the HFI for mapping its main foreground, dust, unhindered by the CMB. We further note a potential problem for any high-precision low-frequency measurement, namely the flat SZ contribution (which, although sizeable, should prove hard to remove). The plots also show that the residual foreground contribution to the HFI measurement of the CMB should be at the few µK level.
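The "on-sky" noise spectrum for a Gaussian beam can be evaluated directly; a short sketch in our own notation, assuming C_i(ℓ) = σ_i²θ_i² exp(ℓ²θ_i²/(8 ln 2)) with θ_i in radians:

```python
import numpy as np

def on_sky_noise_spectrum(ell, sigma_T, theta_fwhm_rad):
    """Fictitious 'on-sky' white-noise power spectrum for channel i.

    sigma_T is the 1-sigma Delta-T sensitivity per (square) field of
    view of side theta_fwhm_rad.  Dividing the flat noise spectrum by
    the squared Gaussian beam transform makes the deconvolved noise
    blow up beyond the beam scale, ell ~ sqrt(8 ln 2) / theta.
    """
    c2_noise = sigma_T ** 2 * theta_fwhm_rad ** 2       # sigma^2 * Omega
    sigma_beam = theta_fwhm_rad / np.sqrt(8.0 * np.log(2.0))
    ell = np.asarray(ell, dtype=float)
    return c2_noise * np.exp(ell ** 2 * sigma_beam ** 2)
```

The exponential blow-up is what limits each channel to multipoles below roughly the inverse beam width, whatever the raw sensitivity.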

Figure 2.7: rms contributions of the various components in the Planck channels: a) unpolarised case; b) polarised case; c) polarised case in one-degree FWHM beams. The colour coding is the same as in fig. 2.6, but the contributions of infrared galaxies and radio sources in a) have been co-added under the label "PNTS".

In the case of polarisation measurements (fig. 2.7.b), it is clear that the signal per beam will be largely dominated by noise, at least in regions with mean sky coverage. On the other hand, at the degree scale (fig. 2.7.c), it should be easy to map the foregrounds with high signal-to-noise (since most of their power is at large scales), enabling a "clean" statistical analysis of the CMB polarisation. Figure 2.8.a shows the contours in the angular scale-frequency plane where the fluctuations, as estimated from ℓ(ℓ+1)C(ℓ)/2π, reach 100 µK², i.e. about one tenth of the large-scale COBE level of (30 µK)². These contours map the three-dimensional topography of the fluctuations of the individual components in the ν-ℓ plane. The synchrotron component (when expressed in equivalent temperature fluctuations) defines a valley which opens towards large ℓ, since ℓ²C(ℓ) decreases with ℓ as 1/ℓ. The free-free component defines a shallower and gentler valley, while the dust emission creates a high-frequency cliff. The large-ℓ end of the valley is barred by the point-source "dam". The dotted black line shows the path followed by a stream lying at the bottom of the valley, i.e. it traces the lowest level of the total foreground fluctuations. Its location partly confirms the common wisdom that ν ≈ 100 GHz is the best frequency for CMB work. Indeed, this appears to be true only for low-ℓ measurements. At ℓ > 200, the optimal frequency moves to higher values, ν ≈ 150 GHz. Of course the exact optimal value depends on the respective strengths of the effects from sources and clusters (whose zero is at 217 GHz), which is somewhat model dependent. In any case, this confirms that the most stringent constraints on the high-ℓ part of the CMB spectrum should come from the 143-217 GHz channels, not the lower-frequency ones of lower angular resolution. Figure 2.8.b displays the polarised⁴ countryside for comparison with the unpolarised case. Note that the contour levels are for ten times smaller C(ℓ). While the phase A study (and § 2.3.2 & § 2.4.2) demonstrated that Planck will reach the fundamental limits set by photon noise and astrophysical foregrounds in the unpolarised case, this plot suggests that the polarised measurements from Planck will be limited by the noise and angular resolution of the HFI, but not by Galactic polarisation (see § 2.3.2 for a quantitative analysis).

2.3.2 Expected performance of the instrument

Given our sky model, we can suppose that such a sky has been observed by a particular experimental set-up, and analysed using Wiener filtering. Indeed, as shown in Appendix F, once the instrument and the covariance matrix C of the templates are known, the Wiener filter is entirely determined and can be used to obtain: i) the effective ℓ-space window of the experiment for each foreground, once the data from all channels have been co-analysed; ii) the expected reconstruction errors; iii) the error contributed by noise and foreground removal to the CMB power spectrum. The corresponding equations (F.1-F.3) are introduced in Appendix F, p. 20.

⁴ For the E-type polarisation; the E and B types are coordinate-invariant transforms of the Q and U Stokes parameters.
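In ℓ space, and under the assumption of statistically isotropic components and noise, this kind of Wiener filtering reduces to an independent matrix operation per multipole. A schematic illustration (the mixing matrix A, the component power matrix C_sky and the noise power matrix N_noise are placeholders, not the Appendix F quantities):

```python
import numpy as np

def wiener_weights(A, C_sky, N_noise):
    """Per-ell Wiener filter for multi-frequency component separation.

    Data model at a given multipole: d = A s + n, with A the
    (n_chan x n_comp) mixing matrix (component frequency spectra times
    beam windows), C_sky the component power matrix and N_noise the
    channel noise power matrix.  The linear minimum-variance estimate
    of the components is s_hat = W d, with
        W = C_sky A^T (A C_sky A^T + N_noise)^-1.
    """
    cov_d = A @ C_sky @ A.T + N_noise
    return C_sky @ A.T @ np.linalg.inv(cov_d)
```

In the high signal-to-noise limit W A tends to the identity (unbiased recovery); as the noise grows, the recovered modes are progressively shrunk towards zero, which is exactly the behaviour of the quality factors Q_p(ℓ) discussed below.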

Figure 2.8: Contour levels of the different foregrounds in the frequency-multipole, ν-ℓ, plane: a) contours at the 100 µK² level for the unpolarised components; b) contours at the 10 µK² level for the polarised E-components.

Nominal unpolarised HFI

Figure 2.9 shows the results in the HFI case. It is assumed that all 5σ sources have already been removed, and that the remaining contributions from unresolved infrared and radio sources (the backgrounds) each behave as processes with well-defined properties (see a justification in E.3, in particular figure E.5).

Figure 2.9: a) Quality factors, or squares of the ℓ-space effective windows, for the HFI. As usual, black is for the CMB; red, blue, and green are for the Galactic components; yellow is for the SZ contribution. The transforms of the channel beams are also shown for comparison as dotted orange lines. b) Corresponding real-space effective beams (negative parts are denoted by dashes). c) Breakdown of the contributions to the residual CMB error.

Figure 2.9.a gives the effective ℓ-space window Qp(ℓ) of the experiment for each component, to be compared with the individual beam-induced windows of each channel, wi(ℓ). It shows the gain obtained by combining channels through Wiener filtering. As expected, the weaker Galactic foregrounds are poorly recovered except on the largest scales (small ℓ) where their signal is strongest. In the SZ case, the ℓ-range recovered is larger, but the recovered modes will be biased low due to the relatively low signal to noise. This is somewhat misleading though, since the SZ signal is strongly non-Gaussian and thus poorly described by a power spectrum only. Figure 2.9.b shows the inverse spherical harmonics transforms of the Qp(ℓ), i.e. the effective beams. They show the effective point spread function by which the underlying maps of the component emissions have been convolved once the analysis is complete. One can see in particular that the CMB FWHM is ≃ 6′ while that of the dust component is ≃ 4′. Quoting a FWHM for the other components is not very meaningful since they have very extended tails at large separations. We can also estimate the spectrum of residual errors in the map for each component. Figure 2.9.c compares these residual errors with the true signal. At low ℓ the largest contribution (aside from noise) is the one from the synchrotron and free-free emissions, while at ℓ > 100, the largest contribution is the one from the unresolved background of low-frequency sources. Note that the sum of reconstruction errors from all components (black line) is well below the expected primordial CMB signal at all multipoles ℓ < 2000. Finally, we can estimate the uncertainty added by the noise and foreground removal to the CMB power spectrum. Figure 2.10 shows the envelope of the 1-σ error expected from the original MAP concept (yellow), the current design of MAP (green), the LFI (blue), and the HFI or the full Planck (red); the experimental characteristics used in this comparison may be found in table F.1, p. 22 of the appendices. Note that there has been no "band averaging" in this plot (footnote 5), which means that there is still cosmological signal to be extracted from the HFI at ℓ ≃ 2500. The message from this plot is thus excellent news, since it tells us that even accounting for foregrounds, Planck will be able to probe the very weak tail of the power spectrum and allow breaking the near degeneracy between some of the cosmological parameters.

Figure 2.10: Expected errors on the mode amplitudes (no cosmic variance included). Colour-coding: yellow corresponds to the originally proposed MAP experiment, green to the revised concept, blue is for the LFI, and red is for the HFI.
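The "band averaging" mentioned above is easy to make concrete: averaging the per-multipole C(ℓ) estimates over bands of width Δℓ shrinks the error bar on each band by √Δℓ (for independent multipoles). A minimal sketch with toy error values:

```python
import numpy as np

# Sketch of band averaging: the error on the mean of delta_l independent
# C(l) estimates is the rms per-multipole error divided by sqrt(delta_l).
def band_average(cl_err, delta_l):
    """cl_err: per-multipole 1-sigma errors; returns per-band 1-sigma errors."""
    n_bands = len(cl_err) // delta_l
    banded = cl_err[:n_bands * delta_l].reshape(n_bands, delta_l)
    return np.sqrt((banded ** 2).mean(axis=1) / delta_l)

errs = np.full(100, 4.0)          # toy: constant error of 4 per multipole
print(band_average(errs, 25))     # -> four bands with error 4/sqrt(25) = 0.8
```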

Robustness & reliability

Figure 2.11.a shows how the expected 1-σ error bars on the derived C(ℓ) of the CMB from the HFI are affected by a global noise increase in all channels, as could be produced for instance by an operating temperature offset from the nominal one. The black lines show the "truth" and the HFI nominal error bars, the blue lines show the effect of a varying noise level by a factor of two both ways, and the red line shows the case of a variation by a factor of four. In brief, each improvement of the noise level by a factor of two allows mapping one more Doppler peak, and the nominal HFI should allow mapping (without any band averaging) till the 7th peak in a spatially flat CDM universe. By comparison, the improved MAP should determine the spectrum till the fourth peak and the LFI till the 5th peak. Figure 2.11.b shows the total residual contribution from noise and unsubtracted foregrounds to a CMB map. This is obtained by integrating the residual spectrum of figure 2.9.c till a maximum ℓmax. For "low"-resolution maps retaining the modes till ℓmax < 1000, the residuals vary nearly linearly with the noise level. The high-ℓ part is most affected by an increase of the noise level. For all the modes ℓ < 1500, the nominal performances should imply a 9 μK global residual, i.e. ΔT/T ≃ 3×10⁻⁶ (and ΔT/T ≃ 2×10⁻⁶ for ℓ < 1000). In order to test whether the failure of a single channel would be critical, we have also estimated the variation of the CMB residuals for ℓmax = 1500 in different configurations. Figure 2.12 shows that i) the residuals are dominated by noise; ii) there is nearly no impact at all if a single Planck channel is lost, except for the 143 & 217 GHz channels; iii) losing the HFI would more than double the residuals, but the LFI completed by the HFI 217 or 353 GHz channel would already do rather well.

Footnote 5: The band averaging would reduce the error bars on the smoothed C(ℓ) by the square root of the number of multipoles in each band.

Figure 2.11: Performance variations for global variations across channels of the detector noise level. a) Nominal HFI power spectrum errors (black) compared with the resulting ones for noise variations by a factor of 2 (red) and 4 (blue) both ways. b) Variations of the residual errors (restricting the integration to a maximum ℓmax), the yellow, green, blue and red lines corresponding respectively to ℓmax = 500, 1000, 1500, and no cutoff at all.
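The residual map levels quoted above follow from integrating the residual spectrum up to ℓmax, σ² = Σℓ (2ℓ+1)/(4π) C_res(ℓ). A minimal sketch, with an assumed toy residual spectrum in place of the real figure 2.9.c curve:

```python
import numpy as np

# Sketch of the map rms obtained by keeping all modes up to l_max:
# sigma^2 = sum_l (2l+1)/(4 pi) * C_res(l), for an assumed toy C_res.
def map_rms(c_res, l_max):
    ls = np.arange(2, l_max + 1)
    var = np.sum((2 * ls + 1) / (4 * np.pi) * c_res(ls))
    return np.sqrt(var)

c_res = lambda l: 1e-3 / l ** 2   # assumed residual spectrum shape
for l_max in (500, 1000, 1500):
    print(l_max, map_rms(c_res, l_max))   # residual grows slowly with l_max
```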

Configurations checked:
0 full configuration
1 full except 30 GHz
2 full except 44 GHz
3 full except 70 GHz
4 full except 100 GHz LFI
5 full except 100 GHz HFI
6 full except 143 GHz
7 full except 217 GHz
8 full except 353 GHz
9 full except 545 GHz
10 full except 857 GHz
11 HFI only
12 LFI plus 143 GHz
13 LFI plus 217 GHz
14 LFI plus 353 GHz
15 LFI plus 857 GHz
16 LFI only

Figure 2.12: a) CMB residuals (in a map with all modes till ℓmax = 1500 retained) in various configurations. As usual, the red, blue, and green curves correspond to the Galactic residuals, while the yellow curve is for the thermal SZ, and the brown is for the noise (including the contribution from unresolved point sources). The black curve displays the total. b) Table of configuration indexes in the adjacent figure.

Polarisation analysis

The very recent generalisation of Wiener filtering to the co-analysis of polarised and unpolarised data (Prunet, Sethi, & Bouchet, 1998, and Appendix F.2) can be used as above to estimate the performance of various set-ups. Here we focus on the uncertainty added by the noise and the foreground removal in determining the error bars on the polarisation power spectra of E-type and the E-T cross-correlation. Figure 2.13 shows that the HFI should do very well if the foregrounds are not too different from what was assumed.

Figure 2.13: The black lines show the expected power spectra in a COBE-normalised tilted CDM model: a) E autocorrelation spectrum; b) E-T cross-correlation power spectrum. The red lines show the 1-σ error bars contributed by noise and the remaining foregrounds. In all cases, the curves have been smoothed with a 10% running average in log ℓ.

We have used these derived error bars to estimate the accuracy achievable on a restricted set of cosmological parameters, once the foregrounds are taken into account through our extended Wiener filtering procedure. Table 2.2 shows that the reachable accuracy is even better, when we use all the HFI channels in the presence of foregrounds, than we could anticipate on the basis of a naïve calculation where foregrounds are ignored and it is assumed that the noise simply corresponds to that of the best polarised channel (i.e. 217 GHz for the HFI). This provides support for the idea that the 353 GHz channel can be efficiently used as a sensitive dust polarisation monitor, even at high ℓ.
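The "10% running average in log ℓ" used to smooth the curves of figure 2.13 can be sketched directly: each multipole is replaced by the mean over a window of fixed fractional width, [ℓ/1.05, 1.05 ℓ]. This is an illustrative implementation, not the exact code used for the figure:

```python
import numpy as np

# Running average over a window of fixed fractional width in l
# (i.e. a constant width in log l), as an assumed simple variant.
def log_running_average(ls, values, frac=0.10):
    out = np.empty_like(values, dtype=float)
    for i, l in enumerate(ls):
        sel = (ls >= l / (1 + frac / 2)) & (ls <= l * (1 + frac / 2))
        out[i] = values[sel].mean()
    return out

ls = np.arange(2, 2001)
smooth = log_running_average(ls, np.sin(ls / 50.0))
print(smooth.shape)   # same sampling as the input, but a smoother curve
```

The window widens with ℓ, so the smoothing is mild at low multipoles and strong in the high-ℓ tail, which is where the per-multipole scatter is largest.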

2.4 Numerical simulations

The previous semi-analytical analyses used Wiener filtering to estimate the efficiency of the separation of Planck observations into physical components. However, as discussed in § D.4, Wiener filtering is linearly optimal only for Gaussian processes. In addition it relies on a prior, the covariance matrix of the templates, that has to come from a first inversion whose precision needs to be assessed. For all these reasons, we conducted simulations of the sky observed with various experimental set-ups (§ 2.4.1), which we analysed with this formalism, as well as with a nonlinear Maximum Entropy Method (MEM) (§ 2.4.2).

2.4.1 Numerical simulations of the observations

Our simulations of the observations proceed as in the phase A study. We have included the three Galactic components, the primordial CMB fluctuations, and the Sunyaev-Zeldovich effect from clusters


Figure 2.14: Simulated maps of 9 degree diameter, as observed by Planck. The top row shows the first three channels of the LFI instrument (its 100 GHz map is not shown). The next 2 rows correspond to the 6 channels of the HFI instrument. The rms level of the foregrounds corresponds approximately to their median value. The figure in the last row shows the power spectra of the processes contributing to the 100 GHz "observation" above.

F.R. BOUCHET & R. GISPERT 1998


Figure 2.15: An example of component separation of the HFI "observations" of figure 2.14. a) Template map of the ΔT anisotropies used in the simulation; b) map recovered by Wiener filtering.

Figure 2.16: Accuracy of the reconstruction of the ΔT map of figure 2.15 by Wiener filtering of HFI observations. a) Histogram of the reconstruction errors. It is well fitted by a Gaussian with σ = 15% of the input map rms of 77.6 μK (red line). b) Contours in the plane of recovered values (per pixel) versus input values. The inner contour already contains 90% of the pixel values, while the outer one contains all pixels. c) Comparisons between (band averaged) power spectra of the input (black), output (red), and difference map (blue).

Figure 2.17: An example of separation using MEM of the ySZ component from the "observations" of figure 2.14. The left panel shows the template map used in the simulation, after convolution with a 10′ beam; the middle panel shows the map recovered by MEM applied to HFI observations alone, while the right panel shows the result for full Planck observations.

Parameters             C2          h      Ωb     ΩΛ     τ      ns     nt
Model                  796 (μK)²   0.5    0.05   0.0    0.05   0.9    0.1
Best channel (LFI)     10 %        2.9 %  4.7 %  0.08   12 %   1 %    59 %
Wiener (LFI)           6.2 %       1.6 %  2.9 %  0.05   9.8 %  0.6 %  39 %
Best channel (HFI)     5 %         1.5 %  2.7 %  0.05   6.6 %  0.5 %  31 %
Wiener (HFI)           4 %         1 %    1.9 %  0.03   7.2 %  0.4 %  27 %
Best channel (Planck)  5 %         1.5 %  2.7 %  0.05   6.6 %  0.5 %  31 %
Wiener (Planck)        3.8 %       1 %    1.7 %  0.03   5.8 %  0.4 %  24 %

Table 2.2: 1-σ errors in estimates of cosmological parameters using the polarisation information. The retained parameters are, from left to right of the first row, the normalisation of the power spectrum, Hubble's constant in units of 100 km/s/Mpc, the energy density of baryons and of the vacuum in units of the closure density, the optical depth to last scattering, and the scalar and tensor indexes of the power-law initial conditions. The next row shows the model values we used, while the other rows compare naïve estimates based on using the "best" channel of the experiment (e.g. the 143 GHz channel for the HFI, assuming the others can be used to fully remove any foreground contribution) and our estimates assuming Wiener filtering is used.

(both thermal and kinetic). The only astrophysical sources of fluctuations missing from the simulation are those arising from the background of unresolved sources other than clusters, which are at very faint levels where the CMB contribution is strongest (but they should be included later, to test the removal of resolved sources). We have chosen to construct 12.5×12.5 square degree fields with 1.5′×1.5′ pixels, and have created spatial templates for all the components. For the primary ΔT/T fluctuations, we use a COBE-normalised standard CDM model. Realisations of the thermal and kinetic effects of clusters are stored respectively as maps of the y parameter and ΔT/T fluctuations as before (see fig. 1.7). To simulate Galactic emissions, we use the same spectral model as described in Appendix E.1 and use templates extracted from IRAS and the 408 MHz survey of Haslam (footnote 6). The measurement process by the Planck instruments was simulated in the following way. We have assumed unit transmission across each spectral channel (with Δν/ν = 0.20 for the LFI and Δν/ν = 0.25 for the HFI), and integrated across each waveband the different spectral components. The angular response of each channel was assumed to be Gaussian (of the corresponding FWHM), and the sky maps were convolved with these beams. Finally, we have added isotropic noise maps, assuming a spatial sampling at 1/2.4 of the beam FWHM (footnote 7) (we assume that a prior destriping procedure has been efficient enough for the residuals to be negligible). Figure 2.14 shows an example of (simulated) maps as "observed" by Planck in a region with nearly median foregrounds.
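The measurement step described above (Gaussian beam convolution followed by isotropic noise) can be sketched on a flat-sky patch. The map size, beam width, and noise level below are toy values, not the actual channel parameters of table F.1:

```python
import numpy as np

# Minimal sketch of the simulated observation of one channel: convolve a
# sky patch with a Gaussian beam of the channel FWHM, then add white noise.
rng = np.random.default_rng(0)

def observe(sky, fwhm_pix, noise_rms):
    """sky: 2-D map; fwhm_pix: beam FWHM in pixels; noise_rms: per pixel."""
    sigma = fwhm_pix / np.sqrt(8 * np.log(2))      # FWHM -> Gaussian sigma
    ny, nx = sky.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Gaussian beam transfer function in the flat-sky Fourier domain:
    beam = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    smoothed = np.fft.ifft2(np.fft.fft2(sky) * beam).real
    return smoothed + rng.normal(0.0, noise_rms, sky.shape)

sky = rng.normal(size=(128, 128))                  # toy white "sky"
obs = observe(sky, fwhm_pix=4.0, noise_rms=0.1)
print(obs.shape)
```

Convolution is done by multiplication in Fourier space, which on a small patch is the flat-sky equivalent of multiplying the spherical harmonic coefficients by the beam window wi(ℓ).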

2.4.2 Analysing observations

Since the phase A study, the main development concerning the inversion by Wiener filtering has been to deduce the Wiener filters by using power spectra as obtained from a first round of χ² minimisation using a Singular Value Decomposition (to allow regularisation of the solution). This was done in Fourier space, mode by mode. From the modes obtained, the spectra were deduced by removing the noise bias (this inversion noise being estimated by the algorithm itself) and averaging over angles. A fit was then performed for each component (footnote 8). These fits deduced from the data were then used in designing the Wiener filters used to "polish" the component separation (see Bouchet & Gispert, 1998b, for details). Figure 2.15 shows a comparison between the zero-mean input template Ti of the simulated maps of figure 2.14, and that recovered, To, after Wiener analysis of HFI observations. The accuracy of this inversion may be judged with the help of figure 2.16: as panel a) shows, the reconstruction errors per pixel, ε(x) = Ti(x) − To(x), have a Gaussian distribution to a high degree of accuracy (footnote 9), with an rms, σ, which is 15% of the rms of the input maps (i.e. ΔT/T ≃ 4×10⁻⁶), with a quite weak skewness, S = ⟨(ε − ⟨ε⟩)³⟩/σ³ = −0.007, and a quasi Gaussian kurtosis, K = ⟨(ε − ⟨ε⟩)⁴⟩/σ⁴ = 3.0. The contours of figure 2.16.b show a tight correlation between the input and recovered values, with no bias, no outliers, and an even distribution of the recovered values for all input values. Finally, figure 2.16.c compares the spectra of the input, output and difference. The difference spectrum reaches 5 μK at ℓ ≃ 800, instead of 1200 as suggested by the theory (see fig. 2.9.c). It is only for scales ℓ > 2000 that the signal becomes too weak to be fully recovered. Similar numbers are obtained for the analysis of full Planck observations. Note that these numbers correspond to the analysis of only one map. The residual errors on the spectrum determination should thus be much smaller once a large fraction of the sky has been analysed (as we showed in the phase A study by computing the spectrum of the residual error on a large number of maps). Finally, we find that inversions by MEM lead to very similar quantitative conclusions. The situation is different though for the recovery of the SZ effect from clusters. MEM then does much better than Wiener filtering for this tenuous but strongly non-Gaussian signal.

Footnote 6: Since the Haslam map (Haslam et al., 1982) has an angular resolution of only 0.85°, we added to it small scale structures with a C(ℓ) ∝ ℓ⁻³ power spectrum, thereby extrapolating to small scales the observed behaviour of the spatial spectrum. In practice, the map is Fourier transformed (which is equivalent to a spherical harmonics decomposition if the size of the map is much smaller than a radian) and its spectrum is computed. New harmonics are then generated at larger ℓ, and globally normalised so that their spectrum smoothly extends the measured one. The results are then transformed back in real space.

Footnote 7: The noise rms of the simulated maps are thus ≃ 2.4 times greater than the numbers recalled in the performance table F.1. These are reobtained if one degrades the noise maps resolution to the FWHM of the corresponding channel.

Footnote 8: The CMB is fitted by an ℓ⁻² power law times an exponential cut-off, the SZ part by an ℓ⁻¹ power law, while all the Galactic components (both for their auto- and cross-correlations) are fitted with ℓ⁻³ power laws.
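The first-pass spectrum estimate described above (per-mode power, noise bias removed, then averaged over angles) can be sketched on toy data. The field, noise level, and binning below are assumptions for illustration:

```python
import numpy as np

# Sketch of a first-pass power spectrum estimate: per-mode power in
# Fourier space, minus an assumed-known noise bias, averaged over rings
# of constant |k| ("averaging over angles").
rng = np.random.default_rng(1)
n, noise_var = 128, 0.5
signal = rng.normal(size=(n, n))                       # toy white sky, unit power
data = signal + rng.normal(0.0, np.sqrt(noise_var), (n, n))

f = np.fft.fft2(data) / n                              # so mean |f|^2 = map variance
power = np.abs(f) ** 2
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
k = np.hypot(kx, ky)
edges = np.linspace(0.0, 0.5, 17)
spectrum = np.array([power[(k >= lo) & (k < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
debiased = spectrum - noise_var                        # remove the noise bias
print(debiased.mean())                                 # close to the true signal power, 1.0
```

In the real pipeline the noise bias is estimated by the algorithm itself rather than assumed known, and the resulting binned spectra are then fitted by the power laws of footnote 8 before designing the filters.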
Furthermore, as one can see from figure 2.17, analysing the full Planck observations is helpful to disentangle this weak signal from other weak contributions from Galactic emissions (the strongest cluster of this image has a central y value of 5×10⁻⁶ at a 10′ resolution, but weaker ones with a central value ≃ 2×10⁻⁶ are easily recovered (footnote 10)). Even in this small region of 9° diameter with median foregrounds, one easily detects at least ten clusters. The Planck catalogue of SZ clusters should have ≃ 10 000 entries.

2.5 Discussion & conclusions

We can thus summarise the detailed analyses of the previous sections as follows.

• The making of maps from time-ordered data should lead to very small residual striping, even with easily implementable algorithms, provided that the scan pattern is well interconnected.

• Despite the added complexity of polarisation measurements, we have shown that the use of a "decorrelated configuration" should simplify the making of polarisation maps. We found and implemented a simple polarisation map-making method, and showed that the residual striping should also be very low.

• We have now addressed the issue of the sidelobe pickup of the sky signal very far from the main beam axis. The iterative analysis of the data should allow the reconstruction of the antenna pattern including sidelobes, thereby allowing the production of frequency maps with this systematic effect under control.

• The basic hypotheses underlying the microwave sky model built during the phase A study have been rather strengthened by recent developments. Our model for the contribution from the SZ effect from clusters has been improved to take into account the latest observational constraints. In addition, we now have a much better grip on the contribution from infrared sources.

• We have estimated the dust polarisation, and extended the semi-analytical analyses of Wiener filtering to the polarisation case. It appears that at the levels of polarisation assumed, the foregrounds should not impede the analysis of the CMB polarisation, thanks to the sensitivity and frequency coverage of the instrument.

Footnote 9: This should simplify both the analysis of parameter estimation, and the quantitative appraisal of the Gaussianity or not of the primary signal, via for instance measurements of the bispectrum, see Heavens (1998).

Footnote 10: These values are about 1.5 times larger at a resolution twice better.

• We have performed more realistic simulations of the component separation using the Wiener filtering method, by designing the filters from the information derived from the observations themselves rather than assuming perfect knowledge of the underlying power spectra. We also implemented another method called MEM (for Maximum Entropy Method), which appears to perform as well as Wiener filtering for the CMB, but allows a much improved detection of weak SZ clusters.

These analyses confirm the capabilities of the HFI instrument to fulfill the science goals, and further indicate that its added capability of polarisation measurements can be used to dig even deeper into the CMB "gold mine". It is also important to bear in mind the expected benefits of combining the data from the HFI and LFI instruments:

• Since the 100 GHz channel is common to both instruments, it allows direct cross-checks of systematic errors at the calibrated timeline level.

• The SZ contribution might dominate the residual contribution to the CMB maps derived from the LFI alone, and the synchrotron and unresolved background from sources might dominate the residuals of the CMB derived from the HFI alone. But the combination of the HFI and LFI should produce a much cleaner determination of the CMB at large scales.

• The dominant residual source of error in the reconstructed CMB maps from both instruments would be noise rather than residual foreground contributions, except at the smallest scales (ℓ > 1000), where the residual contribution from the unresolved background of low-frequency sources would contribute equally to the noise (but both at a quite low level).

• More quantitatively, figure F.3 shows that the total residual at ℓ = 10 would decrease from about ≃ 0.05 μK (LFI) and ≃ 0.008 μK (HFI) down to 0.003 μK for their combination (and ≃ 0.015, 0.005 and 0.004 μK at ℓ = 1000). These numbers follow from a semi-analytical analysis using Wiener filtering. It assumes Gaussian-distributed foregrounds, an assumption which is of course untrue.
Although the results of the detailed simulations (which do not rely on this hypothesis) of course lead to larger numbers, they still validate our general conclusions.

• Obtaining polarisation measurements free of systematic effects is technically challenging, and will require very detailed analyses from both instrument teams. The HFI and LFI use very different techniques to measure the polarisation. The ability to cross-check between instruments will constrain possible sources of systematic error.

• The combination ensures robustness to (localised) unexpected foreground behaviour thanks to the very wide frequency coverage of the combined instruments.

This detailed work done since the phase A report thus confirms the impressive capabilities of the HFI and Planck, and indicates that we have valid concepts for developing the data analysis pipeline. In fact, we have already built a "sky simulator" of 4π observations at the resolution of each Planck channel, and we are currently developing the algorithms to efficiently mine through this large database.

Chapter 3

INSTRUMENT DESCRIPTION

3.1 Introduction

The High Frequency Instrument (HFI) is designed to measure the anisotropy of the CMB radiation over the frequency bands where contamination from foreground sources is at a minimum and the CMB signal is at a maximum. Emission from foreground contributions (from the Galaxy and extra-galactic sources) will be removed from the sky maps by measuring the spectral signature of the fluctuations over a wide frequency range. The HFI is therefore a multiband instrument with 6 bands from 100 to 857 GHz. Further, the critical cosmological information is contained within the cleaned spatial maps. It is necessary that the HFI has enough pixels at each frequency in the cross-scan direction to ensure proper sampling of the sky as the satellite spin axis is depointed in steps of 3 to 5 arc minutes. The instrument needs to measure the polarisations at several frequencies. Detectors at the same frequency but different polarisation angles must follow each other on the same scanning path to allow direct differences to be made. Further, the number of detectors also provides improved sensitivity, improved immunity to cosmic rays, and redundancy. The focal plane is therefore a layout of 48 pixels fulfilling all of the scientific requirements.

3.1.1 Instrument layout

The HFI consists of (i) the HFI focal plane unit, (ii) the readout electronics, (iii) the Data Processing Unit, (iv) the coolers, and (v) harness and tubes linking various subsystems. It is based on the use of bolometers cooled at 0.1K, which are the most sensitive detectors for wide band photometry in the HFI spectral range. Bolometers are sensitive to the heat deposited in an absorber by the incident radiation. Very low temperatures are required to obtain a low heat capacity giving a high sensitivity with a short enough thermal time constant.

Cooling the detectors at 0.1K in space is a major requirement that drives the architecture of the HFI. This is achieved, starting from the passively cooled 50K/60K stage of the payload module, by a four-stage cooling system (18K-4K-1.6K-0.1K) detailed in section 3.5. The 18K cooler is common to the HFI and the Low Frequency Instrument (LFI).

Figure 3.1: Schematic layout of the HFI showing its main parts and their temperatures.

The 4K stage protects the inner stages from the thermal radiation of the 18K environment. It also provides an electromagnetic shielding (a Faraday cage) for the high

impedance part of the readout electronics. It is the envelope of the HFI focal plane unit. The coupling of the telescope with the detectors is made by back-to-back horns attached on the 4K stage, the aperture of the waveguides being the only radiative coupling between the inside and the outside of the 4K box. Filters are attached on the 1.6K stage, and bolometers on the 0.1K stage, which corresponds to an optimal distribution of heat loads on the different stages. The HFI focal plane unit has an extension to the 18K and 50/60K stages, enclosing the first stage of the preamplifiers (J-FETs at 120K). The AC bias and readout electronics (section 3.4.2) performs all the electrical functions of the cold stages, including the temperature measurement and control.

3.1.2 Sensitivity

The ultimate limitation to the sensitivity of radiometers is the quantum fluctuation of the radiation itself, i.e. photon noise of the flux reaching the detector, ideally only that from the observed source. HFI is designed to approach this ideal limit. The signal to noise ratio obtained by one detector after an integration time t can be written, as a first approximation, as:

S/N = Wsignal / [ NEP (2t)^(-1/2) + Wsystematics ]

where Wsignal is the power absorbed by the detector, after transmission by the optical system, NEP is the Noise Equivalent Power of the detection system, including intrinsic detector noise, photon noise, and spurious signals, and Wsystematics is the power associated with the fraction of the systematic effects, such as spin synchronous variations of straylight and temperature, that cannot be taken away in the data reduction process (most of them are removed using the redundancy in the data). The photon noise on the HFI detectors originates mainly from the CMB for λ > 1.5mm, and mainly from the thermal emission of the telescope at shorter wavelengths. A colder telescope improves the sensitivity at high frequencies. At low frequencies, the HFI is designed to approach the quantum noise of the CMB itself. An instrument approaching the theoretical sensitivity limit must meet severe requirements in several domains: (i) The detectors' intrinsic noise must be small with respect to photon noise. The current results obtained with spiderweb bolometers give intrinsic NEPs equal to or less than photon noise (see Section 3.3). (ii) The efficiency of the optical system must be high; we aim at an overall efficiency better than 30%. (iii) The stray light must have negligible impact on the measurement. The horns are optimised to get the maximum directivity compatible with the stray light requirement (see section 3.2.3). (iv) The time response, the noise spectrum, and the detector layout must be consistent with the sky coverage strategy. While the instrument is scanning the sky, angular frequencies along the observed circle are detected as time frequencies. The system must be able to detect all the useful frequencies, from 0.016Hz to nearly 100Hz. A new type of readout electronics has been developed for this purpose (see Section 3.4.2). (v) In addition, other sources of noise, such as those induced by ionising particles or electromagnetic interference, must be kept negligible. Special attention has been given to this subject, and is detailed in the relevant sections. Once this sensitivity limit has been reached, the only ways to increase the accuracy of the measurement are to increase the number of detectors and/or the duration of the mission. Table 3.1 gives the mean sensitivity per pixel for a mission duration of 14 months. Bolometer noise is assumed to be equal to photon noise. Pixels are assumed to be square (side = beam Full Width at Half Maximum). ΔT/T sensitivity is the noise expressed in CMB temperature relative change (1σ). ySZ is the sensitivity (1σ) to the comptonisation factor for the Sunyaev-Zeldovich effect. The beam patterns on the sky are nearly gaussian and well defined by their full width at half maximum. For all channels, the spectral resolution is ν/Δν = 4, and the total optical efficiency, including the losses in the telescope, is assumed to be 0.32. The sensitivity given in table 3.1 is relevant for a uniform extended source (or a point source for the flux sensitivity) varying with time scales long enough to avoid damping by the bolometer time constant and short enough not to be in the domain of 1/f noise.
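The S/N expression above is easy to evaluate numerically. The values below are purely illustrative, not the actual HFI numbers of table 3.1:

```python
# Numerical sketch of the S/N expression above, with assumed toy values:
# an absorbed power W_signal, a detection-chain NEP, an integration time t,
# and a residual systematic power term.
def signal_to_noise(w_signal, nep, t, w_systematics=0.0):
    """S/N = W_signal / (NEP * (2 t)^(-1/2) + W_systematics)."""
    return w_signal / (nep * (2.0 * t) ** -0.5 + w_systematics)

# Longer integration raises the S/N; a systematic floor caps the gain:
print(signal_to_noise(1e-16, 1e-17, 1.0))          # ~14 for these toy values
print(signal_to_noise(1e-16, 1e-17, 1.0, 5e-18))   # systematics eat into it
```

The (2t)^(-1/2) factor is the noise in a post-detection bandwidth of 1/(2t), which is why the only remedies once the photon-noise limit is reached are more detectors or a longer mission.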

Central frequency (ν, GHz)               100    143    217    353    545    857
Beam Full Width Half Maximum (arcmin)    10.7   8.0    5.5    5.0    5.0    5.0
Number of unpolarised detectors          4      3      4      6      0      6
ΔT/T sensitivity (Intensity, μK/K)       1.7    2.0    4.3    14.4   147    6670
Number of polarised detectors            0      9      8      0      8      0
ΔT/T sensitivity (U and Q, μK/K)         -      3.7    8.9    -      208    -
Total flux sensitivity per pixel (mJy)   8.7    11.5   11.5   19.4   38     43
ySZ per FOV (×10⁶)                       1.11   1.88   547    6.44   26     600

Table 3.1: HFI Sensitivities.

Deviations from this ideal case result from the following effects:

• Spatial frequencies are not transmitted uniformly by the optical system. The Modulation Transfer Function filters high spatial frequencies. The goal is to optimise angular resolution while keeping the straylight at an acceptable level (see Section 3.2).

• High frequencies are filtered out by the bolometer time response. Implementing fast enough bolometers is a requirement of the instrument (see Section 3.3.3).

• The sensitivity at low frequencies may be degraded due to 1/f noise of the detection system and of the electronics. In consequence, the temperature stability of the 0.1K stage and the readout electronics are required to show no excess fluctuations down to 0.016 Hz, i.e. the frequency of the 1 rpm scanning.
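The 0.016 Hz to ~100 Hz band quoted above follows directly from the scan strategy: with the satellite spinning at 1 rpm, a multipole ℓ along the observed circle appears in the timeline at temporal frequency f = ℓ · f_spin. A minimal sketch:

```python
# Sketch of the mapping between angular and temporal frequencies for a
# 1 rpm scan: multipole l along the scanned circle shows up at l * f_spin.
F_SPIN = 1.0 / 60.0           # Hz, one rotation per minute

def multipole_to_hz(l):
    return l * F_SPIN

print(multipole_to_hz(1))     # ~0.017 Hz: the spin frequency itself
print(multipole_to_hz(3000))  # 50 Hz: small scales land at high frequency
```

This is why the 1/f knee of the readout chain must sit at or below the spin frequency, while the bolometer time constants must preserve the response up to nearly 100 Hz.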

3.2 Focal Plane Optics

3.2.1 Architecture

To detect anisotropies at a level of 1 part in 10⁶ in the CMB, it is essential that the sensitivity of the HFI to unwanted energy is minimised. A high rejection ratio must be achieved in the angular domain, i.e. for scattered and diffracted waves. For that purpose, the field of view of the HFI detectors is determined by naked horns in the focal plane which have a well determined angular response, thereby allowing control of the stray fields whilst coupling efficiently to the wanted energy. Section 3.2.3 details the technical solution adopted. Since most sources of spurious radiation have a steep spectrum, a high spectral rejection is also mandatory. In addition, the radiation of the filters themselves on the detectors, and the load from warm parts on the cryogenic stages, must be kept small (a few nW on the 0.1K stage). Complying with these requirements is achieved by a wide use of high performance interference filters, and by an original architecture tightly coupling optical and cryogenic designs (figure 3.2). By using back-to-back horns at 4K (see Section 3.2.3), a beam waist is produced at the 1.6K level where spectral filters are placed to define the detected band. A third horn re-images radiation onto the bolometric detector. This design naturally offers thermal breaks between the 100mK detectors, the warmer 1.6K filters, and the focal plane horns at 4K.

Figure 3.2: Architecture of the HFI focal plane unit.

3.2.2 Pixel layout

Figure 3.3 shows a schematic of the proposed HFI focal plane that optimises the use of the available focal plane space. The limitation of the number of horns comes from a thermal and mass limitation imposed by the cooler and from the requirement to share the focal plane with the LFI. The pixel layout is driven by:

• The number of feeds at each frequency is determined by the cross-scan sampling requirement, the need to measure polarisation, and the goal of maximum redundancy.

• The CMB polarisation signature needs to be measured at 143, 217, and 545GHz.

• The requirement to measure the polarisation drives the focal plane layout, since at least 3 polar directions must be measured at the same point on the sky. We choose to measure 4 when possible, because this provides the cleanest way to independently separate the two relevant Stokes parameters.

• The location of the different frequencies is determined by the requirement to minimise the effects of focal-plane aberration, and to measure at short time intervals the different polarisations of a given pixel on the sky by detectors on the same scan path.

• In the cross-scan direction, the high frequency pixels are staggered to give a sampling step of about 2 arcmin, which is consistent with the Nyquist criterion, and gives a full sampling of the sky for steps of the spin axis up to 4 arcmin. The 100 and 143GHz beams are large enough to guarantee a correct sampling with the nominal steps in the spin axis.

• Redundancy is achieved along the scan direction by having two sets of identical detectors (polarised or unpolarised).

The selected spectral bands, the number of detectors in each band, their polarisation sensitivity (if any), and the beamwidth of each channel on the sky are given in Table 3.1.

Figure 3.3: View of the entrance horns, as seen from the telescope. Lines across horns represent the direction of the measured polarisation, when applicable.
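The cross-scan sampling argument above can be sketched with the usual rule of thumb that a (roughly) band-limited beam of a given FWHM needs samples at least every FWHM/2; the 2 arcmin staggered step and the 5 arcmin high-frequency beams are the values quoted in the text:

```python
# Sketch of the Nyquist-style sampling check: a step of at most FWHM/2
# (assumed criterion) adequately samples a beam of the given FWHM.
def nyquist_ok(step_arcmin, fwhm_arcmin):
    return step_arcmin <= fwhm_arcmin / 2.0

print(nyquist_ok(2.0, 5.0))   # True: the staggered 2' step samples a 5' beam
print(nyquist_ok(4.0, 5.0))   # False: a bare 4' spin-axis step alone would not,
                              # which is why the staggered pixels are needed
```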

3.2.3 Coupling to the telescope - horn requirements

The HFI instrument has evolved since the "Red Book" (ESA Phase A Report) design to incorporate new developments in feed systems for broadband detectors, which provide a much cleaner spectral and spatial system response. The philosophy behind the scheme for the focal plane horns is influenced by a number of specific requirements peculiar to the Planck Mission. To obtain the necessary resolution with low spillover, a conical corrugated horn design has been chosen for the feeds. The 100, 143, 217, and possibly 353GHz horns are single moded, and so produce coherent diffraction limited beam patterns. The details of the sidelobe structure, and thus the spillover levels as a function of edge taper, depend on the phase error s = d^2/(8*lambda*L) across the horn aperture (d is the horn diameter, lambda the wavelength and L the horn axial length). However, because of mass restrictions and the limited field of view in the telescope focal plane, the sizes of the horns have to be minimised within the straylight and angular resolution requirements. This inevitably pushes the horn design towards a configuration with s in the range 0.25-0.40 for the single moded channels. The specific horn design is a compromise driven by the straylight requirements, which are quite exacting, and the goal of optimal angular resolution on the CMB given the finite size of the telescope. To reduce the straylight contamination by the Galaxy, the total integrated spillover power at the

primary has to be minimised (see Section 2.2.4). Conservative values for acceptable spillover levels for the different frequency channels vary between 2% at 100GHz and 0.7% in the higher frequency bands. The physical parameters chosen for the horns (aperture diameters and axial lengths), with the corresponding edge taper and spillover levels, are listed in Table 3.2.

Frequency (GHz)   Spillover (%)   Edge taper (dB)   d (mm)   L (mm)   z (mm)   FWHM (arcmin)
100               2.0             -21               17.3     42       17       10.7
143               1.0             -25               13.8     37       15       8.0
217               0.7             -29               9.8      29       12       5.5
353               0.7             -27               8.2      25       13       5.0
545               0.7             -26               8.2      28       12       5.0
857               0.7             -26               8.2      25       12       5.0

Table 3.2: Parameters defining the horns: d is the horn-aperture diameter, L is the horn axial length and z is the distance of the phase centre behind the horn aperture.

The positions of the phase centres of the horns (i.e. the position of the telescope focal plane with respect to the horn aperture) are determined by optimising the on-axis gain of the telescope (i.e. the coupling to a point source). The phase centre positions for the different frequency channels are also listed in Table 3.2. A drawing of a prototype 100GHz horn assembly used for testing the proposed instrument performance is shown in Figure 3.4.
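As a cross-check, the aperture phase error s = d^2/(8*lambda*L) quoted above can be evaluated for the single moded channels from the dimensions in Table 3.2. This is a minimal sketch; the wavelength is simply taken at the band centre frequency.

```python
# Minimal sketch: aperture phase error s = d^2 / (8 * lambda * L) for the
# single-moded horns, using d and L from Table 3.2 (dimensions in mm).
C_MM_GHZ = 299792.458 / 1000.0  # speed of light, in mm * GHz

def phase_error(freq_ghz: float, d_mm: float, l_mm: float) -> float:
    wavelength_mm = C_MM_GHZ / freq_ghz
    return d_mm**2 / (8.0 * wavelength_mm * l_mm)

# (d, L) pairs for the single-moded channels, from Table 3.2
horns = {100: (17.3, 42.0), 143: (13.8, 37.0), 217: (9.8, 29.0)}
for nu, (d, l) in horns.items():
    s = phase_error(nu, d, l)
    # Expected to fall inside the 0.25-0.40 range quoted in the text
    print(f"{nu} GHz: s = {s:.2f}")
```

All three channels come out near s = 0.30, inside the 0.25-0.40 range given above.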

Figure 3.4: Schematic of the optical layout for a single HFI pixel with, at 0.1K (left), the bolometer, its horn and its filters; at 1.6K (centre), filters; and at 4K (right), filters and back-to-back horns.

Figure 3.5: Far field beam pattern for the HFI 143GHz feed.

The feed horns are part of a back-to-back dual horn structure connected via a waveguide which controls the number of modes that can propagate. For the 100, 143 and 217GHz channels the waveguide allows propagation of the fundamental mode only (one or both polarisations). A design suitable for these channels, in which there is a transition from a corrugated to a smooth wall within the flared section of the horns, and which has very low return loss, is shown in Figure 3.4. The corresponding far-field beam pattern is shown in Figure 3.5. For the higher frequencies (353GHz, 545GHz and 857GHz) the angular resolution requirement does not demand diffraction limited operation for the required spillover levels. Few-moded horns will therefore be utilised in these cases to increase the throughput and the coupling to a wider beam on the sky. Few-moded operation is obtained by increasing the waveguide diameter and allowing higher-order waveguide modes to propagate. Because of the wide bandwidth, the number of modes and, thus, the narrow-band beam pattern will vary across the full 25% bandwidths of the detectors; the integrated pattern, therefore, will

be carefully modelled, and measured, to ensure a well-understood design. The edge taper and spillover levels for the higher frequency channels (the channels most sensitive to edge taper and spillover levels) can be improved upon (i.e. reduced below 0.5% and -30dB, respectively) through the use of a reflecting baffle positioned around the primary. Refinement of this trade-off will be done in Phase B with no influence on the system requirements.

3.2.4 Wavelength selection, lters

The Planck submillimetre telescope is a simple off-axis Gregorian with a primary aperture of ~1.3 metres. This design ensures that there are no support structures in the beam, which could otherwise cause diffraction of the sky beam or radiate unwanted power to the detectors. The telescope will have a low emissivity (Red Book value = 1%, but expected to be < 0.5%) and will be passively cooled, together with its enclosure, to 60K or less to minimise the thermal power radiated to the HFI detectors. Optically, the telescope system is equivalent to a single parabolic mirror with an effective focal length of 1.8 metres, which focuses the sky radiation onto bolometric detectors located inside the HFI module. The rejection of the broadband emission from the sky and telescope requires a sequence of filters to guarantee the spectral purity of the final measurements. Currently the spectral bands are defined by the combination of a high pass waveguide cut-on between the front back-to-back horns and a low pass metal mesh filter cut-off. Because of the requirement to minimise harmonic leaks, we add four additional low pass edge filters such that the overall rejection exceeds 10^10 at higher frequencies. This scheme also allows some flexibility to choose where the unwanted thermal power is dumped (i.e. 4K, 1.6K or 100mK). The measured spectral performance for a prototype 143GHz band filter set is given in Figure 3.6. The characteristic is shown for each filter along with the overall system transmission. As can be seen, the overall filter transmission is about 55% while the rejection increases from 10^10 to 10^12, not accounting for the cut-off of the final filter. These characteristics fulfil the spectral purity requirement for the HFI channels.

Figure 3.6: Prototype 143GHz channel spectral response (transmission versus frequency in GHz, shown for each filter and for the overall system).
Thermal modelling of the power reaching the detector has shown that the band-edge defining low pass filter needs to be at 100mK to minimise the 4K and 1.6K shield/filter/horn emission that reaches the detector. This model also shows that the high pass edge needs to be at 1.6K to minimise the instrument 4K emission. For the longer wavelength channels, the high pass filtering is obtained by the waveguide between the back-to-back horn pair at 4K. In order to prevent the detectors from seeing a 4K blackbody over all frequencies up to the low pass edge cut-off, an additional grill type filter is put in the beam waist at 1.6K. An alternative solution would be to use another waveguide at the exit of the final detector condensing horn at 100mK. To minimise the thermal loading on the 100mK stage, the second low pass filter is placed at the exit of the back-to-back horn pair, where there is a beam waist and the incident radiation is at near normal angles to the filter. This ensures that most of the unwanted power (wavelengths below the lower band edge) is reflected at the 4K stage. Equally important, this location has been shown to eliminate the enhanced sidelobe response that occurs if the filters are placed in a converging beam in front of the feeds.

                              Band I           Band II          Band III
Band (GHz)                    100              143              217
Centre (cm-1)                 3.34             4.77             7.24
Band edge (GHz)               88-113           125-161          190-244
Band edge #5 (cm-1 / t mm)    2.9-3.8 / 6.0    4.2-5.4 / 4.2    6.3-8.1 / 2.8
WG / Grill (radius, mm)       WG 1.00          WG 0.70          WG 0.46
Blocker #1 (cm-1 / t mm)      5.8 / 3.02       6 / 2.92         12 / 1.46
Blocker #3 (cm-1 / t mm)      12 / 1.46        12 / 1.46        18 / 0.97
Blocker #2 (cm-1 / t mm)      18 / 0.97        18 / 0.97        39 / 0.45
Blocker #4 (cm-1 / t mm)      55 / 0.90        55 / 0.70        55 / 0.85
Total filter thickness (mm)   12.3             10               6.49

                              Band IV          Band V           Band VI
Band (GHz)                    353              545              857
Centre (cm-1)                 11.8             18.2             28.6
Band edge (GHz)               309-397          477-613          750-964
Band edge #5 (cm-1 / t mm)    10.3-13.2 / 1.7  15.9-20.5 / 1.1  25.0-32.2 / 0.7
WG / Grill (radius, mm)       WG 0.28          Grill 0.18/0.55  Grill 0.12/0.35
Blocker #1 (cm-1 / t mm)      18 / 0.97        25 / 0.70        39 / 0.45
Blocker #3 (cm-1 / t mm)      25 / 0.70        39 / 0.45        60 / 0.29
Blocker #2 (cm-1 / t mm)      39 / 0.45        60 / 0.29        100 / 0.35
Blocker #4 (cm-1 / t mm)      55 / 0.75        100 / 0.23       -
Total filter thickness (mm)   4.57             3.44             2.01

Table 3.3: Detail of the filtering scheme. (1) The filter number (e.g. #1) indicates the location of the filter in the instrument: #1: 4K stage; #2: 1.6K stage; #3: 1.6K stage; #4: 100mK stage; #5: 100mK stage. (2) The low frequency grill edge should be on the 1.6K stage for some channels (this eliminates the 4K loading on these pixels).

The metal mesh interference filters have been developed by QMW and used by QMW and others for many astronomical field instruments. They have been shown to be durable (embedded in polypropylene) and lightweight (self-supporting), and have been qualified for space use by NASA.
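The band centres in Table 3.3 are quoted in wavenumbers; the conversion between GHz and cm^-1 is a one-liner. This is a minimal sketch that only uses values already quoted in the table.

```python
# Convert band centre frequencies (GHz) to wavenumbers (cm^-1): k = f / c,
# with c = 29.9792458 GHz*cm (speed of light).
C_GHZ_CM = 29.9792458

def ghz_to_invcm(freq_ghz: float) -> float:
    return freq_ghz / C_GHZ_CM

# Band centres from Table 3.3, for comparison
for nu, k_table in [(100, 3.34), (143, 4.77), (217, 7.24),
                    (353, 11.8), (545, 18.2), (857, 28.6)]:
    k = ghz_to_invcm(nu)
    print(f"{nu} GHz -> {k:.2f} cm^-1 (table: {k_table})")
```

The computed wavenumbers reproduce the tabulated band centres to the quoted precision.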

3.3 Bolometric detectors

The HFI sensitivity requirements have been determined from the fundamental constraints of photon noise, originating from the 3K CMB radiation itself at the longer wavelengths and from the residual emission of the telescope and instrument at the shorter wavelengths. Specifically, the Planck HFI bolometric detectors require inherent NEPs less than or equal to the quadratic sum of the noises from the background components, together with a speed of response fast enough to preserve all of the signal information at the 1 rpm scan rate of the satellite. The current baseline detectors, which provide the required sensitivity and response speed, are CalTech/JPL spider bolometers. These requirements are summarised in Table 3.4 in terms of the required bolometer parameters, along with the results of measurements on a prototype device, CSK18, whose time constant and NEP are close to those needed

for the 100GHz channel.

Frequency   tau     Q      C (est.)   G        NEPbol     NEPphot    NEPtot/NEPphot
(GHz)       (msec)  (pW)   (pJ/K)     (pW/K)   (10^-17)   (10^-17)
100         4.6     0.43   0.46       56       0.82       1.01       1.29
143         3.2     0.46   0.39       68       0.90       1.24       1.24
217         2.1     0.47   0.34       90       1.04       1.49       1.22
353         2.0     1.12   0.36       110      1.16       2.88       1.08
857         2.0     12.0   0.58       1200     3.80       14.6       1.03
143p        3.2     0.23   0.39       68       0.90       0.88       1.43
217p        2.1     0.24   0.34       90       1.04       1.05       1.41
545p        2.0     1.87   0.41       190      1.51       4.66       1.05
Measured performance:
CSK18       4.5     -      0.45       56       0.82       -          -

Table 3.4: Requirements on the detectors, with the following assumptions: (1) the satellite has a 1 rpm scan rate and we take 2 time constants (2*tau) per beam; (2) the inherent detector noise is <= 10nV/sqrt(Hz) and the amplifier noise <= 5nV/sqrt(Hz); (3) NEPbol = (NEPphonon^2 + NEPJohnson^2 + NEPamp^2)^(1/2); (4) NEPtot = (NEPbol^2 + NEPphoton^2)^(1/2); (5) the bolometer thermal conductance G = MAX(100*Q, 0.56*C/tau) pW/K.

As is evidenced by careful inspection of Table 3.4, the silicon nitride micromesh ("spider web") bolometer technology developed for Planck will provide background limited performance in all bands. The radiation is efficiently absorbed in a conducting film deposited on a micromesh absorber, which is thermally isolated by radial legs of uncoated silicon nitride that provide rigid mechanical support with excellent thermal isolation (see Figure 3.7 for details). The temperature of the absorber is measured by a small neutron transmutation doped (NTD) Ge thermistor that is indium bump-bonded to the absorber and read out via thin film leads photolithographed on two of the radial legs. Compared to a solid absorber, the micromesh has a geometric filling factor of approximately 1.5%, providing a correspondingly small suspended mass, absorber heat capacity, and cosmic-ray cross-section. The lithographic techniques used to fabricate the detectors ensure high reliability and reproducibility.

Figure 3.7: Prototype spider bolometer CSK18. The active absorber diameter, the outer spider circle, is 5.675mm. The inset shows the NTD Ge sensor at the centre, with the two thicker current carrying and thermal conductance control lines running out horizontally to electrical contacts on the silicon substrate.

Micromesh bolometers are currently being used in numerous CMB experiments (BOOMERANG, MAXIMA, SuZIE, PRONAOS) which operate under optical loading and detector sensitivity requirements similar to those needed here. The Planck HFI bolometers will be optimised for the throughput, wavelength, background load and time constant requirement of each band. For all of the Planck bands, the optimum thermal conductivity between the absorber and the heat sink, and thus the NEP ~ (4kT^2 G)^(1/2), is determined by the time constant requirement (conservatively taken to be less than half the beam crossing time) and the heat capacity. The micromesh architecture allows the thermal conductivity to be easily tailored to the optimum value for each band. Sensitivity is thus limited ultimately by the heat capacity of the device. In practice, the polarised channels at 143 and 217GHz, where the backgrounds are lowest, provide the most demanding requirement, but the data from the CSK18 prototype show that even these performances can be met. We estimate the heat capacity by calculating the contribution of the component materials used in constructing the bolometers, derived from a combination of direct measurements and conservative upper limits. Because the geometry of the detector depends on the throughput and wavelength, the estimated heat capacity varies slightly between channels. The estimated bolometer NEP is compared with the background-limited NEP in Table 3.4. For most channels, the bolometer NEP is significantly below the background limit. In the worst case, for the 143 and 217GHz polarised channels, the detector and background-limited NEPs are equal. The last column in Table 3.4 compares the total NEP to that of an ideal (background-limited) detector system. The estimates given in Table 3.4 have been confirmed by fabricating a prototype bolometer (CSK18) that completely satisfies the requirements of the 100GHz channel.
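The thermal conductance rule and the NEP quadrature sum quoted with Table 3.4 can be reproduced directly. This is a minimal sketch assuming the table's units (Q in pW, C in pJ/K, tau in ms, NEPs in 10^-17 W/sqrt(Hz)).

```python
import math

# Sketch of the sizing rules quoted in the notes to Table 3.4 (units as in the table).
def conductance_pw_per_k(q_pw: float, c_pj_per_k: float, tau_ms: float) -> float:
    """G = MAX(100*Q, 0.56*C/tau), with tau converted to seconds in the second term."""
    return max(100.0 * q_pw, 0.56 * c_pj_per_k / (tau_ms * 1e-3))

def nep_total(nep_bol: float, nep_phot: float) -> float:
    """Quadratic sum: NEPtot = (NEPbol^2 + NEPphot^2)^(1/2)."""
    return math.hypot(nep_bol, nep_phot)

# 100 GHz channel values from Table 3.4: Q=0.43pW, C=0.46pJ/K, tau=4.6ms
g = conductance_pw_per_k(q_pw=0.43, c_pj_per_k=0.46, tau_ms=4.6)
ratio = nep_total(0.82, 1.01) / 1.01
print(f"G = {g:.0f} pW/K, NEPtot/NEPphot = {ratio:.2f}")  # table gives 56 pW/K, 1.29
```

For the 100GHz channel the heat-capacity term dominates (0.56*C/tau = 56 pW/K > 100*Q = 43 pW/K), while for the 857GHz channel the background term dominates, reproducing the tabulated G values.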
Despite the fact that this detector has a larger area than that required for the higher frequency channels, the achieved heat capacity of 0.45pJ/K comes close to meeting the specification for all channels, and is below the estimated heat capacity of 0.56pJ/K for this device. Because the resonant frequency of the micromesh bolometers is high (~50kHz), the devices are insensitive to the relatively low frequency vibrations encountered during launch and operation. We have undertaken a program to test the performance of the bolometers under the vibration levels anticipated from the mechanical coolers on Planck. We have tested a bolometer with comparable sensitivity to a Planck detector under vibrations 10 times in excess of the expected level. Although further testing is required (using the flight electronics and wiring, and active temperature control), no degradation in the noise performance has been observed under vibration. We have also successfully subjected bolometers to the multiple random and sine-sweep vibration tests used to qualify components in the US sounding rocket program.

3.4 Electronics

3.4.1 General Electronics

General electronics architecture Figure 3.8 shows the general electronics architecture of the HFI. In order to reduce electrical coupling and to lead to autonomous sub-assemblies, each equipment will include, when possible, its own DC/DC converter powered by 28V current limited and switched power lines. The electrical links will be implemented as shown in Figure 3.9.


Figure 3.8: General Electronics Architecture

The Data Processing Unit (DPU) Its functions are the following:
- driving the instrument: start, cooling, active, suspend, stop, abort, routine operations, watch, tests, optimisations;
- compressing the data to fit within the telemetry allocation (see Section 3.4.3);
- production of the science and housekeeping telemetry, using the data provided by the various equipments (bolometer readout, cryogenics, and temperature sensors);
- software uploading and downloading.
It will be built around the same type of microprocessor as the one used for the other FIRST and Planck instruments. It is linked to the spacecraft through the ESA standard communication protocol, and to the other HFI sub-systems through specific protocols. The science flow will contain, in routine operation, all the information necessary for data reduction: bolometers, dark bolometers, thermometers, etc. In test mode operation it will contain data sets on selected channels, in a more verbose format. The housekeeping flow will contain all context parameters, regularly transmitted at low rates.


Figure 3.9: Electrical links

The Electrical Ground Support Equipment It gives the ground users access to the DPU through the industrial equipment in communication with the spacecraft. Its role is to code and send appropriate sequences to be transmitted to the DPU, and to receive and decode the data flows built by the DPU. It drives all procedures available on board, in order to carry out the following operations:
- instrument level testing
- module and system level testing
- in-orbit instrument commissioning
- performance verification phase
- routine operation

3.4.2 Readout electronics for 0.1K bolometers and thermometers

The readout electronics of the bolometers and of the 0.1K thermometers is based on a system able to give a total power readout over the frequency range needed for Planck, i.e. 0.016Hz to about 100Hz (Gaertner et al., 1997). This system uses a differential AC square bias current and has a uniform noise performance, < 5nV/sqrt(Hz), i.e. less than the Johnson noise of the bolometer, over most of the useful frequency range. This system allows full control of the current and voltage of the measured resistor, so that in-flight optimisation of the resistor impedances will be possible: V(I) measurement, and (S/N)(I) on the CMB dipole and the Galaxy. Two dedicated heaters will allow the temperature control of the 0.1K stage to be performed. The whole subsystem architecture is shown in Figure 3.10. Each bolometer/thermistor is handled by its own modulation/amplifier circuit. Groups of 6 such modules are digitally interfaced to a programmable circuit (FPGA) which controls the modulation/amplifier parameters and performs the demodulation of the AC signal (transient elimination and data averaging). A main processor (with cold redundancy) will handle 11 such blocks, for a total of 66 measurement chains: 48 bolometers, 4 dark bolometers, 2 test capacitances (to allow system tests at room temperature), 10 thermometers and 2 heaters. Similar systems have been successfully used on ground based (Desert, Giard, & Benoit, 1997) and space borne experiments (Murakami et al., 1996).

Description With this system (Figure 3.10) the bolometer (or thermistor) bias current is an alternating square wave (frequency fmod) fed through two symmetrical capacitive loads. The voltage difference at the bolometer ends is read by a differential chain which allows a good rejection of EMI thanks to its symmetry. The full amplitude of the bolometer voltage is amplified, but only the difference with respect to a reference signal (V) is digitised, to reduce the dynamic range required of the AD converter (14 bits are enough to code the strong sources against the photon plus detector noise at the digitising frequency). A compensation of the transient signals induced by the steps of the square wave is performed directly by the addition of a square voltage of variable amplitude to the triangular bias of the capacitive loads (this is turned into an "anti-transient" current by the load). The 3 modulation parameters, current (I), compensation voltage (V) and anti-transient (T), are controlled by 3 digital-to-analog 12-bit converters. The signal is amplified with a programmable gain and is passed through a 2nd order RC filter (30dB attenuation at fdigit/2) before AD conversion. The digitising frequency is chosen fast enough to allow a correct elimination of the residual transients in the subsequent digital processing (fdigit = 64 fmod). As neither end of the bolometer is connected to the ground, discharging to the ground will be possible on request via two FETs connected to each end of the bolometer.

Figure 3.10: Architecture of the bolometer / 0.1K thermistor readout electronics

Digital pre-processing functionalities They are:
- phase synchronous detection;
- blanking of the transient signal at the beginning of each half modulation period (Nblank);
- summation of the signal over each half period;
- delivery of the signals to the main processor: the scientific data flow at 2 fmod, and the housekeeping flow (one fully sampled modulation period of the signal, plus the parameters of the modulation/amplifier electronics, at the rate of the general housekeeping format).
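The blanking-and-summation steps above can be sketched in a few lines. This is an illustration only: the blanking length `n_blank` and the sample layout are assumptions of this sketch, not flight values.

```python
import numpy as np

# Sketch of the per-period digital demodulation: with fdigit = 64*fmod there
# are 32 samples per half period; the first n_blank samples of each half are
# blanked (transient elimination) and the rest are averaged with alternating
# sign, giving a phase-synchronous estimate of the bolometer voltage.
def demodulate_period(samples: np.ndarray, n_blank: int = 4) -> float:
    """samples: 64 ADC values covering one full modulation period."""
    half = len(samples) // 2
    pos = samples[:half][n_blank:].mean()   # first half of the square wave
    neg = samples[half:][n_blank:].mean()   # second half, opposite polarity
    return 0.5 * (pos - neg)                # phase-synchronous estimate

# A noiseless square-wave period of amplitude 1 demodulates to 1.
period = np.concatenate([np.ones(32), -np.ones(32)])
print(demodulate_period(period))  # -> 1.0
```

Any transient confined to the first `n_blank` samples of each half period drops out of the estimate entirely, which is the point of the blanking step.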

Functions of the Readout Control and Interface Unit Routine operations:
- decodes dedicated DPU commands and controls the variable parameters of the modulation/amplifier card and of the digital pre-processing: fmod, I, V, T, BOLreset, Gamp, Ndigit, Nblank;
- collects the data from all bolometers and sends them to the DPU;
- decodes from the DPU commands the parameters of the PID 0.1K temperature control; computes the PID commands from the thermometer measurements and adjusts the heater current within a time delay compatible with the thermometer time response.
Optimisation of the bolometer working point (beginning of mission, and occasionally on request):
- for all bolometers, measures V(I) by incrementing I in a given range and finding for each I the value of V which maximises the output signal (electronic optimisation); this optimisation is done at specific times through an open loop with the ground; the time delay between two successive values of I is adjustable so that at least one complete sky scan is measured, which allows an optimisation on the sky signal;
- bridge balance: finds for each bolometer the V value that gives a null output signal.

Development status Our system is derived from a single ended (non differential) readout implemented on the ground based "Diabolo" photometer, which has proven its ability to be used on the sky under operational conditions. An engineering model of the space qualifiable electronics has been built and fully tested, both with single ended and differential readouts, on 0.1K bolometers cooled with open cycle dilution (in Grenoble and at Caltech). The noise level of the modulation signals (triangle and square) has been measured and is below 3nV/sqrt(Hz) over the useful frequency range, 0.01Hz to 200Hz. The input noise of the amplifying chain, tested independently, is below 2nV/sqrt(Hz) over the same range. This is to be compared to the Johnson noise of the bolometer itself: 5nV/sqrt(Hz) for a 10MOhm resistor at 0.1K. Finally, we show in Figures 3.11 and 3.12 the noise spectra obtained with the differential system for a test resistor at ambient temperature and for a prototype Planck bolometer from Bock et al. mounted in the 0.1K Caltech "Yogi" test bed with temperature regulation. In both cases the measured noise is close to the level expected from the Johnson noise (Bhatia et al., 1998).

Figure 3.11: Noise power spectrum of the readout electronics on a 3kOhm test resistor at ambient temperature (same Johnson noise as a 10MOhm resistor at 100mK).

Figure 3.12: Noise power spectrum of a prototype Planck bolometer in the Caltech "Yogi" test bed, equipped with a 0.1K open cycle dilution cooler and temperature control from our readout.

JFET Box Field Effect Transistors are needed as a first stage of amplification and impedance matching for the HFI bolometer signals. To minimise RF pickup and microphonics, the FETs should be mounted as close as possible to the bolometers. However, silicon JFETs do not operate below about 100K, while low temperature GaAs devices are still too noisy to be considered an option. So the FETs will be maintained at T > 100K. We plan to use the NJ132L process to build matched pairs of FETs. This process has been used in several cryogenic preamplifiers for bolometers, and features low noise (1.1nV/sqrt(Hz) at 10Hz, 5mA ID) and low input capacitance (15pF at VDS=10V, VGS=0V). Custom devices will be produced by gluing FET dies on a ceramic support in a standard multipin metal case. Physically, the FET box is a gold-plated copper cavity sharing the same RF enclosure as the bolometers. For RF immunity reasons we cannot afford to have the high impedance bolometer signals outside the RF shield. The FETs will lower the impedance of the bolometer signals, so that only low impedance signals exit the bolometer cavity, through suitable RF filters. The NJ132L FETs are mounted on a card suspended in a 50/60K copper cavity. The same card includes the load resistors for the FETs and the bias capacitors for the bolometers. The support structure is a strong stainless steel tube. This tube carries away by conduction most of the power generated by the FETs, conducting it to the 50/60K shield, which features significant refrigeration power. The support structure design is optimised for high stiffness and a high mechanical resonance frequency. The FET card is self-heated to 120K by the power dissipated in the FETs (about 90mW).

Figure 3.13: The 4K box and its extension to the JFET box attached on the 50/60K stage form a Faraday cage protecting the detectors and the wiring against external radiation.
Two Si diode thermometers and a set of metal film heaters mounted on the card are provided for temperature monitoring and control. 132 additional FETs are mounted on the card for resetting the charge accumulated on the readout FET gates; they do not dissipate any significant power in normal operation. A thin wall stainless steel tube extends the bolometer RF cavity from 4K to 20K, and another extends the RF cavity to 50/60K (see Figures 3.13 and 3.14). Twisted pairs made with 50 micron diameter manganin wires form a cable connecting the 4K input signal connectors (3 micro-D 51 pin connectors) to the 120K FET card. The cable is heat-sunk at 4, 20, 50/60 and 120K. The total length of these wires is 25cm, resulting in a capacitance of about 12pF. Twisted pairs of the same kind connect the FETs to the output connectors on the 50/60K flange. The signals go through eccosorb RF filters placed below the output connectors. These filters separate the RF-clean bolometer cavity from the RF "dirty" environment outside the FET box.

The heat load from the FETs (90mW) and from the heater of the temperature controller (10mW) is carried to the 50/60K stage through the stainless steel support tube (70mW) and the manganin cables (15mW). The heat load on the 20K stage through the input manganin cable is 1.6mW. An additional heat load on the 20K stage is due to the stainless steel tube extending the RF cavity to the 50/60K stage (12mW). The heat load from the 20K stage on the 4K stage through the stainless steel tube is 2.5mW.

3.4.3 Science Data

Time constants and operating frequencies

Figure 3.14: Schematic implementation of the FET box of the readout electronics on the 50/60K shield.

The sampling of the signal must comply with the Nyquist criterion, and thus depends on the width of the beams. Each sample taken by the readout electronics is the average of at least one full period of the bias voltage, but it can be taken twice per period. In addition, it is desirable to phase-lock the sampling with the compressors of the 4K cooler, in order to avoid interferences in the science signal. This leaves only little margin for the choice of the working frequencies: the bias frequency fmod must be in the 80-100Hz range, and the sampling frequency has to be equal to fmod for the 100GHz and 143GHz channels, and to 2 fmod for all the other ones. Since the readout electronics averages the signal over a full period, it acts as a low pass filter for signal and noise. The angular resolution along the scanning direction will be degraded by less than 10% for all channels. Another source of attenuation is the electrical time constant formed by the bolometer resistance with the parasitic capacitance of the wires and JFETs (about 100pF). Requiring that this time constant stays below about 10% of the bias period, the impedance of the bolometers must be less than 20MOhm for the 100GHz and 143GHz channels, and 10MOhm for the other ones.
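The scale of these impedance limits can be checked against the RC corner frequency. This is a minimal sketch: the single-pole model and the placement of the corner relative to the bias frequency are assumptions of this illustration, not the document's exact criterion.

```python
import math

# Single-pole RC model: bolometer resistance R against the ~100pF parasitic
# capacitance of the wiring and JFET gates quoted in the text.
def corner_frequency_hz(r_ohm: float, c_farad: float = 100e-12) -> float:
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# At the quoted impedance limits the corner sits near the relevant sampling rate:
print(f"20 MOhm -> {corner_frequency_hz(20e6):.0f} Hz")  # ~80 Hz, near fmod
print(f"10 MOhm -> {corner_frequency_hz(10e6):.0f} Hz")  # ~159 Hz, near 2*fmod
```

With fmod in the 80-100Hz range, the 20MOhm limit places the RC corner near fmod (for the fmod-sampled 100 and 143GHz channels), and the 10MOhm limit places it near 2 fmod for the faster-sampled channels, consistent with the factor of two between the two limits.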

Data rates, on-board data compression A breakdown of the science data rate is given in Table 3.5. The uncompressed data rate is 138.5 kbit/s. The maximum data rate for the HFI telemetry is 34.2 kbit/s, including all possible overheads and margins, which leaves about 29.4 kbit/s for compressed data. The compression factor needed is hence higher than 4.7. This compression ratio seems feasible without significant loss of information because of the signal properties. Most of the time, the signal variations are dominated by the Gaussian noise. The sky structures drive the signal dynamics but affect less than 10% of the samples. The glitches due to cosmic rays should be even less frequent in the data. As a first approach, a specific real time algorithm has been developed for the HFI data compression. It processes sequentially short bunches of (typically a few hundred) consecutive time samples for each channel. It requires a low buffer memory and its compression performance is somewhat tuneable depending on the real need. This algorithm gives compression ratios of 4.4 and 5.5 on white noise at the expense of adding respectively 2% and 4% of noise, which is typically accepted in analog to digital conversion systems due to the quantisation of the continuous signals. The algorithm is robust against data corruption since it operates separately on small data bunches. Such an algorithm, with this type of signal dominated by noise, gives the requested compression ratios. However, its extensive testing by simulating a large variety of situations still has to be carried out, as well as its comparison with other potential candidates.

Channels                   Number   Frequency (Hz)   Measures per second   Bits/sec (16 bit measures)
100 GHz                    4        90               360                   5760
143 GHz                    12       90               1080                  17280
217 GHz                    12       180              2160                  34560
353 GHz                    6        180              1080                  17280
545 GHz                    8        180              1440                  23040
857 GHz                    6        180              1080                  17280
EMCs                       6        180              1080                  17280
0.1K temperatures          10       10               100                   1600
High frequency measures    1        64               64                    1024
TOTAL                                                                      135104
Housekeeping                                                               3420
Grand TOTAL                                                                138524

Table 3.5: Data rate breakdown
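The totals in Table 3.5 and the required compression factor follow directly from the per-channel rates. This is a minimal sketch using only the numbers in the table.

```python
# Recompute the Table 3.5 science data rate: channels * sampling rate * 16 bits.
channels = {              # name: (number of channels, sampling frequency in Hz)
    "100 GHz": (4, 90),   "143 GHz": (12, 90),  "217 GHz": (12, 180),
    "353 GHz": (6, 180),  "545 GHz": (8, 180),  "857 GHz": (6, 180),
    "EMCs": (6, 180),     "0.1K temperatures": (10, 10),
    "High frequency measures": (1, 64),
}

science = sum(n * f * 16 for n, f in channels.values())
total = science + 3420                  # add the housekeeping flow
ratio = total / 29400                   # telemetry left for compressed data (bit/s)
print(science, total, round(ratio, 2))  # -> 135104 138524 4.71
```

This reproduces the 135104 bit/s science total, the 138524 bit/s grand total, and the required compression factor of about 4.7 quoted above.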

3.5 Cryogenics

3.5.1 Introduction

The sensitivity of the HFI critically depends on the temperature of the detectors. The cooling scheme that allows the 48 bolometers and their filters to be cooled to 0.1K is based on technical solutions that have been successfully tested in flight or demonstrated in ground applications, and are being qualified for space. Each cooling system takes advantage of the previous one in an optimal way, following the arrangement of Figure 3.15. The precooling of the instrument to 50/60K is ensured by the Planck Payload Module thanks to passive radiation to free space, which can be very efficient in the environment of the Earth-Sun Lagrangian L2 orbit of Planck. A closed cycle cooler using Joule-Thomson (J-T) expansion of hydrogen and sorption compressors ensures the cooling of both the LFI and the HFI at about 20K. The J-T valve delivers a mixture of liquid and gas at about 17.5K. A first high efficiency heat exchanger cools to 18K the helium flows of the 4K and 0.1K stages. This heat exchanger is thermally decoupled from the one used to cool the LFI 20K plate, for which a larger temperature drop in the exchanger is acceptable. The shielded cable from the 4K box to the JFET box (see Section 3.4.2) is thermally attached to the 20K stage in order to reduce the heat loads on the 4K stage. Joule-Thomson expansion of helium compressed by mechanical compressors is used to cool the 4K stage of the HFI. That stage supports the back-to-back horns that ensure the optical coupling of the detectors with the telescope. Lower temperatures are obtained at 0.1K by dilution of 3He in 4He. A 1.6K stage is generated by J-T expansion of the mixed helium. This stage supports filters and intercepts heat from the 4K stage. The 0.1K stage supports the bolometers, thermometers, heaters, and filters. Its temperature is controlled by a closed-loop active system. The additional cooling power available from the mixture of 3He and 4He below 1.6K is used to intercept heat inputs along the mechanical support of the 0.1K stage. The tubes from and to each stage are attached together to form heat exchangers for all circulating fluids, in order to minimise thermal losses.

Figure 3.15: Scheme of the cooling system.

3.5.2 The 0.1K cooler

Overview

The 0.1K cooler uses the dilution of 3He in 4He, which is widely used in the laboratory to obtain temperatures down to 10 mK. Alain Benoit and Serge Pujol have invented an original feature (Benoit, 1997; A. Benoit and S. Pujol, Patent 8801232, Paris, 1988) that makes it possible to use this principle in a micro-gravity environment. Mixing 3He and 4He in small capillary tubes in an open cycle has proven to work in any gravity condition testable on the ground. An instrument dedicated to astronomy (Desert, Giard, & Benoit, 1997), using this method to cool bolometers at 0.1K, has been developed and is currently operated on large telescopes. In addition, a technical research activity is funded by CNES and ESA in order to qualify a demonstrator of the cooling chain to be used in the HFI. The 0.1K stage of this demonstrator has been successfully tested to the specified vibration levels. The performance of the HFI cooler has been estimated by a proper scaling of the demonstrator's one.

Figure 3.16: View of the low temperature part of the demonstration model of the 0.1K cooler.

The 0.1K cooler includes a low temperature part and a warm part, where the gas is stored and its flow controlled. They are described in detail hereafter.

Low temperature part (4K to 0.1K)


Description of the cooler:

The 0.1K stage: The 0.1K stage supports the 48 bolometers with their horns and attached filters, 4 blind bolometers, thermometers, heaters and the dilution heat exchanger. It is designed for minimum mass while insuring a proper positioning of the optical components. In consequence, its thermal conduction is not high enough to insure proper thermalisation of the bolometers. Thus, the low temperature heat exchanger (typical dimensions 3 x 3 x 4cm) is mounted on this plate and connected through thermal links to four different bolometer blocks. Three thermometers (with the same readout as the bolometers) and two heaters are mounted on this heat exchanger.

Figure 3.17: 0.1K stage diagram

The heat exchanger and electric cables between 0.1K and 1.6K: Due to the large temperature ratio between these two stages (a factor 16), direct connections for electric cables, tubing and mechanical support between them would lead to excessive heat loads by conduction. The electric wires are soldered together with the dilution tubes. This exchanger wraps twice around the 0.1K stage. The kevlar cords supporting the 0.1K stage are used to maintain this exchanger, and each cord is thermalised at two intermediate points between 0.1K and 1.6K. The excess cooling power of the dilute mixture as it warms up is used to intercept heat in these cords. The diameters of the dilution tubes vary from 40μm near the 1.6K stage to 200μm at the 0.1K stage.

The 1.6K stage and Joule-Thomson expansion: At the 1.6K stage, each of the input tubes is thermalised with sintered silver inside a small copper block (2cm3). The output goes through a small capillary (13μm inside diameter and 5cm length) in which the J-T expansion is achieved, and then into a copper block with sintered copper (8cm3) in order to get a good thermalisation of the vapours.

The heat exchanger between 1.6K and 4K: The inside diameter of the exit tube from the copper block at 1.6K is 2.5mm and its length is less than 40cm. The inside diameter of the input tubes feeding pure 3He and 4He is 300μm for 3/4 of the length and 60μm for the last 1/4 near the 1.6K stage. They are soldered along the outside of the output tube to achieve a good thermalisation of the incoming gas by the leaving one. The electric wires can be fixed to this heat exchanger for commodity, as in the "demonstrator", but that is not a requirement for thermal efficiency.

Flow rates: The flow rate can be estimated from the results obtained on the "demonstrator" model that has been successfully tested and vibrated. Since the 0.1K stage will have a mass of 2.5kg, as compared to the 0.7kg of the demonstrator, heat loads from the mechanical supports increase, while the identical number of wires in the two configurations guarantees similar heat loads from the wiring. The performances of the demonstrator are: a base temperature of 93mK and a heat lift of 50nW (in addition to conductive loads) for a flow rate of 2μmole/s of 3He and 8μmole/s of 4He. By extrapolating the experimental data from the demonstrator performances, the estimated need for the HFI is 4μmole/s of 3He and 20μmole/s of 4He. A model of the HFI will be implemented and operated in the early stage of the instrument development to validate this extrapolation. With the expected flows, the available cooling power at the 1.6K stage is about 800μW.
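The step from the demonstrator's 50nW lift to the HFI need can be pictured with a deliberately naive proportional scaling of the heat lift with the 3He flow rate. This is only a back-of-the-envelope sketch; the real extrapolation uses the full experimental data, and the dedicated HFI model is precisely what will validate it.

```python
# Demonstrator operating point (quoted in the text)
demo = {"n3_umol_s": 2.0, "n4_umol_s": 8.0, "lift_nW": 50.0, "base_mK": 93.0}

hfi_n3 = 4.0   # μmol/s of 3He foreseen for the HFI
# Naive assumption: heat lift roughly proportional to the 3He flow rate
hfi_lift_nW = demo["lift_nW"] * hfi_n3 / demo["n3_umol_s"]   # → 100 nW
```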

Gas supply and dilution control

Tubing between 300K and the low temperature part: Assuming heat exchangers between

input and output tubes with an efficiency of 10%, n = 24μmole/s, and C = 20J/mole/K, the thermal loads from the fluids and the tubing are 0.67mW, 2mW, and 11.5mW at 4, 18, and 60K respectively, without connection of the tubes with stages warmer than 60K.

Bores of the tubes   300K to 60K   60K to 4K
Inlet tube           1mm           0.5mm
Exit tube            3mm           2.5mm
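The quoted loads follow from the enthalpy flow n·C·ΔT degraded by the 10% exchanger inefficiency, with each stage precooled by the one above it (300K to 60K, 60K to 18K, 18K to 4K). A sketch of that arithmetic:

```python
INEFFICIENCY = 0.10   # heat exchangers recover 90% of the enthalpy
N_FLOW = 24e-6        # total circulating He flow, mol/s
C_P = 20.0            # J/mol/K

def tube_load_mw(t_stage, t_previous):
    """Residual heat carried to a stage by the circulating gas, in mW."""
    return INEFFICIENCY * N_FLOW * C_P * (t_previous - t_stage) * 1e3

load_4k = tube_load_mw(4.0, 18.0)     # ≈ 0.67 mW
load_18k = tube_load_mw(18.0, 60.0)   # ≈ 2.0 mW
load_60k = tube_load_mw(60.0, 300.0)  # ≈ 11.5 mW
```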

In order to avoid blocking by contaminants, the incoming 3He and 4He go through a set of traps and filters: a hydrogen getter at 300K, a charcoal trap at 60K and sintered metal filters at 18K and 4K. In addition, small heaters are installed on the tubes at the entrance of each trap or filter so as to be able to heat the input tube for short periods of time (2 minutes) to the temperature of the preceding stage, and therefore to remove any impurity condensed inside the tube. The typical volume of each trap or filter is 5cm3. In order to facilitate the integration, it is planned to use tube connectors with metallic gaskets between the different stages. There will be three connectors, located at 4, 50/60 and 300K.

Gas storage and flow control: The gas is stored in high pressure containers and a pressure regulator maintains a constant pressure of 20 bars at the entrance of the flow control unit. Solenoid valves are used to feed the cryostat through different flow limitators. Each of them has a given impedance, and the flow is adjusted using a given choice of open and closed valves. With 4 flow restrictors in the ratio 1:2:4:8, the different flows available with a constant 20 bars input pressure will be, for each isotope (unit is μmole/s):

3He: 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0, 4.4, 4.8, 5.2, 5.6, 6.0
4He: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30

                                    3He         4He
total volume of gas (litre TPN)     4876 l      21600 l
total mass of gas                   0.78 kg     3.46 kg
total volume of container needed    20.6 l      91 l
volume of each container            35.5 l      3 x 35.5 l
mass of container                   7.8 kg      23.4 kg
usable helium gas (litre TPN)       8400 l      25200 l
mass of helium gas                  1.5 kg      4.5 kg
lifetime at nominal flow            32 months   21 months
total mass (gas + container)        9.3 kg      27.9 kg

Table 3.6: Helium storage quantities

The real flow depends on the input pressure and the pressure at the exit of the flow controller. The parameters of the valves and flow restrictors are chosen to accept large fluctuations of the pressure drop in the cryostat tubing, from the baseline value of 3 bars up to 11.5 bars, while maintaining the
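The 15 discrete flows per isotope come from opening subsets of the four binary-weighted restrictors; opening a subset of valves sums their conductances, giving every multiple of the unit flow from 1 to 15. A quick enumeration (illustrative):

```python
def available_flows(unit):
    """Flows from four restrictors in ratio 1:2:4:8 behind on/off valves.
    The integer k encodes which valves are open (any value 1..15)."""
    return [round(unit * k, 1) for k in range(1, 16)]

he3_flows = available_flows(0.4)   # μmole/s of 3He
he4_flows = available_flows(2.0)   # μmole/s of 4He
```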

flow of gas with an accuracy of about 10%. The current design is based on the use of flight qualified hardware:

• Storage container: The flow requirements are 2μmole/s of 3He and 4He during ground operation and part of the transfer orbit, and 4μmole/s of 3He and 20μmole/s of 4He during 18 months, in order to include margin and operation delays. The storage tank temperature is 300K with a pressure of 300 bars (42.31 g/litre), and the residual pressure at the end of the mission will be 20 bars. The available gas is then 217 litre TPN per 1 litre of container. Standard containers built by Aerospatiale are convenient for the HFI. The main quantities related to storage containers are given in Table 3.6. The total mass of the flow control system is 8kg in the baseline, and 10.8kg including redundant items. The choice between the options will be made after analysis of failure modes. The parts used in this system are available in flight qualified versions.
• Pressure regulator: A standard pressure regulator from Aerospatiale with a mass of 1.15kg can be used to give a constant pressure of 20 bars to feed the flow control unit.
• Solenoid valves: Standard valves from Aerospatiale with a mass of 56g each (maximum pressure 60 bars). The power needed for actuating the valves is 5 W at 20°C.
• Flow limitators: They are made of sintered stainless steel inside a small tube that also acts as a filter for solid particles. Although their space qualification will be straightforward, it has still to be done.
• Pressure gauges: The pressure gauges sold by SEP cover all the needed range of pressures, with a mass of 70g and a power consumption of 110mW.
• Hydrogen getter: Has to be developed for space applications. Operations under vacuum have already been tested.
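The "217 litre TPN per litre of container" figure can be cross-checked from the quoted fill density (42.31 g/l at 300 bars) minus the residual gas left at 20 bars. This sketch approximates the residual with the ideal gas law and uses the 4He molar mass for both isotopes, so it is only an order-of-magnitude check:

```python
M_HE = 4.0026          # g/mol (both isotopes approximated by 4He here)
V_MOLAR_TPN = 22.414   # l/mol at TPN
R = 8.314              # J/mol/K

fill_density = 42.31                          # g/l at 300 bars, 300K (quoted)
residual = 20e5 * 1e-3 / (R * 300) * M_HE     # g/l left at 20 bars (ideal gas)
usable_tpn_per_litre = (fill_density - residual) / M_HE * V_MOLAR_TPN
```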

Temperature stability

The temperature of the most sensitive bolometers fluctuates by 200 nK/√Hz, or about 2 μK rms, due to various sources of noise. These bolometers are also sensitive to small changes of the heat sink temperature at 100mK. Such fluctuations are avoided thanks to an active control of the 0.1K heat exchanger. The thermometers are NTD germanium crystals similar to those of the bolometers, with a larger volume and maximum thermal connection with the 0.1K stage. Data from three such thermometers are used to measure the temperature and get rid of electrical spikes induced by cosmic rays. The thermometer readout electronics are similar to those of the bolometers, and the command to two redundant heaters is driven by systems similar to that delivering the bias voltage to the bolometers. A simplified version of this system has been tested on the Yogi bench and allowed to obtain the noise spectra shown in Figure 3.12. This first test gave results good enough to validate these principles.

Effect of particles

Ionising particles are expected to heat the mechanical support of the bolometers. A preliminary modelling shows that the mean power deposited by particles is about 6nW, half of which is deposited by high energy particles that will be individually detected at a rate of a few per minute. A proper thermal architecture is needed to prevent such transient heating from propagating in the focal plane unit. The current solution is to separate the bolometers into four thermally linked groups. It will be refined in further studies.

3.5.3 The 4K Mechanical Cooler

Introduction

A 4K stage is required for cooling the dilution refrigerator and the focal plane unit. This will be provided by a 4He Joule-Thomson (J-T) system. This system has been developed at the Rutherford Appleton Laboratory (Orlowska, Bradshaw, & Hieatt, 1995) and the technology has been transferred to industry: Matra Marconi Space Systems Ltd, who have been working with ESA on the space qualification of the units (Scull et al., 1996). A modified version of this system will be used on Planck. Pre-cooling of the system to 18K will be provided by the hydrogen sorption cooler. Additional cooling at an intermediate temperature is also required.

Description of the cooler

Figure 3.18: The major components of the 4K system

The main components of the 4K system for Planck are:
• The J-T compressors, which give two stages of compression from about 1 bar to 10 bar.
• Ancillary plumbing, which contains part of the gas purification system, a filter, a flow meter and a valve for the J-T by-pass system.
• The cold temperature plumbing, which incorporates countercurrent heat exchangers between the stages, and filters and gas purifiers on the stages. The 4K stage contains a reservoir for the liquid helium.
• Low vibration drive electronics.

The 4K cooler requires pre-cooling down to about 20K. On Planck an 18K stage will be provided by the hydrogen sorption cooler. Low vibration drive electronics will be used for the J-T compressors.

These are under development and will be available in the required time scale. The ancillary plumbing, cold heat exchanger geometry and mechanical configuration will need to be built and qualified for this application.

Integration

Subject to suitable cleaning, filling and purging procedures, the J-T cold plumbing can be separated from the compressor and room temperature plumbing. Recent tests at RAL have demonstrated the robustness of the design to gaseous contamination. The design is modular, so that integration of the 4K system can proceed in stages.

3.5.4 Models

Item                              Mass (kg)
J-T compressors                   15.73
Ancillary & gas cleaning panel     5.08
Low temperature plumbing           1.08
Low Vibration Drive Electronics    7.00
Total                             28.89

Table 3.7: J-T Compressors mass budget

Two sets of J-T compressors and low vibration drive electronics will be procured from industry. The heat exchanger and ancillary panels will be built at RAL. These will form the flight model and

flight spare units. A prototype model will be built for testing in the ground model at IAS. An EMC model will also be available. The cooler will be capable of being split into the following parts: electronics, J-T compressors, ancillaries panel and J-T heat exchanger system. This will allow rapid change-out of any of these sub-systems.

3.5.5 Technical performance

The proposed heat exchanger system has been modelled. The aim of the design is to make the system robust and flexible to variations in heat load and pre-cooler temperature. The cooling power of the system is quite strongly dependent on the temperature of the pre-cooler (see Figure 3.19). As an example, with a 1.3m heat exchanger between the 18 and 4K stages, the cooling power with pre-cooler temperatures of 18 and 20 K is 23 and 17 mW respectively. The latter figure would meet the design goal with a small margin. Margin has been included as it is difficult to calculate the cooling power of the system accurately.

Figure 3.19: The theoretical cooling power of the 4K system as a function of pre-cooler temperature

Recent tests have indicated that the spider bolometers are insensitive to vibration levels up to 21 mg rms, which is approximately 70 times greater than the vibration levels expected at the focal plane. The J-T compressors will be mounted nearly 2m from the focal plane assembly. The estimated power consumption is in the region of 80W, and the mass budget (taken from FIRST-TRS-4K-0049-MMB) is given in Table 3.7.
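The two quoted design points (23 mW at an 18K pre-cooler, 17 mW at 20K) can be joined by a simple linear interpolation to estimate intermediate temperatures. This is an illustrative sketch only, not the actual thermal model behind Figure 3.19:

```python
def cooling_power_mw(precool_k, p_at_18k=23.0, p_at_20k=17.0):
    """Linear interpolation between the two quoted design points (illustrative)."""
    return p_at_18k + (p_at_20k - p_at_18k) * (precool_k - 18.0) / 2.0
```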

Thermal budget

The heat loads on the 50/60, 18 and 4K stages are given in Table 3.8. It is assumed for these calculations that the mass flow is 4.5mg/s and the heat exchanger length is 1.3m. The first three

columns give the heat load from the heat exchanger inefficiencies. The J-T effect and static conduction are given in the next columns. Negative values mean cooling powers.

Stage     Heat loads (mW)        Total (mW)
300K      -160.4, -10.0          -170.4
50/60K    164.0, -46.7, 10.0     127.3
18K       75.3, -3.8, 0.3        71.7
4K        -23.0, 0.0             -23.0

Table 3.8: Heat loads on each of the stages from the 4K system

Thermal model of the loads on the 4K stage

Dilution Pipes:            CuNi, S/L = 3.26E-5 m
J-T Pipes:                 Stainless Steel, S/L = 1.03E-6 m
Bolometer Wires:           Manganin, S/L = 2.95E-6 m
Thermometry Wires:         Constantan, S/L = 9.43E-6 m
Struts, 18K to 4K:         CFRP, S/L = 3.232E-3 m, ε = 0.80 external, 0.80 internal
18K and 4K shields:        Aluminium, ε = 0.05
JFET Bellows:              Stainless Steel, S/L = 14.5E-5 m
4K Aperture:               Effective ε = 0.90
Primary/Secondary Mirror:  ε = 0.02
50K Enclosure:             Shield ε = 0.05, Optical Bench ε = 0.90

Table 3.9: Main Modelling Assumptions

The cooling power of 23 mW is to be compared with the loads derived from a thermal model of the payload. Thermal and Geometrical Mathematical Models (using ESATAN and ESARAD) of the major temperature interfaces which exist in the Focal Plane Unit (HFI and LFI) have been established and integrated into mathematical models, established at ESTEC, of the Planck payload and service modules representative of the "stand-alone" configuration. Although the "stand-alone" configuration is not the baseline, it is considered that at this stage of the design such an analysis would result in load predictions which are also acceptably representative of the combined FIRST-Planck mission. The FPU is represented by a total of 42 nodes within the Thermal Mathematical Model, and the rest of Planck by a further 12 nodes.

Figure 3.20: Ladder diagram. Heat lifted by the J-T cooler at 4K: 13.1mW; cooling from the 1.6K stage: 0.8mW. Loads on the 4K stage: radiation on aperture 2.0mW, conduction along wires 0.7mW, conduction along pipes & bellows 4mW, conduction along struts 6.3mW, dilution cooler heat exchanger 0.8mW, radiation 0.1mW.

The most significant aspects of the Thermal Model are summarised in Table 3.9. In order to calculate the heat lift requirements for the 4K Cooler, it was also necessary to know the total heat leak into the 1.6K stage and the precooling requirements at the 4K stage for the Dilution Cooler; these values were derived from the Dilution Cooler study. The HFI FETs are assumed to be mounted off the 18K structure and the wires are well heat sunk to the 18K stage. The 100K (approx.) radiation from the FETs is assumed to be well baffled so that it too is sunk to the 18K stage. A ladder diagram summarising the most significant heat flows between the 50K and 4K temperature stages is shown in Figure 3.20 for the steady state case, for which the FPU is at operational temperature and the sun is directly incident on the solar array at the L2 point. Under these conditions the Optical bench is predicted to be at 47K, the primary mirror at 36K, the secondary mirror at 48K and the telescope enclosure at 50K. Note that although the loads onto the 18K stage have been calculated, they are not included here since they also form a part of the LFI design, and the load extracted by the Sorption Cooler is dominated by the LFI HEMT dissipation rather than parasitic loading. This preliminary analysis has indicated a cooling requirement for the 4K Joule-Thomson Cooler of 13.1 mW. This leaves a good margin, the cooler being designed for peak loading conditions of about 23 mW. However, at this early stage such a margin is required, since the present model is based on many assumptions regarding the design of the FPU and also the passive cooling performance of the Planck telescope. Some typical areas where uncertainties in the predicted loads could arise are the low temperature conductivities of the materials used, in particular with regard to the struts. These uncertainties shall be examined in future work, together with the transient performance of the instrument, including cool-down.
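The 13.1 mW requirement is simply the sum of the Figure 3.20 loads on the 4K stage minus the 0.8 mW of cooling provided by the 1.6K stage; a quick budget check:

```python
# Loads on the 4K stage as read off the Figure 3.20 ladder diagram (mW)
loads_4k_mw = {
    "radiation on aperture": 2.0,
    "conduction along wires": 0.7,
    "conduction along pipes & bellows": 4.0,
    "conduction along struts": 6.3,
    "dilution cooler heat exchanger": 0.8,
    "radiation": 0.1,
}
cooling_from_1p6k_mw = 0.8    # provided by the 1.6K stage
lift_mw = sum(loads_4k_mw.values()) - cooling_from_1p6k_mw
margin_mw = 23.0 - lift_mw    # cooler designed for ~23 mW peak load
```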

3.5.6 18K-20K Sorption cooler

Sorption coolers are comprised of a sorption compressor, containing a sorbent material, and a Joule-Thomson (J-T) expander. Several review papers have been published which describe the history and basic concepts behind the various kinds of sorption coolers more completely (Wade, 1991). This cooler is highly efficient and has major advantages for Planck because 1) there are no cold moving parts, no vibration and very low EMI, and 2) integration with the spacecraft is simple. Two single-stage sorption coolers are proposed for the Planck Low Frequency and High Frequency instruments to provide full redundancy. The two 30kg compressor assemblies, each 100 x 750 x 750mm in size, are located on the Planck equipment platform where they are heat sunk at < 280K. During operation of the cooler, 0.0045g/s of compressed refrigerant, desorbed at 6MPa by a compressor element heated to 485K, is precooled in a tube-in-tube heat exchanger and expanded through a J-T orifice to create a gas/liquid refrigerant mixture at 0.03MPa and < 18K. The liquid evaporates as it absorbs heat from the instruments, is warmed as it returns through the tube-in-tube heat exchanger, and is then absorbed in a cool (< 280K) compressor element.

Figure 3.21: Schematic of a 20 K Planck cooler

The Planck instrument cooler compressor assemblies will each contain 5 compressor elements. At any point in time one compressor element will be heating to pressurise, one hot to desorb gas, one cooling to depressurise and two cold to absorb. Closed-cycle operation is achieved as the compressor elements are switched through each step in this process, with the complete cycle taking 4000 seconds. Switching is accomplished by using solid state relays to alternately turn on and turn off electrical heaters embedded in the compressor elements.
Gas-gap thermal switches are incorporated into the compressor element design to alternately thermally isolate the compressor elements when in the heating and desorbing phases of operation, and to thermally connect the compressor elements to the < 280K temperature sink when in the cooling and absorbing phases of operation. Stable cold end temperatures are achieved through maintaining a stable low pressure at the liquid reservoir. To help accomplish this, a 1.5 l tank is added to stabilise the high pressure, and an extra sorbent bed is maintained at the 280K

sink temperature to stabilise the low pressure by simulating an approximately 200 l plenum. A gas manifold with passive check valves, one inlet and one outlet for each compressor element, completes the compressor assembly and is used to direct the gas ow. The cold end incorporates a contamination lter, a porous plug ow restrictor (the J-T expansion device) and three liquid reservoirs. The rst reservoir of each cooler is mated to the RAL 4K cooler to provide precooling at < 18K . The second liquid reservoir is used to provide the rest of the refrigeration required to cool the LFI, shield the HFI and to intercept parasitics to both instruments, totalling approximately 1.2W at 20K. The third reservoir is controlled at approximately 24K to wick and then sublimate any excess liquid refrigerant. Figure 3.21 shows a schematic of a Planck 20 K sorption cooler. Figure 3.22 shows a drawing of the compressor elements currently being fabricated as prototypes for Planck . The cold end and the compressor are connected by a tube (6.35mm o.d.)-in-tube (3.18mm o.d.) heat exchanger which is approximately 5m long. The hydrogen refrigerant gas is precooled at each radiation shield to minimise the heat rejected on the optical bench. For a telescope with three radiatively cooled shields at 140, 80 and 50K, the heat rejected by is respectively 364, 511 and 727mW. The cooler is sized to provide the cooling required by the Planck instruments with substantial margin. The baseline operation presumes a 50K telescope enclosure and precooling temperature. In this con guration the nominal performance for the cooler is 1.2W cooling for 360W of inFigure 3.22: AutoCad drawing of the compressor element cur- put power. The required cooling loads rently in fabrication. It serves as a prototype for the Planck for several o design point cases are cooler under the NASA funded Vibration-Free Cooler tech- presented in Table 3.10. Requirement nology development program being conducted at JPL. 
This changes necessitated by a warmer than element is sized to meet all current Planck performance re- baseline environment can be easily acquirements commodated through linear scaling of the compressor length and increased input power. It is clear from the results shown in Table 3.10 that the environment should be maintained below 70K. Heat Loads 80K 70K 60K 50K (mW) enclosure enclosure enclosure enclosure QHEMTs 550 550 550 550 Qwaveguide 303 243 186 132 Qstruts 40 30 20 16 Qcable 20 14 12 10 Qred. cooler 25 15 10 7 Qradiation 237 140 75 40 QHFI interception 80 70 60 50 Subtotals 1255 mW 1062 mW 913 mW 805 mW 40% Margin 502 mW 425 mW 365 mW 322 mW Totals 1757 mW 1487 mW 1278 mW 1127 mW 20K cooler eciency 600 W/W 475 W/W 360 W/W 300 W/W Total power required 1054 W 706 W 460 W 338 W Table 3.10: Total instrument cooling and 20 K sorption cooler input power required as a function of telescope enclosure temperature. It is assumed that the precooling temperature is that same as the enclosure temperature

Chapter 4

INFORMATION MANAGEMENT AND DATA PROCESSING PLANS

4.1 Introduction

In order to take maximum advantage of the capabilities of the Planck Surveyor mission and to achieve its very ambitious scientific objectives, proper data reduction and scientific analysis procedures need to be defined, designed, and implemented very carefully. The data processing should be optimised so as to extract the maximum amount of useful scientific information from the data set and to deliver the calibrated data to the broad scientific community within a rather short period of time, as stated in the Science Management Plan. As demonstrated by many previous space missions using state-of-the-art technologies, the best scientific exploitation is obtained by combining the robust, well-defined architecture of a data pipeline and its associated tools with the high scientific creativity essential when facing unpredictable features of the real data. Although many steps required for the transformation of the data can and must be defined early in the development of the pipeline (and some of them have already been tested and implemented in the simulations made by the teams of the proposing Consortia), some of them will remain unknown until flight data are obtained. Planck is a PI mission, and its scientific achievements will depend critically on the performance of the two detector systems, LFI and HFI, and on the telescope. The data processing will be performed by two Data Processing Centres (DPC). However, despite the existence of two separate distributed DPCs, the success of the mission relies heavily on the combination of the measurements from both instruments. Moreover, some phases of the data processing are almost identical for both instruments, and optimisation of resources demands a common implementation of those parts. A unifying approach is thus necessary in order to fully achieve the scientific objectives of the mission, and to obtain a single final set of data, derived optimally from all products of the two DPCs.
The HFI and LFI Consortia and the Telescope Provider have agreed on the need to completely integrate their data reduction and scientific analysis approaches; thus, in this chapter, sections 4.1 to 4.4 are common to both HFI and LFI responses to the AO.

This section deals with a topic of wider scope than data processing (reduction and scientific analysis): it also discusses aspects related to information management, which pertain to a variety of activities concerning the whole mission, ranging from instrument information (technical characteristics, reports, configuration control documents, drawings, public communications, etc.) to the analysis of the impact on science implied by specific technical choices. In particular, an Integrated Data and Information System (IDIS) will be developed to allow proper intra-Consortium and inter-Consortia information exchange. IDIS is described below (section 4.4). IDIS is deemed essential for proper information management in a project with many CoIs, Associates, engineers and technical and scientific staff (the estimated number of participants is around 200), located throughout countries in both Europe and North America. For a proper discussion of data processing, the following subsection breaks down data processing tasks into levels, and proposes a scheme for the two DPCs. Organisational aspects, relations between

the two DPCs, management, and coordination are discussed in section 4.3. Section 4.5 finally details instrument-specific aspects of the DPCs.

4.2 Data Processing Tasks

The Planck DPCs are responsible for the delivery and archiving of the following scientific data products, which can be considered as the deliverables of the mission (page 10 of ESA/SPC(97)27):

• Calibrated time series data, for each receiver, after removal of systematic features and attitude reconstruction.
• Photometrically and astrometrically calibrated maps of the sky in the observed bands.
• Sky maps of the main astrophysical components.

Others can be added to the formally defined products of Planck mentioned above, e.g.:

• Catalogs of sources detected in the sky maps of the main astrophysical components.
• Data sets defining the estimated characteristics of each detector and the telescope (e.g. detectivity, emissivity, time response, main beam and side lobes, etc.)

Being naturally dependent on each other, all those products will progressively be built up from raw data. Iterative processing is required to successively refine all the results. As an example, for the evaluation of the beam, a first estimate of the main beam can be made from ground calibration data. A first photometric map of the sky emission can then be obtained from observations. The next step will then be to use this map for evaluating the contribution of the side lobes, thus obtaining a complete map of the beams. In the following, four levels of processing (numbered 1 to 4) are defined. They are listed as if some ordering in their execution were implied; however, as stated before, there will be iterative loops among them. The chosen scheme is more "logical" than "chronological". It should be added that all levels will operate simultaneously (although with different time constants), since it is necessary to get the full mission data for extracting information at the different steps. The activities of the Mission Operation Centre (MOC), carried out by ESA staff, are to be added to this scheme. The MOC is responsible for the tasks listed in page 6 of ESA/SPC(97)27. Besides MOC activity (to be considered as Level 0), Levels 1 and 2 of the DPCs described below, or some part of them, plus the preliminary step of acquiring data from the MOC, plus other possible steps TBD, should be active on a day-to-day basis during the mission, in order to provide information to the Science Team for possible corrections to be implemented by the MOC, and generally to provide the requested feedback and support. The actions corresponding to the various levels which are the responsibility of the instrument DPCs are detailed in the following sections.

4.2.1 Level 1 (Telemetry Processing)

Level 1 can be conceived as being accomplished by a single, independent centre acting as an interface between the MOC and the two Instrument DPCs. It is to be noted that the actions of Level 1 are expected to be performed automatically using pre-defined input from the technical teams, and do not include scientific processing of the data. Level 1 operations lead from P/L and S/C raw data to organised data time-lines for each parameter or signal. The input to Level 1 is the data as released by the MOC. The processing is intended to:
a) Lead to the full mission raw-data stream in a form which is suitable for successive data processing by the LFI and HFI DPCs.
b) Perform a routine analysis of the S/C and P/L Housekeeping (H/K) data, in addition to what is performed at the MOC, with the aim of monitoring the overall health of the payload and detecting possible anomalies.

c) Perform a quick-look analysis on the data to monitor normal operation of the observation plan and verify normal behaviour of the instruments. No processing of the scienti c data is included. All actions of Level 1 are supposed to be performed on a \day-to-day" basis during observation. In some more detail, the tasks to be performed by this level are:

- Acquire time-ordered P/L scientific and housekeeping data (from FINDAS).
- Record the reconstructed pointing information from the MOC.
- Analyse attitude reconstruction performance (accuracy, stability, etc.), comparing the field camera to a pre-defined point source catalog.
- Perform data de-compression (TBC).
- Check data integrity.
- Produce time-ordered raw data in a pre-defined format, which should include at least:
  - absolute time (by synchronising all on-board clocks and timers)
  - signal in digital units (DU) from each detector
  - S/C attitude
  - H/K data
  - other.
- Monitor all housekeeping data, e.g.:
  - temperature sensors output and stability
  - power consumption of individual units (when recorded)
  - cooling system parameters
  and generate derived parameter time-lines useful for further steps.
- Quick-look at scientific data, allowing real-time links to the ST through the Data Processing Managers (DPCMs) in case anomalous behaviour is detected.
- Self-consistency checks on telemetry vs. "pre-processed" raw data.
- Monitor observation plan and sky coverage.
- Perform analysis to optimise possible corrections to the pre-defined observation plan. Report to the ST for new pointing programs for the S/C and/or modifications of the instrument operation mode (compression mode, on-board S/W, gain modification, ...) to be possibly fed back to the MOC.
- Archive all data and H/K processed up to this point (raw TOD archive).

The output of Level 1 is the following:

- Data product 1 (deliverable): raw time series data per receiver after attitude reconstruction, flagging information, etc.
- "Day-by-day" inputs to MOC (instrument health)
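The pre-defined time-ordered raw-data format listed above could, for illustration, be represented as a structured record per sample. The field names and sizes below are assumptions made for this sketch, not the actual (still TBD) telemetry format:

```python
import numpy as np

# Illustrative Level 1 time-ordered raw-data record for one detector.
# All field names and sizes are assumptions for this sketch only.
tod_dtype = np.dtype([
    ("time", "f8"),            # absolute time [s], on-board clocks synchronised
    ("signal_du", "i4"),       # detector signal in digital units (DU)
    ("attitude", "f8", (3,)),  # S/C attitude (3 angles, e.g. spin axis + phase)
    ("hk_temp", "f4"),         # one housekeeping value, e.g. a temperature sensor
    ("flags", "u1"),           # data-quality flags
])

def make_raw_tod(n_samples, f_sample=200.0):
    """Allocate and time-stamp a raw TOD buffer (f_sample is illustrative)."""
    tod = np.zeros(n_samples, dtype=tod_dtype)
    tod["time"] = np.arange(n_samples) / f_sample
    return tod
```

A single such array per detector, archived per pointing period, would constitute one building block of the raw TOD archive mentioned above.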

4.2.2 Level 2 (Data Reduction and Calibration)

At this level, the data processing steps requiring detailed instrument knowledge (data reduction proper) shall be performed. The Level 2 reduction steps will therefore be concentrated at two separate sites, one for each detector system, each processing only data acquired from the related instrument. Level 2 will use the raw time series from Level 1 to reconstruct sets of calibrated scans per detector, instrumental performances and properties, and maps of the sky for each channel. The processing must be iterative, since simultaneous evaluation of a number of parameters is to be made before the astrophysical signal can be isolated and averaged over all detectors in each frequency channel. Continuous exchange of information between the two DPCs, achieved through the IDIS, will be necessary at Level 2 in order to identify any suspect or unidentified behaviour or results from the detectors. When appropriate, time-ordered data and maps, together with all other relevant material, will be made available to Level 4 to be prepared for public access.

The instrument-dependent actions to be performed will be detailed in the Science Implementation Plan, as it evolves from its current draft stage. As an example, it is to be noted that utilisation of the Earth, Moon (if available) and planets for beam pattern reconstruction and absolute calibration will require detailed models (or measurements) of their microwave emission at Planck frequencies. Similarly, there will need to be some sensible evaluation of Galactic foreground emission and of systematic error contributions to the error budget. The development of these models is currently not considered part of the calibration process, but as part of the simulations and modelling developments of each of the instrument Consortia. Level 2 actions can currently be summarised as follows:

- Evaluation of instrument health and performance, and quick-look analysis (QLA)
- First-level statistics and checks for systematics

  - For each data set and for each pixel, check the distribution of the measured voltage in that pixel, Vpix (LFI+HFI). Remove spikes and glitches, etc.
  - Evaluate all noise components for each detector (e.g. white noise, 1/f noise).
  - Comparison of data relative to various coordinate systems (S/C centred, orbit-angle, spin angle, etc.) and time-periods.
  - Perform "relative calibration", i.e. control of gain drifts in each receiver (or correlated) and possible subtraction of their effects. This typically involves monitoring of voltage differences dVpix from opposed pixels in the observed circle, or combination of signals from successive circles on the sky.
  - Control possible thermal drifts of the LFI reference load of the radiometers (and check for internal correlations). Perform the analogous procedure for the HFI, in particular for monitoring the offset of the bolometric channels.
  - Perform correlation between data (Vpix, Vrms) and H/K data (e.g. temperature monitors, attitude/Solar-aspect angle, etc.).
  - Identification of and quick-look at strong-signal data (e.g. planets, strong HII sources, etc.).
  - Identification of any other systematic behaviour of the signal (e.g. cross-talk among detectors, systematic errors synchronised with spin, etc.).
  - Extraction of a preliminary point source catalog (needed for astrometric verification and photometry checking).
- Beam pattern reconstruction
  - Reconstruct the beam pattern using Earth and Moon radiation, if possible during transfer to L2, and/or using an iterative process evaluating the contribution of the Galaxy in the far side-lobes.
  - Reconstruct the main lobe of each beam using external planets or bright known objects as sources.
- "Absolute calibration"
  - Evaluate the response (from volts to watts) of the detectors by using the on-ground calibration database and in-flight signals.
  - Relative calibration with the CMB dipole (NB: to reach the final accuracy, an iterative procedure between calibration and map making/component separation will be necessary) or the Galactic ridge, according to the channel frequency.
  - Absolute calibration will be done by reference to the FIRAS/COBE data, which provide the best photometric calibration for extended sources throughout the Planck wavelength range. This step can be done only when maps have been built, the comparison being done on areas larger than the FIRAS resolution.
  - Calibration using the spacecraft orbital velocity.
  - Definition of procedures to cross-check data and systematics between HFI and LFI.
- Frequency map production
  - At each frequency, combine all data according to the chosen pixelisation scheme (HEALPix is the current baseline), using an optimisation technique to properly weight each detector. Given the number of detectors per frequency, there is probably enough redundancy to build more than one map: this would be extremely useful for attempts at assessing noise contributions (cf. "sum" and "difference" maps with COBE-DMR).
  - Long-term closure checks.

The output of Level 2 is the following:

- Data product 2a (deliverable): time series data per receiver after removal of systematic features, etc.
- Data product 2b (deliverable): maps of the sky in the observed bands
- Data product 2c (internal): maps of the sky per detector
- Data product 2d (internal): point-source catalog associated with the sky maps
- Data product 2e (internal): performances and beam of each detector, flagging information, variance estimation for each pixel, maps of the systematic effects
- Data product 2f (internal): updates to the Calibration Database
- "Day-by-day" inputs to MOC and Level 1 (instrument health and performance)
- "Longer-term" inputs to MOC and Level 1 (e.g. modifications of the acquisition mode to be uplinked to the on-board computer, feedback parameters for Level 1 automatic processing, etc.)
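The per-frequency combination of detectors described above can be sketched as an inverse-noise-variance weighting of the individual detector maps. This is a minimal illustration assuming white, uncorrelated noise per detector; the real processing must also handle 1/f noise, beams and flagged pixels, and the function name and interface are ours:

```python
import numpy as np

def combine_detector_maps(maps, sigmas):
    """Combine several detector maps of one frequency channel into a single
    frequency map by inverse-variance weighting (simplified sketch: white,
    uncorrelated noise per detector, identical beams).

    maps   : (n_det, n_pix) array of per-detector sky maps
    sigmas : (n_det,) per-detector white-noise rms per pixel
    Returns the combined map and its per-pixel rms.
    """
    maps = np.asarray(maps, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2      # weight per detector
    freq_map = (w[:, None] * maps).sum(axis=0) / w.sum()
    sigma_pix = np.sqrt(1.0 / w.sum())                  # rms of combined map
    return freq_map, sigma_pix
```

For two detectors of equal noise, the combined rms drops by a factor sqrt(2), which is the redundancy gain mentioned above.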

4.2.3 Level 3 (Component Separation and Optimisation)

The aim of Level 3 is to transform, by means of pipeline processing, the frequency maps produced by both instruments into preliminary maps of the underlying astrophysical components. Deliverable products will be prepared in Level 4 under the supervision of the ST. Data from both HFI and LFI need to be analysed jointly to produce the desired result. While it is expected that a number of institutes will participate in this work, two centres (representing the two instrument consortia and the Telescope Provider) will be responsible for performing the processing and for guaranteeing the integrity and quality of the data products and their availability through the IDIS. The final maps for each component shall be produced jointly by the two DPCs, under the supervision of the Planck ST. When appropriate, they will be made available through the IDIS, together with all other relevant material, to Level 4, to prepare for public distribution and for further detailed scientific analysis by CoIs and Associates. It is to be noted that scientific exploitation is outside the scope of the Level 3 pipeline as described here. A specific policy for the "secondary" science goals of the mission, based on peer-reviewed proposals, has been defined in ESA/SPC(97)27 (pages 12-14). The actions to be performed at Level 3 can be summarised as follows:

- Component separation: this will require the development of software capable of disentangling the various contributions with no (or minimal) assumptions on the parameters:
  - point source catalog
  - extended source catalog
  - synchrotron (using polarisation information)
  - free-free
  - dust components
  - SZ effect
  - CMB anisotropies
  - polarisation (if possible) for all polarised components
  - checks with external data and surveys (e.g. DMR, Haslam, IRAS, FIRAS, balloon experiments, ...)
- Noise maps and statistics data set
- Inter-frequency cross-checks
  - Internal correlation (at the proper angular resolution) of the (frequency-independent) CMB maps at the various LFI frequencies
  - Internal correlation (at the proper angular resolution) of the CMB maps at the various HFI frequencies
  - Correlation of LFI/HFI maps
  - Analysis of possible systematic anomalous features in the correlations, e.g.:
    * if from instrumental effects, then refine the processing carried out in Level 2, then continue
    * if residual foreground contamination, refine the relevant analysis carried out in Level 3, then continue
- Component map production
  - Combining all data, produce all-sky maps of the CMB, SZ, dust, etc. according to the chosen pixelisation scheme (HEALPix is the current baseline)
- Produce catalogs of radio sources, IR sources, SZ, etc., to be eventually delivered to the scientific community through the IDIS.

The output of Level 3 is the following:

- Data product 3a (deliverable): preliminary maps of the sky for each of the main underlying components.
- Data product 3b (internal, to be eventually distributed with relevant S/W and documentation TBD): source catalogs and object cross-identifications.
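As a toy illustration of the linear problem underlying component separation, one can model each frequency map as a mixture of component maps through a mixing matrix and invert by least squares. This is only a sketch of the principle: the actual Level 3 algorithms are still open (as noted above), and must deal with beams, noise weighting and uncertain spectral behaviour, none of which appears here:

```python
import numpy as np

def separate_components(freq_maps, mixing):
    """Toy linear component separation: model each frequency map as
        d_nu = sum_c A[nu, c] * s_c + n_nu
    and recover the component maps s_c by least squares. Beams, noise
    weighting and spectral-index uncertainties are deliberately ignored.

    freq_maps : (n_freq, n_pix) observed maps
    mixing    : (n_freq, n_comp) assumed mixing matrix A
    Returns the (n_comp, n_pix) estimated component maps.
    """
    A = np.asarray(mixing, dtype=float)
    d = np.asarray(freq_maps, dtype=float)
    sol, *_ = np.linalg.lstsq(A, d, rcond=None)
    return sol
```

With more frequencies than components, the same least-squares step also delivers residuals that can feed the noise maps and inter-frequency cross-checks listed above.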

4.2.4 Level 4 (Generation of Final Products)

The action to be performed in Level 4 is the reception, archiving, and pre-release preparation of all material needed for public release (at least the final TOD, frequency maps and component maps), with procedures to be defined in more detail, which will be built in accordance with ESA/SPC(97)27 and with the Science Implementation Requirements Document (SIRD).

- Time-ordered data
  - Convert to a format convenient for access by the astronomical community the sets of files of time-ordered data generated by Level 2, with full calibration and systematic error information.
- Final maps
  - Convert to a format convenient for access by the astronomical community the maps at each instrumental frequency generated by Level 2.
  - Produce maps for each diffuse component (CMB and separated foregrounds) from the preliminary maps generated by Level 3, according to a pre-defined pixelisation and suitable data format(s).
- Prepare all related documentation and explanatory reports.
- Support the transfer to FINDAS and the maintenance of all S/W for reading, displaying and manipulating the data.
- Transfer to FINDAS the data products and documentation for distribution.

The output of Level 4 is the following:

- Data product 4a: files of the time-ordered data with calibration and systematic error information
- Data product 4b: final maps of the sky at each of the Planck detector frequencies
- Data product 4c: final maps of the sky for each of the main underlying components
- Data product 4d (to be distributed through FSC and FINDAS): associated explanatory documentation and basic S/W for final data product I/O and display
- Data product 4e (to be distributed with procedures TBD): catalogs of sources extracted from the sky maps.

4.3 Organisation of the pipeline

To achieve the goals mentioned in the previous sections, two separate DPCs will be created, performing operations related to Levels 2 (each only on its associated instrument) and 3 of the pipeline. This guarantees specific competence on the associated instrument during the daily checking and reporting of instrument performance, and during the process of data calibration. At the same time, the scientific competence available at the DPCs can be used for part of the component separation and inter-frequency cross-checks using data from both instruments. Thus, a proper level of cross-checking can be achieved, by allowing redundant scientific analysis at Level 3, where data from both LFI and HFI are processed in both DPCs.

Level 1 is carried out by a separate Centre common to the two instrument consortia and the TP. This choice leads to optimisation of costs, and allows quicker reaction and immediate cross-checking between telemetry from the two instruments. On the other hand, the tasks included in this Level are not expected to be instrument-critical, and detailed instrument knowledge can be concentrated elsewhere. To help homogenise the work of the two DPCs, a separate group common to the two instrument Consortia and the TP will carry out the tasks related to Level 4. Such a group could also participate in the definition of the processing related to Level 3 in parallel with the other sites, so that a proper level of duplication and cross-checking is made on scientifically-usable (calibrated) data.

The selected approach respects most of the concepts of the original scheme defined in the ESA/SPC(97)27 document; it allows the optimisation of resources in the preliminary steps of the processing chain, and the exploitation of instrument knowledge where it is most needed (Levels 2 and 3). At the same time it allows scientific redundancy and cross-checks, together with the greater amount of flexibility necessary in the critical phase of the data processing work (Levels 3 and 4). The Planck Science Team (ST) is always kept in the focus of scientific operations.

As a consequence, the two DPCs are organised in a geographically distributed fashion, to take maximum advantage of the expertise available throughout the Consortia, without the heavy costs associated with gathering many experts at two localised centres. This approach is also justified by the indispensable sharing of the high costs of the data processing activities among all participant countries. The field of networking is currently developing extremely rapidly, and by the time the full Planck data processing system becomes operational it is unlikely that its distributed nature will cause any difficulty. In particular, the Integrated Data and Information System (IDIS) to be developed is expected to ease communication and information exchange among the different locations constituting the DPCs. The IDIS is described below (section 4.4).
From the geographical point of view, the DPCs will be organised in 5 Principal Sites, based on the following scheme:

Level 1 - A single site representing the two instrument consortia and the TP, at the Observatory of Geneva, at the premises where the Integral Science Data Centre (ISDC) is currently located.

Level 2 - Two sites, one for each instrument: Orsay (POSDAC) for the HFI Consortium, and Trieste (OAT and SISSA) for the LFI Consortium. The TP consortium will participate in the work at both sites.

Level 3 - Two sites, one for each instrument: Cambridge (CPAC) for the HFI Consortium, and Trieste (OAT and SISSA) for the LFI Consortium. The TP consortium will participate in the work at both sites.

Level 4 - A single site representing the two instrument consortia and the TP, at the MPA in Garching.

It is to be noted that Levels 2 and 3 for the LFI Consortium are carried out at the same physical location. For the HFI, three major sites are involved and will closely coordinate their activities: Orsay, Cambridge and London (LPAC). It is the responsibility of the Principal Sites to perform the related steps of the processing pipeline, and to guarantee the integrity of the information and data stored therein.

A number of Secondary Sites are envisaged as participants in the various levels of data processing. Some secondary sites are already well identified: London for the HFI, and IPAC (Infrared Processing and Analysis Center, Pasadena), which should act as the entry point in the USA for the Planck project (TBC). Other sites are still TBD. Each Secondary Site will have some specific expertise on particular aspects of the Planck mission, either technical or astrophysical. In each of these sites one or more mission Co-Is are expected to be working. Each Secondary Site will be allowed to gather information from the Principal Sites, and to feed back information (with appropriate filtering) to them through the IDIS and according to ST-prescribed data access rights. It should be easy for project staff at a Secondary Site to retrieve information and data from a Principal Site, while ingesting information into a Principal Site will be rigidly checked.

[Figure 4.1 diagram: MOC and FINDAS feeding Level 1 at Geneva; Level 2 at Orsay (POSDAC) and London (LPAC) for HFI and at Trieste for LFI; Level 3 at Cambridge (CPAC) and Trieste; Level 4 at Garching; supervision by the Science Team through the DPCMs.]

Figure 4.1: Data flow among the sites forming the Planck data processing distributed structure. Five Principal Sites (rectangles) and several Secondary Sites (circles) are identified. Principal Sites have the responsibility of guaranteeing the integrity of the data and of the processing procedures. In the figure, lines indicate the logical flow of data (these lines do not imply a physical link between all locations: the implementation of the links will be done according to the best and cheapest available network); information exchange among the sites occurs within the IDIS.

4.3.1 Data Flow

Two figures are presented in this section, representing the data flow among the different sites, and levels, related to Planck data processing. Figure 4.1 shows the flow of data among the various Principal and Secondary Sites envisaged for the Planck DPCs. Figure 4.2 describes the data exchange for the overall data processing activity in Planck, regardless of geographic distribution. Each Level produces and maintains data in proper archiving structures, which are to be shared with the other Levels of the data processing system. From both diagrams, the complexity of the data flow in Planck data processing can be appreciated. While the time-lines and rules for data exchange between the Consortia are still TBD, it is clear that a common mechanism for data sharing is absolutely necessary. This will be provided by the IDIS, which is described in the next section.

4.3.2 Management of Overall DPC Structure

The complex and distributed structure of the Planck DPCs requires proper management to be effective, in particular during day-by-day operations. A DPC Coordination Group (DPCCG) will be created. This group will have representatives of all Principal Sites and Consortia, and will liaise closely with the Science Team. The DPCCG will be a distributed management structure for the overall Planck data processing, working on a day-by-day basis, and is expected to have continuous exchange of information among its members, by means of e-mail or tele-conferences, which will occur at least on a daily basis during critical periods (launch preparation and the initial phase of operations).

[Figure 4.2 diagram: raw telemetry → Level 1 → raw TOD → Level 2 (HFI and LFI) → calibrated TOD and frequency maps → Level 3 → component maps → Level 4 → final maps and final products.]

Figure 4.2: Diagram illustrating the interactions and commonalities between the HFI and LFI DPCs. The data archives produced by the various Levels of processing are also illustrated: they are to be shared by the two instrument Consortia and the TP through the IDIS.

4.4 Integrated Data and Information System (IDIS)

Large amounts of information, data and software will be generated during the various stages of the Planck project. These include the development, test and operation of the Planck instruments, and the subsequent processing of the observed data. The IDIS is the system conceived to provide the infrastructure and the tools necessary to manage these objects efficiently and flexibly. The general requirements for the IDIS have been laid out in the URD (see the Science Implementation Plan). FINDAS will be evaluated as a baseline for satisfying these requirements in the implementation phase. Hereafter, the term "the IDIS" refers to the system that provides the functionality of the IDIS environment, while the term "software" encompasses all kinds of software running under the control of the IDIS.

4.4.1 IDIS Philosophy

Integration. The IDIS must comprise (at least) three different components to carry out its broad diversity of tasks:

- A data management component that allows the ingestion, efficient management and extraction of the data (or subsets thereof) produced by Planck activities.
- A software management component that encompasses the software required to administer, handle, and analyse these data.
- A document management component that contains the documentation pertaining to both the software and the data controlled by the other two components, as well as other relevant documents.

Efficient use of the IDIS requires that these components be inter-connected. It must be possible to relate objects under the control of one component to objects controlled by another component. Accordingly, the IDIS can be thought of as a federation layer consisting of software, data and documentation standards, and of software supporting these standards (cf. Fig. 4.3). It provides the infrastructure within which all Planck-related activities may be performed in a consistent and coherent way, from the development of the payload to the delivery of the final data products.

Figure 4.3: IDIS structure

Distributed Environment. Users of the IDIS will be situated at many institutes in several European and North American countries. Efficient use requires that the IDIS be a system geographically distributed over many sites (see also §4.4.3). Certain types of tasks can then be performed locally at these sites in order to minimise access and execution times. Sites need to communicate regularly, in an automated fashion if practical, in order to guarantee the consistency of the system and maintain configuration control. Sufficiently fast and reliable network connections between the sites are thus necessary, and the system must provide automated procedures for mirroring data and replicating software to all appropriate sites.

Classes of Users and Typical Tasks. The prospective users of the IDIS can broadly be grouped into four classes. They are listed and described here together with the typical tasks they are likely to perform within the IDIS.

- Site Managers are the people chiefly responsible for the operation of the IDIS at their site. Their detailed responsibilities are given in §4.4.3. The IDIS must support the tasks of site managers via federation components which track the objects and users in different parts of the system.
- Planck Scientists form the largest group of IDIS users. Typical IDIS tasks performed by this group include:
  - accessing and up-loading documentation and data
  - communicating with other users of the IDIS using standard mail protocols
  - accessing, visualising, analysing, and processing data stored in the IDIS
  - creating and distributing data and documents
  - creating and running software modules on data stored within the IDIS, in the process generating new data
  The IDIS must provide such users with the necessary tools and access rights required to carry out these tasks. Apart from data and document management tools, the issue of software pipelines is particularly important; it will be further detailed below.
- Developers are the people who develop and test new components for the IDIS. IDIS standards for coding and interfaces must be defined, and all IDIS components must conform to these standards. Developers must be able to access data and software controlled by the IDIS, and to submit new or revised components, together with the respective documentation, for insertion into the IDIS. For that purpose, a software release policy needs to be defined, and the IDIS must contain tools that allow site managers to implement this policy. Configuration control is essential to track the released versions of software and documentation across the distributed system.

- Astronomical Community: ultimately, Planck data, software, and documentation will be released to the broader astronomical community. A limited number of the IDIS services must then be accessible to such users, and the IDIS must contain web-based components implementing and facilitating such use of the IDIS. It is envisaged to use the IDIS also to prepare and distribute all kinds of material required for PR activities, as requested by ESA in the Public Relations Plan.

Given the diversity of prospective users and tasks, the IDIS must provide a mechanism to control the access rights of users to all objects within the IDIS components. This requires tools with which the IDIS managers can set, change, and track user privileges across the distributed system. Evidently, corruption of the system by users must be prevented.

External Interfaces. There must be an IDIS user interface to query the system. Tools and application programming interfaces must be provided for basic visualisation and analysis of data objects. Application programming interfaces will be written for component developers. Web-based interfaces will be developed at a later stage to allow the general astronomical community access to a limited number of IDIS services. The IDIS must support an interface to receive data from, and deliver data to, FINDAS. An estimated 5 Gbits per day will be transmitted from FINDAS to the IDIS. To allow the ingestion of this amount of data within a few hours, data transfer rates of order ~0.5 Mbits per second are necessary between FINDAS and one of the IDIS principal sites.

Software Pipelines. Data sets produced during the development, testing, and operational phases of the Planck instruments will be visualised and analysed by a multitude of users in many different ways. Sufficiently flexible use of the IDIS calls for modular software whose components can be combined to form complex software pipelines. This has several implications:
- Software components must be modular, and they must comply with coding and interface rules.
- Users must be able to construct sufficiently complex software pipelines from the IDIS software modules. Several such pipelines can exist in parallel, and they can be partially identical. It should also be possible to run parts of existing pipelines independently, and it must be possible to run pipelines iteratively.
- The IDIS should allow parts of pipelines to be run locally, and parts remotely. This requires appropriate messaging between sites, and the control of access rights.
- The IDIS must allow the storage of existing pipelines for later re-use.
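The modular-pipeline concept above can be illustrated by composable processing stages, each a function mapping a data object to a new data object. This is a minimal sketch of the idea only; the stage functions (and their thresholds and gains) are hypothetical, and the actual IDIS pipeline mechanism is TBD:

```python
def make_pipeline(*stages):
    """Chain processing stages into a pipeline. Each stage is a function
    taking a data object and returning a (possibly new) data object.
    Stored pipelines can be re-run as a whole, re-used in part (by building
    a pipeline from a subset of stages), or applied iteratively."""
    def pipeline(data):
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

# Hypothetical stages, for illustration only.
def deglitch(tod):
    """Drop samples above an (assumed) glitch threshold."""
    return [x for x in tod if abs(x) < 100.0]

def calibrate(tod):
    """Apply an (assumed) gain to convert DU to physical units."""
    return [0.5 * x for x in tod]

level2 = make_pipeline(deglitch, calibrate)
```

Because each stage has the same interface, pipelines can be stored, partially re-run (e.g. `make_pipeline(calibrate)` alone), or extended without modifying the stages themselves.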

Self-Documentation and Linking of Objects. Flexibility and efficiency also demand that all objects in all the IDIS components be uniquely identified, and that software and data be documented. The IDIS must therefore link together objects managed by different components, and maintain a controlled relation between software, data objects, and their respective documentation. In addition, data objects should contain headers clearly and uniquely defining them, their originating software, and the parameters with which they were generated.
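The header requirement just stated can be sketched as follows: each data object carries a record of its originating software, version and parameters, plus a unique identifier derived deterministically from them. The field names and the hashing choice are assumptions for this sketch, not an IDIS design decision:

```python
import hashlib
import json

def make_header(software, version, params):
    """Build a self-documenting data-object header (illustrative sketch):
    records the originating software, its version and its parameters, and
    derives a unique, reproducible id from them for object linking."""
    body = {"software": software, "version": version, "params": params}
    digest = hashlib.sha1(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"id": digest[:12], **body}
```

Because the id is a deterministic function of the provenance record, two objects produced by the same software with the same parameters can be recognised as equivalent across the distributed system.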

Final Data Products. The IDIS will maintain a complete, distributed archive of all data produced by all processing Levels, including housekeeping data. Final data products will be released to the public, with the IDIS managing the corresponding data archive and the relevant software for the exploitation and analysis of the data. Complete documentation of the instruments, data products, and software will be prepared and released through the IDIS. Finally, the IDIS will deliver data products, documentation, and relevant software (TBD) to FINDAS.

4.4.2 Development

Development Tasks. The development of the IDIS components and their integration into the complete system will be carried out by three development sites: MPA Garching, OAT Trieste, and SSD/ESTEC Noordwijk. Their joint activities will be monitored and coordinated by the IDIS Development Team (see below). The development sites will provide the manpower and resources required to develop the IDIS. Work-packages will be defined and assigned to the IDIS development sites. A preliminary breakdown of work-packages is given as an attachment to the Science Implementation Plan.

Planck Pilot Project. The development and integration of the IDIS must begin very early in the project. For that purpose, a pilot project will be launched, aiming at a preliminary implementation of the IDIS by December 1999. In the course of the pilot project, the IDIS prototype will be tested on a realistic simulated data stream and will incorporate a preliminary documentation system for the Planck project.

4.4.3 IDIS Organisation

Figure 4.4: IDIS management structure

Sites and Site Managers. The IDIS will be geographically distributed over many sites. Three of them (MPA Garching, OAT/SISSA Trieste, and SSD/ESTEC Noordwijk) will be development sites concerned with the development, implementation and management of the IDIS itself; there will be a larger number of user sites making major contributions to the Planck project in other areas. Accordingly, the IDIS must operate and be managed in a decentralised manner. The development sites jointly guarantee the integrity of the IDIS. They support the setup and the initial phase of operation of the IDIS at the user sites. Apart from this, each site will be responsible for the proper operation of its own resources.

Each site will nominate a site manager, who will be responsible for the proper operation of the IDIS at their site. In particular, sites must run the appropriate tools to maintain communication between sites. Site managers will administer access rights to the IDIS, and they will also be responsible for carrying out the specific tasks assigned to their site. Finally, development site managers will decide about the acceptance of any object to be ingested into the IDIS at their site, following established guide-lines.

IDIS Development and Management Teams. An additional IDIS manager will be nominated to coordinate and oversee the efforts of the three development site managers. These four managers form the IDIS Development Team (hereafter IDIS-DT). The IDIS-DT will be chaired by the IDIS manager. While the IDIS-DT is responsible for implementing and maintaining the software providing the functionality of the IDIS environment, the responsibility for the overall data reduction and analysis pipelines ultimately rests with the DPC managers.

An IDIS Management Team (IDIS-MT) will be formed comprising the IDIS-DT, the DPC Managers, and the Principal Site Managers. It will be the responsibility of the IDIS-MT to direct the design, development, integration, and testing of the IDIS, keeping track of the current status of the IDIS and assigning specific IDIS-related tasks to sites as required. The IDIS-MT will meet at regular intervals and report to the Planck Principal Investigators directly through each instrument's data processing manager. The IDIS-MT will resolve conflicts among the IDIS parties. The IDIS-MT will liaise with the Planck Science Team via a coordination group (identical with the DPCCG once this is set up). This group will advise the IDIS-MT on all aspects of the IDIS development, maintenance, and operation (e.g. by suggesting guide-lines for granting access rights of individual users to the IDIS, etc.). Further detail on IDIS management is given in subsection 2.3.2 of the IDIS URD.

4.5 HFI-specific implementation

The previous sections describe the concepts of data reduction and analysis, give a global view of the overall architecture and explain the common tools HFI and LFI intend to develop. This section describes the HFI-specific implementation of these concepts. Most of the data processing steps are complex and iterative and, at the moment, the design still depends strongly on open scientific, technical and mathematical problems. On the other hand, the huge amount of data and the rather short proprietary period before the public release require an efficient and reliable pipeline with accurate and permanent control of the data flows. Thus, it is essential to define an organisational structure, a development philosophy, a management scheme and an implementation of data centres allowing us to find the best trade-off between the required creativity and the robustness of the data pipeline.

4.5.1 HFI-DPC General philosophy and implementation

As described in section 4.3, level 1 activities will be implemented in a common (HFI-LFI) data centre in Geneva. This level does not require any scientific processing of data, and through this choice Planck will benefit from the experience gained in the ISDC by the Swiss team. HFI data should thus be delivered by ESOC to the Geneva centre as part of the HFI DPC. For the processing of levels 2 and 3, it is very important to gather in the same location scientists with different skills (specialists in theory, modelling, algorithm development, manipulation of large amounts of data, ...) together with efficient and experienced technical staff. Owing to the distribution of this expertise within Europe and the US and the necessity to share the rather large costs of the DPC, the HFI consortium has chosen to develop, implement and operate levels 2 and 3 of the data processing in several different institutes. This approach, made possible by the good network connectivity already achieved between our institutes (the German, UK, French and Swiss academic networks are connected through the TEN-34 European inter-academic network), will increase our ability to give all HFI scientists access to the data and thus to reach high efficiency during the development phase and the operations by continuous improvement of the pipeline procedures. A requirement of this approach is that the management structure and the sharing of responsibilities must be clearly identified and the integrity and traceability of the data guaranteed at all times. We have therefore also decided to limit the number of centres involved in the major aspects of the pipeline development and operations at levels 2 and 3. Three Planck Analysis Centres will be set up: two in the UK (Cambridge and London) and one in France (Orsay). Each centre will be implemented in a single place (a host institute) but will give access (through a high-bandwidth network) to other laboratories within the local geographical area.
The London Planck Analysis Centre (LPAC) will take a substantial role in defining and developing software and procedures for levels 2 and 3. In close collaboration with LPAC, the Paris-Orsay-Saclay Data Analysis Centre (POSDAC) will be in charge of the development, integration and operation of

the level 2 pipeline. It will also be responsible for the archiving of level 2 products and will contribute to level 3 development. The Cambridge Planck Analysis Centre (CPAC) will develop, implement and run the pipeline at level 3 and will be responsible for the archiving of level 3 products. It will do so in close collaboration with LPAC and POSDAC. This scheme seems optimal given the goals and resources within the Consortium. In order to run efficiently, it requires close collaboration of the three centres within an HFI-DPC management structure under the responsibility of the DPC Manager. Even if it is too early to define precisely all the hardware needed to implement such an architecture, it is obvious that three network links are critical: a high-bandwidth link between Geneva and Orsay for continuous access to level 1 products (the French side will take charge of this); another link between Orsay and London for providing the UK with level 2 products and level 1 products (if needed); and an intra-UK link between London and Cambridge (the UK will take charge of this). IDIS will be the major tool for reaching a high level of integration, coherence and efficiency between our centres during the development and operational phases. MPA (Garching, Germany) will represent HFI in the IDIS Development Team and will be an IDIS development site. MPA will assist the other HFI IDIS user sites in implementing and running IDIS locally. As agreed with LFI, MPA will also take care of the level 4 activities for both instruments and the TP. Other countries or sites will be part of the HFI consortium: the Astrophysics Division of ESA will contribute to IDIS development and implementation, and DSRI (Copenhagen) will give TP scientists access to data and will contribute to S/W activities at levels 1, 2 and 3.
Other sites (Edinburgh, Strasbourg, Toulouse, IPAC, Rome) are likely to be involved and will produce software, give advice and provide additional data sets useful for cross-correlation with other experiments, for component separation and for the identification of sources. Other sites may well be added in the future.

4.5.2 DPC Development Methodology

Since a strong and continued interaction is needed between scientists and software system engineers, we need to define a development philosophy which can regulate these interactions, taking into account the different methods and techniques used by the different partners. A high degree of involvement of all scientists is also required, organised by the Survey Scientist and the Science Consortium Coordinator. As demonstrated in many previous scientific (and industrial) software projects, there cannot be too long a time between the definition of requirements and the implementation of the software itself. First, because we need to make unavoidable adjustments of requirements following scientific and technical improvements of the algorithms. Second, because too early an implementation will be too dependent on current computer hardware and will freeze technical evolution. And finally, because it is well known that long projects (especially software projects) reduce the motivation of the teams involved. Two methods can address these difficulties, at least partially. The first is at the heart of the IDIS philosophy: by defining components in an object-oriented method, we can concentrate our efforts on the functionality and on the interfaces of the components almost independently of their exact implementation. The lifetime of a component can then be much longer than operating system versions or commercial software releases. The second approach uses a method inspired by RAD (Rapid Application Development) technology: the final product (the pipeline) will be progressively built up through the realisation of consecutive preliminary versions. In a new version, functionalities are added or improved by analysing and assessing the performance and shortcomings of the previous version. The complete timescale for specifying, designing, implementing and testing a version cannot exceed 2 years.
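As a minimal illustration of the first method, a pipeline component can be specified as an abstract interface that fixes only the contract, so that implementations can be swapped between development cycles. The sketch below is purely illustrative: the component name, method signature and the naive binning algorithm are our assumptions, not the actual Planck pipeline design.

```python
from abc import ABC, abstractmethod

class MapMaker(ABC):
    """Hypothetical map-making component: only the interface is fixed,
    not the implementation, so versions can be exchanged between cycles."""

    @abstractmethod
    def make_map(self, time_ordered_data: list[float],
                 pointing: list[int], n_pixels: int) -> list[float]:
        """Project time-ordered samples onto a sky map of n_pixels pixels."""

class NaiveBinner(MapMaker):
    """Simplest conceivable implementation: average the samples
    falling in each pixel (no noise weighting, no destriping)."""

    def make_map(self, time_ordered_data, pointing, n_pixels):
        sums = [0.0] * n_pixels
        hits = [0] * n_pixels
        for sample, pix in zip(time_ordered_data, pointing):
            sums[pix] += sample
            hits[pix] += 1
        # Pixels never observed are returned as 0.0.
        return [s / h if h else 0.0 for s, h in zip(sums, hits)]
```

A more sophisticated map-maker developed in a later cycle would subclass the same interface and could replace `NaiveBinner` in the pipeline without touching the callers.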
Following the general schedule given by ESA for the Planck project, we plan to deliver three versions of the data pipeline: a breadboard model, a development model and a final model. The design and implementation of each model will be subject to review by external experts. R&D activities in 1998 should specify the exact contents of each model. Another very important point for the development philosophy is that we need to build successively more and more realistic simulations of the data (and housekeeping) flow, which will be used as inputs to

the pipeline for tests before launch. Efforts have to be made in modelling and in testing processing schemes (beam reconstruction, low-frequency noise removal, separation of components, ...). An important effort has already been made during the phase A study and in preparing this answer to the AO, especially in France (the RUMBA group led by François Bouchet) and in the UK (Cambridge). These activities are essential. An example of such an iterative data analysis, in which instrument and mission parameters must be derived self-consistently from the data themselves, is Hipparcos. This work must be pursued at least until the last version of the pipeline is implemented. These tasks have to be shared and coordinated by the DPC teams and the scientific groups in an appropriate structure such as specific working groups (TBD). The HFI Consortium wants to build as many commonalities as possible between the HFI and LFI teams. To avoid redundancy and waste of time and money, we intend to develop numerous common pieces of software. A detailed work breakdown, taking account of each development cycle, will organise the sharing of the work. We will integrate our efforts through frequent exchanges of students and post-docs, as well as through working groups (TBD) gathering scientists and system and software specialists.

Development cycles One can define, at least tentatively, the usual four steps for each cycle:

• Requirements: This step will be driven by the scientists involved in the DPC and in the scientific structures. It will be the moment to summarise the scientific and mathematical state of the art and to include conclusions drawn from the previous cycle. The experimental software and models for new algorithms developed by scientists should be transformed into specifications at this point. During this phase, test procedures and benchmarks shall also be produced, including the latest version of the data simulations.

• Design: Using the requirements and the general rules given by IDIS, the design phase, led by DPC S/W and system engineers, will produce an updated architecture, a new list of components to be implemented and an updated work breakdown to distribute tasks among the data centres.

• Implementation: The programmers will translate into S/W components all the elements detailed in the design phase. The standards defined by IDIS will be carefully applied to the programming and the associated documentation.

• Testing: This task should be shared between S/W specialists and scientists so that the new implementation can be validated by predefined procedures using test data sets. After testing and bug-fixing, the new version will be put under strict configuration control.

This 4-stage cycle is well adapted to clarifying the respective activities of scientists and engineers. A similar scheme has been applied in previous space projects (COBE, for example) and should be used (with a shorter timescale but with the same steps and the same strict configuration control) for improving the different levels of the pipeline during operations, when facing unpredictable behaviour of the instrument or unknown features in the data.

Quality assurance A quality assurance policy must be defined according to the development scheme described in the previous subsection and following the general rules for space projects. It should be noted once more that the involvement of scientists in the data processing pipeline is crucial for the success of the project. But most of them are not familiar with quality control procedures and documentation, and these rules are often seen as bureaucratic and burdensome constraints. The HFI Consortium intends to help scientists improve their programming methods. But the rules still have to be adapted to the role of the different pieces of software. For example, experimental prototyping has always been a way to jump-start development, and special provisions shall be made in the IDIS standards and setup to facilitate prototype coding.

However, all products (software, data and documentation) will be subject to standards and QA procedures. The QA procedures are still TBD but will be based on an existing system such as ESA's PSS-05 standards. More specific coding standards must be developed for each language used. Software to be included in the pipeline must not only meet QA and coding standards but will also need to adhere to interface standards for compatibility with other pipeline components. Good and reliable documentation of all IDIS software is essential for the usability of the system. When any code becomes part of the operational system it shall be subject to the full set of applicable documents.

4.5.3 DPC Operations

The functional scheme for the HFI DPC during operations follows the requirements set by ESA in the Science Implementation Requirements Document (SIRD). On a daily basis, two basic tasks will be performed:

• Instrument health and performance monitoring will be performed with Real-Time Analysis (RTA) software at Level 1 of the HFI DPC. This includes acquisition of all telemetry from the MOC (via FINDAS), the display and monitoring of relevant information on instrument status, limit checking, and the derivation and display of additional parameters. The network must allow all the HFI technical staff quasi-real-time access to all information concerning the overall health of the instrument.

• Instrument Quick-Look Analysis (QLA) will be performed at Level 2 of the HFI DPC. This includes acquisition of all data from Level 1 of the DPC, selection and display of science data, calculation and display of additional parameters, assessment of detector behaviour, pointing verification, identification of required changes in the instrument parameter settings, and trend analysis of the thermal behaviour of the P/L and the operations scenario. All TC will be prepared by the PI team and sent as requests to the MOC via FINDAS, by agreement with the PS.
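The limit checking performed by the RTA software can be sketched as follows. This is a minimal illustration only: the parameter names and limit values are entirely hypothetical, not actual HFI housekeeping parameters.

```python
# Hypothetical limit table: parameter name -> (lower, upper) allowed range.
# The names and values below are illustrative, not real HFI limits.
LIMITS = {
    "bolometer_plate_temp_K": (0.090, 0.110),
    "he4_cooler_temp_K":      (3.5, 4.5),
}

def check_limits(housekeeping: dict) -> list[str]:
    """Return a list of out-of-limit alarms for one housekeeping frame.

    Parameters without an entry in the limit table are accepted unchecked.
    """
    alarms = []
    for name, value in housekeeping.items():
        lo, hi = LIMITS.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alarms.append(f"{name}={value} outside [{lo}, {hi}]")
    return alarms
```

In an operational system the alarms would be displayed to the technical staff in quasi real time and logged for trend analysis; this sketch only shows the core comparison.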

Levels 2 and 3 of the HFI DPC will provide continuous data processing during operations. At regular intervals (TBD) intermediate data products will be exchanged (through IDIS) between Levels 2 and 3 of the HFI DPC and the corresponding Levels of the LFI DPC in order to provide comparison and cross-check. Level 4 will produce internal pre-release products. No direct interaction of DPC operations with spacecraft operations is foreseen. All communications with spacecraft operations (handled by the MOC) occur through the FINDAS interface under responsibility of the PS.

4.5.4 Post-operations phase

During the post-operations phase (i.e. after the completion of the second Planck survey), the MOC and Level 1 of the HFI DPC will rapidly become inactive, Levels 2 and 3 will gradually decrease their activity, and Level 4 will start (or increase) its own activity. As described in the SIRD, three sub-phases can be envisaged:

Data reduction phase - lasts at most 1 year after the end of operations. In this period, Level 2 processes all the data of the mission with the last implementation of the pipeline. After 6 months (TBC), the data products will at this stage contain only well-understood anomalies and/or peculiarities. Level 3 of the HFI DPC, through data exchange (through IDIS) with Levels 2 and 3 of the LFI DPC, will continue to implement the separation of astrophysical components, point source catalogues and other related products.

Proprietary period phase - lasts at most 1 year after the end of the data reduction phase. Level 2 of the HFI DPC may remain active for a last iteration of the Level 2 products, while Level 3 will finalise and iterate its production. Data products will be exchanged (through IDIS) with the corresponding Level of the LFI DPC for comparison and cross-checking. Level 4 will prepare procedures and documentation for the distribution of the final products from those delivered at the end of the data reduction phase. Sufficiently before the end of the proprietary period, Level 4 will prepare the final products to be released. Principal sites, secondary sites and institutes involved in the overall Planck collaboration will take part in the scientific activities preparing the Consortium's scientific publications.

Data products distribution and access phase - the final products of the mission are made available (through IDIS) to Level 4 of the LFI DPC (in common with HFI). Level 4 will finalise the distribution of the data products and of the associated documentation to the astronomical community via a convenient medium (TBD) and, where appropriate, through FINDAS.

4.5.5 Organisation structure

The highest level of HFI and LFI pipeline coordination is the DPCCG as defined in section xxx. The DPCM is a member of the IDIS-MT and MPA is a member of the IDIS-DT. For HFI-specific development, implementation and operation of the pipeline, an HFI Pipeline Executive Group (HFI-PEG) will gather representatives from POSDAC, LPAC and CPAC. It will be chaired by the DPCM, assisted by a Data Processing Technical Manager (DPTM), and will be composed of the scientific coordinators and technical managers of the three centres, the Survey Scientist and the Science Consortium Coordinator. Each site shall have an IDIS site manager. Working groups (TBD) at the Planck (HFI, LFI, TP) level or at the HFI level could be organised for specific tasks: coordination of principal and secondary sites, coordination of scientific simulation activities with the DPC, ...

4.5.6 Local implementation of the HFI major Data Processing Sites

DPC Management

The overall management of the DPC will be led by the DPCM (R. Gispert, IAS) under the responsibility of the HFI PI. He will be assisted by a Data Processing Technical Manager for coordinating all the technical activities within the HFI sites and with LFI. Industrial subcontractors could be called upon for specific support. IAS will provide the administrative and technical support for all these activities.

Geneva The activities of the Level 1 processing will be based on the experience gained in the INTEGRAL Science Data Centre (ISDC). The responsibilities of this centre for the INTEGRAL mission include those proposed here for Level 1. The centre is attached to the Geneva Observatory, which is the institute of astronomy of the University of Geneva. The Planck Level 1 data processing will be performed by a group of people independent of but closely connected to the ISDC, who will be involved in IDIS activities. The INTEGRAL system that is being developed now is based on several subsystems. Three of them are relevant here:

The data receipt, which receives all the spacecraft and instrument telemetry together with the associated auxiliary data (attitude and orbit) from the MOC in near real time.

The preprocessing, which decodes the telemetry and stores the data in appropriate formats (for example FITS) in the archive for use in all subsequent analysis.

The observation status monitoring, which is a set of tools to monitor the progress of the operations and data collection and to match this with the expected data based on planning information. This subsystem has an automatic and an interactive part.

We intend to use a very similar architecture and to modify the tools that will be available for INTEGRAL to match the needs of the Planck mission and data, following the concepts of the IDIS. The daily operations will also be based on the experience gained during the life of INTEGRAL, in particular as regards the daily monitoring of the housekeeping telemetry, which we expect to be very similar. It is not possible now to define the hardware that will be used in 2005 for the type of work envisaged here. The evolution of the products is such that any prediction would be highly inaccurate. We nonetheless foresee that each person will require the equivalent of a present-day workstation and that some central archiving facility will be needed (it is quite possible that by launch all data can be stored on a disk system). Planck data will be sent from the MOC to the ISDC location via a dedicated network connection, possibly using the tools in place for INTEGRAL. The bandwidth needed is still TBD, and will be defined in the Science Implementation Plan (SIP).
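The preprocessing step above (decoding telemetry into structured records ready for archiving) can be sketched as follows. The frame layout, field names and sizes in this sketch are entirely hypothetical and are not the actual Planck or INTEGRAL telemetry format.

```python
import struct

# Hypothetical fixed-size frame layout (NOT the real telemetry format):
# uint32 on-board time, uint16 APID, then 4 x int16 detector samples,
# all big-endian. Total frame size: 14 bytes.
FRAME = struct.Struct(">IH4h")

def decode_frame(raw: bytes) -> dict:
    """Unpack one raw telemetry frame into a keyed record.

    In a real preprocessing subsystem such records would then be written
    to archive files (for example FITS tables) for subsequent analysis.
    """
    obt, apid, *samples = FRAME.unpack(raw)
    return {"obt": obt, "apid": apid, "samples": samples}

# Example: build a synthetic frame, then decode it back.
raw = FRAME.pack(123456, 42, 10, -3, 7, 0)
record = decode_frame(raw)
```

The design choice illustrated here is the separation between the raw byte stream received from the MOC and the self-describing records stored in the archive, so that all later pipeline stages are insulated from the wire format.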

Orsay (POSDAC) Level 2 and Level 3 activities at Orsay will be carried out by the Paris-Orsay-Saclay Data Analysis Centre, involving several institutes:

• the Institut d'astrophysique spatiale (IAS, CNRS-INSU, Université Paris-Sud, Orsay)
• the Institut d'astrophysique de Paris (IAP, CNRS-INSU, Paris)
• the Laboratoire de l'Accélérateur Linéaire (LAL, CNRS-IN2P3, Université Paris-Sud, Orsay)
• Physique Corpusculaire et Cosmologie (PCC, CNRS-IN2P3, Collège de France, Paris)
• the Service d'Astrophysique (SAp) and Service de Physique des Particules (SPP) of the DAPNIA department (Commissariat à l'Énergie Atomique, CEA, Saclay)

These institutes are all involved in many ground-based astronomical projects (some of them survey programmes such as EROS, DENIS or MEGACAM) or space projects (including data processing responsibilities, as in SOHO, ISO, XMM, ...). They have decided to pool their expertise and their resources to set up a common data centre. IN2P3 and DAPNIA are also involved in large particle physics experiments at CERN as well as at other accelerator sites. This should provide the Planck consortia with unique experience in the field of managing large scientific collaborations and data processing structures. The most important hardware equipment (links towards Geneva and London, central computing resources, archiving robots) and (tele-)conferencing rooms will be hosted by IAS, which will provide the general logistics support. A significant part of the 500 square metres of the institute building currently occupied by the MEDOC (SOHO) and ISO centres will be progressively devoted to Planck. All the institutes are equipped with modern workstations and peripherals and are connected together by an academic medium-bandwidth network (2 Mbps). They are involved in an experimental network (ATM at 34 Mbps) which should be operational by the end of 1998. This network allows an effective sharing of resources and the testing of new groupware tools. POSDAC members can use national computational resources from CNES, IN2P3 or CNRS for heavy Planck modelling. Nevertheless, they intend to apply in 1998-99 for a medium-class computer (32 processors, 8 GB memory) for Planck developments. POSDAC will be advised by a scientific steering committee chaired by F.R. Bouchet (IAP). R. Ansari will be the scientific coordinator of POSDAC and will be assisted by a technical manager (TBD). A management structure involving representatives from all the institutes will be created.

London (LPAC) Level 2 analysis and Level 3 participation in London will be carried out at the London Planck Analysis Centre (LPAC), which will be based at Imperial College, but with a strong involvement in level 2 definition and design by Queen Mary and Westfield College. The groups at Imperial College are based in the Astrophysics and Theoretical Physics Groups of the Blackett Laboratory. The group at Queen Mary and Westfield College is in the Physics Department. Management will be by an LPAC Steering Group with representatives from the groups involved, chaired by M. Rowan-Robinson. There will be an LPAC Project Manager, with T. Sumner acting in this role for the moment. Support for staff for software development, for computers and for data networking is being sought from PPARC. Space and infrastructure support will be supplied by the Astrophysics Group in the Blackett Laboratory. Permanent staff involvement will be:

ICSTM

M. Rowan-Robinson: Professor of Astrophysics and Head of the Astrophysics Group, Planck-HFI co-I, UK Planck-HFI Data Analysis Coordinator
T. Sumner: Senior Lecturer, Planck-HFI co-I, acting Project Manager for HFI activity at ICSTM
S.J. Warren: Lecturer in Astrophysics, Planck-HFI Associate Scientist
A. Albrecht: Reader in Theoretical Physics, Planck-HFI Associate Scientist
R. Mann: Postdoctoral Research Associate in Astrophysics, Planck-HFI Associate Scientist

QMW

P.A.R. Ade: Professor of Physics, Planck-HFI co-I, UK Planck-HFI Instrument Scientist
M.J. Griffin: Reader in Physics, Planck-HFI co-I

Cambridge (CPAC)

Level 2 participation and Level 3 analysis at Cambridge will be carried out by the Cambridge Planck Analysis Centre, involving three university departments: the Mullard Radio Astronomy Observatory (MRAO), the Institute of Astronomy (IoA) and the Department of Applied Mathematics and Theoretical Physics (DAMTP). CPAC will be overseen by a steering committee, chaired by G. Efstathiou, which will include members from other institutions involved in Planck. A. Lasenby will be Director of CPAC. A science manager (currently A. Jones) will coordinate scientific expertise within the three institutions and a pipeline manager (F. van Leeuwen) will be responsible for delivering the pipeline analysis software. The staff effort of CPAC will comprise scientific staff associated with CMB interferometric programmes at Cambridge, principally the Cambridge Anisotropy Telescope, the Ryle Telescope and the Very Small Array, and with theoretical programmes at the IoA and DAMTP. Currently, this comprises 19 academic staff and postdoctoral fellows. The key staff involved in CPAC are as follows:

G.P. Efstathiou: Professor of Astronomy, Survey Scientist Planck HFI, UK Principal Applicant, Planck HFI.
A. Lasenby: Reader in Physics, Co-I Planck HFI, Director of CPAC.
N. Turok: Professor of Theoretical Physics, Associate Planck HFI.
F. van Leeuwen: Head of the Hipparcos group, RGO, Pipeline Manager CPAC.

Garching

Level 4 of the Planck Data Processing will be supported in Garching by the Max-Planck Institute for Astrophysics (MPA) for both the LFI and HFI consortia. The Garching data centre will support the two instruments equally. MPA will also take a substantial role in developing the IDIS for both instruments and is responsible for ensuring that it conforms to the needs of the HFI pipeline. From the point of view of infrastructure and hardware configuration, the equipment listed in the following is available or will be provided:

Network: assured average bandwidth to Trieste, Paris, Cambridge (requirements TBD), router.

Computers: multi-CPU server for running pipeline elements (a dedicated 2-CPU SUN UltraSPARC II workstation with 1 GByte core memory already in place), network servers, database server, data server, science analysis workstations (in total 18 IBM RS6000 workstations with up to 512 MBytes core memory, plus graphics workstations), X-terminals. Access to the Computer Centre of the Max-Planck Society in Garching (RZG), operating among others a 688-CPU Cray T3E supercomputer with 85 GBytes core memory (currently the ninth largest in the world), a 20-CPU IBM RS6000 SP2 workstation cluster, and several high-end graphics workstations.

Peripherals: disk storage (of order a TByte), CD-ROM juke-box, disk units, CD-ROM writer and reader, DAT units, tape robots, colour and grey-scale laser printers. Access to mass-storage devices at RZG with a capacity of over 130 TBytes.

Software: DBMS (client-server, development, mirroring licences), data processing environment (multi-CPU licence), software for peripherals, project-specific data processing software.

The MPA will provide the infrastructure and logistics to support Level 4 of the Planck Data Processing. About 200 square metres of the institute building currently occupied by MPE has been allocated for use by the Planck Level 4 facility, and will be available by the beginning of 1999. This area will be modified to accommodate an appropriate environment for the Planck-dedicated computer system and user terminals, together with offices for the MPA Planck staff. Further office space will be available to support guest scientists.

Astrophysics Division of ESA

The Division will contribute in approximately equal parts to both the LFI and HFI projects by developing part of the IDIS system for use by the DPCs and development teams. It is ideally placed to establish interfaces with FINDAS. Post-launch, SA/ESA will assist in the maintenance of IDIS and is able to host a data repository. SSD has a well-equipped computing infrastructure based mainly on SUN Solaris workstations. Office automation uses both PCs and Macintoshes. An in-house-developed Document Management System is available for Planck use. Presently the Division shares a 2 Mbps connection to NL-NET, which must be supplemented to support the Planck intranet of IDIS. Bulk data are stored on a multi-project, terabyte optical storage device in an AMASS hierarchical data management system dedicated to divisional use. This incorporates an AIT/DLT robot system, initially sized at 9 TByte (multiply expandable). The in-house DBMS is Oracle 7 (8). The computer system is supported by a full-time system administrator/manager and several part-time operators.

Summary of staffing The following two tables give a preliminary view (in FTE-years) of the technical staff needed for the HFI data processing activities and provided by the different partners (for non-permanent positions this amount depends on the exact funding by the space agencies). The total amount is comparable with other previous space surveys such as IRAS or COBE. For parts common with LFI, the amount is the total (not split into HFI and LFI parts).

Technical staff    98    99    00    01    02    03    04    05
DPC Magt          0.5   1.0   1.0   1.0   1.0   1.0   1.0   1.0
Geneva             -    1.0   1.0   2.0   2.0   3.0   4.0   5.0
POSDAC            2.5   3.0   4.5   6.5   8.8  11.3  12.0  12.8
LPAC              1.3   1.8   1.8   2.3   4.3   4.8   5.0   7.0
CPAC              3.0   3.0   4.0   4.0   4.0   4.0   4.5   5.5
Garching          5.0   5.0   8.0   8.0   8.0   8.0   9.0   9.0
SSD               1.5   2.5   2.5   2.5   2.5   2.5   2.5   2.5
Total            13.8  17.3  22.8  26.3  30.6  34.6  38.0  42.8

Table 4.1: Technical staff (permanent positions or contracts) during the development phase

The following two tables are less accurate and give only an indication of the amount of work of the scientists directly involved in the data processing. Each individual scientist is counted at a maximum of 50%.

Technical staff    06    07    08    09    10
DPC Magt          1.0   1.0   1.0   1.0   1.0
Geneva            5.0   5.0    -     -     -
POSDAC           12.3  11.3  10.8   6.8   4.5
LPAC              7.0   7.0   6.0   5.0   4.0
CPAC              5.5   4.5   2.5   2.5   1.5
Garching         10.0  10.0  10.0   9.0   6.0
SSD               2.5   2.5   2.5   2.5   2.5
Total            43.3  41.3  32.8  26.8  19.5

Table 4.2: Technical staff (permanent positions or contracts) during the operations and post-operations phases

Scientific support  98    99    00    01    02    03    04    05
DPC Magt           0.5   0.5   0.5   0.5   0.5   0.5   0.5   0.5
Geneva              -    0.5   0.5   0.5   0.5   0.5   0.5   0.5
POSDAC             1.8   2.3   2.3   2.5   3.8   3.8   3.8   3.8
LPAC               1.5   1.7   1.7   2.0   2.5   2.5   4.5   4.5
CPAC               2.0   2.0   3.0   3.5   3.5   3.5   4.5   4.5
Garching           2.0   2.0   2.0   2.0   2.0   2.0   3.0   3.0
SSD                0.5   0.5   0.5   0.5   0.5   0.5   0.5   0.5
Total              8.3   9.3  10.5  11.5  13.3  13.3  17.3  17.3

Table 4.3: Scientific support (academic positions or post-docs) during the development phase

Scientific support  06    07    08    09    10
DPC Magt           0.5   0.5   0.5   0.5   0.5
Geneva             0.5   0.5    -     -     -
POSDAC             4.0   4.0   4.3   5.0   5.0
LPAC               5.0   5.0   5.0   5.0   5.0
CPAC               5.0   5.0   5.0   5.0   5.0
Garching           3.0   3.0   3.0   3.0   3.0
SSD                0.5   0.5   0.5   0.5   0.5
Total             18.5  18.5  18.3  19.0  19.0

Table 4.4: Scientific support (academic positions or post-docs) during the operations and post-operations phases

Chapter 5

TEST AND CALIBRATION PLAN

5.1 Test plan

Most subsystems shall have their performance tested on preliminary models or breadboards in the different hardware-producing institutes: bolometers, horn feeds, optical bandpass filters, JFETs, readout electronics, the different cooling systems and the electrical units. The on-board software will be no exception and shall be tested first on its development bench, then with the Data Processing Unit and the experiment Electrical Ground Support Equipment; all these subsystems will be developed by an integrated team in Orsay. The HFI readout electronics scheme is already used in test facilities in several laboratories involved in the Planck HFI and in a ground-based experiment (DIABOLO). It is worth mentioning that the sorption cooler shall be an adaptation of already-flown hardware; a development model is being built at JPL. The 4K cooler is a copy of hardware qualified under ESA contract at RAL and MMS-UK. While the 0.1K dilution cooler is a flight version of existing ground hardware in use in different laboratories/observatories, a demonstration model of the HFI Focal Plane Unit and associated cooling systems is being prepared for qualification, performance and life tests at IAS, under a joint contract of ESA and CNES. It is foreseen to use off-the-shelf qualified helium storage and ancillary pneumatic components. A specific ground model of the dilution cooler shall be produced, including the filters and their heaters at the different levels, in order to perform long-duration cooling tests as a way to validate the gas used and the on-ground and in-flight cleaning procedures. Another model, more representative of the thermal performance of the flight unit, shall be used to validate the thermal mathematical model of the HFI Focal Plane Unit. An Electrical Model shall be built, whose functionality and interfaces will be fully tested from the hardware and software points of view before delivery.
A twin of the Electrical Model shall stay undelivered, to be integrated in the experiment Thermo-Optical Model (TOM). That model shall include a development model of the Focal Plane Unit, with its dilution cooler, which shall be qualified at unit level. Cooling performance shall be established with the use of an existing ground 4K cooler, and a simulation of the sorption cooler using pumped hydrogen from high-pressure bottles. After electrical integration of a reduced number of detection chains, functional tests, auto-compatibility tests and performance optimization shall be performed, progressively integrating "calibration" facility tests and validation. The TOM will use the experiment on-board software, the regular HFI EGSE and a spacecraft simulator, and will also be used to check all operational test procedures, telemetry and telecommand mnemonics and transfer-function databases, and commanding tools, and to establish a preliminary experiment performance database. Prior to delivery, the HFI Qualification Model (QM) shall be submitted to all unit-level environment and qualification tests in compliance with the latest version of the IID-A requirements and qualification levels. If successful qualification has been secured by the QM test programme, the HFI PFM shall be submitted to the test sequence agreed with the Project at acceptance levels. A full functional test of the whole cryogenic chain will be performed. Thanks to NASA funding already secured, the sorption cooler QM can be procured quite early. This allows this model enough time to come to IAS for this integrated test of all coolers with the HFI FPU. This compatibility test

shall be performed in the existing large vacuum chamber at IAS. A cryogenic screen just in front of the detectors will keep the background low enough for them to be operational. This test will allow the verification of the temperature performance of the different cryogenic stages. It will also allow checking for any effect of the cryogenic chain on the detector noise. This test is separate from the ground calibration test of the HFI FPU, which will be carried out in a cryostat with the dilution cooler operated from pressurised hydrogen tanks. Prior to delivery as Flight Spares, units refurbished from QM ones shall be submitted to a series of tests which shall be specified on a unit-by-unit basis, taking into account the activities performed with the units during the qualification, the after-delivery Assembly, Integration and Verification (AIV) and the refurbishment. In case of mechanical disassembly, functional verifications, bake-out, cleaning and simplified acceptance-level vibration test verifications will be defined.

5.2 Performance tests and calibration plan

5.2.1 Ground calibrations

The ground calibrations will provide the set of data needed to check and understand the instrument behaviour as an integrated system. These data will be obtained from measurements at the component, subsystem and integrated instrument levels.

5.2.2 Test and calibration facility

The test and calibration facility will provide:
- a functional test of the integrated HFI instrument under realistic optical fluxes and thermal background,
- means to check that selected component and subsystem characteristics are preserved in the integrated HFI instrument,
- means to study the optical response of the integrated instrument under various operating conditions and fluxes.
The operating thermal environment and optical sources will be set in a range covering those predicted in orbit. The absolute calibration will be carried out to about 10% precision.

5.2.3 In-flight calibration

The in-orbit calibration will provide the accuracy needed for the data processing. The goal is a 1% photometric accuracy. The FIRAS experiment onboard COBE has provided the best photometric calibration for extended sources in the millimetre/submillimetre wavelength range. The DMR experiment has measured the dipole component of the CMB at long wavelengths. The in-flight calibration of the HFI will rely on both. In the lowest frequency channels, the dipole signal will be detected with high signal-to-noise ratio in a few minutes and will allow a continuous relative monitoring of the calibration in time. At higher frequencies the crossing of the galactic disc will provide equivalent information. The final photometric calibration will be adjusted on the FIRAS calibration after proper averaging over large enough regions of the sky. The HFI-Planck experiment has no on-board calibrator. Data obtained from the ground tests and calibrations will be used, when necessary, to establish the in-flight calibration data: for example, the knowledge of the instrument spectral response is critical to the HFI broad-band calibration, and this spectral response cannot be measured in flight. The beam pattern will be measured in flight. The main beam will be measured using the outer planets. The far side lobes of the beam pattern will be recovered using the galactic emission, the sun and the earth. The feasibility of this method has been investigated by a numerical simulation (see chapter 2 and appendix C).
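The principle of the dipole-based relative calibration can be illustrated with a toy model: along each scan circle the CMB dipole appears as a sinusoid in spin phase, and fitting it to the time-ordered data recovers the detector gain. The sketch below is purely illustrative (not the HFI pipeline); the gain, noise level and sample count are invented numbers.

```python
import numpy as np

# CMB dipole amplitude in K (fixed reference value for this toy model)
T_DIPOLE = 3.35e-3

def fit_gain(phase, data):
    """Least-squares fit of offset + dipole sinusoid; returns the gain.

    Model: d = g * T_DIPOLE * cos(phase - phi0) + offset, linearised as
    a cos(phase) + b sin(phase) + c with amplitude sqrt(a^2 + b^2).
    """
    A = np.column_stack([np.cos(phase), np.sin(phase), np.ones_like(phase)])
    coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
    amplitude = np.hypot(coeffs[0], coeffs[1])
    return amplitude / T_DIPOLE

# Simulated scan circle: true gain 10 V/K, white noise on each sample
rng = np.random.default_rng(0)
phase = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
signal = 10.0 * T_DIPOLE * np.cos(phase - 0.3) + rng.normal(0, 1e-2, phase.size)
print(f"recovered gain: {fit_gain(phase, signal):.2f} V/K")
```

Because the dipole phase on the circle is known from the pointing, repeating this fit circle after circle gives the continuous relative monitoring of the calibration mentioned above.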

5.3 Calibration parameters

The parameters needed to understand the response and behaviour of the instrument are listed in Table 5.1. For each of them, the level at which they are studied (component, sub-system, HFI ground tests and calibrations, in-flight) is indicated. The HFI test and calibration facility will provide the

possibility to study the effect of the selected parameters on the response of the instrument. In order to keep the calibration phase short, part of the measurements obtained on sub-systems will not be reproduced when there is no simple way to get significant information from the instrument calibration phase. The test and calibration facility will also provide the environment needed to make an integrated test of the instrument and a way to check some of the compatibility issues (EMC, self and with LFI, thermal environment).

[Table 5.1: a matrix indicating, for each parameter, the level(s) at which it is characterised: component, sub-system, HFI test and calibration facility, and in-flight (Planck payload). The parameters are: beam on the sky; far side lobes; horn angular response; spectral response; time response; optical polarization; responsivity (electrical, optical); linearity; sensitivity to T0 (100 mK stage), to Tbackground (optical load) and to T4K (4K stage); absolute response; detector noise; crosstalk (electrical and optical); readout electronics (capacitance stability, gains, noise); and compatibility (particles, EMC, LFI, vibrations). Sub-system entries involve the horns, horn+telescope, filters, polarizers, detectors, a partly filled focal plane and the readout electronics; some facility measurements are made in the pupil plane and some entries are marked as 2nd priority or 1st order.]

Table 5.1: Test and calibration parameters

5.4 Test and calibration facility

5.4.1 Thermal environment and interfaces

The calibration facility will allow realistic operating conditions for the HFI. Since the HFI includes a cryogenic system between 50 K and 100 mK, thermal interfaces will be provided simulating the Planck payload environment. The thermal environment will consist of a 2 K enclosure containing the calibration sources, and heat loads on the 18 K shield of the HFI to simulate the thermal load of the LFI.

5.4.2 Optical fluxes

Background

The main difficulty in carrying out a representative calibration lies in the simulation of the background radiation that will be present during the mission. It is composed of two major components: the telescope emission (simulated by a 60 K Planck spectrum with an emissivity of 0.005 (λ/1 mm)^(-1/2)) and the CMB (2.73 K blackbody). The spectrum of this background is shown in figure 5.1 for a single-mode detector. It is relatively flat in the wavelength range of the HFI. The superposition of two sources spectrally equivalent to these components is needed to reproduce this background in the calibration facility. A submillimetre absorber, such as Eccosorb, already extensively used at cryogenic temperatures, will cover large areas of the 2 K shield to control straylight. The spectrum of this baffle is shown in figure 5.1. It is at least a factor of 3 below the expected background in flight.

[Figure 5.1: Background power falling on single-mode detectors (AΩ = λ² étendue) with a Δλ/λ = 1/4 bandwidth, showing the astrophysical, telescope and total in-orbit spectra together with a 2 K blackbody in front of the HFI, for wavelengths from 0.2 to 5 mm and powers from 10^-16 to 10^-11 W.]
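The orders of magnitude in figure 5.1 can be cross-checked with a rough estimate. The sketch below assumes a single-mode throughput AΩ = λ², a top-hat band Δν = ν/4 and the emissivity law quoted above; these idealisations are ours, not a facility specification.

```python
import numpy as np

# Physical constants (SI)
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_nu(nu, T):
    """Planck brightness B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def band_power(lam, T, emissivity=1.0):
    """Power in a single mode (A*Omega = lambda^2) over a delta_nu = nu/4 band."""
    nu = c / lam
    return emissivity * planck_nu(nu, T) * lam**2 * (nu / 4)

lam = 1e-3  # 1 mm wavelength
# Telescope: 60 K greybody with emissivity 0.005*(lambda/1mm)^-1/2
p_tel = band_power(lam, 60.0, emissivity=0.005 * (lam / 1e-3) ** -0.5)
# CMB: 2.73 K blackbody
p_cmb = band_power(lam, 2.73)
print(f"telescope: {p_tel:.2e} W, CMB: {p_cmb:.2e} W")
```

Both contributions come out in the 10^-13 W range at 1 mm, consistent with the total in-orbit curve of figure 5.1 sitting around 10^-13 to 10^-12 W.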

Continuum sources

To reproduce the combination of the background and of the source in the main beam incident on the detectors, we will develop an integrating sphere at 2.7 K fed by two types of sources (a 3 K to 60 K blackbody and a spectral source). In front of it, a neutral-density filter at 2.7 K will be placed if necessary to control the optical background. Modulated point sources at cryogenic temperatures (3 K and higher) will be used for angular response measurements.

Spectral sources

Although every component used in the filtering scheme will be individually measured, a spectral response measurement will be carried out at low resolution (0.1 cm^-1) from 1000 cm^-1 to 1 cm^-1 to check the integrated instrument. This will be provided by a Fourier Transform Spectrometer (FTS), operating at ambient temperature through an input port of the cryostat and the necessary neutral-density filters, illuminating all the detectors simultaneously. An absolute bolometer will provide the reference interferogram for the FTS measurements.
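The quoted resolution fixes the required stroke of the spectrometer. Using the standard unapodized FTS relation δσ ≈ 1/(2L), where L is the maximum optical path difference (this is a textbook estimate, not an HFI design figure):

```python
# Standard FTS relation: an unapodized resolution of delta_sigma wavenumbers
# requires a maximum optical path difference L = 1/(2 * delta_sigma).
delta_sigma = 0.1  # cm^-1, the low-resolution target quoted in the text
L_max = 1.0 / (2.0 * delta_sigma)  # maximum OPD in cm
print(f"required maximum OPD: {L_max:.0f} cm")
```

A few centimetres of optical path difference is therefore sufficient, which is comfortable for a room-temperature FTS feeding the cryostat port.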

Time modulation

Time modulation will be obtained by means of a mechanical chopper or an electrical modulation of the small blackbody sources.

Spatial coverage

The goal is not to measure the imaging characteristics of the HFI, but to be able to measure the main beam profile of the detectors in the pupil plane. A modulated point source will be moved in this plane, or in its image through the calibration optics.

5.4.3 Control and data

Control of the calibration operations and data acquisition will be handled by the HFI Ground Segment Equipment (GSE) and the calibration facility computer. Data processing will use a set of subroutines from the HFI data processing software plus specific subroutines. The DPU will allow the standard format but also a specific data acquisition mode and format to allow a large oversampling of selected channels. Calibration data consist of the data coming from the component, sub-system and calibration facility tests. Calibration data will be archived in the IDIS database, to be used during the in-flight calibrations.

[Figure 5.2: HFI calibration setup (about 20 beams represented); the drawing shows the FTS, the sources and the 50 K, 18 K and 2 K stages in a chamber of diameter 1 m and height 0.6 m.]

5.4.4 Optical setup

Figure 5.2 displays the calibration setup.

5.4.5 Cryostat

The ISOCAM calibration cryostat (internal plate Ø 1 m), located in the calibration facility at IAS, will be modified to accept the HFI and its calibration optics. Special care will be taken with the 2 K shield (light-proof, with a submillimetre absorber to control straylight).

5.4.6 Test and Calibration facility development plan

Actions will start at the beginning of the project to select appropriate materials and devices for absorbers, neutral densities, light-proof vacuum ports, small cryogenic modulated blackbody sources and integrating spheres in the millimetre and submillimetre range. The test and calibration facility will be made ready for operations with the TOM: this phase will be used to test both the TOM and the facility. Calibration of the QM and of the FM will then be carried out according to the general planning.

Chapter 6

SYSTEM LEVEL ASSEMBLY, INTEGRATION AND VERIFICATION

6.1 The HFI-LFI focal plane units interface

The two instruments share the 18-20K sorption coolers (main and redundant). The HFI FPU can be disconnected from the sorption coolers at the level of the heat exchanger. The HFI FPU will be mounted onto the 20K plate of the LFI FPU. A baseline mechanical and thermal interface has been defined. It will be iterated upon if needed during the detailed definition phase. The LFI needs a reference load for its radiometers. The optimum temperature for such a load is 4K. The LFI concept is a thermal coupling of a set of reference feedhorns looking at a reference ring on the HFI 4K box. The heat input of such a system should keep the induced temperature rise below 1 mK. The system implications (mechanical, thermal and for integration) of such a device have not been studied. Considering the improvement it would bring to the overall Planck mission over the use of a 20K reference load, the HFI will try to accommodate this device through discussions of a more detailed design with the LFI.

6.2 Contribution of the HFI Consortium to the Payload Module (in reply to section 1.3.2 of the AO)

The PLM of Planck Surveyor is an integrated system aimed at a single main objective for which it is optimized. As described earlier, the selected configuration of the two instruments in the Planck telescope focal plane is such that the HFI part of the common Focal Plane Unit is completely embedded in the LFI part of this unit. The LFI plans to integrate its focal plane unit together with the back-end, to test it and to keep it as a single unit, with a rigid mechanical structure linking the FPU to the back-end around the wave guides. This, added to the extremely high level of integration of the 300K-18K cryo harness and pipework with the Planck Payload Module hardware, leads to the conclusion that we must deliver the two parts of the FPU (from HFI and LFI) separately to the PPLM Contractor. (The LFI element already encompasses all temperature stages of the focal plane unit from 20 K to 280 K.) The first step of the mechanical PPLM integration is then to integrate the HFI 4 K focal plane box into the LFI FPU. This is in agreement with the LFI proposal for integration. The detailed procedure shall depend strongly on the detailed design of the two elements of the FPU, but obviously also on the detailed design of the PPLM. It shall also be extremely different in the case of the Merger configuration and of the Stand-alone one. In the Stand-alone configuration the telescope focal plane is perpendicular to the PPLM bench, and this situation offers much better access to perform the connections between the cryo harness and the rear of the FPU. In the case of the Merger, the FPU connections are located between the unit and the so-called 50K bench and have to be reached through it and the different thermal screens located in between. The LFI and HFI PIs propose to participate, in an integrated team with the PPLM Contractor, in the mechanical and

thermal design of this area of the Payload Module and simultaneously to prepare the integration and test plan. We do not propose to take any responsibility in the detailed design and procurement of the PPLM outside the HFI, but the PI team is also willing to join an integrated team with the PPLM Contractor for a participation in the integration and tests of the PPLM which could extend beyond the tasks strictly needed for the HFI elements.

6.3 Proposal for a contribution to the Planck telescope alignment

The main goals of the telescope characterisation proposed here are 1) to find the position of the best focus along the optical axis and adjust the secondary mirror relative to the primary dish, and 2) to control the image quality in the whole area that will be used for the HFI and LFI horns, to identify the off-axis aberrations and check that the specifications are met. This is a critical point considering the coupling of the telescope with the multi-beam detector horns. With a 10 µm surface accuracy for the mirrors, this characterisation has to be performed in the submillimetre range. During the last few years, CESR has developed experience and set-ups for the alignment of two submillimetre telescopes, both dominated by scattering in the visible range. The 2 m segmented primary mirror of the PRONAOS balloon-borne experiment was aligned with a 20 µm WFE, defining a procedure including measurements in both the visible and submillimetre wavelength ranges, described in (Ristorcelli et al., 1997). In addition, this method allowed the optical axis to be measured with respect to the payload reference axis (and later with reference to the star sensors). The characterisation of the ODIN Gregorian telescope (Ø=1.1 m, F/4.4) has been performed using a one-metre-diameter collimator operating in both the visible and submillimetre ranges. This collimator (Newtonian, F/D=5.3) is qualified in the visible, and its gold coating offers a reflectivity better than 95%. It is equipped with a motorized source system (mercury lamp) which, together with a submillimetre detector (bolometer at 4.2K) at the focus of the telescope to be characterized, allows the mapping of the point spread function. For instance, an analysis based on encircled-energy plots allowed the position of the ODIN telescope best focus to be determined with an accuracy of 0.5 mm.
We propose to use the same system for the alignment and characterization of the Planck telescope. The 1 metre aperture of the collimator gives an illumination of a large part of the useful area of the Planck primary dish. Using different passband filters in the detection system will allow the characterization to be performed at several Planck frequencies. As underlined in the phase A report, the estimate of the WFE error due to cool-down of the telescope (from 300K to 100K) does not dominate the overall error budget, so that the optical quality need only be verified at ambient temperature. The impact of the cool-down on the focal position will be determined from the geometrical changes in the relative positions of the two mirrors. These changes will be measured using both laser and theodolite measurements on reference points and/or optical cubes fixed on the telescope, at ambient temperature and during the passive cooling test. Then the new focal plane position will be computed using a numerical model. This characterisation can be done on both the QM and the FM telescope, or only on the QM. The proposed test requires supporting and moving the Planck telescope already integrated in its PPLM support structure. The availability of the proper MGSE is an important factor in deciding where such a test, if accepted, should take place.
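The encircled-energy analysis mentioned for ODIN can be sketched as follows. This is a toy illustration of the principle, not the CESR analysis code: for each focus-stage position the fraction of the mapped point-spread-function energy inside a fixed radius is computed, and the best focus is the position that maximizes it. The PSF model, widths and focus grid are invented.

```python
import numpy as np

def encircled_energy(psf, radius_pix):
    """Fraction of total PSF energy within radius_pix of the peak."""
    y0, x0 = np.unravel_index(np.argmax(psf), psf.shape)
    yy, xx = np.indices(psf.shape)
    mask = (yy - y0) ** 2 + (xx - x0) ** 2 <= radius_pix**2
    return psf[mask].sum() / psf.sum()

def gaussian_psf(n, sigma):
    """Toy symmetric Gaussian PSF on an n x n grid."""
    yy, xx = np.indices((n, n)) - n // 2
    return np.exp(-(yy**2 + xx**2) / (2 * sigma**2))

# Simulated through-focus sequence: the PSF broadens away from best focus
z = np.linspace(-5, 5, 21)          # focus-stage positions (arbitrary units)
z_best_true = 1.0                   # injected best-focus position
psf_width = 2.0 + 0.8 * np.abs(z - z_best_true)
ee = [encircled_energy(gaussian_psf(64, w), radius_pix=4) for w in psf_width]
z_best = z[int(np.argmax(ee))]
print(f"best focus found at z = {z_best:.1f}")
```

In practice the measured encircled-energy curve would be fitted (e.g. by a parabola near its maximum) rather than read off a grid, which is how sub-millimetre focus accuracies are reached.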

6.4 Assembly, Integration and Verification

All described post-delivery AIV activities shall take place at the PPLM and SVM contractor(s), or under their responsibility in the chosen facilities, and shall be performed in agreement with the procedures produced by the Prime Contractor or his deputy. Each procedure shall be approved by the ESA Project and the PIs more than TBD working days before the corresponding test starts. The HFI team shall support the overall AIV process with appropriate experts on a case-by-case basis, as identified in each procedure. The HFI Electrical Model shall be the first delivered model. This delivery shall also include an experiment database and one set of experiment Ground Support Equipment. The first step shall consist

of incoming inspection and tests of the deliverables:
- the HFI database identification shall be checked, then the database shall be installed into the proper bank, such as IDIS (TBC),
- the HFI hardware identification shall be checked, as well as the accompanying documentation, then its electrical interfaces shall be checked/measured as specified in prepared procedures,
- the HFI EGSE identification and related documentation shall be checked, then the compatibility between the experiment EGSE and the Central Check-out System shall be tested as specified in prepared procedures.
The HFI hardware shall then be integrated with the spacecraft dedicated Power Distribution Unit (PDU) and Remote Transmission Unit (RTU). This shall be followed by a set of HFI-dedicated functional tests covering the different experiment and telemetry modes. The ability to command the experiment through the CCS and spacecraft hardware shall also be tested. It may not be necessary to perform an exhaustive validation of all HFI telemetry and telecommand mnemonics and related parameters at this stage, as a good part of them shall anyway be used at the next test step. The following step is to perform an overall compatibility test of the spacecraft hardware and both LFI and HFI, running all together, in the different modes. The experiment databases shall then be fully validated, and the on-board software compatibility verified in all foreseen situations. It shall be verified that no telemetry packets are lost at emission in any mode or in any transition between modes. Conducted EMC tests could also be performed at this stage insofar as the present hardware permits. The described tests do not imply any special service from the spacecraft or the facilities used, but it is obvious that only the use of actual spacecraft hardware shall fully validate the compatibility of the "participating" units; on the contrary, the use of simulations shall not necessarily prove as definitive.
To be fully effective, the described tests shall be performed early enough to allow the incorporation of their results in the design/manufacturing of the following models. The Electrical demonstration Model shall be maintained integrated and functional in order to be used to test any software upgrades or hardware evolutions without interfering with the preparation of other models. It could also prove useful for Operations Ground Segment Equipment and software preparation. The HFI Qualification Model, as indicated by its name, shall be able to resist tests at qualification levels and thus to allow testing the Proto-Flight Model at acceptance levels. As the first model built to flight standards, the QM shall be able to sustain the whole set of qualification and environment tests as well as the tests performed to validate the experiment thermal and mechanical mathematical models. The first step after the usual incoming inspection and tests, already described for the EM, shall be to perform the mechanical integration of the different units, particularly the Focal Plane Unit, for which the position and alignment are critical for the success of the mission. This operation shall obviously be performed at ambient temperature but shall be done in such a way that the tolerances after launch and at lower temperatures are met. Performing this mechanical integration shall drive the validation of the corresponding procedures. Note that a cold test could be necessary to measure/verify the telescope defocus from the ambient situation; this does not necessarily involve the HFI as, in the given test conditions, the instrument by itself is unable to provide the required measurement.
In agreement with the IID-A specification, the HFI QM shall be submitted to:
- ambient-temperature functional tests, alone first, then together with LFI; symmetrically, HFI shall stay OFF during the LFI first functional test,
- an ambient-temperature EMC set of tests, involving actual SVM hardware,
- vacuum thermal balance tests, a full vacuum functional test and vacuum electromagnetic compatibility tests, as this is the only opportunity to have the HFI detectors fully functional; this test is an elaborate one and the constraints involved shall be addressed later in this chapter,
- the full set of mechanical tests, as described in IID-A Chapter 9, which supposes that all units of HFI are mounted on structural panels representative of the spacecraft mechanical properties,
- each of these critical tests shall be followed by functional tests in order to check the experiment health and performance after them.
Prior to the QM vacuum thermal balance test, some preparation of the experiment coolers is needed for tank gas filling and leak testing. The test chamber shall be equipped with pipework driving the helium used by the dilution cooler out of the tank, where it shall be pumped into a storage bottle

by the HFI Pneumatic GSE. One should also be aware of the fact that cooling down and warming up the HFI are time-consuming operations, of the order of 2 weeks each way. In order to have the HFI detectors work in proper conditions when cooled down, the infrared flux they receive must be kept close enough to the flight observation conditions, which means that the temperature of the telescope and its baffles should be less than TBD and a cryogenic screen located close to the FPU should be at a temperature of TBD. If, prior to vibration tests, it is necessary to have the HFI helium bottles pressurized up to full launch pressure, i.e. about 300 bars, the hazard(s) involved shall have to be addressed (TBC). The level of cleanliness for the whole QM AIV sequence is that of classical flight hardware, i.e. Class 100 000 or better, which is less of a constraint than for the PFM. It should be remembered that the HFI QM is potentially

flight hardware, as it will be returned to the PI to be refurbished/cleaned into Flight Spares, and should as such be treated with all the appropriate caution. The HFI Proto-Flight Model is expected to have more or less the same AIV programme as the QM, with however a few differences:
- the Cleanliness Plan shall fully apply to all PFM AIV activities,
- if successful to a sufficient degree of confidence at QM level, the vacuum thermal balance tests may not be repeated for the PFM, in which case they could be replaced by a TBD thermal vacuum test (at acceptance level, TBC) and a leak test,
- the mechanical tests may not include all tests performed at QM level (such as quasi-static tests, model validation tests, ...) and acceptance levels could be applied if qualification was considered obtained by QM testing.
HFI Flight Spare units shall be submitted to the usual incoming inspection at delivery. They shall then be stored, as agreed, in an indoor area in conditions identified in the accompanying documentation.

Chapter 7

FLIGHT OPERATIONS

7.1 HFI Flight Operation philosophy

The thermal stability of the Planck payload is particularly critical for the HFI because of its very high sensitivity combined with the rising spectrum of the thermal emission from the telescope, the baffles and all elements seen in the side lobes of the instrument. The sky scanning strategy is another critical item, as it fixes the degree of redundancy in the data and thus the ability to remove systematic effects and low-frequency noise, as demonstrated by the simulations. The capability to move the spin axis with respect to the antisolar direction is an essential tool to control the degree of redundancy. During the verification phase, tests will be performed on the thermal stability of the payload for different spin axis configurations. The monitoring of the temperature of the critical elements of the payload module must have an accuracy and a sampling rate such that temperature variations of the passively cooled parts can be measured with a sampling of 30 (TBC) or better in the azimuth angle of the spin, and an accuracy better than 0.2 mK after averaging over 100 rotations. These tests will allow a choice to be made between a few predefined sky scanning patterns. For normal operations, the depointings of the spin axis for a period of at least a week will be uplinked and stored onboard. Small corrections to this plan will be made daily to take into account the drift from the preplanned trajectory of the spin axis due to the accumulation of uncertainties in the depointing. Data lost in transmission to the ground (telemetry drops, ...) will be retrieved from the onboard memory the next day. The consequences of data lost at acquisition (instrument or satellite), which leaves holes in the sky coverage, and of any instrumental problem will be assessed within the next 3 (TBC) days by the relevant groups (instrument experts, science team, ...) and the sky scanning strategy will then be modified as required by the situation and uplinked.
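The 0.2 mK requirement after averaging over 100 rotations implies a much looser per-rotation requirement, since white noise averages down as the square root of the number of rotations. The short check below assumes white, uncorrelated noise per azimuth-bin sample (an idealisation; 1/f drifts would average down more slowly).

```python
import numpy as np

# Requirement: 0.2 mK accuracy per azimuth bin after averaging 100 rotations.
# For white noise, sigma_after = sigma_per_sample / sqrt(N), so the allowed
# per-rotation noise is sqrt(N) times larger than the final requirement.
sigma_target = 0.2e-3   # K, accuracy required after averaging
n_rotations = 100
sigma_per_sample = sigma_target * np.sqrt(n_rotations)
print(f"allowed per-rotation noise: {sigma_per_sample * 1e3:.1f} mK")

# Monte-Carlo confirmation of the sqrt(N) averaging
rng = np.random.default_rng(1)
samples = rng.normal(0.0, sigma_per_sample, (10000, n_rotations))
scatter = samples.mean(axis=1).std()
print(f"empirical scatter of 100-rotation means: {scatter * 1e3:.2f} mK")
```

So a thermometry chain delivering about 2 mK per rotation per azimuth bin would, under the white-noise assumption, meet the averaged requirement.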
No change in the sky scanning plan, other than the small adjustments to compensate drifts, will be made on a time scale shorter than 5 (TBC) days. A smooth pattern of spin axis repointing will be kept as much as possible to minimise systematic effects. Degraded modes of operation will be studied to cope with thermal, power and other problems which might be solved by operating only one instrument, or a fraction of an instrument, at a time. In such modes the full capability of the mission could be retrieved by observing different complements of frequency channels in successive 6-month sky coverages.

7.2 HFI Modes description

HFI Storage Mode

In Storage Mode the HFI is not powered; it cannot receive any command and it generates no data. The experiment dedicated non-op heaters are not powered, and neither are the so-called "spacecraft powered thermistors" installed on the HFI. The instrument can stay safely in this mode without limitation of duration when stored at ambient temperature and pressure. Strong limitations in duration exist for storage in other environmental conditions or during flight.

HFI Launch Mode

HFI Launch Mode is exactly the same as the Storage Mode, except for a few watts of power delivered to the experiment 4K cooling system. This launch power is used to maintain electromechanically

the cooler mechanisms for the duration of the vibrations generated during the propulsion phases of the launch. The need to provide such power during vibration tests performed on the ground is to be confirmed.

HFI Mode during Coast Phase

During the FIRST/Planck, or Planck alone, coast phase the HFI is exactly in the same condition as in

Storage Mode: no electric power and no commands are received, and no data are generated. During this flight phase the HFI temperature may vary quite rapidly; the impact of the different possible coast scenarios on the experiment thermal behaviour shall be assessed. During the coast phase, as in any other phase of flight, the spacecraft attitude shall not bring the sun into the Planck telescope forbidden volume.

HFI OFF Mode

The OFF Mode differs from the Storage Mode in that, thanks to the availability of spacecraft power, and although the experiment is not operating, the HFI-installed "non-op heaters", directly powered by the Service Module, can maintain the instrument indefinitely within acceptable temperature limits. In this mode, the spacecraft-powered thermistors provide redundant temperature measurements of the different HFI subsystems, which are included in a specific S/C housekeeping telemetry packet.

HFI Sleep Mode

The HFI Starting Procedure(s) shall bring the instrument from OFF Mode into a "Sleep Mode", which would more accurately be described as having only the experiment interface electronics awake. This transition shall be initiated using specific many-hour Near Real Time (NRT) sessions during which experiment data and commanding shall both be available. When in Sleep Mode the experiment is powered up. The overall dissipation is limited. Some experiment unit non-op heaters remain directly powered by the spacecraft SVM. The experiment receives all synchronisation signals and clocks except the one driving the reading of the HFI science telemetry packets. Commands are received, acknowledged, copied into the housekeeping telemetry and executed if compatible with the unit status in this mode. Only housekeeping telemetry packets are produced, read and downlinked.

HFI Cool-Down and Warm-Up transitions

The experiment cooling systems shall be started sequentially, from the warmest to the coldest, and the started system(s) must have reached a specified thermal regime before the following one is started. Initialisation of the different coolers shall be performed using specific many-hour Near Real Time (NRT) sessions during which experiment data and commanding shall both be available. In the nominal situation two sorption coolers "in cold redundancy" can be used; when both systems are available the LFI-driven unit shall be used. At this stage the powered sorption cooler dissipation shall be the nominal budgeted one, plus or minus a few percent. When appropriate thermal conditions are met, the HFI 4K and dilution coolers can be started. After TBD hours of operation the 4K cooler dissipation will reach the nominal budgeted dissipation. When the cooled hardware has reached low temperatures it is possible to switch the detector readout electronics ON. Powering them shall increase the experiment overall dissipation close to the nominal allocation. The HFI cool-down could take up to 2 to 3 weeks.

HFI Warm-Up Procedure

The HFI warm-up procedure consists in running the HFI cool-down procedure backwards, as will be described more precisely in the relevant procedures. The HFI warm-up could take slightly less time than the cool-down. Nominally, the experiment warm-up is performed at the end of the HFI-LFI sky surveys in the case of a FIRST and Planck joint mission, or at the end of the mission in a stand-alone option, as well as at the end of any cooling-system ground test. Should some pipe clog up, a local hardware warm-up shall also be initiated, on the ground or in flight. Such a procedure can be used during cool-down, in Standby Mode, or even in Observation Mode.

HFI Standby Mode

In Standby Mode HFI is ready to start observing without delay; all systems are operating, including the production of science telemetry packets, even if these are not read by the spacecraft OBDH. The overall experiment dissipation is the nominal allocated value; only housekeeping packets are downlinked, including reduced-rate detector measurements.

HFI Nominal Mode

HFI switches from Standby Mode to Nominal Mode as soon as science packets are read by the spacecraft. This change induces a negligible increase of power dissipation in the HFI Data Processing Unit. In Nominal Mode HFI uses the total allocated number of telemetry packets as well as the full allocated power. Some flexibility is foreseen in the selection of the data included in the HFI science telemetry packets, without changing the total amount of downlinked data. Depending on the observing parameters and the data included in the science packets, one can identify different HFI Nominal sub-modes: Detection Fine Tuning Modes, Far Lobes In-Flight Calibration Mode, Observation Mode, On-Board Software Dump Mode, Data Compression Scheme Validation Mode, etc. The resources needed by the experiment are the same for all of them. The spacecraft spinning motion and attitude variation programme(s) are compliant with nominal requirements. These sub-modes are described in IID-B Paragraph 4.6.

HFI Partner Mode

This mode could be used when HFI data is given a higher priority than LFI data, for example during sequential early commissioning of the experiments: with LFI in OFF Mode or Standby Mode, HFI could use the science packets allocated to both instruments. Possible applications are faster commissioning, faster validation of the data compression scheme, etc. Conversely, LFI could benefit from the equivalent of the HFI science packet allocation when HFI is OFF or in Standby.
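Taken together, the modes described above form a simple state machine. The sketch below illustrates how such a mode table might be encoded and guarded in software; the transition set is inferred from the descriptions in this section and is not the authoritative mode table of the IID-B.

```python
from enum import Enum, auto

class HFIMode(Enum):
    OFF = auto()
    SLEEP = auto()
    COOL_DOWN = auto()
    STANDBY = auto()
    NOMINAL = auto()
    PARTNER = auto()
    WARM_UP = auto()

# Allowed transitions (illustrative; inferred from the prose above).
ALLOWED = {
    HFIMode.OFF: {HFIMode.SLEEP},
    HFIMode.SLEEP: {HFIMode.COOL_DOWN, HFIMode.OFF},
    HFIMode.COOL_DOWN: {HFIMode.STANDBY, HFIMode.WARM_UP},
    HFIMode.STANDBY: {HFIMode.NOMINAL, HFIMode.PARTNER, HFIMode.WARM_UP},
    HFIMode.NOMINAL: {HFIMode.STANDBY, HFIMode.WARM_UP},
    HFIMode.PARTNER: {HFIMode.STANDBY},
    HFIMode.WARM_UP: {HFIMode.SLEEP},
}

def transition(current: HFIMode, target: HFIMode) -> HFIMode:
    """Return the new mode, refusing any transition not listed above."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Encoding the mode table explicitly lets the command validation step reject, for example, a direct OFF-to-Nominal command before it reaches the hardware.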

7.3 Data continuity

Systematic mapping of the sky requires, on the one hand, a very high continuity of experiment science telemetry collection. On the other hand, it is forbidden to off-point the spacecraft by more than 10 degrees from the nominal attitude, i.e. its Z axis pointed towards the centre of the solar disc. Another element is the fact that the data stored in the spacecraft Solid State Recorder during a day shall be downlinked within a "pass" that can be as short as 2 hours, which means that any interruption of data collection corresponds to a period of observation 10 times longer. Reprogramming the spacecraft observation plan daily to re-observe the small missing parts of the sky is a very constraining, inefficient and resource-consuming way to enforce full data collection; switching from one spacecraft attitude to another may also create thermal discontinuities, which is just what we want to avoid. If the spacecraft SSR has the capacity to store a few days of scientific data, a much more elegant way to solve the problem is to dump the missing data a second time, which implies being able to find the location of these data in the storage area. As experienced in other programmes, such as SOHO, a third transmission of the data is from time to time necessary to obtain the required continuity of data collection. It is obvious that the spacecraft design shall have a goal of no loss of data at transmission, including at transitions between different telemetry modes. Two-hour passes are also relatively short compared to the recovery duration after large storms or major antenna failures: one cannot eliminate the risk of losing a full daily telemetry session here and there if there is no fall-back antenna solution. This risk has to be addressed for spacecraft safety and commanding, but also from the data continuity point of view.
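The re-dump strategy sketched above requires the ground segment to identify which packet sequence numbers are missing, so that exactly those ranges can be requested again from the SSR on the next pass. A minimal illustration follows; the function name and interface are hypothetical, not the actual Planck ground-segment software.

```python
def missing_ranges(received, first, last):
    """Return the inclusive (start, end) ranges of packet sequence
    numbers, between first and last, that were not received on the
    ground and should therefore be re-dumped from the SSR.

    received -- iterable of sequence numbers actually received
    """
    gaps = []
    expected = first
    for seq in sorted(received):
        if seq > expected:
            # A hole in the sequence: everything up to seq-1 is missing.
            gaps.append((expected, seq - 1))
        expected = max(expected, seq + 1)
    if expected <= last:
        # Tail of the daily session never arrived.
        gaps.append((expected, last))
    return gaps
```

For example, `missing_ranges({0, 1, 2, 5, 6, 9}, 0, 10)` returns `[(3, 4), (7, 8), (10, 10)]`, i.e. the three ranges to request in the second (or third) transmission.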

Chapter 8

QUALIFICATION AND EXPERIENCE OF THE PI TEAM

8.1 Principal Investigator

J.L. Puget (Director, IAS) has over 20 years' experience in experimental and theoretical studies of the diffuse IR backgrounds. He led the group that discovered the cosmic infrared background using the COBE/FIRAS data and the related population of infrared galaxies at 175 µm with ISO. He has been involved in several balloon-borne projects in the far infrared and submillimetre range and in several studies of space projects to study the CMB (CIRES, CIRBS, CRYOSPIR, AELITA, SAMBA, FIRE), and he is a mission scientist for the Infrared Space Observatory of ESA.

8.2 Co-Investigators

P.A.R. Ade (Prof., QMW) heads the continuum receiver laboratory of the QMW Astrophysics Group, where teams of specialist scientists and engineers work on far infrared and submillimetre instrumentation for astronomy and atmospheric science. Prof. Ade has over 25 years of experience in astronomy and instrumentation, and has worked on a number of important ground-based and space-borne instrument projects. He pioneered the development of 3He-cooled submillimetre photometers, opening up this new spectral band for the astronomical community, and has developed unique capabilities in the development and manufacture of far infrared and submillimetre filters. He is a Co-Investigator on the ISO LWS instrument, for which the QMW group provided the detector subsystem, and on the Cassini CIRS instrument. He was responsible for the design, construction and delivery of the bolometer arrays for SCUBA. Prof. Ade is also a Co-I on the FIRST SPIRE instrument.
K. Bennett (Staff member, Space Science Department, ESTEC/ESA) is a specialist in space high-energy astrophysics and has had responsibilities in several projects: Project Scientist for the COS-B mission, Study Scientist for several ESA X-ray and gamma-ray missions, and Co-PI of COMPTEL on GRO. He has developed detectors for gamma-ray telescopes and is presently Deputy Head of the Astrophysics Division, responsible for Data Archives and Analysis.
A. Benoit (Directeur de Recherche, CRTBT) is a specialist in low temperature physics and solid state physics with many achievements in this field. He has developed a dilution refrigerator working in zero g, which was space qualified by CNES in .... He has played a key role in the development of the DIABOLO experiment, including the total power readout electronics.
P. de Bernardis (Associate Professor, Dipartimento di Fisica, Università La Sapienza). His field of research has been experimental cosmology, and specifically CMB measurements with balloon-borne experiments, for 18 years. He was PI of the ARGO balloon experiment (ASI-CNES, 1988-1993) and is PI of the BOOMERanG long-duration balloon experiment (ASI-NASA, 1993-1999). He is also involved with the MAXIMA balloon payload of UC Berkeley (NSBF-CFPA-NASA, 1995-1999). His competence in instrumentation covers cryogenics, cryogenic bolometers, low noise analogue electronics and mm-wave optics.


J.J. Bock (JPL) will play a key role in the development of the bolometric detectors, which he invented. He has ten years' experience in infrared optics, filters, detectors, space cryogenics and cooled optics. He has supplied similar detectors to PRONAOS, BOOMERANG, MAXIMA and SuZIE, and is a member of the US consortium working on bolometric detector arrays for FIRST.
F.R. Bouchet (Directeur de Recherche, IAP) will play a key role in the analysis and interpretation of data. His primary scientific interests are in the formation and evolution of structures in the universe, from galaxies to the largest observed ones. He has pioneered analytical approaches to the description of large-scale structures, novel analysis methods for galaxy catalogues, and numerical simulations of the evolution of defects and their signature on the CMB. He coordinated the detailed studies of the capabilities of COBRAS/SAMBA for the red book and for the HFI proposal. He will be the science coordinator of the HFI consortium.
T. Bradshaw (Head of Cryogenics Section, RAL) will lead the provision of the 4 Kelvin cooler system, including integration and test at system and integrated levels, as well as providing expertise to the overall thermal design. He has pioneered the development of coolers for space applications and has 20 years' experience in cryogenics and low temperature physics.
S.E. Church (Senior Research Fellow, Caltech, Pasadena) will play a key role in the development of the focal plane optics. She has 8 years of experience in IR and mm-wave optics, filters and detectors for ground and space applications. She is a member of the ISO LWS Science Team and is PI of the Polatron experiment.
F. Couchot (Chargé de Recherche, Labo. de l'Acc. Lin., Orsay) is an experimentalist used to particle physics detector building and data analysis. He worked mainly on the search for glueballs in radiative J/Ψ decays, and on electroweak physics at LEP, for which he was responsible for the construction of the low noise readout electronics of a silicon detector. He came to astrophysics in 1993 through MACHO searches in the EROS2 collaboration, for which he led the automation of a 39-inch telescope. He will be the manager of the HFI general electronics, and will take part in the work on Level 2 data processing.
T. Courvoisier (Prof. adj., University of Geneva, Integral Science Data Center, Versoix) participated in the EXOSAT mission as Duty Scientist: development of analysis software and participation in the orbit operations (1980-1984). Member of the ST-ECF: participation in the development of analysis software and simulations (1984-1988). Co-I on the XMM-RGS. Former chairman of the ESIS steering committee and member of the AWG. Research in the physics of active galactic nuclei. Since 1996 he has been PI of the ISDC, the centre which has the responsibility to receive, analyse, archive and distribute all the INTEGRAL data to the worldwide community.
G. Efstathiou (Professor of Astronomy, Institute of Astronomy, University of Cambridge). His main scientific interests are in the theory and origin of large-scale structure in the Universe. Together with Dick Bond, he developed many of the theoretical techniques that are used to model the primordial CMB anisotropies.
R. Emery (Head of Astrophysics Division, RAL) has extensive experience with astronomical instrumentation, particularly for IR and sub-mm observations, operated with ground-based, balloon-borne and space facilities. He has played a major role in a number of space programmes, most recently as Project Scientist for the ISO Long Wavelength Spectrometer, and will coordinate the work at RAL for Planck. His research interests include the interstellar medium and star formation.
M. Giard (Chargé de Recherche, Centre d'Etude Spatiale des Rayonnements). Experience in infrared and submillimetre astronomy, with scientific production in the fields of modelling of interstellar molecules and very small dust particles, study of the large-scale Galactic properties, and measurement of the S-Z effect and modelling of the relativistic S-Z effect. PI of the AROME balloon experiment. PRONAOS Co-I: integration, quick-look and data analysis software. Responsible at CESR for the bolometer readout developments.
R. Gispert (Directeur de Recherche, IAS) has been involved in pioneering ground-based and balloon-borne projects for detecting the diffuse galactic emission in the far infrared and submillimetre range (AGLAE, EMILIE). He led the data processing and software team for the IKS-VEGA space project and was involved in the management of French astronomy. He has experience in computer science

and in R&D activities in this field. He is a coordinator of the numerical simulations for Planck and will play the leading role in the HFI data processing.
M. Griffin (Professor, QMW). His research interests include instrumentation for far-infrared and submillimetre astronomy and Earth observation (specialising in bolometric and photoconductive detector systems), planetary astronomy and star formation. He has participated in several instrumentation projects for the James Clerk Maxwell Telescope (JCMT), including SCUBA, the 100-mK bolometer array receiver. He is a co-investigator on the ISO LWS project and was responsible for the specification and calibration of the LWS detectors, the cold readout electronics, the specification of the analogue signal chain and EMC modelling. He is the leader of the LWS Solar System astronomy team. Between 1992 and 1997, he was a member of ESA's main payload study teams for FIRST and Planck. He is also the PI of FIRST-SPIRE.
S. Hanany (Assistant Professor, Racah Institute of Physics, Hebrew University of Jerusalem) has extensive experience with submillimetre instrumentation for CMB experiments. Most recently he had a key role in the design and implementation of the optics, cryogenics, focal plane, bolometric detectors and electronics for the MAXIMA balloon-borne observatory.
J.-M. Lamarre (Directeur de Recherche, IAS) has 20 years of experience in IR and submm space astronomy. He contributed to the development of this field in France by leading or contributing to the conception of a number of experiments: EMILIE at the South Pole, the balloon-borne AROME experiment, and the space projects CRYOSPIR, AELITA, SAMBA, FIRE and FIRST. He was the PI of the imaging channel of IKS on the VEGA probe to comet Halley, and is the PI of SPM-PRONAOS, which measured the positive part of the S-Z effect. He played a major role in the birth of the Planck-HFI concept and design, and is the Instrument Scientist of this experiment.
A.E. Lange (Prof., Caltech) will lead the US participation in the HFI. He has developed instrumentation for observations of the Sunyaev-Zeldovich effect, far IR emission from primeval galaxies, the diffuse infrared backgrounds, and the spatial distribution and polarization of the cosmic microwave background. His group has pioneered many recent advances in space cryogenics and bolometric detector technology, and is the only group to date to have achieved sub-Kelvin temperatures in orbit.
A. Lasenby (Reader in Physics, MRAO, University of Cambridge). His main work has been on the development of ground-based CMB experiments and on new methods of data analysis. The switched-horn experiments in Tenerife grew out of his thesis project at Jodrell Bank, and since then he has been responsible for initiating both the CAT and VSA CMB interferometers. In the field of data analysis, his main areas have been the introduction of likelihood, Bayesian and maximum entropy methods into CMB research. More recently, he has been involved in deriving the first quantitative constraints on cosmological parameters using CMB data. On the submillimetre instrumentation side, he developed the phase retrieval holographic methods used for setting the surface of the James Clerk Maxwell Telescope in Hawaii.
J. Anthony Murphy (Lecturer, Maynooth, Ireland). His field of expertise is submm-wave optics. He worked first on an experimental submillimetre-wave array receiver for UKIRT and then, as a Post-doctoral Research Associate at the Cavendish Laboratory, Cambridge, on receiver development for the JCMT. In the last 10 years he has concentrated on the quasi-optical design of receiver systems for the JCMT, including SCUBA, and was also involved in the B-Band Array Study for the JCMT with SRON. He has been involved in 19 refereed journal publications in the area of submm-wave optics.
H.U. Norgaard-Nielsen (Danish Space Research Institute, Copenhagen) has been Project Scientist for the XSPECT/SODART mirror system. He is a scientific associate of the ISOPHOT Consortium and is involved in the CMB anisotropy balloon-borne TopHat experiment.
F. Pajot (Chargé de Recherche CNRS, IAS). Expertise in the design and calibration of infrared and submillimetre instruments. Participation in the first measurements of the submillimetre galactic emission from the ground (EMILIE, Hawaii and South Pole). Involved in the balloon programmes AROME and PRONAOS. Responsible for the calibration of the submillimetre photometer of PRONAOS.
I. Ristorcelli (Chargé de Recherche CNRS, Centre d'Etude Spatiale des Rayonnements). Responsible for the HFI contribution to the Planck telescope alignment and characterisation. Background experience in submillimetre astronomy. Co-I on PRONAOS: responsible for the alignment of the 2-metre multi-mirror.

Responsible for the alignment of the ODIN telescope. Scientific production in the field of interstellar dust and molecular processes: organometallic chemistry, very cold dust in galactic molecular clouds.
M. Rowan-Robinson (Professor of Astrophysics and Head of the Astrophysics Group, Blackett Laboratory, Imperial College) has been involved in submillimetre astronomy since the 1970s; he is the editor of the first book devoted to the field, 'Far Infrared Astronomy' (Pergamon, 1975). Member of the Science Team for the IRAS mission (1979-84), responsible for the completeness and reliability of the IRAS Point Source Catalog (PSC). Led the QDOT team in the ground-based follow-up of the IRAS PSC, resulting in several papers among the top 20 most cited astronomical papers in the world. PI of the European Large Area ISO Survey (ELAIS), the largest single ISO Open Time proposal, and of an EU TMR Network supporting it. PI of the UK SCUBA Submillimetre Survey Consortium, currently carrying out a survey at 850 and 450 µm with SCUBA on the JCMT.
T.J. Sumner (Senior Lecturer, Imperial College) is involved in the European Large Area ISO Survey at ICSTM. He is a Co-Investigator on the UK Dark Matter Experiment, and a Co-I on MiniSTEP and LISA. His field of expertise is the hot gas components of galaxies and soft X-ray modelling. He is the ICSTM Project Manager for the ICSTM contribution to ROSAT and the ICSTM pipeline manager for ELAIS.
L. Vigroux (Head of DAPNIA/Service d'Astrophysique). His theoretical work is related to the evolution of galaxies and clusters of galaxies. He was in charge of several instrumentation projects for ground-based and space observatories, in particular ISOCAM on board ISO. He is the co-PI of SPIRE, the FIRST bolometer instrument. He has served on several international committees, in particular the Scientific and Technical Committee of ESO and, presently, the Astronomy Working Group of ESA. Since 1993 he has been the head of the Service d'Astrophysique.
S. White (Managing Director of the Max-Planck-Institut für Astrophysik, Garching). His primary scientific interests are in the formation and evolution of galaxies and larger structures, which he has addressed through theoretical and computational work over more than two decades. He is one of the principal architects of the standard paradigm for structure formation. He leads the German side of the Virgo Consortium, an international collaboration carrying out supercomputer simulations of cosmological structure formation.

8.3 Key technical personnel

R. Ansari (Chargé de Recherche, LAL Orsay) started his work in particle physics with the search for supersymmetry in the CERN UA2 experiment. He has been involved since 1990 in the EROS project, which searches for MACHOs (Massive Compact Halo Objects) through gravitational micro-lensing. He is presently responsible for the data analysis software in EROS and will be the scientific coordinator of POSDAC for the HFI Level 2 and 3 data processing.
J. Charra (Ingénieur de Recherche, IAS), HFI Project Manager. He was Project Manager of successful rocket- and space-borne experiments in collaboration with different space agencies and laboratories from a number of different countries: PM and Instrument Scientist of the solar instrument on the first French spacecraft D2A and D2A-Polaire, KALOS OSO-I Cal rockets with the University of Colorado, NASA and the US Navy, PM and Operations Manager of the IKS infrared spectrometer of the dual VEGA Halley fly-by missions in collaboration with IKI (1986), and PM and Operations Manager of GOLF on SOHO in collaboration with Spain and different French institutes (1995).
B. Cougrand (Ingénieur de Recherche, IAS). Twenty-six years of experience in space astrophysics instrumentation management, design, fabrication and testing: System Engineer of the IKS experiment on the VEGA mission, Project Manager of the IPHIR experiment on the PHOBOS mission, System Engineer of the GOLF experiment on the SOHO mission and Project Manager of the European centre for operations and data of SOHO (MEDOC at IAS).
J.J. Fourmond (Ingénieur de Recherche, IAS) is an engineer of ENSCI (1988) experienced in designing and testing thermal systems: the GOLF experiment on the SOHO satellite (now in orbit), the ESEF experiment on the MIR station (in flight), and SAMBA. He works on thermal, mechanical and thermo-elastic simulations with I-DEAS. He is the I-DEAS specialist responsible for maintaining the thermal studies capability of IAS. He is Project Manager of the 0.1 K demonstrator.
H. Lagardère (Ingénieur de Recherche, IAS) is an engineer of ENSAM. He has been involved as

mechanical architect on nearly all the IAS space or balloon experiments for twenty-five years: IKS on VEGA, SPM-PRONAOS, and EIT and GOLF on SOHO.
J. Narbonne (Ingénieur d'Etude, CNRS). Responsible for the HFI electronics architecture. He has been responsible for the analogue and digital electronics of several balloon-borne and satellite experiments (AROME, PRONAOS, ODIN-AOS).
R. Pons (Ingénieur de Recherche, CNRS). Manager of the HFI readout electronics. Experience in developing and programming on-board computers and ground segment equipment for balloon-borne (PRONAOS) and satellite astronomy projects (GRANAT, PHOBOS, MARS 96, ODIN).
L.A. Wade (Principal Member of the Technical Staff, Low Temperature Science and Engineering Group, Advanced Technology Section, Jet Propulsion Laboratory) has over 19 years' experience in developing advanced cryogenic devices, 12 of which have been spent developing sorption cryocoolers. His research has resulted in over 30 refereed and invited publications. He was the cognizant engineer responsible for the BESTCE sorbent beds flown on STS-77. He has led or participated in the design or development of over 10 flight astrophysics missions (e.g. ExNPS) and Earth observing instruments (e.g. TES). Currently he is Principal Investigator of the NASA Vibration-Free Cryocooler Program, which is developing long-life refrigerators for cooling between 20 K and 1.5 K.

8.4 Institutes' unique capabilities and relevant experience

Cambridge University: Cambridge University hosts the largest number of research astronomers in

the UK. Astronomical research in Cambridge is carried out in three departments: (i) the Institute of Astronomy, world famous for research in theoretical and observational astronomy over a wide range of fields extending from theoretical cosmology to solar astrophysics; (ii) the Mullard Radio Astronomy Observatory, Department of Physics, one of the world pioneers of radio astronomy and particularly of interferometers. Members of MRAO have extensive experience of observations of the CMB, with successful experiments at Tenerife and the first interferometric measurements of CMB anisotropies; they are currently building a 14-element CMB interferometer (the Very Small Array); (iii) the Department of Applied Mathematics and Theoretical Physics, which has a world-leading theoretical cosmology group, including Stephen Hawking and Neil Turok, that has pioneered many of the key ideas concerning the early Universe.

DAPNIA: Within the Direction des Sciences de la Matière of the Commissariat à l'Energie Atomique,

the Department of Astrophysics, Particle Physics, Nuclear Physics and Associated Instrumentation (DAPNIA) has developed competences in observational cosmology, with people in both the Astrophysics and the Particle Physics divisions. Since its origin, the SAp has been heavily involved in space instrumentation in the fields of high energy astrophysics (COS-B, SIGMA, XMM, INTEGRAL), infrared (ISOCAM) and submillimetre (FIRST) astronomy. The particle physics groups are involved in several ground-based astroparticle instruments, and have long-standing expertise in heavy data analysis. Both teams will work together on the Planck HFI project, mainly within POSDAC.

CESR: CESR is a CNRS space astrophysics laboratory involved in major ESA space missions (ISO, XMM, CLUSTER, INTEGRAL). The "Cold Universe" group involved in Planck-HFI has background experience in ground-based (DIABOLO), balloon-borne (AGLAE, AROME, PRONAOS) and space-borne (ISO, ODIN) infrared and submillimetre astronomy. The team was responsible for building the analogue electronics of the LWS instrument on ISO. The laboratory made a major contribution to the PRONAOS programme and was also involved in ODIN through the alignment of the telescope.

IAS: A space science laboratory for 25 years, IAS has extensive experience in space astrophysics

instrumentation design, fabrication and testing. IAS has built UV, visible, IR and submillimetre space science instruments in cooperation with French and foreign laboratories and industries. These include the IR spectrometer IKS for the French/USSR VEGA mission (1986), three instruments (UV and visible) and the MEDOC operations and data analysis centre for the ESA/NASA SOHO

mission (1995), and the SPM submillimetre photometer for the 2-metre balloon-borne telescope PRONAOS. IAS has also performed cryogenic ground testing of equipment for the ISOCAM instrument flown on ISO (1995). In every case, the cold optics were developed by IAS. IAS managed the centre that supported the French users of the ISO spectrometers.

IAP: The Institut d'Astrophysique de Paris, a laboratory of the French CNRS, has invested heavily in cosmological studies, from both theoretical and observational points of view. The biggest group of theoretical astronomers within IAP has been involved for a number of years in both numerical and analytical developments in cosmology. IAP has been highly involved in the data reduction and science parts of the DENIS project (southern sky I, J, K ESO survey), where it has a major responsibility for the data pipeline and data analysis. IAP is also currently involved in the MEGACAM/TERAPIX experiment, with responsibility for all the data processing and storage.

Imperial College: The Astrophysics Group at Imperial College has a strong tradition of involvement in space astronomy missions and is one of the leading European groups in far infrared and submillimetre astronomy. The Group was involved in the design and construction of the Wide Field Camera on ROSAT and of the PHOT-S instrument on ISO. Members of the Group had a strong involvement in the IRAS mission and its ground-based follow-up. The Group also has a strong activity in the theoretical modelling of far infrared and submillimetre sources and in the modelling of source counts and background radiation in this waveband.

IN2P3: The French Institute for Nuclear and Particle Physics has been a department of the CNRS for more than 25 years. IN2P3 plays a major role in many international collaborations performing experiments on accelerators, detector R&D and construction, data acquisition and analysis. In the field of astroparticle physics, IN2P3 is involved in neutrino mass and oscillation measurements, solar neutrino detection, ground-based cosmic ray detection and gravitational wave detection. More specifically, in the field of observational cosmology, IN2P3 is already a major partner in baryonic and non-baryonic dark matter searches, and in cosmological parameter measurements using type I supernovae. IN2P3 will participate in the promising field of the CMB through the Planck mission.

JPL/Caltech: The Jet Propulsion Laboratory is recognized as a world leader in the development of space hardware. The Micro-Devices Laboratory within JPL has supplied bolometric detectors of similar construction to the HFI detectors to PRONAOS, BOOMERANG, MAXIMA, and SuZIE, and is currently developing large-format monolithic arrays of bolometers for FIRST. The Observational Cosmology group at Caltech has worked closely with the JPL MDL to test and characterize bolometric detectors for all of these missions.

QMW London, UK: The QMW Astrophysics Group has been at the forefront of experimental and observational astronomy in the infrared-millimetre range for the last two decades. Its experimental research programme includes building astronomical instruments and developing new instrumentation and techniques. QMW pioneered the development of 3He-cooled bolometers for submillimetre observations, opening up this waveband for the astronomical community, and is recognised as the world-leading group in the area of FIR/submillimetre filters and quasi-optical components. It has participated in many major ground-based and satellite instrumentation projects. It provided design expertise, feed-optics test facilities and the detector arrays and filters for SCUBA, and was responsible for the detector subsystem of the ISO LWS instrument. The group has also provided instrumentation for a number of other space projects including MARS96 and Cassini. Prof. Peter Clegg of QMW is Principal Investigator of the ISO LWS.

RAL: RAL is one of the UK Research Councils' laboratories, providing a centre for space science activities in its Space Science Department and also technology development in its Applied Science Department. The Laboratory has extensive experience in space instrumentation and engineering, including wide and successful participation in ESA and NASA space programmes involving many types of project, ranging from Earth observation to orbiting astronomical satellites and probes

into the Solar System. This expertise relates particularly to infrared/sub-millimetre programmes where, for example, the Laboratory played a major role in the IRAS project, was involved with three of the ISO focal plane instruments, including provision of the AIV and calibration of the LWS, and has built Earth observing instruments such as ATSR. For technology development relating to Planck, RAL is a world centre for closed-cycle cooler technology, having successfully built and flown coolers for three space missions, and has participated actively in the related ESA cooler programmes. In addition, the Laboratory shares the strong scientific interest in the projects through its own related programmes of research.

University La Sapienza/Experimental Cosmology Group: a university laboratory with 25 years' experience in the development of balloon experiments for CMB research (ULISSE, ARGO, BOOMERanG, TIR). The laboratory features cryogenic, mechanical, electronics and mm-wave optics facilities for the preparation, test and qualification of CMB instruments. The laboratory has built and is running the Testa Grigia observatory in the Alps (at 3500 m a.s.l.), a 2.6 m mm-wave telescope devoted to CMB research, which is open for field testing of CMB instruments.

Chapter 9

ORGANISATION AND MANAGEMENT STRUCTURE

9.1 General Management Structure

Figure 9.1 gives the higher-level organigram of the Planck HFI management. The PI is the single formal interface of the HFI team with the ESA Project; however, specific instrument management or technical issues may be addressed through the HFI Project Manager, instrument specification questions through the Instrument Scientist, data processing issues through the DPC Manager, and scientific ones through the Survey Scientist and the HFI science coordinator, all identified in the figure 9.1 organigram, but always with copy to the PI. The names and addresses of the key management persons can be found on the cover page of this document.

[Figure 9.1: Planck HFI management organigram. Boxes: ESA; Principal Investigator J.-L. Puget; Instrument Scientist J.-M. Lamarre (UK Deputy: P.A.R. Ade, US Deputy: S. Church); Project and Instrument Management J. Charra; Survey Scientist G.P. Efstathiou; DPC Management R. Gispert; Science Coordination F.R. Bouchet; Project Administration M.-T. Dorin-Gerald. Activity boxes cover HFI coordination activities, product assurance, system activities, science management, project secretariat and project control, system engineering, IDIS, key programme preparation, financial activities, development control, instrument modelisation S/W, infrastructures, scientific exploitation coordination, documentation, configuration control, instrument design and manufacture, pipeline implementation, scientific simulations, travels, interfaces with LFI and with the S/C, instrument AIT, operations, ground calibration, post-operations, payload-level activities, simulation and prototyping, and experiment AIV on the S/C.]

The Planck HFI PI is responsible for the specification of the instrument performances, its procurement as well as that of the dedicated Data Processing Center, flight operations and, finally, the delivery of scientific products to the international community. The HFI PI, or his deputy, is responsible for all Public Relations (PR) activities related to the aims, procurement and results of the experiment. The Instrument Scientist (referred to as Instrument Manager in the SMP) is responsible for producing the experiment scientific specifications and for monitoring how they are translated in terms of instrument performance specifications, as well as for assessing the impact of any change of the spacecraft or instrument performances on the experiment's scientific return. The HFI Project Manager is responsible for the procurement and delivery, within the agreed delay, of the instrument. He is in charge of the general coordination of the different hardware/software producing teams and of the dissemination of Planck Project technical and programmatic information towards the different HFI consortium institutes. He shall ensure that agreed PA rules are enforced by the different teams producing flight and HFI-instrument-dedicated ground hardware and software. The Project Manager shall report progress and any technical or development issue to the Project in compliance with the IID-A rules. He is responsible for the production and delivery in due time of all required instrument-related documentation or data. The PM shall monitor any change in the spacecraft's technical performances, or in the knowledge of them, analyse the possible impact on HFI and, where appropriate, report to the PI and the IS. Conversely, the HFI Project Manager shall bring any experiment-specific requirement to the attention of the ESA Project, make it clear and check that it was understood, and if applicable verify that agreed adequate measures are undertaken. The HFI Data Processing Center Manager is responsible for the implementation of the pipeline leading in due time to the delivery of the science and data products to the community. He is in charge of the coordination of the different centers contributing to the pipeline and of the IDIS implementation. He represents the HFI in the management structures of the common elements of the Planck data processing. He is in charge of disseminating to the centers the relevant information from the Planck project team. The HFI Science Coordinator will organise the contributions of the HFI consortium to the scientific teams which will be defined following the AO for the core programme. He will play a key role, in close connection with the PI, as the HFI representative in the Science Collaboration International Coordination common to the HFI, LFI and Telescope teams, which will be in charge of preparing the Core Program to be submitted to the Science Team through the organisation of workshops to discuss and coordinate the proposals from different groups.

9.2 Instrument Design and Procurement organisation

9.2.1 Instrument System Design and Analysis

[Figure 9.2: HFI Engineering Group. System Engineering (architects coordination): B. Cougrand; Scientific Specifications: J.-M. Lamarre; Spacecraft Interfaces: J. Charra; Optics Architect: Y. Longval; Mechanics Architect: H. Lagardere; Thermal Overall Architecture: J.-J. Fourmond; Electronics Architect: J. Narbonne; On-board S/W Architect: F. Couchot; Cold Optics and Bolometers Architect: P.A.R. Ade; Cold Electronics Architect: P. de Bernardis; Dilution Cooler Architect: A. Benoît; Closed-Cycle Coolers Architect: T. Bradshaw; AIT-AIV: B. Cougrand; Calibration Architect: F. Pajot; Interfaces with IDIS & DPC: R. Gispert; Interfaces with Operations: J. Charra; Product Assurance: TBD.]

The instrument scientific specifications having been issued by the experiment scientific team, the technical performances and interfaces with the spacecraft, as well as the instrument system studies, are led by the HFI System Engineering Group, formally chaired by the Project Manager or, in practice, by the System Engineer. The System Engineering Group is composed of "Architects" bringing their expertise in one or several specific fields to the analysis of the instrument overall performances, the definition of the different subsystems and their interrelations, and the repartition of the different resource allocations from a system point of view. During System Engineering Group meetings, or any other form of activity, the architects do not represent any institute or organisation. The HFI PI, Instrument Scientist, PA Manager, Calibration Manager, Data Processing Centers Manager, and AIT and AIV Manager are invited to System Engineering Group meetings and receive the minutes. Any other expert may be invited to participate; this invitation is permanently extended to agreed LFI representative(s). Figure 9.2 gives the formal composition of the HFI System Engineering Group. For specific issues, reduced meetings may be called.

9.2.2 Instrument Development and Procurement Control

As already mentioned, the HFI Project Manager shall keep the ESA Project informed of the instrument development and procurement status. For this purpose he shall analyse and synthesise the information provided by the different HFI institutes. In each HFI instrument hardware or software institute there is a Local Manager (LM), see figure 9.3. Local Managers are responsible for the procurement and delivery, within the agreed delay, of subsystems of the instrument and associated ground equipment, as per the experiment Work Breakdown Structure. The LM is responsible for establishing his institute's needs in terms of manpower, funding and access to local facilities, and for reporting to the HFI PI and PM on the situation in these fields. He is in charge of the general coordination of his institute's different hardware/software producing teams, and of the dissemination of Planck Project technical and programmatic information towards them and their contractors. He shall ensure that agreed PA rules are enforced by all teams producing flight and HFI-instrument-dedicated ground hardware and software. The LM shall report progress and any technical or development issue to the Project Manager on a monthly basis, or more frequently should problems arise. He is responsible for the production and delivery in due time of all agreed subsystem-related documentation or data. The LM shall monitor any change in the subsystems' technical performances, analyse the possible impact on HFI and in all cases report to the PM and the PI. Conversely, Local Managers shall point out any subsystem-specific new requirement to the attention of the Project Manager and System Engineer, make it clear and check that it was understood, and if applicable verify that agreed adequate measures are undertaken.

[Figure 9.3: HFI Local Managers. HFI Coordination Activities, Local Managers Coordination: J. Charra. Caltech: J. Bock; CESR: R. Pons; CRTBT: A. Benoit; HUJI: S. Hanany; IAS: J.-J. Fourmond; IN2P3: F. Couchot; JPL: L. Wade; Roma University: P. de Bernardis; RAL: T. Bradshaw; QMWC: P. Ade.]

9.3 Data Processing Center organisation

The data processing management structure is described in detail in Chapter 4: Section 4.3.2 for the HFI/LFI overall Planck organisation, Section 4.4.3 for the IDIS organisation and Section 4.5.5 for the HFI-specific organisation structure.

9.4 Operation after launch organisation

An Instrument Monitoring Group will be set up under the Instrument Scientist to monitor the daily reports issued by the Level 1 data processing center and the housekeeping data introduced daily into IDIS. The Instrument Scientist will be in charge of informing the Project Scientist and the PI of any anomaly requiring immediate action. The Instrument Scientist will also produce monthly reports and trend analyses on the status of the instrument. The DPC Manager will produce a monthly report on the data processing work. He will report immediately to the Project Scientist and to the PI any anomaly detected in the data which could require immediate action on the operations.

9.5 Science organisation

All scientists in the HFI collaboration (co-investigators and scientific associates) will be, together with scientists from the LFI and Telescope Team consortia, part of the Planck Science Collaboration. This group will meet during the development of the mission, as well as during the operation and post-operation phases, to discuss scientific issues in a series of workshops. Topics of these workshops will be, besides general scientific issues directly related to Planck, discussions of the Core Program for which ESA intends to issue an AO. The goal of these workshops will be to hear presentations of proposals for scientific work on the data during the proprietary period, and to coordinate as much as possible the groups wishing to work on the same subjects. An International Science Collaboration Coordination (ISCC) will be set up; it will be composed of Co-Is of the HFI, LFI and Telescope teams, taking into account a balance in effort and nationalities, and chaired alternately by the two instrument PIs or their nominated substitutes. This group will be in charge of the organisation of the workshops and of the preparation of the Core Program to be submitted to the Science Team. The work to be done by the ISCC will include proposals for the nomination of team leaders per subject, the definition of data rights, publication policy, etc. Final approval of these rests with the Science Team.

9.6 Relationship with LFI and Telescope Team

Coordination with the LFI and Telescope teams is required in three areas: science, data processing, and integration and tests at payload level. The first two have been addressed extensively in Sections 4.3.2, 6.1, 6.2, 9.1 and 9.5. For the common work to be done at the integrated payload level, a coordination group should be set up by the project following discussions on the contributions of the PI groups.

9.7 Communications, publicity agreement

The HFI consortium has no objection to the draft version (15 December 1997) of the Publicity Agreement for Scientists Involved in ESA Science Projects, although the PI wishes that in Section 4 (PI and Co-Is agreement) the fourth paragraph include a reciprocal commitment from ESA to mention the national agencies' contribution in any autonomous PR activity. The Planck mission will bring fundamental answers to questions about the structure of the Universe which are of great interest to the general public. These questions are commonly considered archetypes of what fundamental research is about. There have been many examples of the high level of interest of the general public in cosmology over the years. Furthermore, the media are well aware of this interest and in strong demand of input from scientists on this subject. This is exemplified by the recent requests from popular science magazines for articles describing the Planck capabilities for cosmology. This mission is considered world-wide as the major cosmological experiment of the coming decade. It is thus very suitable for PR activities during its development and operations. The maps of the primordial inhomogeneities of the Cosmic Microwave Background, as well as the all-sky maps of other astrophysical processes, will provide visual inputs for such activities. In terms of the technology used by the HFI, the very-low-temperature detectors and the related cryogenic techniques can also be of interest for PR activities. The PI technical team intends to work in partnership with the main industrial subcontractors, taking advantage of the unique character of a scientific payload which requires new technologies, as a way to develop new capabilities both in industry and in scientific institutes. Many of the institutes involved in the HFI consortium have developed over the years links to science museums, popular science magazines, and science journalists in major newspapers or television. We intend to use these systematically, in agreement with ESA, to help in the organisation of PR activities related to Planck.

Bibliography

Aghanim, N., Désert, F.-X., Puget, J.-L., & Gispert, R. 1996, A&A, 311, 1–11
Aghanim, N., De Luca, A., Bouchet, F. R., Gispert, R., & Puget, J. L. 1997a, A&A, 325, 9–18
Aghanim, N., Prunet, S., Forni, O., & Bouchet, F. R. 1997b, submitted to A&A
Albrecht, A., Battye, R. A., & Robinson, J. 1997, Phys. Rev. Lett., 79, 4736
Benoit, A. 1997, Proceedings of the ESA Symposium, ESA SP-400
Bhatia, R., Benoit, A., Bock, J. J., Griffin, M. J., & Mason, P. V. 1998, 10th International Cryocooler Conference, May 1998, Monterey, California
Birkinshaw, M., & Gull, S. F. 1983, Nature, 302, 315–317
Blain, A. W., & Longair, M. S. 1993, MNRAS, 264, 509
Bond, J. R. 1995, Phys. Rev. Lett., 74, 4369
Bond, J. R., & Efstathiou, G. 1987, MNRAS, 226, 655–687
Bond, J. R., Efstathiou, G., & Tegmark, M. 1997, MNRAS, 291, L33–L41
Bouchet, F. R., & Gispert, R. 1998a, in preparation
Bouchet, F. R., & Gispert, R. 1998b, in preparation
Bouchet, F. R., Gispert, R., & Puget, J.-L. 1995. The mm/sub-mm Foregrounds and Future CMB Space Missions. In Unveiling the Cosmic Infrared Background, AIP Conference Proceedings 348, Baltimore, Maryland, USA, E. Dwek, editor, pages 255–268
Boulanger, F., Abergel, A., Bernard, J. P., Burton, W. B., Desert, F. X., Hartmann, D., Lagache, G., & Puget, J. L. 1996, A&A, 312, 256–262
Burton, W. B., & Hartmann, D. 1994, Astrophys. and Space Sc., 217, 189–193
Caldwell, R. R., Dave, R., & Steinhardt, P. J. 1997, preprint astro-ph/9708069
Cheng, E. S., Cottingham, D. A., Fixsen, D. J., Inman, C. A., Kowitt, M. S., Meyer, S. S., Page, L. A., Puchalla, J. L., & Silverberg, R. F. 1994, Ap. J. Lett., 422, L37–L40
Clements, D. L., Puget, J.-L., Lagache, G., Reach, W., Gispert, R., Dole, H., Cesarsky, C., Elbaz, D., Aussel, H., Bouchet, F., Guiderdoni, B., Omont, A., Desert, F.-X., & Franceschini, A. 1997, Bull. American Astron. Soc., 191, #63.05
Crittenden, J. R., et al. 1993, Phys. Rev. Lett., 71, 324
Crittenden, J. R., & Turok, N. 1995, Phys. Rev. Lett., 75, 2642
Davis, R. L., et al. 1992, Phys. Rev. Lett., 69, 1856
De Bernardis, P., Aquilini, E., Boscaleri, A., De Petris, M., D'Andreta, G., Gervasi, M., Kreysa, E., Martinis, L., Masi, S., Palumbo, P., & Scaramuzzi, F. 1994, Ap. J. Lett., 422, L33–L36
De Oliveira-Costa, A., Kogut, A., Devlin, M. J., Netterfield, C. B., Page, L. A., & Wollack, E. J. 1997, Ap. J. Lett., 482, L17
Delabrouille, J. 1998a, A&A Suppl. Ser., 127
Delabrouille, J. 1998b, PhD Thesis, to be published
Delabrouille, J., Górski, K., & Hivon, E. 1997, submitted to MNRAS, preprint astro-ph/9710349
Delabrouille, J., Gispert, R., & Puget, J.-L. 1998, in preparation
Desert, F.-X., Giard, M., & Benoit, A. 1997, Proceedings of the ESA Symposium "The Far Infrared and Submillimetre Universe", 15-17 April 1997, Grenoble, France, ESA SP-401
Draine, B. T., & Lazarian, A. 1997, preprint astro-ph/9710327
Efstathiou, G., & Bond, J. 1998, preprint
Efstathiou, G., & Bond, J. R. 1986, MNRAS, 218, 103–121
Efstathiou, G., & Bond, J. R. 1987, MNRAS, 227, 33P–38P
Fischer, M. L., Alsop, D. C., Cheng, E. S., Clapp, A. C., Cottingham, D. A., Gundersen, J. O., Koch, T. C., Kreysa, E., Meinhold, P. R., Lange, A. E., Lubin, P. M., Richards, P. L., & Smoot, G. F. 1992, Ap. J., 388, 242–252
Gaertner, S., Benoit, A., Lamarre, J.-M., Giard, M., Bret, J.-L., Chabaud, J.-P., Desert, F.-X., Faure, J.-P., Jegoudez, G., Lande, J., Leblanc, J., Lepeltier, J.-P., Narbonne, J., Piat, M., Pons, R., Serra, G., & Simiand, G. 1997, A&A Suppl. Ser., 126
Gaier, T., Schuster, J., Gundersen, J., Koch, T., Seiffert, M., Meinhold, P., & Lubin, P. 1992, Ap. J. Lett., 398, L1–L4
Ganga, K., Cheng, E., Meyer, S., & Page, L. 1993, Ap. J. Lett., 410, L57–L60
Gautier, T. N. I., Boulanger, F., Perault, M., & Puget, J. L. 1992, Astron. J., 103, 1313–1324
Górski, K. M., Hinshaw, G., Banday, A. J., Bennett, C. L., Wright, E. L., Kogut, A., Smoot, G. F., & Lubin, P. 1994, Ap. J. Lett., 430, L89–L92
Guiderdoni, B., Bouchet, F. R., Puget, J. L., Lagache, G., & Hivon, E. 1997, Nature, 390, 257
Guiderdoni, B., Hivon, E., Bouchet, F. R., & Maffei, B. 1998, preprint astro-ph/9710340, MNRAS, in press
Haehnelt, M. G., & Tegmark, M. 1996, MNRAS, 279, 545
Hancock, S., Gutierrez, C. M., Davies, R. D., Lasenby, A. N., Rocha, G., Rebolo, R., Watson, R. A., & Tegmark, M. 1997, MNRAS, 289, 505–514
Haslam, C. G. T., Stoffel, H., Salter, C. J., & Wilson, W. E. 1982, Astronomy and Astrophysics Supplement Series, 47, 1
Hauser, M. G., Kelsall, T., Arendt, R. G., Weiland, J. L., Freudenreich, H. T., Odegard, N., Dwek, E., Moseley, S. H., Silverberg, R. F., & Pei, Y. C. 1997, Bull. American Astron. Soc., 191, #91.01
Heavens, A. F. 1998, submitted to MNRAS
Henry, J. P., & Arnaud, K. A. 1991, Ap. J., 372, 410–418
Hildebrand, R. H., & Dragovan, M. 1995, Ap. J., 450, 663
Hobson, M. P., Jones, A. W., Lasenby, A. N., & Bouchet, F. R. 1998, submitted to MNRAS
Hu, W., & White, M. 1997, New Astronomy, 2, 323–344
Jaffe, A. H., & Kamionkowski, M. 1998, preprint astro-ph/9801022
Janssen, M. A., & Gulkis, S. 1992. Mapping the sky with the COBE differential microwave radiometers. In The Infrared and Submillimetre Sky after COBE, Proceedings of the NATO Advanced Study Institute, Les Houches, France, March 20-30, 1991, pages 391–408
Kamionkowski, M., & Kosowski, A. 1998, preprint astro-ph/9705219
Kawara, K., et al. 1997. ISOPHOT Far-infrared survey in the Lockman Hole and high-redshift quasars seen by ISO. In The Far-Infrared and Submillimetre Universe, ESA SP-401, ESA Publications, Noordwijk, p. 285
Kibble, T. 1976, J. Phys. A: Math. Gen., 9, 1387
Knox, L. 1995, Phys. Rev. D, 52, 4307–4318
Kogut, A., Banday, A. J., Górski, K. M., Hinshaw, G., Bennett, C. L., & Reach, W. T. 1995, BAAS, 187, 2002
Kogut, A., Banday, A. J., Bennett, C. L., Górski, K. M., Hinshaw, G., Smoot, G. F., & Wright, E. I. 1996, Ap. J. Lett., 464, L5
Lagache, G., et al. 1997, in preparation
Leitch, E. M., Readhead, A. C. S., Pearson, T. J., & Myers, S. T. 1997, Ap. J. Lett., 486, L23
Lineweaver, C. H., & Barbosa, D. 1998, A&A, 329, 799–808
Lonsdale, C. J., Hacking, P. B., Conrow, T. P., & Rowan-Robinson, M. 1990, Ap. J., 358, 60–80
Meinhold, P., Clapp, A., Devlin, M., Fischer, M., Gundersen, J., Holmes, W., Lange, A., Lubin, P., Richards, P., & Smoot, G. 1993, Ap. J. Lett., 409, L1–L4
Murakami, H., et al. 1996, Publications of the Astronomical Society of Japan, 48L
Netterfield, C. B., Devlin, M. J., Jarosik, N., Page, L., & Wollack, E. J. 1997, Ap. J., 474, 47
Ng, K.-W., & Liu, G.-C. 1997, preprint astro-ph/9612006
Oliver, S. J., Goldschmidt, P., Franceschini, A., Serjeant, S. B. G., Efstathiou, A., Verma, A., Gruppioni, C., Eaton, N., Mann, R. G., Mobasher, B., Pearson, C. P., Rowan-Robinson, M., Sumner, T. J., Danese, L., Elbaz, D., Egami, E., Kontizas, M., Lawrence, A., McMahon, R., Norgaard-Nielsen, H. U., Perez-Fournon, I., & Gonzalez-Serrano, J. I. 1997, MNRAS, 289, 471–481
Orlowska, A. H., Bradshaw, T. W., & Hieatt, J. 1995, Proceedings of the 8th International Cryocooler Conference, Vail, Colorado, USA, Ed. R. G. Ross Jr, Plenum Press, New York
Ostriker, J. P., & Vishniac, E. T. 1986, Ap. J. Lett., 306, L51–L54
Pearson, T., & Rowan-Robinson, M. 1996, MNRAS, 283
Peebles, P. J. E. 1987, Ap. J. Lett., 315, L73–L76
Pen, U. L., Seljak, U., & Turok, N. 1997, Phys. Rev. Lett., 79, 1611
Platt, S. R., Kovac, J., Dragovan, M., Peterson, J. B., & Ruhl, J. E. 1997, Ap. J. Lett., 475, L1
Pointecouteau, E., Giard, M., & Barret, D. 1997, preprint astro-ph/9712271
Press, W., & Schechter, P. 1974, Ap. J., 187, 425
Prunet, S., Sethi, S., & Bouchet, F. R. 1998, in preparation
Prunet, S., Sethi, S., Miville-Deschênes, M.-A., & Bouchet, F. R. 1998, submitted to A&A
Puget, J. L., Abergel, A., Bernard, J. P., Boulanger, F., Burton, W. B., Desert, F. X., & Hartmann, D. 1996, A&A, 308, L5
Puget, J. L., Lagache, G., Clements, D., Reach, W. T., Aussel, H., Bouchet, F. R., Cesarski, C., Désert, F.-X., Elbaz, D., Franceschini, A., & Guiderdoni, B. 1998, in preparation
Rees, M. J., & Sciama, D. W. 1968, Nature, 217, 511
Rephaeli, Y. 1995, Ann. Rev. Astr. Astrophys., 33, 541–580
Revenu, B., et al. 1998, in preparation
Ristorcelli, I., Lamarre, J.-M., Giard, M., Leriche, B., Pajot, F., G., R., Safa, H., & Serra, G. 1997, Experimental Astronomy, 7
Sachs, R. K., & Wolfe, A. M. 1967, Ap. J., 147, 73
Savage, B. D., Drake, J. F., Budich, W., & Bohlin, R. C. 1977, Ap. J., 216, 291–307
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1997, preprint astro-ph/9710327
Schuster, J., Gaier, T., Gundersen, J., Meinhold, P., Koch, T., Seiffert, M., Wuensche, C. A., & Lubin, P. 1993, Ap. J. Lett., 412, L47–L50
Scull, S. R., Jones, B. G., Bradshaw, T. W., Orlowska, A. H., & Jewell, C. I. 1996, in Submillimetre and Far Infrared Space Instrumentation, 4-26th September 1996, ESTEC, Noordwijk, The Netherlands, ESA SP-388
Seljak, U. 1996, Ap. J., 463, 1
Seljak, U. 1997, Ap. J., 482, 6
Smail, I., Ivison, R. J., & Blain, A. W. 1997, preprint astro-ph/9708135
Spergel, D. N., & Turok, N. 1998, in preparation
Spergel, D. N., & Zaldarriaga, M. 1997, Phys. Rev. Lett., 79, 2180
Starobinsky, A. A. 1985, Sov. Astr. Lett., 11, 133
Stompor, R., & Efstathiou, G. 1998, in preparation
Tanaka, S. T., Clapp, A. C., Devlin, M. J., Figueiredo, N., Gundersen, J. O., Hanany, S., Hristov, V. V., Lange, A. E., Lim, M. A., Lubin, P. M., Meinhold, P. R., Richards, P. L., Smoot, G. F., & Staren, J. 1996, Ap. J. Lett., 468, L81
Tegmark, M. 1996, Ap. J. Lett., 464, L35
Tegmark, M. 1997a, Phys. Rev. D, 56, 4514–4529
Tegmark, M. 1997b, Ap. J. Lett., 480, L87
Tegmark, M., & Efstathiou, G. 1996, MNRAS, 281, 1297
Tegmark, M., & Hamilton, A. J. S. 1997, preprint astro-ph/9702019
Tegmark, M., Taylor, A. N., & Heavens, A. F. 1997, Ap. J., 480, 22
Toffolatti, L., Argüeso Gómez, F., De Zotti, G., Mazzei, P., Franceschini, A., Danese, L., & Burigana, C. 1997, preprint astro-ph/9711085
Tucker, G. S., Gush, H. P., Halpern, M., Shinkoda, I., & Towlson, W. 1997, Ap. J. Lett., 475, L73
Turok, N. 1996, Phys. Rev. Lett., 77, 4138
Veeraraghavan, S., & Davies, R. D. 1997. Low Frequency Galactic Backgrounds. In Proceedings of the PPEUC conference, University of Cambridge, 7-11 April 1997, available at http://www.mrao.cam.ac.uk/ppeuc/proceedings/cmb prog.html
Viana, P. T. P., & Liddle, A. R. 1996, MNRAS, 281, 323
Wade, L. 1991, Adv. in Cryogenic Eng., Plenum Press, New York, 37
Wilner, D. J., & Wright, M. C. H. 1997, Ap. J. Lett., 488, L67
Wollack, E. J., Jarosik, N. C., Netterfield, C. B., Page, L. A., & Wilkinson, D. 1993, Ap. J. Lett., 419, L49
Wright, E. L. 1996, preprint astro-ph/9612006
Wright, E. L. 1997, preprint astro-ph/9711261
Zaldarriaga, M., & Seljak, U. 1997, Phys. Rev. D, 55, 1830
Zaldarriaga, M., Spergel, D. N., & Seljak, U. 1997, Ap. J., 488, 1
Zeldovich, Y. B., & Sunyaev, R. A. 1969, Astr. Sp. Sci., 4, 301

APPENDICES

Appendix A

From timelines to maps

We start with a model of the measurement and then describe a general solution to the map-making problem, before briefly discussing its limitations.

A.1 Model of the measurement

Let us call F(ν, e) the flux at a frequency ν in the direction e on the sphere. We describe the instrumental response in the i-th channel by a spectral transmission v_i(ν) and an optical transmission¹ w_i(e), such that the sky flux for this channel as seen by the detectors, S_i, is given by the convolution of F by v_i and w_i,

S_i(e) = v_i ∗ w_i ∗ F.    (A.1)

This is converted into a continuous temporal signal by the detector's response, which is integrated over contiguous time intervals of width Θ. The j-th bin of data, t_j, may thus be written as

t_j = ∫ dτ H(τ − τ_j) { G[S_i(E(τ))] + N_i(τ) }    (A.2)

where E(τ) is the trajectory of the beam versus time τ, N_i(τ) is the instantaneous detector noise, G describes the response of the instrument, and H stands for a top-hat binning window (H(τ) = 1 for −Θ/2 < τ < Θ/2, and zero otherwise). Since the time-ordered data (hereafter TOD) is finite and discrete, one can only hope to reconstruct the sky with a finite resolution. Let us denote by f a sky map of the flux F in some appropriate pixelisation scheme with N_p pixels. Rather than relating the TOD to the continuous function F, it is then more convenient to replace the time integral in (A.2) by a sum over the pixels of f. If we denote by s_i the pixelised sky seen by the instrument in the i-th channel (s_i = v_i ∗ w_i ∗ f), the time-ordered data, which has N_d data points, may then be written as

t_i = A_i s_i + n_i    (A.3)

where s_i has been arranged as a vector of length N_p, t_i and n_i being vectors of length N_d containing respectively the time-ordered data and the detector noise per temporal bin, the j-th element of n_i being ∫ dτ H(τ − τ_j) N_i(τ). The N_d × N_p matrix A_i is then a projection operator encoding the scan strategy and the response of the detector². In the case of multiple detectors scanning the same sky s_i, one could extend the t and n vectors to include the additional values, and the extended matrix A would simultaneously describe all trajectories of the various detectors, and thus also describe indirectly the focal plane arrangement.

¹ Of course, the optical transmission is expressed in a referential linked to the instrument, and one needs to convert it to a sky position at a given point of the trajectory to deduce the transmitted sky flux.
² In the case of unknown slow drifts, one could simply add further unknowns in s_i.
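As an illustration of the measurement model of equation (A.3), the sketch below builds a toy pointing matrix A for a single detector repeatedly scanning a small pixelised sky and generates a simulated time stream t = A s + n. The pixel count, sample count and noise level are arbitrary illustrative choices, not HFI values.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix = 32            # toy pixelised sky (Np pixels)
n_samp = 256          # time-ordered data length (Nd samples)
sigma = 0.1           # white-noise rms per temporal bin

s = rng.standard_normal(n_pix)           # the pixelised sky s_i

# Pointing matrix A (Nd x Np): each row has a single 1 marking the
# pixel observed at that time step; here a repeated circular scan.
pointing = np.arange(n_samp) % n_pix
A = np.zeros((n_samp, n_pix))
A[np.arange(n_samp), pointing] = 1.0

n = sigma * rng.standard_normal(n_samp)  # detector noise per bin
t = A @ s + n                            # eq. (A.3): t = A s + n
```

For several detectors, the t and n vectors would be stacked and A extended row-wise, as described in the text.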

A.2 Map making

In this subsection, we temporarily drop the channel index i and consider the task of estimating a sky map s from the TOD. A linear estimate, ŝ, of the pixelised sky may be written as ŝ = M t, and a possible choice of the map-making matrix M may be obtained by minimising

χ² = (t − A ŝ)ᵀ N⁻¹ (t − A ŝ)    (A.4)

given an estimate of the noise covariance matrix N = ⟨n nᵀ⟩ (assuming a zero-mean noise, ⟨n⟩ = 0). This leads to the map-making method used by the DMR team (Janssen & Gulkis, 1992),

M = (Aᵀ N⁻¹ A)⁻¹ Aᵀ N⁻¹    (A.5)

which provides an unbiased, albeit noisy, estimate of the underlying sky since

ŝ = M (A s + n) = s + b,  with b = (Aᵀ N⁻¹ A)⁻¹ Aᵀ N⁻¹ n.    (A.6)

Note that the noise in the recovered map, b, is independent of the unknown s, and its covariance matrix, B = ⟨b bᵀ⟩, is simply

B = (Aᵀ N⁻¹ A)⁻¹.    (A.7)
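The estimator of equations (A.5)-(A.7) can be checked numerically on a small example. The sketch below assumes white noise and a toy scan in which every pixel is hit the same number of times (all sizes are illustrative); in that case the map covariance B reduces to σ²/hits per pixel, and M A = I expresses the unbiasedness of (A.6).

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, hits, sigma = 16, 10, 0.05
n_samp = n_pix * hits

s = rng.standard_normal(n_pix)            # underlying sky s
pointing = np.arange(n_samp) % n_pix      # each pixel observed `hits` times
A = np.zeros((n_samp, n_pix))             # pointing matrix A (Nd x Np)
A[np.arange(n_samp), pointing] = 1.0

N = sigma**2 * np.eye(n_samp)             # noise covariance (white here)
t = A @ s + sigma * rng.standard_normal(n_samp)

# Eq. (A.5): M = (A^T N^-1 A)^-1 A^T N^-1, giving the estimate of eq. (A.6)
Ninv = np.linalg.inv(N)
M = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv)
s_hat = M @ t

# Eq. (A.7): covariance of the map noise b = s_hat - s
B = np.linalg.inv(A.T @ Ninv @ A)
```

This brute-force construction is only feasible at toy sizes; the text explains why the full N_d × N_d inversion is out of reach for Planck.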

Other methods have been proposed but, as reviewed by Tegmark (1997b), all linear methods such that M is invertible are lossless, since the initial data t may be re-obtained from the information "compressed" in the form of an estimated map (the meaning of "lossless" can be formally defined as keeping the Fisher matrix constant, see e.g. Tegmark, Taylor, & Heavens (1997), but one then needs to assume a Gaussian-distributed noise). Note though that, while this formally solves the problem, implementing this solution directly would be infeasible, since it would require inverting the N_d × N_d noise matrix N (with N_d > 10⁹ for an individual detector timeline!). The problem arises because of the long-term correlations (i.e. slow drifts) of the noise induced by the low-frequency deviation from a pure white noise spectrum of the detectors. These long-term correlations translate into off-diagonal terms in N (and stripes in the raw map created by simply averaging the observed values in the corresponding sky pixel; see § 2.2.2 and figure 2.2.a). Fortunately, once we know the noise spectrum (either from ground calibration or from a prior analysis of some length of the TOD), we can apply a "pre-whitening" filter P to the TOD, t̃ = P t, such that ñ = P n has a white noise spectrum, ⟨ñ ñᵀ⟩ = P N Pᵀ = I (I standing for the identity matrix). Suppose that the noise spectrum may be described as a white noise plateau with a 1/f upturn toward low frequencies below some threshold f_knee,

⟨n(f) n(f)*⟩ = Δt σ² (1 + f_knee/f)    (A.8)

where Δt is the sampling interval and σ² is the variance of the noise in this time interval, ignoring the 1/f term. The Fourier transform of the filter, P(f), should then simply be

P(f) = √( f / (f + f_knee) ).    (A.9)

In the time domain, the filter has a spike at t = 0 (since P(f) → 1 when f ≫ f_knee), the other values being negative to ensure a zero-sum filter (since P(f = 0) = 0). In effect, one removes an optimal baseline from each scan circle. The estimated map is then obtained from

ŝ = (Ãᵀ Ã)⁻¹ Ãᵀ t̃ = (Aᵀ Pᵀ P A)⁻¹ Aᵀ Pᵀ P t,  with Ã = P A.    (A.10)
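The pre-whitening filter of equations (A.8)-(A.9) can be sketched numerically: by construction |P(f)|² times the 1/f-type spectrum is flat, and P(0) = 0 makes the filter zero-sum, so it removes the mean of each stream. The knee frequency and stream length below are arbitrary illustrative values.

```python
import numpy as np

n = 4096                    # length of the (toy) time stream
f_knee = 0.1                # assumed knee frequency, in units of 1/sample
f = np.fft.rfftfreq(n)      # Fourier frequencies of the stream

# Eq. (A.9): P(f) = sqrt(f / (f + f_knee)); note P(0) = 0 (zero-sum filter)
P = np.sqrt(f / (f + f_knee))

# Eq. (A.8) spectrum shape (with Delta_t sigma^2 = 1): whitening flattens it
psd = 1.0 + f_knee / np.where(f > 0, f, np.inf)
whitened_psd = P**2 * psd   # equals 1 for every f > 0

def whiten(tod):
    """Apply the pre-whitening filter in the Fourier domain: t_tilde = P t."""
    return np.fft.irfft(P * np.fft.rfft(tod), n=len(tod))
```

Applying `whiten` to a constant stream returns (numerically) zero, which is the baseline-removal property mentioned in the text.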

Further implementation details and a slightly more general presentation can be found in Tegmark (1997a). Still, this approach requires the construction and inversion of Ãᵀ Ã, whose dimensions are so large that it cannot be envisaged for Planck. However, an iterative solution should converge well for a scan strategy with many intersecting points (see fig. 2.3). Since one would search for the ŝ satisfying Ãᵀ Ã ŝ = Ãᵀ t̃, this only requires evaluations of products of the form Ãᵀ Ã z (with z = t once, and z = ŝ⁽ⁿ⁾ at each iteration n), which require of order N_d operations only³. There are, in addition to the problem of dimension, other difficulties in implementing such a map-making scheme. Indeed, the theoretically optimal prescription above relies on the assumption that all unknown sources of error are well approximated by a noise which is Gaussian (at least for the method to be optimal). Furthermore, the method requires a priori knowledge of which set of parameters correctly describes systematic effects that do not behave as a Gaussian noise. If the full parameter space is not well chosen, the system will not converge to the correct solution. Finally, the method is optimal only if the autocorrelation function (or, equivalently, the spectrum) of the noise is known with reasonable accuracy. This autocorrelation can in general be estimated from the data assuming noise stationarity. For Planck, the effect of the non-stationary perturbations induced by the readjustment of the spin axis in discrete steps, the active control of temperature drifts, and the readjustment of the zero level of the measurements may be a source of complications. Section 2.2 presents a somewhat different approach, adapted to the Planck observing strategy.
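The iterative scheme described above can be illustrated with a conjugate-gradient solver that only ever evaluates products of the form Ãᵀ Ã z, never forming the matrix. For clarity this sketch omits the whitening filter (P = I) and the noise, and uses a toy scan in which every pixel is observed the same number of times; all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, hits = 24, 10
n_samp = n_pix * hits
# Toy scan: every pixel observed `hits` times, in shuffled order.
pointing = rng.permutation(np.repeat(np.arange(n_pix), hits))

def apply_AtA(z):
    """Evaluate A^T A z in O(Nd) without forming A: scan the map into a
    time stream (A z), then re-sum the stream into pixels (A^T)."""
    tod = z[pointing]                      # A z
    out = np.zeros(n_pix)
    np.add.at(out, pointing, tod)          # A^T (accumulate per pixel)
    return out

s_true = rng.standard_normal(n_pix)
t = s_true[pointing]                       # noiseless TOD, for clarity
b = np.zeros(n_pix)
np.add.at(b, pointing, t)                  # right-hand side A^T t

# Conjugate gradients on A^T A s = A^T t
s_hat = np.zeros(n_pix)
r = b - apply_AtA(s_hat)
p = r.copy()
for _ in range(n_pix):
    Ap = apply_AtA(p)
    alpha = (r @ r) / (p @ Ap)
    s_hat = s_hat + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```

Each iteration costs O(N_d), as the footnote indicates; the whitening filter would be applied between the two matrix products as a convolution.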

³ Indeed one needs to: i) multiply by Ã, i.e. scan the map z, in O(N_d) operations; ii) convolve by the filter Pᵀ P, which in Fourier space requires O(N_d ln N_F) operations, N_F being the length of the filter; iii) multiply by Ãᵀ to re-sum the stream into pixels, in O(N_d) operations.

Appendix B

Polarisation measurements and striping Chapter 1 has summarised the strong scienti c case for measuring CMB polarisation with the HFI. The measurement of the three Stokes parameters I , Q and U of linear polarisation requires at least three detectors, and at least two of them must be polarimeters oriented in dierent directions1. However, even if the noise is uncorrelated between the polarimeters, there will generally appear a correlation in the resulting noise between I , Q and U . Present analyses of the instrumental noise in the polarised power spectra assume explicitly or implicitly that the noise is uncorrelated between the Stokes parameters (Zaldarriaga & Seljak, 1997 Ng & Liu, 1997). It is therefore important to con gure the polarimeters in a way that ensures decorrelation of the noise between I , Q and U as much as possible. If the noise levels in the polarised bolometers are identical and decorrelated, then there indeed exist con gurations of the relative directions of the polarimeters that ensure this condition. Such decorrelated con gurations involve h 3 polarimeters with directions p regularly distributed over an angle :

α_p = α_1 + (p − 1) π/h,   p = 1 … h, with h ≥ 3.   (B.1)

If condition (B.1) is verified, the covariance matrix V₀ of the Stokes parameters assumes a very simple form and is independent of the global orientation of the polarimeters (σ₀ is the level of noise common to the h polarimeters):

V₀ = (4σ₀²/h) diag(1, 2, 2).   (B.2)

The smallest decorrelated configuration involves three polarimeters with directions separated by Δα = π/3. However, three is also the minimum number of polarimeters needed to measure the three Stokes parameters. It is dangerous to rely on such a minimal configuration because if one bolometer fails, the information on the polarisation will be incomplete. It is safer, when possible, to choose a configuration with at least four bolometers, either four polarimeters in a decorrelated configuration or three decorrelated polarimeters plus one unpolarised detector. Then, if one of the polarised detectors were to fail, it would still be possible to separate the three Stokes parameters (although their errors would no longer be uncorrelated). The noise of the three Stokes parameters remains uncorrelated:

• when one combines together the measurements of several decorrelated configurations and of several unpolarised detectors, irrespective of the relative level of noise between them, as long as one can neglect the cross-correlation of the noise between the various decorrelated configurations and/or unpolarised detectors;

• when one combines the measurements of several scans of the same pixel, by the same or by other decorrelated configurations, even if the global orientation of the device changes between the various scans. Here again it is assumed that one can neglect cross-correlations between measurements made at different times.

In general, the noise in the various polarimeters of a decorrelated configuration will not be strictly identical and will be slightly cross-correlated. As long as these imbalances and cross-correlations remain small, the resulting correlation of the noise between the parameters I, Q and U is also small and easily calculated to first order. Again, this remains true when one combines many redundant measurements on the same point of the sky. The correlations do not accumulate².

¹ If their angular separation lies around Δα = π/2, Q and U cannot be well separated from each other.
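As a numerical illustration of eqs. (B.1) and (B.2), the following sketch (with a hypothetical unit noise level and an arbitrary global orientation) builds the least-squares Stokes covariance for h regularly spread polarimeters and checks that it is diagonal:

```python
import numpy as np

# Numerical check of eqs. (B.1)-(B.2) (illustrative noise level sigma0 = 1):
# h polarimeters with directions spread regularly over pi give a Stokes
# covariance (4 sigma0^2 / h) diag(1, 2, 2), whatever the global orientation.
def stokes_covariance(h, alpha1=0.0, sigma0=1.0):
    alphas = alpha1 + np.arange(h) * np.pi / h     # eq. (B.1)
    # each polarimeter measures (I + Q cos 2a + U sin 2a) / 2 plus noise
    A = 0.5 * np.stack([np.ones(h), np.cos(2 * alphas), np.sin(2 * alphas)],
                       axis=1)
    return sigma0**2 * np.linalg.inv(A.T @ A)      # least-squares covariance

for h in (3, 4, 5):
    V = stokes_covariance(h, alpha1=0.7)           # arbitrary global rotation
    assert np.allclose(V, 4.0 / h * np.diag([1.0, 2.0, 2.0]))   # eq. (B.2)
```

The same check with a different global rotation angle gives an identical covariance, which is the decorrelation property used in the text.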

Noise budget If all bolometers are operated in the same conditions, the noise level is smaller by a factor √2 for unpolarised detectors than for polarised ones. If there are N bolometers in some channel, h of which are polarised, and all bolometers have the same level of unpolarised noise, σ_UP, then

σ_I = σ_UP / √(N − h/2)  and  σ_Q = σ_U = 2 σ_UP / √h.   (B.3)

Therefore one never loses more than a factor √2 on the signal-to-noise ratio for the temperature by measuring polarisations.
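The noise budget of eq. (B.3) can be checked in a few lines (illustrative numbers):

```python
import numpy as np

# Sketch of the noise budget of eq. (B.3), with illustrative numbers:
# N bolometers, h of them polarimeters, all with unpolarised noise sigma_UP.
def stokes_noise(N, h, sigma_up=1.0):
    sigma_I = sigma_up / np.sqrt(N - h / 2.0)
    sigma_QU = 2.0 * sigma_up / np.sqrt(h)
    return sigma_I, sigma_QU

# Worst case for the temperature: all N bolometers polarised (h = N) costs
# exactly a factor sqrt(2) relative to N unpolarised bolometers.
N = 4
sigma_I_all_pol, _ = stokes_noise(N, N)
assert np.isclose(sigma_I_all_pol / (1.0 / np.sqrt(N)), np.sqrt(2.0))
```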

Destriping polarised data The problem of removing low-frequency noise also arises for polarised data. It is a priori more difficult because the data from each polarimeter cannot be handled independently. Our study follows the same track as for unpolarised data. For any noise parameters foreseen for the HFI, residual noise on a ring can be described to first order by a different offset for each ring and each detector. To get rid of these offsets we use the redundancy offered by intersecting scanning circles: at each intersection between circles, the Stokes parameters in an absolute local frame (for instance the ecliptic latitude-longitude local frame) must be the same when scanning along the two circles. It turns out that the offsets (there are n_polarimeters × n_rings such offsets) can be reduced to 3 n_rings offsets on the Stokes parameters in a reference frame fixed relative to the focal plane. The equations to be solved to determine the offsets strongly resemble those for unpolarised offsets, but in general couple together the offsets of the three Stokes parameters. As a result, the matrix to be inverted in order to solve the linear system giving the offsets has a dimension 3 × 3 times larger than for unpolarised data. The situation is much simpler if the polarimeters are in a decorrelated configuration and one assumes that the remaining white noise on each circle is equal and uncorrelated between the polarimeters. The equation for the offsets on the intensity parameter I then completely decouples from the two others and is the same as for unpolarised data. In the two remaining equations, the offsets on the focal-frame Q and U are only coupled through the left member, involving the data, but are completely decoupled in the right member, which involves the offsets. As a result the matrices to be inverted have the same dimension as in the unpolarised case and are in fact very similar. As is the case for unpolarised data, the equation for the temperature offsets is singular and one additional constraint must be added (typically the sum of the offsets can be required to be 0).
The equations for the offsets on Q and U (in the focal reference frame) are not singular and these offsets are completely determined from the data. This is not surprising because, once the temperature is fixed, changing the other Stokes parameters changes the polarisation. We have simulated low-frequency noises on the polarimeters in the same way as for unpolarised data, and processed the data as explained above to get rid of the striping. This destriping is very efficient: we obtain destriped maps of I, Q and U with the expected theoretical values for the rms residuals, that is, 2σ₀/√3 for I and 2σ₀√2/√3 for Q and U.

² The impact of these residual cross-correlations on polarised power spectra and on their mutual correlation remains to be evaluated, but this should be tractable using perturbative methods.
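A scalar toy version of this offset determination (hypothetical rings and crossings; the decoupled temperature equation, which the text shows is the same as in the unpolarised case) can be sketched as follows:

```python
import numpy as np

# Toy destriping sketch for the decoupled temperature equation (illustrative,
# hypothetical geometry): each ring r carries an unknown offset o_r, and at
# each crossing between rings r1 and r2 the sky must agree, so the measured
# mismatch is d = o_r1 - o_r2 + noise. The system is singular (a global
# constant is unobservable), hence the additional constraint sum(o) = 0.
rng = np.random.default_rng(1)
n_rings, n_cross = 20, 400
true_offsets = rng.standard_normal(n_rings)
true_offsets -= true_offsets.mean()                # fix the global constant

pairs = rng.integers(0, n_rings, size=(n_cross, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]          # keep genuine crossings
d = (true_offsets[pairs[:, 0]] - true_offsets[pairs[:, 1]]
     + 0.01 * rng.standard_normal(len(pairs)))

A = np.zeros((len(pairs) + 1, n_rings))            # one +1/-1 row per crossing
A[np.arange(len(pairs)), pairs[:, 0]] = 1.0
A[np.arange(len(pairs)), pairs[:, 1]] = -1.0
A[-1, :] = 1.0                                     # sum-zero constraint row
rhs = np.append(d, 0.0)

offsets, *_ = np.linalg.lstsq(A, rhs, rcond=None)
assert np.max(np.abs(offsets - true_offsets)) < 0.05   # offsets recovered
```

In the polarised case the same structure appears, with Q and U offsets handled through very similar (non-singular) matrices, as explained above.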

Appendix C

Iterative sidelobe correction

Let us model the antenna pattern in the sidelobes by a vector L_i of the lobe average values in a set of lobe pixels i, and the sky by a vector S_p of sky brightnesses in a set of sky pixels p. For convenience, we separate the antenna pattern of the instrument into two parts, the sidelobes and the main beam. The main beam is not included in the vector L_i. The signal s_t of the instrument at discrete times (indexed by t) can be written, to first order, as:

s_t = S_p T_pit L_i + u_t,   (C.1)

where T_pit is a known bilinear operator depending on time via the scan strategy, and u_t is the useful sky signal coming from the main beam. Summation over repeated indices is assumed. If S_p is known to first order, we can rewrite equation C.1 as:

Δ_t = s_t − u_t = S_p T_pit L_i = A_it L_i.   (C.2)

This equation can be solved for L_i, if Δ_t is known, and if the matrix M = AᵀA is regular. The solution is then

L = M⁻¹ Aᵀ Δ.   (C.3)

Of course, one does not have direct access to Δ_t. Only the signal s_t is obtained directly by the measurement. The matrix M is known with excellent accuracy, since to first order T_pit is known and S_p is also reasonably well known (to better than one per cent). The idea behind an iterative algorithm is the following: from a set of data s_t, one can obtain a first-order estimate S_p⁽¹⁾ of the sky S_p. This sky can be used to get a first-order estimate M⁽¹⁾ of the matrix M (which will be very close to the actual value of M), as well as a first-order estimate u_t⁽¹⁾ of the useful signal u_t and hence, by subtraction of u_t⁽¹⁾ from s_t, a first-order estimate Δ_t⁽¹⁾ of Δ_t (which may be off by some fraction of the total sidelobe signal). A first-order estimate L_i⁽¹⁾ is then obtained by:

L⁽¹⁾ = (M⁽¹⁾)⁻¹ (A⁽¹⁾)ᵀ Δ⁽¹⁾.   (C.4)

Then, L⁽¹⁾ can be used to estimate the next-order sidelobe contribution from equation C.2. This contribution can be subtracted from the data, and the process iterated. There are several requirements for this method to be successful: 1. Equation C.2 must allow accurate estimation of the lobe if Δ_t is known, i.e. the matrix M must be regular. As M depends on the sky and the scan strategy, this requirement must be tested for each channel for a given scan strategy. 2. Even if the matrix M is regular, if the system is nearly degenerate, the solution may be off due to the effect of additional noise.

3. The sky brightness in the brightest sky pixels must be known, or one should be in a position to infer it from the data, with good accuracy. The necessary accuracy remains to be quantified. It is possible that maps of the sky obtained by other experiments (especially from the FIRAS data) might be sufficient. If not, one can use Planck measurements of the brightest parts of the sky themselves. 4. One should be able to perform the inversion of equation C.3 as many times as necessary for the iterative algorithm to converge. This is just a computational problem, albeit a serious one: one cannot expect to solve the system at full Planck resolution directly. Therefore, it is necessary that the sidelobe signals be of low enough frequency that a lobe model with large pixels be sufficient to correct them accurately. The speed of convergence of the algorithm and the accuracy of the correction both depend on the scan strategy and the properties of the antenna pattern. These dependences are currently being tested.
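The iteration can be sketched on a noiseless toy problem (random operators standing in for S_p T_pit, illustrative sizes; a real antenna model would replace A):

```python
import numpy as np

# Noiseless toy of the Appendix C iteration: alternately re-estimate the sky
# by pixel averaging of the corrected stream, and the lobe by least squares
# as in eq. (C.3). All dimensions and operators are illustrative.
rng = np.random.default_rng(3)
n_pix, n_t, n_lobe = 40, 2000, 8
pix = rng.integers(0, n_pix, size=n_t)             # scan strategy
hits = np.bincount(pix, minlength=n_pix)
A = rng.standard_normal((n_t, n_lobe))             # plays the role of S_p T_pit
L_true = 0.1 * rng.standard_normal(n_lobe)         # sidelobe average values
sky = rng.standard_normal(n_pix)
s = A @ L_true + sky[pix]                          # eq. (C.1), noiseless

L_est = np.zeros(n_lobe)
for _ in range(20):
    # map-making on the corrected stream gives the current sky estimate
    sky_est = np.bincount(pix, weights=s - A @ L_est, minlength=n_pix) / hits
    delta = s - sky_est[pix]                       # estimated sidelobe signal
    L_est = np.linalg.solve(A.T @ A, A.T @ delta)  # eq. (C.3)

assert np.allclose(L_est, L_true, atol=1e-6)       # lobe recovered
```

In this toy the iteration contracts quickly because the lobe signal is far from constant within each sky pixel; requirement 1 above (regularity of M) is what guarantees this for a real scan strategy.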

Appendix D

Separation of components

In the following, we assume calibrated maps have been generated, with known noise covariance matrix, and focus on the next step, i.e. the joint analysis of a number of pixelised sky maps at different frequencies to extract information on the different underlying physical components.

D.1 Physical model

We make the hypothesis that the flux F(ν, e) at a frequency ν in the direction on the sphere e is a linear superposition of the contributions from N_c components, each of which can be factorised as a spatial template X_p(e) at a reference frequency (e.g. a map of this emission at 100 GHz), times a spectral coefficient g_p(ν),

F(ν, e) = Σ_{p=1…N_c} g_p(ν) X_p(e).   (D.1)

The factorisation assumption above does not restrict the analysis to components with a spatially constant spectral behaviour. Indeed, these variations are expected to be small, and can thus be linearised too. For instance, in the case of a varying spectral index (e.g. for describing the synchrotron emission) whose contribution can be modelled as ν^β(e) X_p(e), we could decompose it as ν^β̄ [1 + (β(e) − β̄) ln ν] X_p(e). We would thus have two spatial templates to recover, x_p(e) and (β(e) − β̄) x_p(e), with different spectral behaviours, ∝ ν^β̄ and ∝ ν^β̄ ln ν respectively. But given the low expected level of the high-latitude synchrotron emission, this is unlikely to be necessary. A similar trick should be applicable to the background due to unresolved infrared point sources if necessary (see § E.3) or to describe complex dust properties in regions close to the Galactic plane.
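The linearisation invoked above is easy to check numerically (the index values and frequency range below are illustrative):

```python
import numpy as np

# Numerical check of the linearisation above (illustrative numbers):
# nu^beta(e) ~ nu^beta0 [1 + (beta(e) - beta0) ln nu] for a small index spread.
nu = np.linspace(0.5, 2.0, 16)      # frequency in units of the reference
beta0 = -0.9                        # assumed mean (synchrotron-like) index
for beta in (-1.0, -0.8):           # +/- 0.1 spatial variation of the index
    exact = nu**beta
    linear = nu**beta0 * (1.0 + (beta - beta0) * np.log(nu))
    assert np.max(np.abs(exact - linear) / exact) < 0.005
```

The residual is second order in (β − β̄) ln ν, i.e. well below one per cent for index variations of order 0.1 over an octave in frequency.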

D.2 The component separation problem

The problem may be formulated as follows: find the best estimates of the templates x_p(e) (i.e. the pixelised maps of the templates X_p) of the N_c components of given spectral behaviour, given N noisy sky maps ŝ_i(e),

ŝ_i(e) = v_i ⋆ w_i ⋆ F + b = Σ_{p=1…N_c} [v_i ⋆ g_p(ν)] [w_i ⋆ x_p(e)] + b(e)   (D.2)

with the noise covariance matrix B given by eq. (A.7), if one uses the map-making method of the DMR team. Since the searched-for templates x_p(e) are convolved with the beam responses, w_i, it is in fact more convenient to formulate the problem in terms of spherical harmonic transforms (or Fourier transforms for small enough maps), in which case the convolutions reduce to products. For each harmonic mode ℓ ≡ {ℓ, m}, we can arrange the data concerning the N channels as complex vectors of observations, y = ŝ, and noise, b, and the unknown template values as a (complex) vector x(ℓ) of length N_c. For

ease of notation, we keep the same symbols for the transformed quantities, which are now functions of ℓ rather than e. We thus have to solve, for each mode ℓ independently, the matrix equation

y(ℓ) = A x(ℓ) + b(ℓ)   (D.3)

with A_ip = [v_i ⋆ g_p(ν)] w_i(ℓ). It is convenient to analyse different solutions to this problem in the Bayesian framework. In this case, the estimator x̂ of the unknown templates is the one that maximises the posterior probability P(x|y) of the theory given the data. By using Bayes' theorem, this amounts to maximising the product of the likelihood P(y|x) of the data given the theory by the prior P(x) (since the evidence P(y) is a mere normalisation constant),

P(x|y) ∝ P(y|x) P(x).   (D.4)

If the noise b is Gaussian distributed, the likelihood of the data is

P(y|x) ∝ exp[−b† B⁻¹ b] ∝ exp[−(y − Ax)† B⁻¹ (y − Ax)].   (D.5)

Different solutions will follow depending on the assumed prior P(x). We now discuss three techniques that have been applied to simulated Planck data.

D.3 "Straight" minimisation of residuals

If we assume no prior knowledge (i.e. P(x) = constant in eq. (D.4)), then maximizing the posterior probability P(x|y) reduces to the familiar method of minimization of

χ² = (y − Ax)† B⁻¹ (y − Ax)   (D.6)

where x† is the Hermitian conjugate of x (i.e. the complex conjugate of the transpose of the vector x). This method will only work, though, if one uses a method which implicitly "regularises" the solution, i.e. which selects a reasonable solution among the many degenerate ones. Indeed, we may have more templates to recover than there are frequency points; or, even if we restrict ourselves to a smaller number of templates, some templates, like those of the synchrotron at large ℓ, may produce such a small contribution to the overall signal that huge relative variations may easily be accounted for by small noise variations, etc. This problem did not explicitly show up at the map-making stage of § A since there the problem was in fact vastly over-constrained (for a reasonable scanning strategy), not under-constrained. By using a Singular Value Decomposition (hereafter SVD) to perform the χ² minimization, one implicitly selects the solution of minimal norm M = x̂†x̂. In fact, most inversion methods can be formulated as the task of minimizing the expression χ² + λM, where M is a measure of some desirable property of the solution, and λ balances the trade-off between fitting the noisy data best (as measured by χ²) and imposing a reasonable behaviour (as measured by M). We will consider below two types of (explicit) regularisation, i.e. two choices of M, which obtain naturally in the Bayesian framework.
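The SVD behaviour described above (minimal-norm solution, with near-degenerate directions discarded rather than amplified) can be illustrated on a toy mixing matrix:

```python
import numpy as np

# Toy illustration of the SVD-regularised chi^2 inversion (hypothetical mixing
# matrix): the pseudo-inverse keeps the well-constrained directions and sets
# the near-degenerate one to ~0 instead of amplifying noise into it.
rng = np.random.default_rng(4)
n_freq, n_comp = 6, 4
A = rng.standard_normal((n_freq, n_comp))
A[:, -1] *= 1e-12                  # last component contributes almost nothing
x_true = rng.standard_normal(n_comp)
y = A @ x_true

# np.linalg.pinv truncates singular values below rcond * s_max.
x_hat = np.linalg.pinv(A, rcond=1e-8) @ y
assert np.allclose(x_hat[:-1], x_true[:-1], atol=1e-6)  # constrained part kept
assert abs(x_hat[-1]) < 1e-6                            # degenerate part zeroed
```

A naive full inversion would instead let the last component absorb huge, noise-driven values, which is exactly the degeneracy problem described in the text.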

D.4 Wiener filtering

If we assume that the templates' modes are Gaussian distributed, then the prior in eq. (D.4) is

P(x) ∝ exp[−x† C⁻¹ x]   (D.7)

where C = ⟨x x†⟩ stands for the covariance matrix of the theory, i.e. of the templates. The posterior probability which we need to maximise can then be written as

P(x|y) ∝ exp[−χ²(x) − x† C⁻¹ x] ∝ exp[−ε† E⁻¹ ε],   (D.8)

where ε = x − x̂ stands for the vector of reconstruction errors and E = ⟨ε ε†⟩ for the associated covariance matrix. One then recovers the Wiener filtering solution originally derived by Bouchet, Gispert, & Puget (1995) and Tegmark & Efstathiou (1996), and partially tested during the phase A studies

x̂ = W y,  with  W = [C⁻¹ + Aᵀ B⁻¹ A]⁻¹ Aᵀ B⁻¹ = C Aᵀ [A C Aᵀ + B]⁻¹,   (D.9)

where x̂ stands for the estimate of the x vector. Clearly this choice of prior will regularise solutions by weighting them according to their likelihood given the assumed theory, which enters through its covariance matrix C. Of course C is not fully known a priori, although we do have information like the dust power spectrum deduced from IRAS measurements. The advantage of the simple inversion by χ² minimisation is that it requires minimal prior information on the signal x, through only very general regularity requirements. The power spectra of the templates derived from this first step can then be used as input to other techniques. One issue addressed by the simulation section § 2.4.2 is precisely how well Wiener filtering can do with only an approximate knowledge deduced from a first simple χ² minimisation. If one uses Wiener filtering, the reconstruction errors are then given by

E = (I − R) C,   (D.10)

where I stands for the identity matrix, and R = WA. This Wiener reconstruction is linearly optimal, i.e. it is the linear method which minimises the covariance of the reconstruction errors. Results of this approach in the Planck case will be presented in § 2.3.2 and § 2.4.2.
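A minimal sketch of the Wiener solution for a single mode (with assumed, illustrative A, B and C) checks the equivalence of the two forms of W in eq. (D.9) and the error covariance of eq. (D.10):

```python
import numpy as np

# Minimal Wiener sketch for one harmonic mode (assumed, illustrative matrices):
# check that the two forms of W in eq. (D.9) agree, and that the error
# covariance E = (I - WA) C of eq. (D.10) is positive semi-definite.
rng = np.random.default_rng(5)
n_freq, n_comp = 6, 3
A = rng.standard_normal((n_freq, n_comp))          # spectra times beam window
C = np.diag([1.0, 0.5, 0.1])                       # assumed template covariance
B = 0.2 * np.eye(n_freq)                           # noise covariance

W1 = C @ A.T @ np.linalg.inv(A @ C @ A.T + B)
W2 = (np.linalg.inv(np.linalg.inv(C) + A.T @ np.linalg.inv(B) @ A)
      @ A.T @ np.linalg.inv(B))
assert np.allclose(W1, W2)                         # the two forms of eq. (D.9)

E = (np.eye(n_comp) - W1 @ A) @ C                  # eq. (D.10)
assert np.all(np.linalg.eigvalsh((E + E.T) / 2) > -1e-12)
```

The second form of W is the one actually applied to the data vector y, mode by mode; the first form only requires inverting an N_c × N_c matrix.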

D.5 Maximum Entropy Method

While we already know from the results of the phase A study that the instrumental performances of Planck allow an impressive reconstruction of the underlying components using the simple Wiener filtering above, we also know that at least the foreground emissions are non-Gaussian distributed. One could thus possibly do better by using a non-Gaussian prior. The dust emission will be mapped with great accuracy by the high-frequency channels of our instrument, and one could determine its probability distribution from the data. One could do the same for the CMB anisotropies using the 100 GHz channel, where it far dominates all other components over a large fraction of the sky. On the other hand, this would not be feasible for sub-dominant components like those arising from clusters of galaxies. Instead Hobson et al. (1998) use an entropic prior, based on information-theoretic considerations alone, which has had considerable success in radio-astronomy and many other fields involving complex image processing. Given an image, x, and a model, m,

P(x) ∝ exp[α S(x, m)],   (D.11)

where α is a regularising parameter, and S(x, m) is the "cross-entropy" of x and m. The classical difficulty is that x should be a positive additive distribution, although this can be circumvented by considering the image x as the difference of two positive additive distributions, x = u − v. In that case

S(x; m_u, m_v) = Σ_{j ∈ pixels} { ψ_j − m_uj − m_vj − x_j ln[(ψ_j + x_j)/(2 m_uj)] },   (D.12)

where ψ_j = [x_j² + 4 m_uj m_vj]^(1/2), and m_u and m_v are separate models for u and v whose properties can be set from an assumed C. The Maximum Entropy Method (hereafter MEM) therefore reduces to minimising versus x the non-linear function

Φ_MEM(x) = χ²(x) − α S(x; m_u, m_v).   (D.13)

Although this minimisation requires an iterative solution owing to its intrinsically non-linear nature, it takes no longer than a computation of the Wiener matrix. The value of the trade-off parameter, α, can itself be obtained self-consistently by treating it as another parameter of the hypothesis space, which also calls for an iterative solution. Finally, the whole procedure above (i.e. Φ_MEM minimization including the search for α) can in turn be looped over by using the current iteration to provide updated estimates of the correlation matrix C, and of the models m_u and m_v. Results of this approach in the Planck case are presented in § 2.4.2.
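The MEM objective can be sketched numerically (illustrative sizes; α is held fixed here, whereas the text determines it self-consistently, and a plain gradient descent stands in for the actual optimiser):

```python
import numpy as np

# Toy MEM sketch. The entropy of eq. (D.12) is maximal, and zero, at the
# model x = m_u - m_v, and its gradient is dS/dx_j = -ln[(psi_j + x_j)/(2 m_uj)].
rng = np.random.default_rng(6)
n_freq, n_pix = 5, 4
A = rng.standard_normal((n_freq, n_pix))
m_u = np.full(n_pix, 1.0)
m_v = np.full(n_pix, 1.0)
y = A @ rng.standard_normal(n_pix)
alpha = 1.0

def entropy(x):
    psi = np.sqrt(x**2 + 4 * m_u * m_v)
    return np.sum(psi - m_u - m_v - x * np.log((psi + x) / (2 * m_u)))

def phi_mem(x):                                    # eq. (D.13), with B = I
    r = y - A @ x
    return r @ r - alpha * entropy(x)

def grad_phi(x):
    psi = np.sqrt(x**2 + 4 * m_u * m_v)
    return -2 * A.T @ (y - A @ x) + alpha * np.log((psi + x) / (2 * m_u))

assert np.isclose(entropy(m_u - m_v), 0.0)         # entropy peaks at the model

x = m_u - m_v                                      # start from the model
phi0 = phi_mem(x)
for _ in range(2000):                              # plain gradient descent
    x = x - 0.01 * grad_phi(x)
assert phi_mem(x) < phi0                           # Phi_MEM decreased
```

Note that ψ_j + x_j is strictly positive for any real x_j, so the logarithm is always defined: this is precisely the point of the u − v decomposition.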

Appendix E

A model of the microwave sky

Many detailed studies have been devoted to effects that may blur the primordial signature of the CMB anisotropies. Gravitational lensing by mass concentrations along the light-ray paths may for instance alter the detailed map patterns and add a stochastic component. Or photons passing through a fast-evolving potential well might be redshifted (Rees-Sciama effect). Secondary fluctuations might be generated during a reionization phase of the Universe. For all these processes, the answer (Seljak, 1996; Rees & Sciama, 1968; Ostriker & Vishniac, 1986; Jaffe & Kamionkowski, 1998) is that their impact is quite small at scales corresponding to ℓ ≃ 1/θ < 1000, and can easily be accounted for at the analysis stage of CMB maps. Further fluctuations may also be imprinted in the case of a strongly inhomogeneous reionization; see e.g. Aghanim et al. (1996). We shall ignore these effects in the following.

E.1 Galactic emissions

The Galactic emissions are associated with dust, free-free emission from ionized gas, and synchrotron emission from relativistic electrons. We shall follow here the same approach as in the phase A study, i.e. find a model appropriate for the best half of the sky from the point of view of a CMB experiment. This results in considerable simplification for the dust emission, since there is now converging evidence that the dust emission spectrum from high-latitude regions with low H I column densities can be well approximated with a single dust temperature and ν² emissivity, with no evidence for a very cold dust component. Indeed, Boulanger et al. (1996) derived¹ the far-IR/sub-mm spectrum of the fraction of the sky with N(H I) ≤ 5 × 10²⁰ H cm⁻², where the correlation between dust emission and H I emission is tight², and found it was

Figure E.1: The cosmic sandwich, or what we shall chew upon.

¹ By analysing the FIRAS and the Dwingeloo H I maps (Burton & Hartmann, 1994).
² Above this threshold, the increased slope and scatter is probably related to the contribution of dust associated with molecular hydrogen, since the column density threshold coincides with that inferred from UV absorption data for the presence of H₂ along the line of sight (Savage et al., 1977).


well fitted by a single dust component with T = 17.5 K and τ/N_H = 1.0 × 10⁻²⁵ (λ/250 µm)⁻² cm². Concerning the sub-mm residual to this fit, Puget et al. (1996) found that it was isotropic over the sky and concluded that the excess is the extragalactic background from galaxies, unless it is an instrumental component. The cosmological interpretation has recently received additional support. First, the analysis was redone (Guiderdoni et al., 1997; Lagache et al., 1997) in a smaller fraction of the sky where the H I column density is the smallest (< 1 × 10²⁰ H cm⁻²). In these regions, the H I correction is essentially negligible and the "residual" is actually the dominant component. This leads to a much cleaner determination of the background spectrum, which is slightly stronger than the original determination. As we shall see below in § E.3, the cosmological interpretation fits well with the results of recent IR searches for the galaxies that cause this background. Finally, the recent analyses by Schlegel, Finkbeiner, & Davis (1997) and Hauser et al. (1997) also detected a residual with similar characteristics. All of these results thus support our spectral modeling of the dust emission. Concerning the scale dependence of the amplitude of the fluctuations, we had assumed that their power spectrum varies approximately as ℓ⁻³, in agreement with the determination of Gautier et al. (1992) based on the 100 µm IRAS data in the 8 to 4 arcminute range. More recently, Wright (1997) analysed the DIRBE data by two methods, and concluded that both give consistent results, with a high-latitude dust emission power spectrum, C(ℓ), also ∝ ℓ⁻³ in the range 2 < ℓ < 300, i.e. down to θ ≈ 60°/ℓ ≈ 0.2°. Concerning the free-free emission, we had constructed our model i) by assuming that the partial correlation between the dust and free-free emission detected at large scales by Kogut et al. (1996) holds at all scales.
ii) by adding to the previous H I-correlated emission an uncorrelated component with the maximum level allowed by the analysis of Kogut et al. (1996). There are as yet no observations to suggest a situation more pessimistic than in (ii). Concerning our first assumption, though, De Oliveira-Costa et al. (1997) cross-correlated the Saskatoon data with the DIRBE data and also found a correlated component, with a normalization in agreement with that of Kogut et al. (1996). This result thus supports the hypothesis that the spatial correlation between dust and warm ionised gas observed at large angular scales persists to small angular scales.

Figure E.2: Standard deviation of the fluctuations in the synchrotron, H I-correlated and uncorrelated emissions in our Galactic model, as compared to the CMB fluctuations whose level was set by the DMR measurement, and the data points for the H I-correlated component obtained by Kogut et al. (1996) (see text). A Gaussian beam of 7° FWHM was used.

In addition, Veeraraghavan & Davies (1997) used Hα maps of the North Celestial Pole (hereafter NCP) to determine the spatial distribution of free-free emission on subdegree scales (since both emissions scale in the same way with the electron density, ∝ ∫ n_e² dl). Their best-fit estimate is C_ℓ^ff = 1.3 (+1.4/−0.7) µK² ℓ^(−2.27 ± 0.07) at 53 GHz, if they assume a gas temperature ≈ 10⁴ K. While this spectrum is significantly flatter than the spectrum we used, the normalization is also considerably lower than that of Kogut et al. (1996), resulting in a free-free power spectrum much lower on all angular scales than ours. Indeed, their predicted power at ℓ = 300 is a factor of 60 below that of the COBE extrapolation. One should note also the results of Leitch et al. (1997). They found a strong correlation between their observations at 14.5 and 32 GHz towards the NCP and the IRAS 100 µm emission in the same field. However, starting from the corresponding Hα map (one of those analysed by Veeraraghavan & Davies

(1997)), they discovered that this correlated emission was much too strong to be accounted for by the free-free emission of ≈ 10⁴ K gas. They interpret this result as indicating either flat-spectrum synchrotron radiation (with temperature spectral index β ≈ −2), or free-free emission from very hot gas at T_e ≳ 10⁶ K associated with a supernova remnant in the large H I feature known as the NCP loop. However, such a strong emission (if it exists) would be easily detected by MAP and the LFI instrument and corrected for. In addition, if it is free-free emission, it would have a significantly steeper spectral index than usual and would therefore remain at a relatively low level in the HFI bands, in particular in the important 217 GHz channel (see below). It is worth noting that observations in progress by the same authors indicate that their results for the NCP region are atypical. On the other hand, Draine & Lazarian (1997) recently proposed a different interpretation for the observed correlation. They propose that the correlated component is produced by electric dipole rotational emission from very small dust grains under normal interstellar conditions (the thermal emission at higher frequency coming from large grains). Clearly, additional observations measuring the Galactic emission would be of great value in settling this debate. Here we simply note that the spectral difference between the two cases shows up at frequencies ν < 30 GHz, i.e. outside of the range probed by Planck. This would thus barely affect the conclusions we draw from our modeling, since we have already assumed a large H I-uncorrelated free-free emission accounting for 50% of the total. Finally, there seem to be no new results concerning synchrotron emission. The most relevant information remains the lack of detectable cross-correlation between the Haslam data and the DMR data, which led Kogut et al.
(1995) to impose an upper limit of β = −0.9 for any extrapolation of the Haslam data into the millimetre wavelength range at scales larger than ≈ 7°. In view of the other constraints at higher frequencies (see the phase A report), it seems reasonable to assume that this spectral behavior also holds at smaller scales. We have thus found no compelling reason at this time to change our (pessimistic) Galactic model. In summary, to model the Galaxy, we use the spectral indices β = −0.9 and −0.16 for the synchrotron and free-free emission, and the dust is assumed to be at 18 K with an emissivity ∝ ν². One half of the free-free emission is assumed to be correlated with H I. This leads to the following angular power spectra at 100 GHz

ℓ C_ℓ^(1/2) = c_X ℓ^(−1/2) µK,  with c_sync = 2.1, c_HI−U = 8.5 and c_HI−C = 20.6,   (E.1)

where HI−U and HI−C label the dust + free-free components, uncorrelated and correlated (respectively) with H I (which implies c_free = 13.7, and c_dust = 13.5). These normalisations should be appropriate for the "best" half of the sky, at scales ℓ > 10, as in the phase A report. By using these angular power spectra and our spectral model, we can compute the rms per 7° beam at every frequency and check that the H I-correlated component indeed provides a good fit to the results of the Kogut et al. (1996) analysis, as is shown in figure E.2. Figure 2.7 allows a comparison with the other contributions to the microwave sky.
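As an illustration of how such power laws translate into rms levels per beam (the numerical setup below is a sketch, not the phase A computation):

```python
import numpy as np

# Illustrative evaluation of eq. (E.1): the rms per (Gaussian) beam is
# sigma^2 = sum_l (2l+1)/(4 pi) C_l W_l with W_l = exp(-l(l+1) sigma_b^2),
# and C_l = c_X^2 l^-3 (muK)^2 at 100 GHz.
fwhm = np.radians(7.0)                             # 7 degree beam
sigma_b = fwhm / np.sqrt(8.0 * np.log(2.0))
ell = np.arange(2, 1000)
W = np.exp(-ell * (ell + 1) * sigma_b**2)

def rms_per_beam(c_x):
    C = c_x**2 * ell**-3.0
    return np.sqrt(np.sum((2 * ell + 1) / (4 * np.pi) * C * W))

levels = {name: rms_per_beam(c) for name, c in
          [("sync", 2.1), ("HI-U", 8.5), ("HI-C", 20.6)]}
assert levels["HI-C"] > levels["HI-U"] > levels["sync"]   # scales with c_X
```

Since all three components share the same ℓ⁻³ shape, their rms levels per beam simply scale with the normalisations c_X of eq. (E.1).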

E.2 The polarised emission from dust in our Galaxy

At present there exist no data on the spatial distribution of polarised dust emission in the Galaxy at high galactic latitudes. However, it is possible to model it using the fact that the dust emission correlates extremely well with the H I emission in the Galaxy at high latitudes (Boulanger et al., 1996). This fact allows one to determine the three-dimensional distribution of dust in the Galaxy. Recent observations support the view that dust grains are oblate with an axis ratio ≃ 2/3, which, using theoretical models, implies that the intrinsic dust emission is 30% polarised (Hildebrand & Dragovan, 1995). Assuming a distribution of the magnetic field relative to dust structures, two-dimensional sky maps of polarised dust emission can be constructed by projecting the three-dimensional distribution of polarised emission (for details, see Prunet et al., 1998). From these maps one can estimate the autocorrelation power spectra of E- and B-mode polarisation and the E−T cross-correlation spectrum, quantities which can be directly compared to their CMB counterparts. These power spectra can

approximately be fitted as:

C_E(ℓ) = 8.9 × 10⁻⁴ ℓ^(−1.3) (µK)²,   (E.2)
C_B(ℓ) = 1.0 × 10⁻³ ℓ^(−1.4) (µK)²,   (E.3)
C_ET(ℓ) = 1.7 × 10⁻² ℓ^(−1.95) (µK)².   (E.4)

The power spectra are normalised at 100 GHz and are for the maps between galactic latitudes 30° and 45°.

E.3 Infrared sources & their background

Given the steepness of rest-frame galaxy spectra longward of ≈ 100 µm, predictions in this wavelength range are rather sensitive to the assumed high-z history. Indeed, as can be seen from figure E.3, a template galaxy at z = 5 might be more luminous than its z = 0.5 counterpart, because the redshifting of the spectrum can bring more power at a given observing frequency than the cosmological dimming takes away, at least in the λ > 800 µm range (e.g. Blain & Longair, 1993), precisely the range explored by the HFI. This "negative K-correction" means that i) predictions are rather sensitive to the redshift history of galaxy formation; ii) variations of the observing frequency imply a partial decorrelation of the galaxy contribution, i.e. one cannot stricto sensu describe this contribution by a template and a frequency spectrum (but see below). This is compounded by the fact that this part of the spectrum is not well known observationally, even at z = 0, and very little is known about the redshift distribution of faint infrared sources.

Figure E.3: Redshifting a template galaxy spectrum. Longward of ≈ 300 µm, the flux of a template galaxy is similar at all z > 0.5.

Estimates of the contribution of radio-sources and infrared galaxies to the anisotropies of the microwave sky have so far relied on extrapolations from redshift z = 0 all the way to a (large) assumed galaxy-formation onset redshift. Given the uncertainties described above, this makes the predictions of this type of modelling rather uncertain. On the other hand, predictions of galaxy formation models in the UV and optical bands have received a lot more theoretical attention in the last few years; they rely on substantially more involved semi-analytical models of galaxy formation which provide a physical basis to the redshift history of galaxies.
In short, one starts from a matter power spectrum (typically a standard, COBE-normalised CDM), and estimates the number of dark matter halos as a function of their mass at any redshift (e.g. using the Press-Schechter approach). Standard cooling rates are used to estimate the amount of baryonic material that forms stars. The stellar energy release is obtained from a library of stellar evolutionary tracks, and spectra computations may then be performed to compare in detail with observations. This approach, despite some difficulties, has been rather successful, and many new observations fit naturally in this framework. In the context of the Planck preparation, we began a long-term project to extend this type of physical modelling to the far-infrared range. This effort has recently converged (Guiderdoni et al., 1998). Given the stellar energy release, we estimated the fraction reradiated in the microwave range using a simple geometrical model. This resulted in an infrared luminosity function at all redshifts which, together with an assignment of synthetic spectra to a given infrared luminosity, allows predictions of the numbers and fluxes of faint galaxies at any frequency. To constrain the fraction of heavily reddened objects as a function of redshift we used the z ≈ 0 IRAS data at 60 µm (Lonsdale


et al., 1990) to normalise the model, and assumed that the isotropic background discovered by Puget et al. (1996) is indeed the long-sought Cosmological Infrared Background Radiation (CIBR) due to the accumulated light of infrared galaxies. We thus used it in Guiderdoni et al. (1997) to select a small set of possible redshift histories satisfying this new constraint, and predicted that ISO observations at 175 µm could largely break the remaining degeneracy in the model. Such observations were very recently done (Kawara et al., 1997; Clements et al., 1997; Puget et al., 1998) and the number of sources found is best described by one model (hereafter model E) corresponding to a fairly large fraction of very obscured objects. The ISO Hubble deep field data at 15 µm (Oliver et al., 1997) also agree with the predictions of this model. Impressively, even the first SCUBA determination (Smail, Ivison, & Blain, 1997) at 850 µm also seems to be fitted by model E. In short, this model is successful in predicting the latest source counts over a broad range of frequencies (see figure E.4 for a comparison), which in turn lends further credibility to the Puget et al. (1996) interpretation of the isotropic residual as the CIBR. Irrespective of any remaining uncertainties in the details of the model, its observational confirmation in the 15 to 175 µm range (and even up to 850 µm if the newer SCUBA point is confirmed) gives us confidence that we can predict with good accuracy the contribution of infrared galaxies to CMB measurements performed in the same frequency range, typically that of the HFI. The situation is different for the redshift distribution of the sources, which is more sensitive than the number counts to the details of the evolutionary scenarios. The current observational constraints are so far limited to the spectroscopic follow-up of IRAS sources at 60 µm, which essentially highlights the IR properties of galaxies in the local Universe.
Figure E.4: Number counts prediction from the E model of Guiderdoni et al. (1997) & the available data (see text).

Forthcoming multiwavelength follow-up and spectroscopic observations of ISO and SCUBA sources will put strong constraints on the shape of the spectral energy distribution of the various classes of sources, and help us improve the determination of the star formation rate history of the Universe, taking into account the total luminosity budget of the galaxies. As an output, more elaborate models fitting the redshift distributions should give improved predictions of number counts in the HFI observing bands. The lower frequency predictions, in the LFI range, have not yet been modelled with the same care, and the model provides a lower limit since it includes only thermal sources. We have thus complemented it with an appropriate model for the low-frequency part, taken from Toffolatti et al. (1997), where we needed to estimate the unresolved background from radio-sources, blazars, etc. Table E.1 gives our model prediction for the number of detectable sources along scans. Table 2.1 in the main text gives our current best estimate for the total number of detectable sources by the HFI at the 5σ level. In addition, we can use the self-consistently derived level of the background of unresolved galaxies (below 5σ_tot, for a level typical of the best 50% of the sky) to provide an estimate of the power spectrum of the fluctuations of this background. We find that this unresolved background can be approximately modelled as a modified black body with an emissivity ∝ ν^{0.7} and a temperature of 13.8 K. A more precise fit yields, at ν > 100 GHz:

\sqrt{C_\ell(\nu)} \simeq 1.12\,\left(1 - \frac{0.16}{x}\right)\,\frac{x^{3.7}}{e^{x/2.53} - 1} \quad [\mathrm{Jy/sr}]   (E.5)

with x = h\nu/2kT_0 = \nu/(113.6\,\mathrm{GHz}).

NUMBER COUNTS FOR TIMELINE

 ν       θ_FWHM    σ_ins    t_mis    t_1rot     σ_ins,1rot   σ_ins,100rot   N_100rot
 GHz     arcmin    mJy      s        0.01 s     mJy          mJy
 (1)     (2)       (3)      (4)      (5)        (6)          (7)            (8)
 857     5         43.3      6.2     1.39       2240         224            3.7
 545     5         43.8      6.2     1.39       2270         227            0.6
 353     5         19.4      6.2     1.39       1000         100            0.6
 217     5.5       11.5      7.6     1.54        880          88            0.02
 143     8.0        8.3     16.1     2.24        780          78            0.003
 100    10.7        8.3     28.4     2.98        510          51            0.005

Table E.1: Theoretical estimates from model E of Guiderdoni et al. (1997). (1) Central frequency in GHz. (2) Beam full width at half maximum in arcmin. (3) 1σ instrumental noise for the 14-month nominal mission. (4) Time spent per pixel and per bolometer during the 14-month nominal mission. (5) Time spent per pixel and per bolometer after 1 rotation, for a scanning speed of 1 rotation per minute (358.63 arcmin s⁻¹) and an 85° view angle. (6) 1σ instrumental noise per pixel and bolometer after 1 rotation, computed from σ_ins,1rot = N_bol^{1/2} (t_mis/t_1rot)^{1/2} σ_ins. (7) 1σ instrumental noise per pixel and bolometer after 100 rotations. (8) Total number of FIR sources detected at 5σ after 100 rotations.
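The noise scaling behind columns (3) to (7) follows the square-root law quoted in the caption; a small sketch in Python (the per-band bolometer count N_bol is our assumption, chosen here so that the 857 GHz row is roughly reproduced):

```python
import math

# sigma_1rot = N_bol^{1/2} (t_mis / t_1rot)^{1/2} sigma_mis  (table caption),
# and averaging n_rot rotations gains a further factor n_rot^{1/2}.
def sigma_after_rotations(sigma_mis_mjy, t_mis_s, t_1rot_s, n_bol, n_rot):
    sigma_1rot = math.sqrt(n_bol * t_mis_s / t_1rot_s) * sigma_mis_mjy
    return sigma_1rot / math.sqrt(n_rot)

# 857 GHz row: sigma_mis = 43.3 mJy, t_mis = 6.2 s, t_1rot = 0.0139 s;
# an assumed N_bol = 6 then reproduces the tabulated 2240 / 224 mJy.
print(round(sigma_after_rotations(43.3, 6.2, 0.0139, 6, 1)))    # ~2240
print(round(sigma_after_rotations(43.3, 6.2, 0.0139, 6, 100)))  # ~224
```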

The corresponding temperature fluctuation power spectrum is then

\ell\,C(\ell)^{1/2} \simeq 1.13\times10^{-9}\,\left(1 - \frac{0.16}{x}\right)\,\frac{\sinh^2(x/2)}{x^4}\,\frac{x^{3.7}}{e^{x/2.53} - 1}\;\ell \quad [\mathrm{K}]   (E.6)

which gives a value of δC(ℓ) = 0.02 µK at 100 GHz. At lower frequencies (ν < 100 GHz), one has instead

\ell\,C(\ell)^{1/2} \simeq 10^{-9}\,\left(0.8 - 2.5\,x + 3.38\,x^2\right)\,\frac{\sinh^2(x/2)}{x^4}\;\ell \quad [\mathrm{K}]   (E.7)

The corresponding level is compared with the other sources of fluctuations in figure 2.7. It is quite low (at least at ℓ < 1000) compared to the expected fluctuations from the CMB or from the Galaxy. We have generated maps of the galaxies at 30 different frequencies by using model E described above (and 1.5′ × 1.5′ pixels). This allows us to generate precise maps of the contribution of infrared galaxies to the Planck bands, to test various source extraction schemes in our full simulations. More simply though, we can assess the level of decorrelation between different frequencies by computing the spectrum of each 1.5′ pixel, once we have convolved each map with a 5′ FWHM Gaussian beam. Figure E.5.a shows the mean and the median spectrum, as well as various contours including up to 90% of the pixels. Clearly the decorrelation should be very weak on this angular scale.

Figure E.5: a) Absolute flux in each pixel as a function of frequency; the maps have been convolved with a Gaussian of 5′ FWHM. The solid black line shows the mean spectrum, the median spectrum is denoted by red dashes, while the various green contours encircle 10, 30, 70 and 90% of the pixel values. b) Cross-correlation coefficients between the full resolution maps.

Indeed, the cross-correlation coefficients of the full resolution maps (once the

5σ_tot sources have been removed) of figure E.5.b is better than 0.95 in the 100-350 GHz range, and it is still > 0.60 in the full 100-1000 GHz range. This confirms that we can treat the IR background from unresolved sources as just another template to be extracted from the data, with a well-defined spectral behaviour³, at least in the range probed by Planck. While the model above might well be the best available for assessing the infrared source properties, it does not take into account the contribution from low-frequency point sources like blazars, radio-sources, etc. We have used the low-frequency predictions of Toffolatti et al. (1997) to model the contribution from these unresolved sources. Here again, we assume that the unresolved background has well-defined spectral properties.
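The decorrelation test described above amounts to computing correlation coefficients between maps of the same template seen at different frequencies; a toy sketch (the three "maps" below are synthetic stand-ins, not the model-E simulations):

```python
import numpy as np

# Three synthetic maps sharing one underlying template, with a small
# frequency-dependent admixture standing in for spectral-shape diversity.
rng = np.random.default_rng(0)
npix = 100000
template = rng.standard_normal(npix)
maps = [template + eps * rng.standard_normal(npix) for eps in (0.02, 0.05, 0.10)]

# np.corrcoef treats each row as one variable (one map here).
corr = np.corrcoef(maps)
print(corr.round(3))   # off-diagonal coefficients stay close to 1
```

As in figure E.5.b, a dominant common template keeps the cross-correlation coefficients near unity even when the admixture grows with frequency separation.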

E.4 Clusters of galaxies

During the phase A study, we devised a model for generating maps of the Sunyaev-Zeldovich effect (both thermal and kinetic) and analysed the capabilities of Planck in detecting clusters of galaxies (Aghanim et al., 1997a, see Figure 1.6). Since then, we have improved the model by using better theoretical counts and computations of the peculiar velocities, and generalised it to encompass various cosmological models. The counts were derived from the Press-Schechter mass function (Press & Schechter, 1974), normalised using the X-ray temperature distribution function derived from Henry & Arnaud (1991) data, as in Viana & Liddle (1996). The updated source counts predictions are thus in agreement with more recent data.

Figure E.6: Power spectra of the Sunyaev-Zeldovich thermal effect for different cosmological models: standard CDM (solid line), open CDM (dashed line) and lambda CDM (dotted line) are shown.

The power spectrum of the fluctuations due to the Sunyaev-Zeldovich thermal (hereafter SZ) effect from clusters of galaxies was evaluated by analysing new maps of the Compton parameter y generated with these counts. The "new" counts induce a power spectrum larger, by a factor 3, than the model quoted in the Phase A document. For the standard CDM model, we found that the corresponding y fluctuations are well fitted in the range 20 < ℓ < 4000 by

\ell(\ell + 1)\,C_\ell = a^y_{\mathrm{sz}}\,\ell\,\left(1 + b^y_{\mathrm{sz}}\,\ell\right)   (E.8)

with a^y_sz = 4.3×10⁻¹⁵ and b^y_sz = 8.4×10⁻⁴. This yields ℓ C_ℓ^{1/2} = 0.27 [ℓ (1 + 8.4×10⁻⁴ ℓ)]^{1/2} µK for the temperature fluctuation spectrum at 100 GHz. On small angular scales (large ℓ), the power spectrum of the SZ thermal effect exhibits the characteristic ℓ² dependence of white noise. This arises because at these scales the dominant signal comes from the point-like unresolved clusters. On large scales (small ℓ), the contribution to the power comes from the superposition of a background of unresolved structures and extended structures. The transition between the two regimes occurs for ℓ ≈ 1/b_sz, that is, when the angular scale is close to the pixel size of our simulation. Of course, the values of the fitting parameters depend on the assumed cosmological scenario. For an open model (Ω₀ = 0.3), we found instead a^y_sz = 4.6×10⁻¹⁵ and b^y_sz = 2.2×10⁻³. These small differences are illustrated in figure E.6. The kinetic SZ effect, due to the Doppler effect from clusters in motion along the line of sight (hereafter l.o.s.), cannot, of course, be spectrally distinguished from the primary fluctuations. This additional contribution is about an order of magnitude smaller than the thermal SZ effect. However, once a component separation has been performed, one can compare the derived maps of the Compton parameter y with those of the temperature. When the Doppler effect is large (as compared to the primary fluctuations), it can be used to directly estimate the l.o.s. velocity of the clusters (Haehnelt & Tegmark, 1996; Aghanim et al., 1997a) using appropriate differential spatial filters.

³ Note though that the model may not do full justice to the diversity of spectral shapes of the contributing IR galaxies; if this turns out to be the case, we could for instance treat this contribution as an additional noise contribution to the detectors, as was conservatively assumed in the Phase A study.

It was shown

that individual cluster peculiar velocities can be determined to an accuracy of better than about 400 km/s. For smaller y values and/or smaller l.o.s. velocities, one can only perform a statistical analysis and estimate the large scale peculiar velocities with an accuracy of 100 km/s at the sensitivity level of the HFI (Aghanim et al., 1997a). This will yield information nearly impossible to obtain by other means, at least for high-z clusters. In addition, the combination of HFI data with FIRST data should give the temperature of the strongest clusters (Pointecouteau, Giard, & Barret, 1997) through the (weak) spectral dependence of the Sunyaev-Zeldovich effect on temperature, when it is high enough to make relativistic corrections significant⁴. Finally, as first recognised by Birkinshaw & Gull (1983), a cluster potential well moving transversely to a CMB photon trajectory leads to an additional dipolar temperature anisotropy, potentially observable. Unfortunately, this effect appears to be too small compared with the Doppler and primary CMB fluctuations to usefully constrain the transverse velocity dispersion of clusters (Aghanim et al., 1997b).
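For reference, the thermal-SZ fit quoted after eq. (E.8), ℓ C_ℓ^{1/2} = 0.27 [ℓ(1 + 8.4×10⁻⁴ ℓ)]^{1/2} µK at 100 GHz, is trivial to evaluate; a short sketch showing the transition toward the white-noise regime near ℓ ≈ 1/b_sz:

```python
import math

# Fitted SZ temperature-fluctuation spectrum at 100 GHz (standard CDM values
# from the text: a = 0.27 microK, b = 8.4e-4).
def sz_ell_cl_sqrt_uK(ell, a=0.27, b=8.4e-4):
    return a * math.sqrt(ell * (1.0 + b * ell))

# Below ell ~ 1/b the spectrum grows as ell^{1/2}; above, linearly in ell
# (the ell^2 white-noise behaviour in power).
for ell in (100, 1000, 1 / 8.4e-4, 10000):
    print(int(ell), round(sz_ell_cl_sqrt_uK(ell), 2))
```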

⁴ For a cluster with an electronic temperature T_e = 8 keV and a Compton parameter y = 3×10⁻⁴, they found that the precision ΔT_e on T_e determined from Planck data only is from 2.2 to 3.1 keV for a cluster at z = 0.1, depending on the location of the cluster in the survey, and 4.2 keV for a cluster at z = 1. Complementing with FIRST data improves the determination, because subtraction of part of the background infrared galaxies becomes possible given the FIRST angular resolution. The error bars are almost independent of T_e, but scale directly with 1/y.

Appendix F

Merit of the component separation using Wiener filtering

F.1 The unpolarised case

Once the instrument, through the A matrix, and the covariance matrix C of the templates are known, the Wiener filter is entirely determined through eq. (D.9). Of course, to really check the obtainable accuracy, one needs to go through a first inversion, e.g. by simple χ² minimisation with no assumed prior. In the following theoretical analysis, we assume for the sake of simplicity that we already know the power spectrum with negligible error. Figure F.1 offers a graphical presentation of the resulting values of the Wiener matrix coefficients of the CMB component when we use the sky model described in appendix E. It shows how the different frequency channels are weighted at different angular scales, and thus how the ν-ℓ information gathered by the experiment is used. Note that throughout this section the point source background (once the 5σ_tot sources are removed) was treated as the superposition of two spectrally well-defined emission processes, one for the radio sources and the other for the infrared galaxies. Figure F.2 shows in the same representation the Wiener matrix for the other components in the HFI case alone.

Figure F.1: Wiener matrix elements for the HFI and the full Planck mission², for the CMB component.

Figure F.2: HFI Wiener matrix elements for the H I-correlated, H I-uncorrelated, synchrotron and SZ components, in the HFI case.

In this theoretical analysis, we can always assume for simplicity that we have decomposed the sky flux into a superposition of emissions from uncorrelated templates³, i.e. \langle x_p x^*_{p'}\rangle = \delta_{pp'} C_p(\ell) (although it might prove more convenient in practice to relax this assumption and consider correlated templates but with simpler spectral signatures). In that case, one can show (Bouchet & Gispert, 1998a) that

\langle|\hat{x}_p|^2\rangle = Q_p\,\langle|x_p|^2\rangle   (F.1)

\langle\varepsilon_p^2\rangle = (1 - Q_p)\,C_{pp} = \sum_{p'\neq p} R_{pp'}^2\,C_{p'p'} + \sum_{\nu\nu'} W_{p\nu}\,W_{p\nu'}\,B_{\nu\nu'}   (F.2)

\Delta\hat{C}_{pp} = \frac{1}{Q_p}\,C_{pp}\,\sqrt{\frac{2}{2\ell + 1}}   (F.3)

where R = WA, Q_p = R_pp stands for its diagonal elements, ε_p = x_p − x̂_p is the reconstruction error of the p process, and ΔĈ_pp is the expected error on the recovered power spectrum of p⁴. Thus Q_p tells us: 1. how the amplitude of the estimated modes x̂ is damped as compared to the real ones (eq. F.1), 2. the spectrum of each residual error in the map (eq. F.2), 3. the uncertainty added by the noise and the foreground removal (∝ 1/Q_p − 1, eq. F.3) to cosmic variance⁵. (These results only hold under the simplifying assumption of Gaussianity of all the sky components.) Given these properties, we can use Q_p as a "quality factor" to assess the ability of experimental set-ups to recover a given process p in the presence of other components; it assesses in particular how well the CMB itself can be disentangled from the foregrounds, see figures 2.9 and F.3 for examples. Figure F.3 offers a comparison of the effective windows for each foreground, and the corresponding reconstruction errors, for the LFI, the HFI, and the full Planck, by using the estimated power spectra of our sky model (see §E). The numbers we have used for describing the experimental set-ups are recalled in table F.1.
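The construction of W and of the quality factors Q_p can be sketched numerically; the channel responses, template powers and noise below are toy numbers, not the HFI model:

```python
import numpy as np

# Wiener filter at a single multipole: W = C A^T (A C A^T + B)^{-1},
# for uncorrelated templates (diagonal C), response A (channels x processes)
# and noise covariance B.  All numbers are illustrative.
A = np.array([[1.0, 0.2],
              [1.0, 1.0],
              [1.0, 5.0]])        # two processes seen in three channels
C = np.diag([1.0, 0.5])          # assumed template power spectra at this ell
B = 0.1 * np.eye(3)              # assumed white-noise covariance

W = C @ A.T @ np.linalg.inv(A @ C @ A.T + B)
R = W @ A
Q = np.diag(R)                   # quality factors Q_p = R_pp

# eq. (F.1): the estimated modes are damped by Q_p, so 0 < Q_p < 1, and
# for diagonal C the residual power obeys diag(C - R C) = (1 - Q_p) C_pp.
err = np.diag(C - R @ C)
print(Q, err)
```

Raising the noise level B drives Q_p toward 0; a noiseless, well-conditioned system drives it toward 1, which is exactly how the ℓ-space effective windows of figure F.3 behave.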



MAP specifications
 ν (GHz)              22     30     40     60     90
 θ_FWHM (arcmin)      54     39     31.8   23.4   17.4
 ΔT (µK)              11.7   16.2   19.8   26.9   36.2
 c_noise (µK deg)     11.8   11.8   11.8   11.8   11.8

MAP current design⁶
 ν (GHz)              22     30     40     60     90
 θ_FWHM (arcmin)      55.8   40.8   28.2   21.0   12.6
 ΔT (µK)              8.4    14.1   17.2   30.0   50.0
 c_noise (µK deg)     8.8    10.8   9.1    11.8   11.8

Proposed LFI
 ν (GHz)              30     44     70     100
 θ_FWHM (arcmin)      33     23     14     10
 ΔT (µK)              4.0    7.0    10.0   12.0
 c_noise (µK deg)     2.5    3.0    2.6    2.3

Proposed HFI
 ν (GHz)              100    143    217    353    545    857
 θ_FWHM (arcmin)      10.7   8.0    5.5    5.0    5.0    5.0
 ΔT (µK)              4.6    5.5    11.7   39.3   401    18182
 c_noise (µK deg)     0.9    0.8    1.2    3.7    38     1711

Table F.1: Summary of experimental characteristics used for comparing experiments. Central band frequencies, ν, are in gigahertz; the FWHM angular sizes, θ_FWHM, are in arcminutes; and the ΔT sensitivities are in µK per θ_FWHM × θ_FWHM square pixel. The implied noise spectrum normalisation of eq. (2.2), c_noise = ΔT (Ω_FWHM)^{1/2}, is expressed in µK·deg.

³ In our model, these are the H I-correlated and H I-uncorrelated components rather than the dust and free-free emissions.
⁴ This error can be computed on general grounds for any linear inversion method, as a function of R. It does tell us about the experiment and the filtering method, irrespective of how the filter was obtained. Thus one can derive valid errors despite the fact that an estimate of C_p enters the definition of the Wiener matrix W.
⁵ The cosmic (or sampling) variance is given by C_pp √(2/(2ℓ + 1)).

Figure F.3: Top row: a) Quality factors, or squares of the ℓ-space effective windows, for the LFI, HFI, and Planck (from left to right). As usual, black is for the CMB; red, blue, and green are for the Galactic components; yellow is for the SZ contribution. The transforms of the channel beams are also shown as dotted orange lines. Bottom row: CMB reconstruction error contributed by each component (with the same line coding as above) and their total in black. The integral of ℓ(ℓ + 1)ε²/2π would give the reconstruction error of the map. See figure 2.9.c in the main text for a comparison with C(ℓ). For Planck, the total reconstruction error is nearly constant at all ℓ and corresponds to ε(ℓ) ≃ 0.003 µK.

F.2 Generalisation to the polarised case

The CMB polarisation signal is likely to be one to two orders of magnitude below the CMB temperature fluctuations. The extraction of this small signal is hindered not only by the detector noise but also by ubiquitous galactic polarised foregrounds. However, as the foregrounds differ from the CMB in both frequency dependence and spatial distribution, one can hope to reduce their level in a multi-frequency CMB experiment. We have recently extended the standard Wiener filtering above to account for the specifics of polarisation (Prunet, Sethi, & Bouchet, 1998). We give here an outline of the derivation and the main results. A CMB experiment which measures both temperature and polarisation will give multi-frequency maps y^{νi}, where the ν and i indices correspond to the frequency and the field (temperature and polarisation, which we take to be the E-mode polarisation throughout; Seljak, 1997), respectively. The term y^{νi} includes contributions from the CMB, the foregrounds, and the detector noise. It can, in multipole space {ℓ, m}, be written as:

y^{\nu i}(\ell, m) = A^{\nu i}_{jp}(\ell, m)\,x^{jp}(\ell, m) + b^{\nu i}(\ell, m).   (F.4)

Here A^{νi}_{jp} is the frequency response matrix of the various processes (denoted by the index p), x^{jp} is the true signal of each process (and field), and b^{νi} is the detector noise of frequency channel ν for each field. Our aim is to choose a linear filter W^{ij}_{pν} on these maps such that the variance of the reconstruction error \langle|\hat{x}^{ip} - x^{ip}|^2\rangle (with \hat{x}^{ip} = W^{ij}_{p\nu}\,y^{\nu j}) is minimised. This condition can be reduced to the following normal equations for W (written here in compact matrix form over the frequency and field indices):

W\left(A\,\langle x\,x^{\dagger}\rangle\,A^{\dagger} + \langle b\,b^{\dagger}\rangle\right) = \langle x\,x^{\dagger}\rangle\,A^{\dagger}   (F.5)

Eq. (F.5) is valid for the general case in which the various processes, fields, and corresponding instrumental noises could be correlated. We consider here only uncorrelated processes and noises between different fields and channels, but we allow for correlation between the two fields T and E. These conditions can be expressed as:

\langle x^1_{p'}\,x^1_{p''}\rangle = \delta_{p'p''}\,C^T_{p'}\,\delta_{mm'}
\langle x^1_{p'}\,x^2_{p''}\rangle = \delta_{p'p''}\,C^{TE}_{p'}\,\delta_{mm'}
\langle x^2_{p'}\,x^2_{p''}\rangle = \delta_{p'p''}\,C^E_{p'}\,\delta_{mm'}   (F.6)
\langle b^{\nu 1}\,b^{\nu' 1}\rangle = \delta_{\nu\nu'}\,B^T_{\nu}\,\delta_{mm'}
\langle b^{\nu 2}\,b^{\nu' 2}\rangle = \delta_{\nu\nu'}\,B^E_{\nu}\,\delta_{mm'}
\langle b^{\nu 1}\,b^{\nu' 2}\rangle = 0

which can be used to simplify the Wiener filter expressions.

F.2.1 Unbiased estimators and covariance of power spectra

To estimate errors in determining the various power spectra, we need to write their unbiased estimators, defined such that \langle\hat{x}^{ip}\,\hat{x}^{jp}\rangle = \langle x^{ip}\,x^{jp}\rangle. We start by writing the average power spectrum of \hat{x}^{ip}:

\langle\hat{x}^{ip}\,\hat{x}^{ip}\rangle = W^{ij}_{p\nu}\,W^{il}_{p\nu'}\left(A^{jk}_{\nu p'}\,A^{lm}_{\nu' p''}\,\langle x^{kp'}\,x^{mp''}\rangle + \langle b^{\nu j}\,b^{\nu' l}\rangle\right)   (F.7)

This expression can be expanded as:

\langle\hat{x}^1_p\,\hat{x}^1_p\rangle = Z^1_p\,C^T_p + b^1_p \equiv Q^{11}_p\,C^T_p
\langle\hat{x}^2_p\,\hat{x}^2_p\rangle = Z^2_p\,C^E_p + b^2_p \equiv Q^{22}_p\,C^E_p   (F.8)
\langle\hat{x}^1_p\,\hat{x}^2_p\rangle = Z^{12}_p\,C^{TE}_p + b^{12}_p \equiv Q^{12}_p\,C^{TE}_p

Eq. (F.8) can then be used to define unbiased estimators of the power spectra:

\hat{C}^T_p = \frac{1}{Z^1_p}\left(\frac{1}{2\ell + 1}\sum_m \|\hat{x}^1_p(m)\|^2 - b^1_p\right)   (F.9)
\hat{C}^E_p = \frac{1}{Z^2_p}\left(\frac{1}{2\ell + 1}\sum_m \|\hat{x}^2_p(m)\|^2 - b^2_p\right)
\hat{C}^{TE}_p = \frac{1}{Z^{12}_p}\left(\frac{1}{2\ell + 1}\sum_m \|\hat{x}^1_p(m)\,\hat{x}^2_p(m)\| - b^{12}_p\right)   (F.10)
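The debiasing logic of these estimators is easy to check on simulated modes; a toy sketch in Python (the Z, C, b values are illustrative, and a single Gaussian field stands in for the filtered multipoles of eq. F.8):

```python
import numpy as np

# If the filtered modes obey <|x_hat|^2> = Z C + b (cf. eq. F.8), then
# (mean |x_hat|^2 - b) / Z is an unbiased estimator of C.
rng = np.random.default_rng(1)
Z, C, b = 0.8, 2.0, 0.3
x_hat = rng.normal(0.0, np.sqrt(Z * C + b), size=200000)

C_hat = (np.mean(x_hat**2) - b) / Z
print(round(C_hat, 3))   # recovers C ~ 2.0 up to sampling scatter
```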

The covariance of the power spectra can be calculated from these unbiased estimators:

\mathrm{Cov}(\hat{C}^T_p) = \frac{2}{2\ell + 1}\left(\frac{C^T_p\,Q^{11}_p}{Z^1_p}\right)^2   (F.11)
\mathrm{Cov}(\hat{C}^E_p) = \frac{2}{2\ell + 1}\left(\frac{C^E_p\,Q^{22}_p}{Z^2_p}\right)^2   (F.12)
\mathrm{Cov}(\hat{C}^{TE}_p) = \frac{1}{2\ell + 1}\,\frac{(Q^{12}_p)^2\,(C^{TE}_p)^2 + Q^{11}_p\,Q^{22}_p\,C^T_p\,C^E_p}{(Z^{12}_p)^2}   (F.13)

F.2.2 Errors on cosmological parameters: Fisher matrix

The errors on the various power spectra can then be translated into expected errors on cosmological parameters using the Fisher information matrix, which is defined as:

F_{ij} = \sum_{\ell}\sum_{X,Y} \frac{\partial C^X_\ell}{\partial\theta_i}\,\mathrm{Cov}^{-1}\!\left(\hat{C}^X_\ell, \hat{C}^Y_\ell\right)\,\frac{\partial C^Y_\ell}{\partial\theta_j}   (F.14)

Here the θ_i correspond to the various cosmological parameters, and X, Y denote E, T, and the ET cross-correlation; Cov⁻¹ denotes the inverse of the covariance matrix. The errors on the cosmological parameters are then given by:

\Delta\theta_i = \left(F^{-1}\right)^{1/2}_{ii}   (F.15)

 Parameters              C₂          h      Ω_b    Ω_Λ     τ      n_s     n_t
 Model                   796 (µK)²   0.5    0.05   0.0     0.05   1.0     0.0
 Best channel (LFI)      4%          1.9%   3.2%   0.057   12%    0.5%    0.53
 Wiener (LFI)            2.8%        1.2%   2.2%   0.038   3.5%   0.4%    0.4
 Best channel (HFI)      2.4%        1.1%   1.9%   0.033   6.6%   0.32%   0.29
 Wiener (HFI)            1.9%        0.8%   1.5%   0.025   3%     0.3%    0.17
 Best channel (Planck)   2.4%        1.1%   1.9%   0.033   6.6%   0.32%   0.29
 Wiener (Planck)         1.8%        0.75%  1.4%   0.023   2.9%   0.29%   0.16

Table F.2: 1σ errors in estimates of cosmological parameters using the polarisation information. The retained parameters are, from left to right of the first row, the normalisation of the power spectrum, Hubble's constant in units of 100 km/s/Mpc, the energy density of baryons and of the vacuum in units of the closure density, the optical depth to last scattering, and the scalar and tensor indices of the power-law initial conditions. The next row shows the model values we used, while the other rows compare naïve estimates based on using the "best" channel of the experiment (e.g. the 143 GHz channel for the HFI, assuming the others can be used to fully remove any foreground contribution) and our estimates assuming Wiener filtering is used.

Table F.2 gives an example of the expected accuracy on some of the cosmological parameters in the absence of gravitational waves (for an example of the effect of gravitational waves, see table 2.2, p. 30 of the main text).
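The Fisher machinery of eqs (F.14) and (F.15) can be sketched in a few lines; the derivatives and covariances below are made-up numbers for two parameters and three multipoles, not the Planck model:

```python
import numpy as np

# F_ij = sum_l dC_l/dtheta_i Cov^{-1} dC_l/dtheta_j (eq. F.14, here with a
# diagonal covariance), and errors sigma_i = (F^{-1})_{ii}^{1/2} (eq. F.15).
def fisher(dcl, cov):
    dcl = np.asarray(dcl)                       # shape (n_params, n_ell)
    return dcl @ np.diag(1.0 / np.asarray(cov)) @ dcl.T

dcl = np.array([[1.0, 0.5, 0.2],                # hypothetical derivatives, parameter 1
                [0.0, 1.0, 0.4]])               # hypothetical derivatives, parameter 2
cov = np.array([0.1, 0.2, 0.4])                 # hypothetical per-ell spectrum variances

F = fisher(dcl, cov)
sigma = np.sqrt(np.diag(np.linalg.inv(F)))      # marginalised 1-sigma errors
print(F, sigma)
```

Note that inverting the full matrix before taking the diagonal marginalises each error over the other parameters, which is what the table quotes.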

Appendix G

List of Acronyms

ADC      Analog to Digital Converter
AGN      Active Galactic Nucleus
AIT      Assembly, Integration and Testing
AIV      Assembly, Integration and Verification
AMS      Archive Management System
AO       Announcement of Opportunity
AOCS     Attitude and Orbit Control System
APH      Attitude Pointing History
API      Application Programming Interface
APID     Application Packet Identifier
ASF      Actively Star-Forming (galaxies)
AT       Acceptance Test
AWG      Astronomy Working Group (ESA)
BSFR     Back Surface Field Reflector
CC       Configuration Control
CCA      Cryogenic Cooler Assembly
CCS      Central Command Schedule
CCS      CryoCooler System
CDM      Cold Dark Matter
CDMU     Central Data Management Unit
CDS      Cryogenic Dilution System
CERN     Centre Européen pour la Recherche Nucléaire
CfA      Harvard-Smithsonian Center for Astrophysics
CFRP     Carbon Fibre Resin Polymer
CMB      Cosmic Microwave Background
CNES     Centre National d'Etudes Spatiales
CNR      Consiglio Nazionale delle Ricerche
CNRS     Centre National de la Recherche Scientifique
Co-Is    Co-Investigators
COBE     COsmic Background Explorer
COTS     Commercial Off-The-Shelf Software
CPAC     Cambridge Planck Analysis Center
CRTBT    Centre de Recherches sur les Très Basses Températures
DBI      Digital Bus Interface
DBU      Data Bus Unit
DIRBE    Diffuse InfraRed Background Experiment (on COBE)
DMR      Differential Microwave Radiometer (on COBE)
DPC      Data Processing Centre
DPU      Data Processing Unit
DSRI     Danish Space Research Institute

EGSE     Electrical Ground Support Equipment
EID      Experiment Interface Document
EM       Engineering Model
EMC      ElectroMagnetic Compatibility
EMI      Electro-Magnetic Interference
EOL      End-Of-Life
EOM      End Of Mission
ESA      European Space Agency
ESO      European Southern Observatory
ESOC     European Space Operations Centre
ESTEC    European Space Research and Technology Centre
FFT      Fast Fourier Transform
FINDAS   FIRST Integrated Network and Data Archive System
FIR      Far Infrared
FIRAS    Far Infrared Absolute Spectrophotometer (on COBE)
FIRST    Far InfRared and Submillimetre Telescope
FITS     Flexible Image Transport System
FM       Flight Model
FOP      Flight Operations Plan
FOV      Field Of View
FPA      Focal Plane Assembly
FPST     FIRST/Planck Science Team
FPU      Focal Plane Unit
FS       Flight Spare
FWHM     Full Width at Half Maximum
GSE      Ground Support Equipment
GSFC     Goddard Space Flight Center
GSID     Ground Segment Interface Document
GTO      Geostationary Transfer Orbit
HEMT     High Electron Mobility Transistor
HFI      High Frequency Instrument
HSK      HouseKeeping (data)
HST      Hubble Space Telescope
HTML     Hyper Text Mark-up Language
IAS      Institut d'Astrophysique Spatiale
IC       Imperial College
ICD      Interface Control Document
ICS      Instrument Command Sequence
IDIS     Integrated Data and Information System
IDIS-DT  IDIS Development Team
IDIS-MT  IDIS Management Team
IIA      Instrument Implementation Agreement
IID-A    Instrument Interface Document Part A
IID-B    Instrument Interface Document Part B
IOP      Initial Operations Phase
IRAS     InfraRed Astronomical Satellite
ISM      InterStellar Medium
ISO      ESA's Infrared Space Observatory
KAL      Keep Alive Line
LEOP     Launch and Early Orbit Phase
LEP      Large Electron-Positron collider
LFI      Low Frequency Instrument
MDM      Mixed Dark Matter

MGSE     Mechanical Ground Support Equipment
MIP      Mission Implementation Plan
MIRD     Mission Implementation Requirements Document
MLI      Multi Layer Insulation
MMIC     Monolithic Microwave Integrated Circuit
MMS      Matra Marconi Space
MMU      Mass Memory Unit
MOC      Mission Operations Centre
MPA      Max Planck Institut für Astrophysik
NASA     National Aeronautics and Space Administration (U.S.A.)
NEP      Noise Equivalent Power
NRAO     National Radio Astronomy Observatory
OAT      Osservatorio Astronomico di Trieste
OBDH     On-Board Data Handling
OBSW     On-Board SoftWare
OGS      Operational Ground Segment
OIRD     Operations Interface Requirements Document
OO       Object Oriented
PA       Product Assurance
PCU      Power Control Unit
PDF      Portable Document Format (Adobe Acrobat)
PDU      Power Distribution Unit
PFM      ProtoFlight Model
PI       Principal Investigator
PLM      PayLoad Module
PM       Project Manager
PND      Passive Nutation Damper
POSDAC   Paris Orsay Saclay Data Analysis Center
PPLM     Planck PayLoad Module
PROM     Programmable Read Only Memory
PS       Project Scientist
PV       Performance Verification
QA       Quality Assurance
QLA      Quick Look Assessment
QM       Qualification Model
QSO      Quasar
RAL      Rutherford Appleton Laboratory
RAM      Random Access Memory
ROM      Read Only Memory
RTA      Real Time Assessment
S/C      SpaceCraft
S/N      Signal to Noise
S/W      SoftWare
SAMBA    SAtellite for the Measurement of Background Anisotropies
SIP      Science Implementation Plan
SIRD     Science-operations Implementation Requirements Document
SMP      Science Management Plan
SOC      Science Operations Centre
SOHO     ESA's SOlar and Heliospheric Observatory
SPC      ESA's Science Programme Committee
SSAC     Space Science Advisory Committee
SSD      Space Science Department (ESTEC)
SSR      Solid State Recorder

ST       Science Team
STScI    Space Telescope Science Institute
SVM      Service Module
SVT      System Validation Test
SZ       Sunyaev-Zeldovich (effect)
TA       Telescope Assembly
TBC      To Be Confirmed
TBD      To Be Defined
TC       TeleCommand
TM       TeleMetry
URD      User Requirements Document
WWW      World Wide Web