Security Subgroup

Security Guidelines for Inter-Regional Applications
Version 1.3

Coordinated by Leonardo BONOMI (CEFRIEL) and Gérard BEUCHOT (INSA de Lyon)

20 October 1999

These guidelines were written by the Security Subgroup of the TeleRegions SUN2 project. The partners' contributions in each region were co-ordinated by:

Baden-Württemberg: IHK, M. Mueller ([email protected])
Catalunya: CIGSA, J. M. Asensio ([email protected])
Lombardy: CEFRIEL, L. Bonomi and M. Brioschi ([email protected])
North of England: GLIB - ONIX - UNIX, M. Martin ([email protected])
Upper Austria: RIS, G. Wagner ([email protected]); PROFACTOR, J. Prenninger ([email protected])
Rhône Alpes: INSA, G. Beuchot ([email protected])

Summary

Foreword
Avant-Propos
Summary

Concept Card (overview diagram): Distributed System Security, covering attacks, hackers, legal and juridical aspects; Security Policy; Evaluation; Intellectual Property Rights; technical aspects ("individual" security, "collective" security, security on the Internet, intrusion tests, security services, security protocols, security mechanisms, cryptography, cryptographic algorithms, public key infrastructures, smart cards, software architecture); organisational aspects (organisation, physical security).


0 INTRODUCTION

1 SECURITY POLICY
   1.1 SECURITY ATTACKS
      1.1.1 Classification
      1.1.2 Examples of attack
   1.2 SECURITY POLICIES
      1.2.1 Introduction to Security Policy
      1.2.2 Policy Issues
      1.2.3 Policy statement
   1.3 SECURITY MODELS
      1.3.1 The problem with security
      1.3.2 Security methodology
      1.3.3 Defining a MARIA service and security architecture
         1.3.3.1 Defining a service
         1.3.3.2 An example of a service definition
            1.3.3.2.1 The conference service
            1.3.3.2.2 Why start with roles?
            1.3.3.2.3 How would a conference service be offered?
            1.3.3.2.4 Sorts of service
      1.3.4 What do we do about it?
      1.3.5 Concluding remarks
      1.3.6 Information service value chains
         1.3.6.1 Information services
         1.3.6.2 Informing agents
         1.3.6.3 The information using roles
         1.3.6.4 Information broking roles
         1.3.6.5 Service delivery roles
         1.3.6.6 An example of applying the model
         1.3.6.7 Using the model
         1.3.6.8 Conversations, actflows and workflows

2 TECHNICAL ASPECTS
   2.1 SECURITY SERVICES
      2.1.1 Authentication
      2.1.2 Access Control
      2.1.3 Data confidentiality
      2.1.4 Data integrity
      2.1.5 Non-repudiation
      2.1.6 Availability
   2.2 SECURITY MECHANISMS
      2.2.1 OSI Security mechanisms
         2.2.1.1 Security specific mechanisms
         2.2.1.2 Security common mechanisms
      2.2.2 Security levels: Evaluation assurance levels
      2.2.3 Encryption
         2.2.3.1 Symmetric encryption - secret-key cryptosystem
         2.2.3.2 Asymmetric encryption - public-key cryptosystem
      2.2.4 Hash functions
      2.2.5 Digital signature and certificate
         2.2.5.1 Digital signature
         2.2.5.2 Digital timestamping
         2.2.5.3 Digital certificate
      2.2.6 Basic cryptographic algorithms
         2.2.6.1 RSA
         2.2.6.2 Diffie-Hellman
         2.2.6.3 DES (Data Encryption Standard)
         2.2.6.4 AES (Advanced Encryption Standard)
         2.2.6.5 IDEA (International Data Encryption Algorithm)
         2.2.6.6 RC2
         2.2.6.7 RC4
         2.2.6.8 RC5
         2.2.6.9 MD5
         2.2.6.10 DSA - DSS
      2.2.7 Intellectual Property Rights (IPR) - Watermarks
      2.2.8 Public key infrastructures
   2.3 SMART CARD - ICC
      2.3.1 Definition - Characteristics
      2.3.2 Smart Card Standards
      2.3.3 Smart Card Security
         2.3.3.1 Card Holder Verification
         2.3.3.2 Mutual Authentication
      2.3.4 Security functionality
         2.3.4.1 Authentication
         2.3.4.2 Digital signature
         2.3.4.3 Data integrity
         2.3.4.4 Data confidentiality
         2.3.4.5 Secure data storage and attacks on smart card security
   2.4 "INDIVIDUAL" SECURITY: OSF DCE SECURITY ARCHITECTURE
      2.4.1 Kerberos Architecture
      2.4.2 OSF DCE Architecture
   2.5 "COLLECTIVE" SECURITY: FIREWALL AND VIRTUAL PRIVATE NETWORKS
      2.5.1 Firewalls
         2.5.1.1 Against what does a firewall protect?
         2.5.1.2 How does a firewall protect?
         2.5.1.3 Some problems with existing Application services
         2.5.1.4 Layout
         2.5.1.5 Security Analyser
      2.5.2 Virtual Private Network (VPN)
         2.5.2.1 Presentation
         2.5.2.2 Tunnelling
   2.6 SECURITY OF EXCHANGES ON INTERNET
      2.6.1 Risks
         2.6.1.1 E-Mail
         2.6.1.2 Web browsers
         2.6.1.3 Web servers
      2.6.2 Security in Internet services
         2.6.2.1 E-mail Security
         2.6.2.2 Web servers Security
         2.6.2.3 Security of other services
      2.6.3 Internet Security Standards
         2.6.3.1 Internet Security Architecture
         2.6.3.2 IPSec
            2.6.3.2.1 Goal and Overview
            2.6.3.2.2 Security Associations
            2.6.3.2.3 Basic concepts
            2.6.3.2.4 IPsec Protocols
         2.6.3.3 SASL
         2.6.3.4 TLS - SSL
         2.6.3.5 PEM - MOSS - Open PGP/MIME
            2.6.3.5.1 PEM: Privacy Enhancement for Internet Electronic Mail
            2.6.3.5.2 MOSS: MIME Object Security Service
            2.6.3.5.3 PGP/MIME
         2.6.3.6 S/MIME
         2.6.3.7 SHTTP
         2.6.3.8 PGP
            2.6.3.8.1 PGP and Privacy
            2.6.3.8.2 PGP functionality
            2.6.3.8.3 How it works
            2.6.3.8.4 Confidentiality: Message body ciphering
            2.6.3.8.5 Authentication - Non-repudiation: Message signing
            2.6.3.8.6 Key management
            2.6.3.8.7 PGP components

3 ORGANISATIONAL, LEGAL AND JURIDICAL ASPECTS FOR SECURITY
   3.1 ORGANISATIONAL ASPECTS
      3.1.1 Security management organisation
         3.1.1.1 Security management overview
         3.1.1.2 Security organisation
            3.1.1.2.1 Security: a management function
            3.1.1.2.2 Staff organisation
            3.1.1.2.3 Security actions
            3.1.1.2.4 Data processing resources and network management
            3.1.1.2.5 Active protection
            3.1.1.2.6 Methodological approach
      3.1.2 Password Management
         3.1.2.1 Password Selection
         3.1.2.2 Password Handling
         3.1.2.3 DCE architecture
      3.1.3 Risk analysis management
         3.1.3.1 Incident cycle
         3.1.3.2 Performing risk analysis when developing an ICT system
         3.1.3.3 Multidisciplinary team to conduct the ICT system/service risk analysis
      3.1.4 Network security management
         3.1.4.1 Security measures for network services
         3.1.4.2 Penetration team to test the on-line service security
   3.2 PHYSICAL SECURITY
   3.3 JURIDICAL AND LEGAL ISSUES
      3.3.1 Signatures
      3.3.2 Digital signatures
         3.3.2.1 Legal status of documents signed with digital signatures
         3.3.2.2 European Parliament and Council Directive on Electronic Signatures
         3.3.2.3 Digital signature bringing into play
            3.3.2.3.1 How to sign and verify a message
            3.3.2.3.2 Temporal validity: need of time-stamping
            3.3.2.3.3 Digital time-stamping service
            3.3.2.3.4 Digital certificate
      3.3.3 Trusted Third Parties (TTP) - Certification authorities hierarchies
         3.3.3.1 The need of a legal framework for the TTP
         3.3.3.2 The legal TTP challenge: key escrow and key recovery
            3.3.3.2.1 Key Escrow challenge: brief summary
            3.3.3.2.2 Current situation: conclusion
      3.3.4 Privacy
         3.3.4.1 The US approach
         3.3.4.2 EC directive
      3.3.5 Intellectual Property Rights
         3.3.5.1 Introduction
         3.3.5.2 Definitions and properties
         3.3.5.3 How does intellectual property work at the international level?
         3.3.5.4 Why to protect an invention and how to find out if an invention is already protected?
         3.3.5.5 Intellectual Property aspects of World Wide Web authoring
            3.3.5.5.1 Objective
            3.3.5.5.2 The Domain Name System
            3.3.5.5.3 Trademarks
            3.3.5.5.4 Metatags
            3.3.5.5.5 Hyperlinks: Deep v Surface
            3.3.5.5.6 Frames
            3.3.5.5.7 Service Provider Liability
            3.3.5.5.8 Rights to Industrial Drawings
            3.3.5.5.9 Protection of Typographical Arrangement
            3.3.5.5.10 Minimising the Threat of Prosecution
         3.3.5.6 Patentability of computer programs
            3.3.5.6.1 Exclusion of patentability principle
            3.3.5.6.2 Trend of Judicial Reasoning in Europe
            3.3.5.6.3 Situation in the United States and in Japan
            3.3.5.6.4 Advantages and Disadvantages of patentability of computer programs
            3.3.5.6.5 Prospects
         3.3.5.7 Legal texts
            3.3.5.7.1 Legal protection of computer programs
            3.3.5.7.2 Legal protection of databases
            3.3.5.7.3 International Treaties and Conventions
            3.3.5.7.4 European Directives
            3.3.5.7.5 National Laws in EU Member States
            3.3.5.7.6 National Laws outside the EU
         3.3.5.8 Annexes
            3.3.5.8.1 Services offered by the IPR-Helpdesk
            3.3.5.8.2 Lexicon in English, French and German
      3.3.6 Statutory and legal framework of the cryptology in France
         3.3.6.1 Using or importing cryptographic services or means
         3.3.6.2 Providing cryptographic services or means

4 BUSINESS ASPECTS
   4.1 EDI (ELECTRONIC DOCUMENT INTERCHANGE) ON INTERNET
      4.1.1 Non-Internet Based Transactions
      4.1.2 Internet Based Transactions
      4.1.3 EDI
      4.1.4 Security
      4.1.5 Security of web browsing
         4.1.5.1 Threats
         4.1.5.2 Secure EDI over the Internet
         4.1.5.3 Privacy
         4.1.5.4 Symmetric Encryption
         4.1.5.5 Asymmetric Encryption
         4.1.5.6 Present trends
         4.1.5.7 Certification
      4.1.6 Content Integrity
         4.1.6.1 Accidental Corruption
         4.1.6.2 Malicious Modification
         4.1.6.3 Non-repudiation of Origin
      4.1.7 Summary
   4.2 PUBLIC KEY INFRASTRUCTURE (PKI)
      4.2.1.1 Introduction
      4.2.2 Public-key infrastructure concepts
         4.2.2.1 Public-key-based trust relationships
         4.2.2.2 What is a Public-key Infrastructure
         4.2.2.3 Certification
         4.2.2.4 CA arrangements
         4.2.2.5 Validation
         4.2.2.6 Authentication/Non-repudiability
         4.2.2.7 Anonymity/Privacy
         4.2.2.8 Summary of basic PKI trust models characteristics
      4.2.3 Basic functions of a PKI
         4.2.3.1 Registration
         4.2.3.2 Initialisation
         4.2.3.3 Certification
         4.2.3.4 Key Pair Recovery
         4.2.3.5 Key Generation
         4.2.3.6 Key Update
         4.2.3.7 Cross-certification
         4.2.3.8 Revocation
         4.2.3.9 Certificate and Revocation Notice Distribution/Publication
      4.2.4 PKI Trust models review
         4.2.4.1 PGP
         4.2.4.2 PEM/RFC-1422
         4.2.4.3 ICE-TEL
            4.2.4.3.1 ICE-TEL trust model requirements
            4.2.4.3.2 Principles of the ICE-TEL PKI Trust Model
            4.2.4.3.3 ICE-TEL model scalability and flexibility: organic growth of CAs
            4.2.4.3.4 Managing Complexity
         4.2.4.4 SPKI, Simple Public Key Cryptography
            4.2.4.4.1 Antecedents
            4.2.4.4.2 SPKI main goals
            4.2.4.4.3 Rethinking Global Names
            4.2.4.4.4 SDSI 2.0 Names, Local Names and Global Identifiers
            4.2.4.4.5 SDSI Fully Qualified Names
            4.2.4.4.6 SPKI Attribute certificates
      4.2.5 X.509 PKI Overview
         4.2.5.1 Introduction
         4.2.5.2 How the X.509 trust model works
         4.2.5.3 X.500 directory
         4.2.5.4 Object registration
         4.2.5.5 Certificate extensions
         4.2.5.6 The standard X.509v3 certificate extensions description
            4.2.5.6.1 Key information extensions
            4.2.5.6.2 Policy Information extensions
            4.2.5.6.3 End entity (User and CA) attribute extensions
            4.2.5.6.4 Certification path constraints extensions
         4.2.5.7 Summary of X.509 PKI Characteristics
         4.2.5.8 X.509 PKI, Certificate Management Protocols (CMP)
            4.2.5.8.1 PKI Management Overview
            4.2.5.8.2 X.509 PKI Entities Definitions
            4.2.5.8.3 PKI Management Requirements
            4.2.5.8.4 PKI (Certificate) Management Operations
            4.2.5.8.5 Assumptions and restrictions
            4.2.5.8.6 PKI Mandatory Management functions
            4.2.5.8.7 Root CA key update
            4.2.5.8.8 Subordinate CA initialisation
            4.2.5.8.9 CRL production
            4.2.5.8.10 PKI information request
         4.2.5.9 X.509 OCSP protocol overview
         4.2.5.10 X.509 Time Stamp protocol
            4.2.5.10.1 TSA overview
            4.2.5.10.2 TSA functional assumptions and requirements
            4.2.5.10.3 TSA security considerations
      4.2.6 PKI Certification Policy and CPS (Certificate Policy Statements)
         4.2.6.1 Definitions and Certificate Policies Concepts
            4.2.6.1.1 Certificate Policies
            4.2.6.1.2 Certificate policy examples
            4.2.6.1.3 X.509 Certificate Policy Fields
         4.2.6.2 CPS: Certification Practice Statement
            4.2.6.2.1 Relationship between certificate policy and certification practice statement
            4.2.6.2.2 Set of provisions
         4.2.6.3 Set of provisions contents checklist
            4.2.6.3.1 Introduction
            4.2.6.3.2 General Provisions
            4.2.6.3.3 Identification and Authentication
            4.2.6.3.4 Operational Requirements
            4.2.6.3.5 Physical, Procedural, and Personal Security Controls
            4.2.6.3.6 Physical Controls
            4.2.6.3.7 Technical Security Controls
            4.2.6.3.8 Certificate and CRL profiles
         4.2.6.4 Legal Issues: digital signatures and TTP's

5 SECURITY EVALUATION
   5.1 INTRODUCTION
   5.2 CC KEY CONCEPTS
      5.2.1 Functional requirements
      5.2.2 Target of Evaluation
      5.2.3 Protection profile
      5.2.4 Security Target
      5.2.5 Components
   5.3 SECURITY FUNCTIONALITY
      5.3.1 Functionality classes (11)
         5.3.1.1 Security audit (FAU)
         5.3.1.2 Communications (FCO)
         5.3.1.3 Cryptographic support (FCS)
         5.3.1.4 User data protection (FDP)
         5.3.1.5 Identification and authentication (FIA)
         5.3.1.6 Security management (FMT)
         5.3.1.7 Privacy (FPR)
         5.3.1.8 Protection of the TOE Security Functions (FPT)
         5.3.1.9 Resource utilisation (FRU)
         5.3.1.10 TOE Access (FTA)
         5.3.1.11 Trusted path/channels (FTP)
      5.3.2 Assurance classes
         5.3.2.1 Configuration Management (ACM)
         5.3.2.2 Delivery and Operation (ADO)
         5.3.2.3 Development (ADV)
         5.3.2.4 Life Cycle Support (ALC)
         5.3.2.5 Maintenance of Assurance (AMA)
         5.3.2.6 Tests (ATE)
         5.3.2.7 Vulnerability Assessment (AVA)
      5.3.3 Evaluation assurance classes (2)
         5.3.3.1 Protection Profile Evaluation (APE)
         5.3.3.2 Security Target Evaluation (ASE)
      5.3.4 Inter-class dependencies
   5.4 SECURITY ASSURANCE
      5.4.1 Evaluation assurance level (Security level)
      5.4.2 EAL1 - functionally tested
      5.4.3 EAL2 - structurally tested
      5.4.4 EAL3 - methodically tested and checked
      5.4.5 EAL4 - methodically designed, tested and reviewed
      5.4.6 EAL5 - semiformally designed and tested
      5.4.7 EAL6 - semiformally verified design and tested
      5.4.8 EAL7 - formally verified design and tested
   5.5 APPROACH TO EVALUATION
      5.5.1 CC protection profiles
      5.5.2 Protection Profile Content
      5.5.3 CC security targets
      5.5.4 Evaluation
   5.6 CC PROTECTION PROFILES
      5.6.1 Protection Profile overview
      5.6.2 Commercial security 1 (CS1) - Basic Controlled Access Protection
      5.6.3 Commercial security 3 (CS3) - Role-Based Access Protection (RBAC)
      5.6.4 Traffic Filter Firewall
      5.6.5 Application Level Firewall
      5.6.6 ECMA Extended Commercially Oriented Functionality Class (E-COFC)
         5.6.6.1 The Enterprise Business class
         5.6.6.2 The Contract Business class
         5.6.6.3 The Public Business class
      5.6.7 Other profiles

6 ANNEXES
   6.1 BIBLIOGRAPHY
      6.1.1 Overview
      6.1.2 Security Evaluation
      6.1.3 Cryptography
      6.1.4 Legal issues
      6.1.5 Security management organisation
      6.1.6 Glossary, Acronym
   6.2 INDEX


Foreword

The writing of these security guidelines was decided at the start of the TeleRegions SUN2 project in order to provide the various partners, in a single document, with all the basic elements needed to define a security policy, to implement it and to evaluate its quality. Documents dealing with system and network security are very numerous, but they are scattered across a very large number of sources, and we have found no single work or document that covers the whole range of problems raised. This guide is therefore a synthesis of technical, organisational, legal and juridical documents, to which some elements drawn from the partners' own work have been added. Our intention was to present the security of Information Technology systems not only from a technical viewpoint but within the global approach that we consider essential.

The advent of e-business (e-commerce, B to B, B to C), but also of telemedicine (e-healthcare) and even of teletraining (e-learning), leads to the deployment or reinforcement of security systems that maintain the integrity, and where necessary the confidentiality, of non-repudiable exchanges between authenticated users.

Information security consists in protecting an information asset (of an enterprise, an administration or an individual) against accidental or intentional, but unauthorised, disclosure, modification or destruction. Securing these systems can prove expensive directly (specialised equipment and software, administration) but also indirectly, by slowing exchanges, forbidding some communication modes or imposing constraints that some users will find so restrictive that they will tend to bypass the devices put in place, thereby compromising the security of the whole. This cost has to be weighed against the value of the information assets to be protected.

More and more often, the communication system rests on a private network, which is fairly easy to isolate and secure, and on the Internet, used to reach partners in other enterprises or to access resources available through that network. Sometimes the Internet alone is used even for the internal needs of an enterprise distributed over several sites.

In concrete terms, an enterprise (or an administration) that wants to secure its information system must first define its security policy, then set up the technical means necessary to implement it, evaluate those means against the policy and, possibly, carry out intrusion tests. This guide therefore approaches these various points of view.

In the first part, after a brief survey of the attacks to which distributed systems can be subjected, we deal with the security policy, which can only be defined after an analysis of the value of the information to be secured. This is approached through the example of the MARIA network. The second part deals with the technical aspects: security services and mechanisms (including smart cards), "individual" security (derived from the Kerberos architecture), "collective" security (firewalls and virtual private networks), and the security of exchanges over the Internet.


The third part focuses on organisational, juridical and legal aspects; a chapter deals with Intellectual Property Rights in the European framework, rights that apply to software as well as to course material for teletraining.

The fourth part brings together two aspects of particular relevance to the business sector: the security of EDI and Public Key Infrastructures. These infrastructures, still under development, appeared to us very important not only because of their purpose but also because they combine technical, organisational, juridical and legal problems. Finally, the fifth part deals with security evaluation, taking into account the most recent advances (the Common Criteria).


Avant -propos L'écriture de ce guide de la sécurité a été décidée à l'origine du projet SUN2 pour fournir aux différents partenaires du projet TeleRegions SUN2, en un seul document, tous les éléments de base leur permettant de définir une politique de sécurité, de la mettre en œuvre et d'en évaluer la qualité. En effet, si les documents traitant de la sécurité des systèmes et des réseaux sont très nombreux, ils sont répartis dans un très grand nombre de sources et nous n'avons trouvé aucun ouvrage ou document traitant de l'ensemble des problèmes posés. Ce guide constitue donc une synthèse de documents techniques, organisationnels, légaux, etc. (auxquels ont été ajoutés quelques éléments issus des travaux personnels d'équipes participants au projet). Notre volonté était de présenter la sécurité des systèmes relevant des Technologies de l'Information non seulement d'un point de vue technique mais selon une approche globale qui nous semble indispensable. L'avènement du "e-business" (e-commerce, B to B, B to C), mais aussi de la télémédecine (e-healthcare) et même de la formation à distance (e-learning) conduit à la mise en place ou au renforcement de la sécurité des systèmes qui permettent d'assurer l'intégrité et éventuellement la confidentialité d'échanges non répudiables entre utilisateurs authentifiés. La sécurité de l'information consiste en la protection d'un capital d'information (d'une entreprise, d'une administration ou d'un individu) contre les divulgations, modifications ou destructions intentionnelles ou accidentelles mais interdites. La sécurisation de ces systèmes peut s'avérer coûteuse directement (équipements et logiciels spécialisés, administration) mais aussi indirectement en ralentissant les échanges, en interdisant certains modes de communication ou imposant des contraintes que certains utilisateurs trouveront très contraignants; ils auront alors tendance à contourner les dispositifs mis en œuvre, compromettant ainsi la sécurité de l'ensemble. Ce coût doit être mis en regard de la valeur du capital d'information à protéger. De plus en plus souvent, le système de communication mis en jeu s'appuie sur un réseau privé, assez facile à isoler et à sécuriser, et sur l'Internet pour accéder à des partenaires d'autres entreprises ou utiliser des ressources disponibles via ce réseau, voire à n'utiliser que le réseau Internet même pour les besoins internes de l'entreprise répartie sur plusieurs sites. Concrètement, une entreprise (ou une administration) qui désire sécuriser son système d'information devra d'abord définir sa "politique de sécurité" avant de mettre en place les moyens techniques nécessaires à sa mise en œuvre et de procéder à leur évaluation relativement à cette politique et, éventuellement, à des tests d'intrusion. Ce guide va donc aborder ces différents points de vue : Dans une première partie, après un rapide exposé sur les attaques auxquelles peuvent être soumis les systèmes répartis, nous traiterons de la politique de sécurité qui ne peut être définie qu'après une analyse de la valeur des informations à sécuriser. Celle-ci sera abordée à partir de l'exemple du réseau MARIA.


La seconde partie traite les aspects techniques : définition des services et des mécanismes de sécurité (y compris les cartes à puce), sécurisation "individuelle" des systèmes (issue le l'architecture Kerberos), sécurisation "collective" (coupe-feu et réseaux virtuels privés), sécurisation des échanges sur l'Internet. La troisième partie porte sur les aspects organisationnels, juridiques et légaux. Un chapitre porte sur les droits de la proprieté intellectuelle dans le cadre européen, droit qui s'applique aussi bien aux logiciels qu'aux contenus de cours pour la formation à distance. Dans une quatrième partie sont regroupés deux aspects concernant plus spécialement le secteur des affaires : la sécurisation de l'EDI et les infrastructures de gestion de clés publiques. Ces infrastructures, en cours de développement, nous ont semblées très importantes non seulement par leur finalité, mais aussi parce qu'elles regroupent des problèmes techniques, organisationnels, juridiques et légaux. Enfin, la cinquième partie traite de l'évaluation de la sécurité à partir des développement les plus récents (Critères Communs).


0 Introduction

Information security is the protection of information assets from intentional or accidental, but unauthorised, disclosure, modification or destruction. The value of information as a corporate asset depends on the capability of the organisation to keep it private. The introduction of networks, and in particular the Internet, intended to give authorised parties easier access, has on the other hand also increased the potential vulnerability of the organisation's resources. The need to protect these resources against intentional, but also accidental, risks becomes more pressing every day. A sound security management plan should therefore be a primary objective of every kind of organisation.

At the same time, it is important to understand that security protections are themselves a cost. Care should be taken not to spend more money on controls over a given period than the losses one would expect had no action been taken: the aim is to minimise the combined cost of expected losses and of the controls put in place. When considering security costs, account should be taken not only of direct costs, such as personnel, hardware or software, but also of indirect costs, in particular any reduction in performance or system usability. It should be clear that, in general, the need to secure information competes with the need for ready access by appropriate parties.

In the rest of this chapter we briefly introduce the concepts of security attacks, security services and security policies. Evaluating the risk is a key first step in determining what action to take; the basic types of potential attack are therefore described in the first section. Based on the analysis of the risks, the security services, that is, the requirements with respect to security, are then described. From an organisational point of view, once the potential attacks and the required countermeasures have been identified, an action plan should be defined. It should contain not only the mechanisms to be put in place to protect the organisation but, above all, it should produce a public document containing the policy: the organisation's vision of its security approach, the goals, the procedures, and the users' roles and responsibilities.


1 Security Policy

1.1 Security attacks

1.1.1 Classification

Attacks against a network or a system can be classified in several ways:

- Outsider/Insider attacks
- Passive/Active attacks: passive attacks (interception, i.e. release of message contents, and traffic analysis) threaten confidentiality; active attacks (interruption, modification and fabrication) threaten availability and integrity.
- According to the security service attacked:
  Interruption: attack on Availability
  Interception: attack on Confidentiality
  Modification: attack on Integrity
  Fabrication: attack on Authentication

An insider attack is an attack originating from inside a protected network. An insider to the organisation may be a considerable threat to the security of the computer systems: insiders often have direct access to computer and network hardware components, and the ability to access the components of a system makes most systems easier to compromise. Most desktop workstations can easily be manipulated so that they grant privileged access to a local area network, providing the ability to view possibly sensitive data.

Passive attacks consist in observing traffic on the network, listening to messages or reading files in a system (through the network) without modifying the data or the system; they are therefore difficult to notice. Active attacks, on the contrary, lead to changes in system behaviour, to the modification or deletion of data, or to the destruction of components.


Attacks can happen in particular, but not only, during data transmission.

1.1.2 Examples of attack

Abuse of privilege: a user performs an action that he or she is not entitled to perform under organisational policy or law.

Data diddling: an attacker changes the data while they are en route from source to destination.

Data-driven attack: the attack is encoded in innocuous-seeming data which is then executed by a user or by other software. Data-driven attacks are a particular concern for firewalls, since the attack may pass through the firewall in data form and then be launched against a system behind it.

Data manipulation: an attack on the data content, either "modification" during transfer or storage, or "fabrication" of non-authentic data.

Denial of service: an attack characterised by an explicit attempt to prevent legitimate users of a service from using that service. Examples include:
- attempts to "flood" a network, thereby preventing legitimate network traffic;
- attempts to disrupt the connection between two machines, thereby preventing access to a service;
- attempts to prevent a particular individual from accessing a service;
- attempts to disrupt service to a specific system or person.
Not all service outages, even those that result from malicious activity, are necessarily denial-of-service attacks. Other types of attack may include a denial of service as a component of a larger attack, and illegitimate use of resources may also result in denial of service: for example, an intruder may use an anonymous ftp area to store illegal copies of commercial software, consuming disk space and generating network traffic.

DNS spoofing: assuming the DNS name of another system, either by corrupting the name-service cache of a victim system or by compromising a domain name server for a valid domain.

Email bombing and spamming: email "bombing" is the repeated sending of an identical email message to a particular address. Email "spamming" is a variant of bombing in which email is sent to hundreds or thousands of users (or to lists that expand to that many users). Spamming is made worse if recipients reply to the email, causing all the original addressees to receive the reply; it may also occur innocently, when a message is sent to a mailing list without realising that the list expands to thousands of users, or as the result of an incorrectly configured auto-responder (such as vacation(1)). Bombing and spamming may be combined with email "spoofing" (which alters the identity of the account sending the email), making it more difficult to determine who the email actually comes from.


Eavesdropping: an attacker listens to a private communication.

IP spoofing: a system attempts to impersonate another system illicitly by using its IP network address.

IP splicing / session hijacking: an active, established session is intercepted and co-opted by the attacker. IP splicing attacks may occur after an authentication has been made, permitting the attacker to assume the role of an already authorised user. The primary protection against IP splicing is encryption at the session or network layer.

Man-in-the-middle attack: an Internet attacker interferes with an initial public key exchange by intercepting the very first message to a new correspondent and substituting a bogus public key for the genuine one.

Masquerading: an attacker pretends to be someone else. The best way to thwart this attack is to authenticate a principal by challenging it to prove its identity (a minimal sketch of such a challenge-response exchange is given at the end of this section).

Password file attack: intruders obtain password files from sites and then try to compromise accounts by cracking passwords. Once they gain access to a user account, they attempt to gain root access through a cracked root password or by exploiting another vulnerability.

Replay attack: an attacker captures a message and, at a later time, sends that message to a principal. Although the attacker cannot decrypt the message, it may benefit by receiving a service from the principal to which the message is replayed.

Smurf attack: a malicious Internet user fools hundreds or thousands of systems into sending traffic to one location, flooding that location with pings.

Spoofed/forged email: email spoofing may take different forms, but all have a similar result: a user receives email that appears to have originated from one source when it was actually sent from another. Email spoofing is often an attempt to trick the user into making a damaging statement or releasing sensitive information (such as passwords). Examples of spoofed email that could affect the security of a site include:
- email claiming to be from a system administrator, requesting users to change their passwords to a specified string and threatening to suspend their account if they do not;
- email claiming to be from a person in authority, requesting users to send a copy of a password file or other sensitive information.
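The masquerading and replay entries above both point to the same basic defence: authenticate a principal by challenging it with a fresh, unpredictable value, so that a captured response cannot be reused later. The following minimal sketch (Python, standard library only) illustrates the idea with a keyed hash over a random nonce; the shared key, the principal names and the message layout are illustrative assumptions, not part of any specific protocol described in this guide.

    import hmac, hashlib, secrets

    SHARED_KEY = b"example-shared-secret"   # assumed to have been distributed securely beforehand

    def make_challenge() -> bytes:
        # Verifier side: generate a fresh, unpredictable nonce for this exchange only.
        return secrets.token_bytes(16)

    def prove_identity(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
        # Claimant side: prove knowledge of the key by computing an HMAC over the nonce.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
        # Verifier side: recompute the expected response and compare in constant time.
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    nonce1 = make_challenge()
    response1 = prove_identity(nonce1)
    assert verify(nonce1, response1)        # legitimate exchange succeeds

    nonce2 = make_challenge()               # a later exchange uses a new challenge,
    assert not verify(nonce2, response1)    # so replaying the old response is rejected

Because every authentication uses a new nonce, an eavesdropper who records one exchange gains nothing that can be replayed in the next.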

1.2 Security Policies

(Partially adapted from the Internet Security White Paper from BMD Solutions.)


1.2.1 Introduction to Security Policy

Why Do We Need Security Policies and Procedures?

Twenty-five years ago, links outside a site were unusual, and computer security threats were rare and basically concerned with insiders. Today, many systems are in private offices and labs, often managed by individuals or by persons employed outside a computer centre. Many systems are connected to the Internet, and from there to the whole world. With world-wide Internet connections, someone could break into a system from the other side of the world and steal the owner's password in the middle of the night while the building is locked up. Viruses and worms can be passed from machine to machine. The Internet allows the electronic equivalent of the thief who looks for open windows and doors: a person can now check hundreds of machines for vulnerabilities in a few hours. System administrators and decision makers have to understand which security threats exist, what the risk and cost of a problem would be, and what kind of action they want to take (if any) to prevent and respond to security threats.

Basic Approach:
- Look at what you are trying to protect.
- Look at what you need to protect it from.
- Determine how likely the threats are.
- Implement measures which will protect your assets in a cost-effective manner.
- Analyse what happens when the policy is violated.
The cost of protecting yourself against a threat should remain lower than the cost of recovering if the threat were to strike you. Security planning is an ongoing cycle.

Organisation Issues

The goal in developing an official site policy on computer security is to define the organisation's expectations and direction. For example, a bank may have very different security concerns from those of a university. First, the goals of the organisation should be considered. Second, the site security policy developed must conform to the existing policies, rules, regulations and laws that the organisation is subject to. Third, the policy should address the issues that arise when local security problems develop as a result of a remote site, as well as when problems occur on remote systems as a result of a local host or user.

Who Makes the Policy? Responsibilities

Policy creation must be a joint effort by technical personnel, who understand the full ramifications of the proposed policy and of its implementation, and by decision makers who have the power to enforce the policy. A key element of a computer security policy is making sure everyone knows their own responsibility for maintaining security. There may be levels of responsibility associated with a policy on computer security.


At one level, each user of a computing resource may have a responsibility to protect his account. System managers may form another responsibility level: they must help to ensure the security of the computer system. Network managers may reside at yet another level.

Risk Assessment

One of the most important reasons for creating a computer security policy is to ensure that efforts spent on security yield cost-effective benefits. As an example, there is a great deal of publicity about intruders into computer systems, yet most surveys of computer security show that, for most organisations, the actual loss from "insiders" is much greater. In any case, you should not spend more to protect something than it is actually worth. Another common threat is disclosure of information: the value or sensitivity of the information stored on accessible computers should be determined. Disclosure of a password file might allow future unauthorised accesses; a glimpse of a proposal may give a competitor an unfair advantage; a technical paper may contain years of valuable research.

What Happens When the Policy is Violated. What to do When Local Users Violate the Policy of a Remote Site

It is obvious that when any type of official policy is defined, be it related to computer security or not, it will eventually be broken. The violation may occur through an individual's negligence or accidental mistake, because the individual has not been properly informed of the current policy, or because he does not understand it. It is equally possible that an individual (or group of individuals) may knowingly perform an act that is in direct violation of the defined policy. One point to remember is that proper education is your best defence. In the event that a local user violates the security policy of a remote site, the local site should have a clearly defined set of administrative actions to take concerning that local user. The site should also be prepared to protect itself against possible actions by the remote site.

1.2.2 Policy Issues

There are a number of issues that must be addressed when developing a security policy.

General security questions:
1. Who is allowed to use the resources?
2. What is the proper use of the resources?
3. Who is authorised to grant access and approve usage?
4. Who is in charge of managing the resources?
5. What are the user's rights and responsibilities?
6. What are the rights and responsibilities of the system administrator compared with those of the user?
7. What do you do with sensitive information?
8. What should be done in case of violation?
9. Who is in charge of the security policy?
10. How is the policy verified?


What Needs to be Protected

The security policy defines the WHATs: what needs to be protected, what is most important, what the priorities are, and what the general approach to dealing with security problems should be. The security policy should include a general risk assessment of the types of threats a site is most likely to face and of the consequences of those threats. From these questions, the following must be specified (risk analysis):
1. What do you protect?
2. From whom do you protect it?
3. What is the perimeter of the trusted system?
4. Which quality-of-service parameters matter (usability, performance, security, etc.)?
5. Which priorities apply?

Identifying Possible Problems

To determine risk, vulnerabilities must be identified.

Economic questions (a small worked example follows the criteria list below):
1. How much would an attack cost without protections?
2. What is the probability of an attack?
3. How much do the protections cost?
4. How much would an attack cost once adequate protections are built?

A security policy can be established according to two philosophies:
• everything that is not explicitly forbidden is allowed;
• everything that is not explicitly allowed is forbidden.

Five criteria for evaluating any policy:
1. Does the policy comply with the law and with duties to third parties?
2. Does the policy unnecessarily compromise the interests of the employee, the employer or third parties?
3. Is the policy workable as a practical matter and likely to be enforced?
4. Does the policy deal appropriately with all the different forms of communication and record keeping within the office?
5. Has the policy been announced in advance and agreed to by all concerned?
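The economic questions above amount to a simple comparison: the loss one would expect over a period with no protection, against the cost of the controls plus the residual loss expected once they are in place. The worked example referred to in that list is sketched below in Python; all of the figures are invented purely for illustration and would in practice come from the risk analysis.

    def expected_loss(cost_per_incident: float, incidents_per_year: float) -> float:
        # Expected yearly loss = cost of a single incident x expected frequency of incidents.
        return cost_per_incident * incidents_per_year

    # Illustrative figures only.
    loss_without_controls = expected_loss(cost_per_incident=50_000, incidents_per_year=0.4)
    loss_with_controls    = expected_loss(cost_per_incident=50_000, incidents_per_year=0.05)
    control_cost_per_year = 12_000

    combined_cost = control_cost_per_year + loss_with_controls
    print("Expected loss without controls:", loss_without_controls)   # 20000.0
    print("Controls plus residual loss:   ", combined_cost)           # 14500.0
    print("Controls worthwhile:", combined_cost < loss_without_controls)

Under these assumed figures the controls are worthwhile; with a much lower attack probability or a much higher control cost, the same comparison would advise against them.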

The following checklists help a site determine which strategy to adopt: "Protect and Proceed" or "Pursue and Prosecute".

Protect and Proceed
1. If assets are not well protected.
2. If continued penetration could result in great financial risk.
3. If the possibility or willingness to prosecute is not present.
4. If the user base is unknown.
5. If users are unsophisticated and their work is vulnerable.

Pursue and Prosecute
1. If assets and systems are well protected.


2. If good backups are available.
3. If this is a concentrated attack occurring with great frequency and intensity.
4. If the site has a natural attraction to intruders, and consequently regularly attracts them.
5. If the site is willing to incur the financial (or other) risk to assets by allowing the penetration to continue.
6. If intruder access can be controlled.
7. If the monitoring tools are sufficiently well developed to make the pursuit worthwhile.
8. If the support staff is sufficiently clever and knowledgeable about the operating system, related utilities and systems to make the pursuit worthwhile.
9. If there is willingness on the part of management to prosecute.
10. If the system administrators know in general what kind of evidence would lead to prosecution.
11. If there is a site representative versed in the relevant legal issues.
12. If the site is prepared for possible legal action from its own users if their data or systems become compromised during the pursuit.

1.2.3 Policy statement

A policy has to be described in a document, as the set of desired security goals. As a general rule, the policy should be:
1. A published, organisation-wide document
2. Related to the protection of information
3. A base to set directions and give broad guidance
4. Guidance supported by top-level executives
5. A document periodically renewed

The policy document should contain:
• the organisation's definition of information security
• a statement regarding its applicability to all of the organisation's information
• the consequences of non-compliance
• the requirements for ownership and classification of information
• the individual responsibilities of people within the organisation
• the actions to be taken when an attack occurs

The organisation's security policy should be:
1. known by every person in the organisation
2. understood, both in its contents and in its relevance
3. agreed

Once the site security policy has been written and established, a vigorous process should be engaged to ensure that the policy statement is widely and thoroughly disseminated and discussed.

1.3 Security models

After the description of general security concepts and technologies, we now examine how these problems arise in actual cases.


1.3.1 The problem with security

This paper is about security from the point of view of those who are, or will be, accountable and responsible: policy makers, managers and users. To understand their side of the problem, consider the following mistakes which are often made in the definition of "secure" systems and services:
• Security is treated as a very high priority and dominates the definition of applications and services. The result is a system which seems to be there to prevent things happening rather than to enable and empower the users. The result is usually rejection and disuse.
• The designers focus on some perceived high-priority security issue or some particular security mechanism without understanding the full range of problems to be solved at the human as well as the technical level. All security issues are then redefined in terms of the preferred solution. The results are similar to those of the first mistake.
These are bottom-up or technology-first mistakes. The top-down approach can also lead to problems:
• Great effort is put into the formulation and agreement of a coherent and complete set of security principles, policies, rules and models among all the organisations involved. Often, however, real differences of interest exist and this process never concludes satisfactorily.

In SUN, the infrastructures and applications are being deployed to support the delivery of services by co-operating groups of agencies. In many cases, these agencies are independent of each other and have, at the very least, different priorities. Experience has shown that any assumption that the requirements and policies with respect to infrastructure and services are free of conflict and contradiction is unrealistic.

1.3.2 Security methodology

The core of all security methodologies is the following set of processes:
1. (Re)Model the security domain, i.e. the system and its environment.
2. Conduct a vulnerability analysis on the model: what could go wrong?
3. Conduct a consequence analysis, assessing the cost and sustainability of the identified failures.
4. Propose countermeasures.
5. Compare the cost of the countermeasures with the consequences of failure.
6. If the clients are still worried and there is still time and money, go back to 1.
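The loop above can be made tangible by recording each pass through steps 1 to 5 as a small table of findings. The sketch below shows one possible representation in Python; the assets, vulnerabilities and figures are invented placeholders, not findings of the SUN2 project.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        asset: str            # element of the modelled security domain (step 1)
        vulnerability: str    # what could go wrong (step 2)
        consequence: float    # estimated cost of the failure (step 3)
        countermeasure: str   # proposed response (step 4)
        measure_cost: float   # cost of the countermeasure (step 5)

    register = [
        Finding("patient record store", "disclosure to an outsider", 80_000,
                "access control and encryption", 15_000),
        Finding("conference e-mail", "forged submission", 5_000,
                "digitally signed submissions", 9_000),
    ]

    for f in register:
        adopt = f.measure_cost < f.consequence          # the step 5 comparison
        print(f"{f.asset}: {f.countermeasure} -> adopt: {adopt}")
    # Step 6: if the residual worry justifies it, refine the model and repeat.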


The first part of this document provides us with an overview of the technical aspects of what is required in step 4. However, one of the biggest issues concerns the selection of the scope of the model, which is made in step 1. The problem is that it cannot be limited to the technical system alone: most of the failures we are worried about involve people, and concern the breakdown of roles and their responsibilities.

After some initial discussions with our colleagues involved in the medical application here in the Northern Region of England, we have produced an initial discussion document. The main idea presented is the following: rather than attempting to model and analyse all the complexities of the environments we are supporting, whether they are health care, public information or business support, we could model a set of services and use the concept of an instance of a service as the basic unit of security analysis. For this to work, the concept of a service must be adequate. It must be defined in terms of (a small illustrative sketch follows this list):
• The roles and relationships involved in defining and managing the service.
• The roles and relationships of the users.
• The nature and intention of the messages they generate and interpret in the context of the use of the service.
• The mapping of these messages onto media, channels and persistent stores.
• The relationship between media, channels and stores and physical platforms and networks.
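As a purely illustrative sketch of the service concept just listed, the fragment below records a service definition in Python in terms of its managing roles, user roles, message types and the mapping of those messages onto channels and stores. The conference-style vocabulary is an example only, not a fixed MARIA terminology.

    from dataclasses import dataclass, field

    @dataclass
    class ServiceDefinition:
        name: str
        managing_roles: list          # roles that define and manage the service
        user_roles: list              # roles of the users of the service
        message_types: list           # kinds of message generated and interpreted within the service
        channel_mapping: dict = field(default_factory=dict)   # message type -> medium/channel/store

    conference = ServiceDefinition(
        name="case conference",
        managing_roles=["sponsor", "chair", "secretary"],
        user_roles=["participant", "facilitator", "implementing agent"],
        message_types=["submission", "minutes", "decision"],
        channel_mapping={
            "submission": "conference archive (persistent store)",
            "minutes": "e-mail to all participants",
            "decision": "work-flow task for the implementing agents",
        },
    )

    # A security analysis can then ask, per message type, which roles may generate
    # and interpret it and over which channel or store it travels.
    for message, channel in conference.channel_mapping.items():
        print(f"{message}: carried by {channel}")

An instance of such a definition, populated for one concrete conference, is then the unit over which the vulnerability and consequence analysis of section 1.3.2 is run.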

In the following paper, we present an initial analysis of the requirements in the medical sector as an example of this approach.

1.3.3 Defining a MARIA service and security architecture

The purpose of this note is to suggest a framework for defining both the functional and the non-functional requirements of the services which are proposed for the MARIA network. It is intended as a first step in the creation of an architecture which defines not only what services are offered, in terms of the user roles and processes that will be supported and how they are configured and managed, but also systemic properties such as security and resilience.

1.3.3.1 Defining a service

The MARIA network provides an infrastructure for the co-ordination and delivery of palliative care for children and young people. The users of the services it provides operate in a number of different contexts, and it is often the case that any particular user activity has a different significance in each of these contexts. The first stage in defining the services, therefore, is to identify these contexts (roles, responsibilities and relationships) and the significant things (policies and objects) that are handled within them. We will use the term "enterprise" to refer to a particular context characterised by a set of norms, objectives and values. One of the challenges of this example of requirements modelling and analysis is that it is clear from the outset that these different enterprises do not merely interact or interwork in the delivery of services but, in many cases, are composed together. The enterprises which MARIA supports are:
• A medical enterprise which has the objective of delivering care within the constraints of budgets and clinical policy.


• An academic enterprise within which professionals are trained and accredited and medical research is undertaken.
• An educational enterprise: schooling is a high-priority need of the children and young people, and the MARIA network is a significant educational resource.
• The social service enterprise within which a wide range of support is delivered, on the basis of statutory and discretionary rights and by charities.

The roles which constitute each of these enterprises are mapped onto individuals who are also employees within a career structure in one or other of them, as well as members of the social and community context within which the MARIA network is embedded. Finally, and most importantly, there is the family context of the children and young people and their parents and carers, siblings, relatives and friends, together with all the external relationships they engage in as citizens, consumers, members of social, political and cultural groups, etc. (We do not refer to a "family enterprise"; the connotations are wrong.)

It is not necessary to model and analyse each of these enterprises before we are able to propose generic services and service classes; it is, however, important to establish that these distinct contexts of interpretation exist and, for example, that particular items of information may move from one to another, where they may have quite different significance and be subject to different interpretations. The objective of a service definition and model is to provide the means of expressing rules to govern these flows of information and capabilities within and between enterprises.

1.3.3.2 An example of a service definition

We will consider a service, or service class, to illustrate the level of definition required for the purposes of a technical architecture and the terms in which the policies and requirements may be expressed. We will use the example of a conference service, for which we will define the set of basic roles and relationships which constitute the service and the set of resources which are exchanged between the role holders, and we will illustrate some typical policies which may apply.

1.3.3.2.1 The conference service

A conference service supports instances of the following roles. A sponsor or initiator sets the terms of reference, including the allocation of rights to participate. (In an ad hoc conference, the sponsor may be a role holder at the operational level within any of the enterprises. Some conferences are instituted because they are part of a standard procedure, and in these cases the sponsor role is exercised in a different time frame, concerned with the design, implementation and evaluation of organisational process and procedure.) Moderating roles comprise the chair, who is responsible for controlling the process, and the secretary, who is responsible for recording and communicating the proceedings. Two roles may generate input to the conference: participants, who are responsible not only for the appropriateness of their submissions to the conference but also take part in any decisions which are taken, and facilitators, who are responsible for the quality of the material they submit but do not share in the decision responsibility. Finally, there are implementing agents, who are responsible for carrying out any decisions of the conference. There are many ways in which these roles may be combined and allocated within groups of individuals, and there are also many ways in which a conference may be distributed in both space and time. By modelling strictly in terms of roles we are attempting to abstract away from these considerations, in order to identify the common elements and to be able to map general policies and requirements onto particular cases.


There are many classes of conference. In the case of a clinical conference, policies about roles may be formulated in terms such as the following:
• A client whose case is the subject of a conference will be a participant, except in the case of a child client, when the child's mentor/carer will participate.
• Those who will be responsible for implementing the decisions of a conference will be participants. (etc.)
Policies about resources, particularly information, could be:
• Submissions to a clinical conference will remain private to that conference until released by the chair.
• The fact that any particular item of external information has been used in the context of a particular conference will remain private to that conference until released by the chair.
• If a conference is going to decide to allocate medical resources, then the accounting manager responsible for that resource must be a member of the conference. (etc.)

Clinical conferences belong to cases and, when the chair and the sponsor agree that the conference is concluded, all materials are inherited by the case (another service) and are subject to the policies of that service. [I think I know quite a lot about conferences but not so much about cases: for example, does a case have an individual "owner", and is it ultimately the responsibility of a consultant? If so, there would be a policy about the relationship between the sponsor, the chair and the owner of the case, e.g. that they are the same individual. On the other hand, are cases ultimately "owned" by a conference? Note that these are examples intended to argue that, even if we limit our language to the concepts of roles and resources, we are still able to say everything we need to say in policy terms about intentions and responsibilities, and that these terms lead us to fundamental decisions both about how the enterprises operate and about the object hierarchy or data model that might be used as a basis for the implementation of an application or a service.]

Now, the way each of these different sorts of policy will be implemented depends on decisions about how a particular conference is implemented. In some conferences, the significance of the decisions being made is so high that the only acceptable way of letting role holders discharge their responsibilities is co-presence – looking into each other's eyes – so we have a conference round a table in real time, and the conference support service is concerned with the co-ordination of diaries, the circulation of agendas and the preparation and circulation of minutes. In other cases, the channels of communication and the medium in which information is exchanged may involve video and/or audio. Finally, in some conferences e-mail may provide the means of conducting business when the participants are widely dispersed, and the process may continue over many months. The way a particular conference is implemented and supported depends on a complex set of constraints and requirements.

1.3.3.2.2 Why start with roles?

The key question which must be asked about the enterprise projection of a proposed service is whether a complete and coherent set of roles has been identified to allow the full range of required policies to be expressed. For example, we have not offered a role of observer in a conference.
Such a role would imply the right of access to all material, but no right to generate input and no responsibility for decisions.


The absence of an explicit role would mean that observers could be supported as participants who chose not to provide input to a conference; they might seem to be members of the responsible team but would not in fact be so.

1.3.3.2.3 How would a conference service be offered?

It is envisaged that the MARIA network will provide certain infrastructural resources which include:
• A number of connection and connectionless services which will support point-to-point and point-to-multipoint communication of text, voice, image and video.
• A set of persistent information storage, access and retrieval services.
• A set of work-flow and transaction services within which sequences of operations on information and communications resources can be defined, monitored and enforced. [This does not yet exist in the network but we have the technology.]
• A set of service access points/workstations.
• A set of tokens (smart cards plus system components) which provide the means of corroborating the identity of individuals, and also the means of maintaining records of the capabilities which they hold at any particular time and in any particular context.

At the organisational level, we have users who, at any given time, will hold a complex set of roles. In systems terms, each role implies rights to interpret certain items of information and to generate certain items, as well as rights over certain resources. The direct context in which the capabilities associated with these rights and duties are exercised is an instance of a service. The purpose of the service architecture is to provide a dynamic mapping between the set of system functions, and the relative capabilities that they provide, and the sets of rights and responsibilities defined in service user roles. This must operate at three levels:
• Facilities to monitor and modify the definitions of services and the rules under which they operate (policy and direction support services).
• Facilities to change the mapping of people onto roles (human resource management services corresponding to a service registration process).
• Facilities to initiate and deliver instances of services to sets of users according to their roles (operational services).

1.3.3.2.4 Sorts of service

We have used the conference as an example of a service defined in terms of a set of roles, communications channels and information resources. This is an example of a "few to few" service, in that there are limits to the size of participation in an effective conference. The range of possible services required to support the different intra- and inter-enterprise relationships is presented in the following framework.

Services characterised by the interpretation of the type and content of messages:
• A communications service is one where the system makes no interpretation of the content or of the type of the content.
• A structured service is one where the system interprets the type of the content and the service supports a workflow. Thus, a conference service would be structured: each submission would be distributed to all participants because of its status as a submission and their role as participants. If a person changed role, then all current information associated with the new role would become available to them.


• An automated service is one where the system interprets both the type and the content of the message and automatically registers commitments, e.g. a booking or diary service, a library service, an archiving service.

Services characterised by topology:
One-to-one services, each of which is defined in terms of two roles, such as:
- Consultation: doctor – patient
- Personal support: mentor – ward
- Reporting: manager – job holder
Thus, a simple structured reporting service may manage the collection of time sheets; a consultation service would be a component of a case management service and, depending on the locations of the two participants, would ensure that the doctor had access to the case records (structured service) and also that the most appropriate communications service was allocated if the parties were remote from each other.
Few-to-few services support team processes and meetings.
One-to-many services correspond to publication, distribution and information services. "One" in this topology is often an enterprise or an encapsulated group of enterprises. Thus a library service includes the roles of librarian (classification, maintenance, support, etc.) together with publishers and authors. The latter exercise copyright over content and may have different policies depending on the enterprise domain in which a library service is being used and on the role of the user.
Brokerage or mediation services (many – one – many) support markets and exchanges.

1.3.4 What do we do about it?

We are working under a number of significant constraints in SUN2:
• There is no resource for new developments.
• There is very little resource for integration and advanced applications.
So we have to be very realistic. My thesis in this discussion paper is that, using the sort of approach I have outlined here, we could identify one or two key "services", in the sense I have been using the term, and use them as a vehicle to assemble and integrate the different technical and organisational elements required. Assuming that we take a service which is relatively well understood, such as a case conference spread over an initial face-to-face meeting, followed by a period of remote interaction through e-mail and video (?), with a face-to-face closing meeting, the key issues at the requirements level are:
• How is the service instituted in the first place, and how are the work-flows and transactions defined and maintained? (Service definition)
• How are roles allocated and managed in the initial creation of the case conference and throughout its life? (Service instantiation)
• How is the process controlled and, in particular, how is each role supported? (Service operation)
• How are persistent information and communication handled, access controlled and quality/use monitored? (Service management)

1.3.5 Concluding remarks

What I am attempting here is to define a way of addressing the problem which allows us to move rather quickly to practical issues, such as integrating smart card readers into workstations and providing a MARIA browser which presents the service resources and capabilities required by each role holder, but to do so in a context where we can also develop a longer-term architectural perspective.


The next stage, I think, will be a half-day workshop with the following agenda:

The MARIA network as it is now:
• The users and the ambitions
• The technical realities: systems, networks, services, etc.

The technologies we have available, or could get access to, which are compatible with the status quo:
• Transaction and work flow
• Smart card
• User interface/browser/presentation

How to proceed:
• Can we identify one or two paradigm services to pilot? (Our work package is called "Advanced applications".)
• Can we specify something that will make sense to users and is realistic given the resources we have?
• Where do we go from here?

1.3.6 Information service value chains

1.3.6.1 Information services

The value chain which delivers information services to users can be represented at different levels of granularity and abstraction. The purpose of the models presented here is to provide a basis for two types of analysis:
1. To explore the configurations of roles into market actors, so as to inform the different points of view from which services, systems and organisational structures may be evaluated.
2. To provide a framework for the identification of failure modes and their consequences, which is an important stage in security analysis.

Figure 0: The information service value chain (subjects, informing agents, publishing enterprises, information broking enterprise, information service enterprise, inquiring agents, users, evaluating agents)

The information service value chain is represented at the highest level by three divisions of value adding, which correspond to publishing, information broking and information delivery. Outside the chain, but interacting with it, are the informing agents, who represent the ultimate requirement or obligation to inform, and the information-seeking and information-using agencies. These are represented in Figure 0 above.


1.3.6.2 Informing agents

Informing agency is associated with the rights and duties to communicate in some social, administrative or commercial context. Thus, a company offering a commercial information service and an administrative department meeting an obligation to provide public information both include this agency. An agency named "subject" is represented as the locus of the rights to privacy and good name which apply in a regulated information economy.

1.3.6.3 The information using roles

We represent the information user as three component roles, which correspond to three epochs or phases of information service usage:
1. The inquiring agent has some information need and matches this need to a current offer in the conversation with the information broking enterprise.
2. The information user accesses and acquires information through the conversation with the information service provider.
3. Finally, the utility and effectiveness of the information may be communicated back to the information broker by the evaluating agent.

In each of the following sections we will consider the three different groupings of responsibility within the information service value chain. We identify four roles that are the components of the information publishing enterprise. As we shall see below, the responsibilities associated with the marketing and distribution of information products are considered as separate aspects of the information value chain.

The publishing agent is the locus of responsibility for what is published, in terms of its content and of the intended and actual benefit or harm caused by its ultimate interpretation and use. The units of publication may vary from an individual data item in a database service to a complete work corresponding to an article or book. The publishing agent interacts with three roles which are usually considered as part of the publishing enterprise. It is worth noting that the most usual context for these roles has been paper-based publishing; however, the underlying division of responsibility is independent of the medium. The outcome of the conversation between the informing agent and the publishing agent is a publishing brief.

Figure 1: Publishing roles (subjects, informing agents, the publishing enterprise with its authors, designers and editors, the information broking enterprise, the information service enterprise, inquiring agents, users, evaluating agents)

The authoring agent is concerned with generating and assembling content to meet the communications objective in the publication brief. This takes the form of copy. The designing agent is concerned with the organisation and presentation of the copy into a draft, in a way that will meet the intended information users' needs. The editorial agent is responsible for the formal aspects of the draft and applies the sets of editorial rules which have been defined by the publishing enterprise.


1.3.6.4 Information broking roles

An information broking enterprise will usually serve many publishing enterprises, although in the past on-line information service providers have brokered their own information and, in fact, bundled access and delivery. One of the objectives of the model presented here is to illustrate the range of possible configurations of basic roles and to offer a representation that abstracts away from the boundaries of the enterprise; it thus attempts to represent the complete unbundling of value-adding relationships.

The first stage in the conversation between a publishing enterprise and an information broking enterprise is the submission of information for registration. The registering agency exercises the following responsibilities:
• Assessment of the registrability of the submitted information. Each broking domain will have a set of criteria of acceptability of information for the client/user groups which it serves. (Note that a single information broking enterprise may manage several different domains of registration corresponding to different sets of clients and users.)
• Registration (i.e. acquisition and reliable storage) of the data required to support access and delivery transactions on accepted information. This includes the ownership and conditions of use.

Figure 2: Information broking (the registering, classifying, advertising, accrediting, transacting and post-transaction agents, interacting with subjects, informing agents, publishing enterprises, inquiring agents, users, evaluating agents and the information service enterprise)

A register provides the information resources required by the transaction managing agent, but does not contain information to support searching or discovery. The organisation of register entries and the provision of supporting resources is the responsibility of the classifying (and organising) agent. There are three means of delivering this value:
1. Through the organisation and provision of search mechanisms which operate on content.
2. Through the provision of classification and categorisation schemes.
3. Through linking and ordering.
In each case, there may be multiple schemas, implemented over the same register of information offers and of content, to meet the needs of different user groups.

The advertising agent is responsible for publicising information services and is taken to be acting on behalf of the publisher and informing agents. This information is distinct, in terms of policy and of responsibilities, from content and from independent evaluations of the provenance, relevance and value of information by independent agencies; the latter are the responsibility of accrediting agents.

The interaction between the publishing agent and the registering, classifying, advertising and accrediting agents results in an update of a catalogue, which here takes on a very wide meaning.


meaning. We consider it to correspond to any resources provided in an information service environment to assist the user to satisfy an information need by searching or discovering and selecting an appropriate source. Whatever forms the catalogue takes, responsibility for the information embodied in it can be ascribed to one or other of the agents we have identified so far in the information broking enterprise.
When an inquiring agent has located a source and an item of information within an accessible catalogue environment, a transition takes place from what we term the rendezvous phase to the transaction phase of information service value adding. The enquiring agent becomes a user and the corresponding agent in the information broking environment is the transaction management agent. The responsibilities of this agent are:
1. To check that all the preconditions for the information transaction are met.
2. To record the instance of usage. This may include charging information or the facilitation of payment mechanisms and will certainly include the capture of information required for management purposes and to resolve any post-transaction enquiry (e.g. a complaint or valuation).
3. To provide or mediate access capability between the user and the information service enterprise.

1.3.6.5 Service delivery roles

The access controlling agent is responsible for ensuring that only users with the appropriate capability access information appropriate to them. The medium handling agent is responsible for the organisation and holding of information for the purposes of delivery to the user. This includes the configuration and management of information servers in the case of network based services and would correspond to stock holding of copies in the case of physical media.

The delivery agent is responsible for the timely and appropriate delivery of requested content (and media) to the designated user. In the case of electronic information services, this agency is usually split into communications access provision and data transport provision. The presentation agent is responsible for rendering the delivered information in a way that conforms to the presentation conventions of the information environment. Note that the presentation agent is indicated as distributed between the service delivery enterprise and the user enterprise, which corresponds, for example, to the ownership and configuration of a browser by the user.

Figure 3: Service delivery


1.3.6.6 An example of applying the model
We consider the provision of a simple information service on the Internet in the form of a home page. The owner of the home page is subject, informing agent and publishing agent. The owner will also generate the page header and metadata which will be used for location purposes by search engines and may invite other page owners to create links to it. Thus, such publishers act as their own registering, advertising and classifying agencies. Access to most information, and certainly to home pages, is open and free, so in this case there would be no transaction agency exercised.
The allocation of post-transaction agency is an interesting issue for the Internet. In case of a complaint, the only directly accessible enterprise is the one which owns the information handling resource which is hosting the information, i.e. the medium handling agent. But this enterprise (typically the ISP) is not regarded as the publisher and is not, therefore, directly responsible for content. This is an example of how changes of media and channels and the virtualisation of resources may imply a change in the legal and contractual framework of economic and social activities. It further shows that the representation of roles and relationships as presented in these enterprise models provides a means of locating and reasoning about these policies.
The ISP exercises access control on the basis of its own registered users, and medium handling, while the Internet organisations collectively provide data transport. Access services are supplied by a telecommunications service enterprise. Thus, we see that the current configuration of the Internet is a particular selection and mapping of the underlying roles and relationships of an information service value chain. There are many other possible mappings which can be constructed to meet other needs and policies, which may or may not be based on the same technical protocols and systems resources. We are here discussing an architecture of enterprise rather than a technical architecture.
1.3.6.7 Using the model
If the model presented here is adequate and expressive enough, then it should be possible to map different information services and their environments onto the model, in particular the concept of a Regional Information System or a domain specific information network such as MARIA. In each case of a mapping, each of the agencies is examined and the allocation of the responsibilities it entails is identified with a real enterprise and with an individual or group within the enterprise. In cases where a role has been automated, the question of responsibilities is transformed into questions of the fitness of the design, and the responsibility for the correct maintenance and operation of the system must be examined. In the cases where the responsibilities are exercised by a person or a group, the question to be considered is how these role holders have access to the information and other resources required. Another question is how they are able to demonstrate that they have exercised due care and attention in discharging their responsibilities. Thus an assessment of any aspect of an information service must be associated with a particular role or combination of roles, and it is this ability to define points of view for assessment that makes the enterprise models useful in identifying areas of common interest and the applicability of common solutions. In the case of security analysis, the enterprise models have a particular use.
Since we are defining the intentional aspects of a generic information service through this representation of roles and relationships, they provide a useful definition of a vulnerability in terms of


the breakdown or failure of a conversation. This may occur at one (or more) of three levels:
1. Role failure: the individual who has a particular responsibility misunderstands, misinterprets or abrogates it. These are failures of intention and result in communications which are not what they purport to be.
2. Instrumental failure: here the failure is in the generation or interpretation of information (i.e. the content of a conversation with another role holder), resulting in communications which are not what they were intended to be.
3. Communications failure: here the failure is one of performance of the writing, transmitting and reading processes, which results in miscommunication.
Each of these classes of failure calls for a different sort of countermeasure. A role failure can only be rectified by reallocation or re-negotiation of roles. For example, the requirement for two signatures is such an organisational countermeasure, based on the argument that conspiracy is less likely than an individual betrayal. The type of countermeasure which is an appropriate response to instrumental breakdown involves the re-allocation of resources. This type of failure is associated with bottlenecks, overloads and shortage of information, training or skill. Communications failures are technical in nature and are ultimately traceable to faulty specification, implementation or operation of the information and communications systems and services.
Vulnerability analysis, then, comprises three stages:
1. The identification of the range of role failures that may occur, by considering each participant in turn as hostile and assessing the possibility, cost and pay-off, compared with the consequent damage to the victim enterprise and its clients.
2. The identification of the requirements which are being placed on information and communications assets by each role. This must address two dimensions of resourcing:
• The requirements to engage in an instance of the conversations of the role, e.g. skills and training, information and communications resources, etc.
• The requirement to meet the capacity of the expected demand which will be placed on the role.
3. The analysis of the proposed information, communications and automation resources in terms of the conversations they are designed to support. The concept of a workflow (see below) is useful at this level of design and analysis.
It should be stressed that the purpose of this method is to convert a rather ill-defined problem which is not directly soluble into two somewhat more manageable problems. The insoluble problem in vulnerability analysis is "Have I thought of everything that could go wrong?" The two more manageable problems into which this has been structured are:
• Have I created an adequate model of my enterprise and the environment in which it is operating?
• Have I analysed it completely?
1.3.6.8 Conversations, actflows and workflows
The models of information service provision presented here have been constructed using a set of formal rules of construction in order to provide a rigorous basis for processes such as evaluation and security analysis. It is not the purpose of this paper to provide the background and justification of this method, but to provide some practical support for the identification and exploitation of commonalities in what, on the surface, appear to be quite diverse services, applications and systems which have been developed to meet local and quite specific needs.
The objective is to demonstrate that, despite the wide range of contexts, there are common, underlying characteristics of information service provision


and that there is a level of abstraction within which the different approaches can be described which is not trivial and within which useful analysis may be undertaken. To summarise the approach: we have represented the information service value chain in terms of basic or atomic roles which are associated with value adding. These abstract from the allocation of roles to agents such as people or computers. Roles interact through conversations, and the concept of a conversation serves to link the normative description of a role, i.e. what we think the role should entail (intention), with a description of what someone or something to whom the role has been allocated is observed to do (extension).

Figure 1. (Diagram: party/enterprise, role assignment, role, actor, act-flow, act, action/operation, instrument, work-flow, channel + medium.)

For communicative behaviour to take place, intention and extension have to be combined and reified, observed and interpreted by conversing agents. This is the process of instrumentalisation, and an instrument is the resource which mediates the association between the intentional and extensional events. The term "instrument" is a rich one, combining the legal connotation of the documentary embodiment of a contract, the scientific or medical connotation of a tool for acquiring, recording or presenting information, and the musical connotation of the means of performance. In the theory of conversations, we use the term to mean any resource which serves to signal or witness an intended act and which carries information associated with that act, concerning the state of the conversation in which it is performed. Thus a document may be an instrument, and so also may a hand-shake. In the latter case, the resource involves the co-located attendance and activity of the participants. The final concepts concerning the dynamic aspects of conversations are those of medium and channel. These require little elaboration. They are concerned with the representation, preservation and transportation of the information embodied in instruments. Thus, to define the concepts represented below:

Party/enterprise: the entity which is held responsible.
Role: specifies the set of responsibilities in terms of relationships with other roles.
Actor: the entity which stands for or surrogates a party/enterprise; the thing that behaves.
Act: the intended interpretation.
Action/operation: the behaviour which creates or modifies an instrument.
Instrument: the resource which is taken as evidence of an act in the context of a conversation.
Channel and medium: the means of recording, preserving and transporting instruments.
One method of representing and analysing the dynamic aspects of organisational behaviour


is that of a workflow. This concept relates actors, the actions they perform and the resources upon which they operate. These concepts are appropriate for the normative definition of organisational processes. A workflow is used to define both single-actor procedures, for example: "the systems administrator will back up all user data on a daily basis and will record disk utilisation", and multi-actor ones, such as "a cheque must be signed by the finance officer and passed to a director for counter-signature". When we consider the relationship between party, actor and role, we have the concepts required to express policies associated with role assignment: for example, who is appointed as systems administrator and who deputises in the case of absence. Again, we have a linguistic and conceptual framework for expressing rules and decisions such as: "while the systems administrator (John) is on holiday, the operator (Paul) will undertake backup and reporting tasks." Procedures are mapped to roles and roles are mapped onto individual people.
We have taken particular care to distinguish between intentional aspects of conversations and extensional ones such as the workflows and role assignments, because it is important to distinguish between what is regarded as constitutive, i.e. essential and inherent to an environment, and what is regarded as pragmatic. A decision to change the way that roles interact and the way that their conversations will be implemented, because for example new media or channels have become available, is quite different from a decision to change the roles themselves. These decisions involve the exercise of quite different rights and responsibilities. It is for this reason that we introduce the distinction between what we term an actflow, specified in terms of the ordering of acts as events, which signify intended and interpreted changes in the state of responsibility and obligation between agents, and workflows, which specify the concrete, observable actions which are interpreted as instances of the implementation of an actflow. There are many ways in which a particular actflow may be implemented, depending on the distribution of the actors and the nature of the media and channels of communication available to them.
We can, for example, define systems administration responsibilities in terms of actflows in the conversation between the systems administrators, systems managers and the users. This will comprise acts such as protect (data and media), respond (to requests for assistance from users), maintain (states of serviceability) and report (on levels of utilisation). The definition of workflows, which are a response to these intentional definitions of the systems administration responsibility, may be evaluated in the following terms: Do you, the policy maker and stakeholder, think that this proposed procedure affords protection of your data? Would you accept that behaviour as an acceptable response to a request for support? And so on. The key factor in this evaluation is that we are comparing an intention with an extension, and the question relates to the acceptability of the latter as an interpretation of the former. If the system administrator has the role of protecting user data, then the corresponding actflow may be expressed in terms of "backing up", which must be defined in terms of providing the means to restore the data if, for any reason, the operational copy is destroyed.
If the systems administrator makes a copy of user data and then uses the backup disk as a coffee mat, protection may have been afforded to the equipment, maintaining its cleanliness and serviceability, but an acceptable act of protection of user data has probably not been performed. A change in technology and systems architecture could mean that the same protection actflow could be implemented automatically, exploiting replication in a distributed environment. The responsibilities to protect user data would then be mapped onto systems design and operation responsibilities rather than onto systems administration. Such a


change is different in nature from an announcement that users must take responsibility for their own data and that its protection is not part of the service. Such a conversation would reflect a different relationship between the computing service and the users.
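The vocabulary introduced in 1.3.6.8 (parties, roles, actors, acts and workflows) can also be captured in a very small data structure, which makes role assignment decisions explicit and reviewable. The sketch below is purely illustrative Python; the role names, actor names and acts are hypothetical examples, not part of the model itself.

    # Roles are normative: each role names the acts it is responsible for.
    roles = {
        "systems_administrator": ["protect", "respond", "maintain", "report"],
        "operator": ["backup", "report"],
    }

    # Role assignment maps roles onto the actors (people) who currently hold them.
    assignment = {"systems_administrator": "John", "operator": "Paul"}

    def reassign(role, actor):
        """Record a pragmatic decision such as 'while John is on holiday,
        Paul will undertake backup and reporting tasks'."""
        assignment[role] = actor

    # A workflow entry records the concrete, observable action taken as an
    # instance of an act, together with the instrument that witnesses it.
    workflow_entry = {
        "act": "protect",
        "action": "nightly backup of all user data",
        "actor": assignment["systems_administrator"],
        "instrument": "entry in the backup log",
    }

    reassign("systems_administrator", "Paul")   # holiday cover

Expressed this way, a change of workflow (a new backup tool, say) leaves the roles untouched, while a change to the roles themselves is visibly a different kind of decision.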


2 Technical aspects

2.1 Security services
In every system that manages data in an electronic format, and in particular those connected to the Internet, many potential threats have to be faced, and security mechanisms have to be put in place to prevent, detect or recover from security attacks. The technologies that can support security can be both hardware and software. The mechanisms that are built aim at assuring many different security services. The ISO "IS 7498-2" standard defines fourteen security services that can be grouped into five subsets.
2.1.1 Authentication
Authentication aims at assuring that the real identity of a party is the same as the one it claims to have. An attack on this service is represented by the masquerade.
Services:
• Peer entity authentication
• Data origin authentication: the corroboration that the source of data received is as claimed.
2.1.2 Access Control
This service aims at assuring that resources are accessible only to the parties for which they are intended. It includes the prevention of use of a resource in an unauthorised manner.
2.1.3 Data confidentiality
Protection from information disclosure by unauthorised parties. The kind of attack this service aims to prevent, often passive and difficult to detect, is called interception.
Services:
• Data confidentiality in connected mode
• Data confidentiality in connectionless mode
• Selective confidentiality by fields
• Data flow confidentiality
2.1.4 Data integrity
The protection of the information being transmitted, or locally stored, from modification or deletion by unauthorised parties.
Services:
• Integrity in connected mode with recovery
• Integrity in connected mode without recovery
• Selective integrity by fields, in connected mode
• Integrity in connectionless mode
• Selective integrity by fields, in connectionless mode
2.1.5 Non-repudiation
Non-repudiation represents the protection from the risk that a party could disclaim an action it really performed. We can distinguish two kinds of non-repudiation: the non-repudiation of origin, the proof of a message creator's identity, and the non-repudiation of destination, basically a


transmission problem consisting in the proof that a particular message has been delivered to the intended receiver.
Services:
• Non-repudiation with proof of origin
• Non-repudiation with proof of delivery
2.1.6 Availability
Availability means that the computer system keeps working efficiently and is able to recover quickly if a disaster occurs.
• Service availability: many attacks can result in an interruption, or in a reduction of availability, of the system. The protection against such a possibility tries to prevent potential attacks such as message flooding or replay attacks.

2.2 Security mechanisms
2.2.1 OSI Security mechanisms
These security services rest on eight specific security mechanisms and five common security mechanisms. Each security service uses one or several specific mechanisms.
2.2.1.1 Security specific mechanisms
- Ciphering
- Digital signature
- Access control
- Data integrity
- Authentication data exchange
- Traffic padding
- Routing control
- Notarisation
For example, the authentication services use the ciphering, digital signature and authentication data exchange mechanisms.
2.2.1.2 Security common mechanisms
- Trusted functionality
- Security label
- Security event detection
- Security audit log
- Security recovery
The OSI communication layers where the security services have to be implemented are also specified in the ISO 7498-2 framework. Most of the services can be implemented in the Network Layer or in the Transport Layer. All the services can be offered at the Application Level.
2.2.2 Security levels: Evaluation assurance levels
For more details see part 5: Security evaluation.
Evaluation assurance levels define a scale to measure the criteria for the security evaluation of a product or a service. They provide a uniformly increasing scale that


balances the levels of assurance obtained with the cost and feasibility of acquiring that degree of assurance. A first scale was defined in the US TCSEC (Orange Book, 1985). Additional criteria (correctness criteria E0 to E6) were introduced in the European ITSEC (1991). Its functionality classes F-C1, F-C2, F-B1, F-B2, F-B3 and F-A1 are hierarchically ordered confidentiality classes, which correspond closely to the functionality requirements of the TCSEC classes C1 to A1. "Common Criteria" (1997) defines equivalent evaluation levels. The next table shows these 3 scales.

CC level                                         | TCSEC level                            | ITSEC level
EAL0                                             | D   Minimal protection                 | E0
EAL1  Functionally tested                        | -                                      | -
EAL2  Structurally tested                        | C1  Discretionary Security Protection  | F-C1, E1
EAL3  Methodically tested and checked            | C2  Controlled access protection       | F-C2, E2
EAL4  Methodically designed, tested and reviewed | B1  Labeled Security Protection        | F-B1, E3
EAL5  Semiformally designed and tested           | B2  Structured protection              | F-B2, E4
EAL6  Semiformally verified design and tested    | B3  Security Domains                   | F-B3, E5
EAL7  Formally verified design and tested        | A1  Verified design                    | F-A1, E6

2.2.3 Encryption
Cryptography is the art, or science, aimed at providing a way to protect information that is required to be kept secret, in such a way that only a well-defined group of entities is able to access it. It is the most important mechanism used to protect data. The term cryptography comes from the Greek cryptos, which means hidden. The traditional cryptographic processes transform the original document, called plain text, using a cryptographic algorithm and a secret piece of information, called the key, producing a ciphertext that cannot be transformed back into the original text without the key with which it was encrypted. Nowadays, cryptography is used not only to keep secret information hidden, but also for other purposes, and in particular for the authentication of users, applications and hosts.
Cryptography is not a new art. The first examples of protection of information being transmitted go back to the ancient Egyptians. One of the most ancient cipher algorithms was the one used by Caesar to protect the messages sent to, and received from, the generals located in remote regions of the Roman Empire. The algorithm was actually really simple, and consists in substituting each letter of the message with the one three positions later in the alphabet. The generals, being aware of the process, were then able to decrypt the message by substituting for each letter the corresponding one three positions earlier. In such a way, for example, the word "CAESAR" became


"FDHVDU" , a word that an enemy could not be able to understand even if the messenger who carrying the document were taken prisoner. In that example, "CAESAR" represent the plaintext, "FDHVDU" the ciphertext that is transmitted, the substitution process represents the cryptographic algorithm and the integer value of "3" could be considered the key.


2.2.3.1 Symmetric encryption - secret-key cryptosystem
Adapted from OSF RFC 68.1
A secret-key cryptosystem is an encryption/decryption system or mechanism using a secret key known only to the sender and the receiver. For privacy and authentication, a sender can encrypt a message in a secret key shared with the receiver. If the receiver can decrypt the message using the shared secret key, then the receiver knows that the message was encrypted by the sender and that the message has not been modified.
See DES, IDEA, RC4, RC5, AES.
2.2.3.2 Asymmetric encryption - public-key cryptosystem
Adapted from OSF RFC 68.1
A public-key cryptosystem is an encryption/decryption mechanism using a private key known only to the owner and a public key that is published. For privacy, messages directed to a principal can be encrypted in the public key of the principal. Only the principal can decrypt these, using the principal's private key. For authentication, a principal can encrypt information in the principal's private key. Other principals can verify that the information originated from the first principal if the information can be decrypted using the first principal's public key.
Key pair: a public key and its associated private key in a public-key cryptosystem.
See: RSA, Diffie-Hellman.
2.2.4 Hash functions
A hash (or one-way hash) is a function that takes plaintext (message, file, ...) of arbitrary length as input and outputs a small fixed-length value, the hash value, that serves as a message digest of the input message. The idea behind these functions is that it should be computationally infeasible to find two inputs that produce the same output; thus a modified file will not have the same hash value.
See 2.2.6.9: MD5.
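The fixed-length digest property described in 2.2.4 can be observed directly with the hash functions available in the Python standard library. The sketch below is only an illustration; MD5 is used because it is the algorithm discussed in 2.2.6.9, not because it is recommended for new designs:

    import hashlib

    digest1 = hashlib.md5(b"original contract text").hexdigest()
    digest2 = hashlib.md5(b"original contract text.").hexdigest()   # one extra character

    print(digest1)              # always 32 hexadecimal digits (128 bits)
    print(digest2)              # a completely different value
    assert digest1 != digest2   # a modified file no longer matches its recorded digest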

2.2.5 Digital signature and certificate
2.2.5.1 Digital signature
A digital signature is an unforgeable electronic signature that authenticates a message sender and simultaneously guarantees the integrity of the message. It can also assure non-repudiation of origin. The digital signature rests on asymmetric encryption.
See: 2.2.6.10: DSA-DSS.
2.2.5.2 Digital timestamping


The timestamping service concatenates a time and date to the hash of a document and signs the result. The timestamp of a signed document can be included with the document, as proof that it existed in that form at the time of the timestamp.
2.2.5.3 Digital certificate
A digital certificate indicates the ownership of a public key by an individual or other entity. It allows verification of the claim that a given public key does in fact belong to a given individual. In its simplest form, a certificate contains a public key and a name. As commonly used it also contains: an expiration date, the name of the certifying authority that issued the certificate, a serial number and, most importantly, the digital signature of the certificate issuer.
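As an illustration of the signature mechanism of 2.2.5.1 (sign with the private key, verify with the public key), the following sketch uses the third-party pyca/cryptography package; the package, the key size and the padding choices are assumptions of this example rather than recommendations of the guideline:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    document = b"text of the agreement"

    signature = private_key.sign(document, pss, hashes.SHA256())

    # Anyone holding the corresponding public key (for example, taken from a
    # certificate) can verify; verify() raises InvalidSignature if either the
    # document or the signature has been altered.
    private_key.public_key().verify(signature, document, pss, hashes.SHA256())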

2.2.6 Basic cryptographic algorithms
2.2.6.1 RSA
Adapted from the RSA Labs FAQ
RSA is a public-key cryptosystem for both encryption and authentication; it was invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman. It works as follows: take two large primes, p and q, and find their product n = pq; n is called the modulus. Choose a number, e, less than n and relatively prime to (p-1)(q-1), which means that e and (p-1)(q-1) have no common factors except 1. Find another number d such that (ed - 1) is divisible by (p-1)(q-1). The values e and d are called the public and private exponents, respectively. The public key is the pair (n, e); the private key is (n, d). The factors p and q may be kept with the private key, or destroyed. The security of RSA is related to the assumption that factoring is difficult.
2.2.6.2 Diffie-Hellman
The Diffie-Hellman key agreement protocol (also called exponential key agreement) was developed by Diffie and Hellman in 1976. The protocol allows two users to exchange a secret key over an insecure medium without any prior secrets. The protocol has two system parameters, p and g. They are both public and may be used by all the users in a system. The Diffie-Hellman key exchange is vulnerable to a middle-person (man-in-the-middle) attack.
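The RSA arithmetic of 2.2.6.1 can be followed with deliberately tiny numbers. The sketch below is purely didactic: real keys use primes hundreds of digits long, and the values here are chosen only so that the relationship between e, d and n is easy to check by hand:

    p, q = 5, 11
    n = p * q                  # modulus: 55
    phi = (p - 1) * (q - 1)    # 40
    e = 3                      # public exponent, no common factor with 40
    d = 27                     # private exponent: e*d - 1 = 80 is divisible by 40

    m = 42                     # message, must be smaller than n
    c = pow(m, e, n)           # encryption: c = m^e mod n  (here c = 3)
    assert pow(c, d, n) == m   # decryption: c^d mod n recovers 42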


2.2.6.3 DES (Data Encryption Standard)
This secret-key cryptosystem was adopted by the U.S. government in 1977 as the federal standard FIPS 46-1 (Federal Information Processing Standard) for the encryption of commercial and sensitive yet unclassified government computer data. DES is also defined in the ANSI standard X9.32. It was originally developed by IBM. DES has been extensively studied since its publication and is the best known and most widely used symmetric algorithm in the world. DES has a 64-bit block size and uses a 56-bit key during execution (8 parity bits are stripped off from the full 64-bit key). It was originally designed for implementation in hardware. Encryption and decryption are structurally identical, though the subkeys used during encryption at each of the 16 rounds are taken in reverse order during decryption. The security is provided by a non-invertible function applied in every round.
DES can be used for encryption in several officially defined modes, and these modes have a variety of properties. ECB (electronic codebook) mode simply encrypts each 64-bit block of plaintext one after another under the same 56-bit DES key. In CBC (cipher block chaining) mode, each 64-bit plaintext block is bitwise exclusive-ORed with the previous ciphertext block before being encrypted with the DES key. Thus, the encryption of each block depends on previous blocks and the same 64-bit plaintext block can encrypt to different ciphertext blocks depending on its context in the overall message. CBC mode helps protect against certain attacks, but not against exhaustive search or differential cryptanalysis. CFB (cipher feedback) mode allows one to use DES with block lengths less than 64 bits. The OFB mode essentially allows DES to be used as a stream cipher. In practice, CBC is the most widely used mode of DES, and it is specified in several standards.
For additional security, one could use triple encryption with CBC (Triple-DES). A number of modes of triple encryption have been proposed:
• DES-EEE3: three DES encryptions with three different keys.
• DES-EDE3: three DES operations in the sequence encrypt-decrypt-encrypt with three different keys.
• DES-EEE2 and DES-EDE2: same as the previous formats except that the first and third operations use the same key.
DESX is a strengthened variant of DES supported by RSA Data Security. The difference between DES and DESX is that, in DESX, the input plaintext is bitwise XORed with 64 bits of additional key material before encryption with DES and the output is also bitwise XORed with another 64 bits of key material. (The DESX construction is due to Rivest.)
2.2.6.4 AES (Advanced Encryption Standard)
DES was last re-certified in 1993, by default. NIST has indicated, however, that it will not recertify DES again. In fact, in 1997 NIST announced an international competition for DES's replacement, and called it AES, the Advanced Encryption Standard. The requirements were quite simple: the algorithm must implement symmetric cryptography, must support three key sizes (128, 192 and 256 bits), and all selected candidates have to allow everyone worldwide to use it on a royalty-free basis. 15 candidates were accepted for consideration in the first round of the AES initiative. Among these were close variants of some of the more popular and trusted algorithms currently available, such as CAST-256 (Entrust Technologies), DFC (CNRS/ENS France), Magenta (Deutsche Telekom), Mars (IBM), RC6 (RSA Labs), SAFER+ (Cylink), etc. A first selection kept 5 proposals: MARS from IBM, RC6 from RSA, Rijndael by J. Daemen and V. Rijmen, Serpent by R. Anderson, E. Biham and L. Knudsen, and Twofish by B. Schneier, J. Kelsey, D. Whiting, D. Wagner, C. Hall and N. Ferguson. In October 2000, NIST selected Rijndael as the proposed AES. It has to pass three months of "public comments"; the Secretary of Commerce should then approve it, making Rijndael the official AES.
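For comparison with the DES modes described above, the sketch below encrypts and decrypts two blocks with AES in CBC mode using the third-party pyca/cryptography package. The package, the 256-bit key and the block-aligned sample message are assumptions of this illustration, not requirements of the standard:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)     # 256-bit AES key
    iv = os.urandom(16)      # initialisation vector that starts the CBC chaining

    plaintext = b"exactly 32 bytes of sample text!"    # CBC works on whole blocks
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext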


2.2.6.5 IDEA (International Data Encryption Algorithm)
IDEA is the second version of a block cipher designed and presented by Lai and Massey. It is a 64-bit iterative block cipher with a 128-bit key and eight rounds. The cipher structure was designed to be easily implemented in both software and hardware, and the security of IDEA relies on the use of three incompatible types of arithmetic operations on 16-bit words. The speed of IDEA in software is similar to that of DES. IDEA is generally considered secure.
2.2.6.6 RC2
RC2 (Rivest, RFC 2268) is a conventional (secret-key) block encryption algorithm, which may be considered as a proposal for a DES replacement. The input and output block sizes are 64 bits each. The key size is variable, from one byte up to 128 bytes, although the current implementation uses eight bytes. The algorithm is designed to be easy to implement on 16-bit microprocessors. On an IBM AT, the encryption runs about twice as fast as DES (assuming that key expansion has been done).


2.2.6.7 RC4
RC4 is a stream cipher designed by Rivest for RSA Data Security. It is a variable key-size stream cipher with byte-oriented operations. The algorithm is based on the use of a random permutation. This algorithm is confidential and proprietary to RSA Data Security. RC4 is a secret-key cryptosystem that runs very quickly in software.
2.2.6.8 RC5
RC5 is a fast block cipher designed by Rivest for RSA Data Security. It is a parameterised algorithm with a variable block size, a variable key size, and a variable number of rounds. The block size can be 32, 64, or 128 bits long. The number of rounds can range from 0 to 255. The key can range from 0 bits to 2048 bits in size. Such built-in variability provides flexibility in levels of security and efficiency. There are three routines in RC5: key expansion, encryption, and decryption.
2.2.6.9 MD5
MD5 is an algorithm that takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given pre-specified target message digest. The MD5 algorithm is intended for digital signature applications, where a large file must be "compressed" in a secure manner before being encrypted with a private (secret) key under a public-key cryptosystem such as RSA. The MD5 algorithm is an extension of the MD4 message-digest algorithm (RFC 1320). MD5 is slightly slower than MD4, but is more "conservative" in design. MD4 was designed to be exceptionally fast; MD5 backs off a bit, giving up a little in speed for a much greater likelihood of ultimate security.
2.2.6.10 DSA - DSS
The Digital Signature Algorithm (DSA) was published by the National Institute of Standards and Technology (NIST) in the Digital Signature Standard (DSS), which is a part of the U.S. government's Capstone project. DSS was selected by NIST, in co-operation with the NSA, to be the digital authentication standard of the U.S. government (May 1994).
2.2.7 Intellectual Property Rights (IPR) - Watermarks
A digital watermark is an invisible identification code that is permanently embedded in multimedia data (audio, image, text or video); it allows unique identification of copyright owners, buyers and distributors, and provides a strong deterrent to illegal copying.
To guarantee intellectual property rights on software, electronic documents or patents, it is possible to deposit these data with a Trusted Third Party. These data have to be signed and timestamped, with the help of very powerful and very reliable cryptographic means, to guarantee the rights of the owners over a very long period (up to 20 or 30 years). In general


it is sufficient to deposit a (timestamped and signed) message digest of the corresponding documents, which will then be preserved by their owner.
2.2.8 Public key infrastructures
A Public Key Infrastructure (PKI) allows business to be conducted electronically with the confidence that:
• The person sending the transaction is actually the originator
• The person receiving the transaction is the intended recipient
• Data integrity has not been compromised
It is very important that digital signatures should become accepted as legal evidence of the authenticity of documents and of the identity of the parties involved in forming binding contracts and other agreements when communicating electronically. While not all signatures will carry equal weight as evidence, the system should certainly not introduce mechanisms which might allow a person to claim that some other holder of the key misused it to forge a signature - there should be no other key holders. Any scheme which allowed access to private signature keys would severely weaken the non-repudiation properties of digital signatures.
Authentication involves the use of digital public key infrastructure software products for the encryption and digital signature of information communicated between client workstations over the Internet and intranets. This technology will become much more common as organisations strive to gain a higher degree of control over who accesses information.
Key management deals with the secure generation, distribution, and storage of keys. Secure methods of key management are extremely important. Once a key is randomly generated, it must remain secret to avoid unfortunate mishaps (such as impersonation). In practice, most attacks on public-key systems will probably be aimed at the key management level, rather than at the cryptographic algorithm itself. Users must be able to securely obtain a key pair suited to their efficiency and security needs. There must be a way to look up other people's public keys and to publicise one's own public key. Users must be able to legitimately obtain others' public keys; otherwise, an intruder can either change public keys listed in a directory, or impersonate another user. Set against this, there will not be any user demand for a backup mechanism for signature keys, since the loss of a key simply means that a new key has to be obtained before further signatures can be created. There is never any reason to require recovery of the old private key.
The role of Trusted Third Parties (TTP) in relation to signature keys (and this is by far their most important role) is to certify the public keys as belonging to particular individuals or organisations, or as giving certain authority. Certificates are used for this purpose. Certificates must be unforgeable. The issuance of certificates must proceed in a secure way, impervious to attack. In particular, the issuer must authenticate the identity and the public key of an individual before issuing a certificate to that individual.
Government and private entities that accept electronic communications need assurance that the digitally signed messages they receive can be verified with reference to a certificate that is appropriately trustworthy for the intended purpose. There is an increasing recognition that this assurance can be provided through the use of a certificate policy. The term certificate policy comes from the X.509 version 3 certificate specification, where it is defined as follows:


certificate policy: a named set of rules that indicates the applicability of a certificate to a particular community and/or class of application with common security requirements. For example, a particular certificate policy might indicate the applicability of a type of certificate to the authentication of electronic data interchange transactions for the trading of goods within a given price range.
Many Internet protocols and applications which use the Internet employ public-key technology for security purposes and require a public-key infrastructure (PKI) to securely manage public keys for widely-distributed users or systems. The X.509 standard constitutes a widely-accepted basis for such an infrastructure, defining data formats and procedures related to the distribution of public keys via certificates digitally signed by certification authorities (CAs). Part 4.2 of this document provides a detailed view on PKI.

2.3 Smart Card - ICC
Smart cards can be used in different ways to secure information systems or communications:
• As a physical access control means for buildings or consoles
• As an authentication means
• To store secure data such as passwords, signatures, cryptographic keys, ...
To provide security to applications, the smart card must itself be very secure. A real smart card, i.e. a chip card with a full processor, controls any kind of access to the application data through its secure operating system.
2.3.1 Definition - Characteristics

The term of "Smart Card" is very useful but ambiguous and used in many different ways. ISO uses the term "Integrated Circuit Card" (ICC) to encompass all those devices where and integrated circuit is contain within an ISO ID1 identification card piece of plastic. The card is 85,6mm*53,98mm*0,76mm and is the same as the ubiquitous bank card . Integrated Circuit Cards come in two forms, contact and contactless. The Contact Card is the most commonly seen ICC to date largely because of its use in France and now other parts of Europe as a telephone prepayment card. An "intelligent" smart card is made of 6 or 7 components : • a CPU (Central Processor Unit) • a ROM (Read Only Memory) • an EEPROM (electronically erasable programmable ROM) : a non volatile read/write memory • a RAM (Random Access Memory) : a temporary working memory • an I/O system (Input/Output) Security Subgroup


• a gold connector plate with 8 contacts (only 6 are actually used)
• an optional cryptographic coprocessor

(Figure: ICC chip - the ROM holds the operating system; the EEPROM provides application storage and holds access conditions and keys; the RAM is temporary operating storage; CPU, internal bus, access control logic, cryptographic coprocessor and I/O system complete the chip.)

The chip must contain the communication logic by which it accepts commands from the card acceptance device (CAD, smart card reader, chip card reader) and through which it receives and transmits the application data.
2.3.2 Smart Card Standards
The International Standards Organisation (ISO) began working on the standardisation of chip cards as early as 1983. The basis for the standardisation comprised the existing standards 7810, 7811, 7812 and 7813, which define the physical characteristics of identification cards. The foundation of virtually all existing smart card standards is ISO 7816. It specifies physical and electrical characteristics of the smart card interface, formats and protocols for information exchange with smart cards, and functions provided by smart cards (see list).
European Standards (CEN)
Due to early smart card projects in Europe and the involvement of the different telecommunication organisations, the development of standards in Europe has been driven with some focus on telecommunications use (CEN 726). These standards have been influencing the ISO standardisation work and are widely reflected in the above ISO standards.
OSF DCE Standard
RFC 57.0 introduces smart card terminology, describes current smart card technology (including physical and logical characteristics), discusses the current status of smart card standardisation, and closes with a brief discussion of the benefits DCE could gain by utilising smart cards. RFC 57.1, titled "DCE Smart Card Integration", describes a proposed DCE implementation.
Industry Standards and Initiatives
Various de facto industry standards have been developed or are under development by the respective industries based on emerging markets and businesses. Examples are:
• EMV (Europay, Mastercard and Visa): Europay, Mastercard and Visa have co-operated to create global specifications for the application of smart cards in the payment industry. The EMV specifications represent a stable foundation for the financial industry to begin deployment of smart card applications that are capable of functioning across borders and systems.
• OpenCard Framework
• PC/SC
• JavaCard: JavaCard is a standard set of APIs and classes that allows Java applets to run directly on a standard ISO 7816 compliant card. The specifications are announced by Sun and Visa, with the support of leading smart card suppliers. By offering chip-independent and secure execution of different applications in conjunction with guaranteed data privacy and flexible object sharing, JavaCards provide a new approach to a real multi-application environment.


The specifications are available through the Internet home pages of the respective companies.
2.3.3 Smart Card Security
2.3.3.1 Card Holder Verification
An ICC stores passwords in its EEPROM in such a way that they can never be read out from the outside. The comparison with a secret password (CHV, Card Holder Verification value), known only to the card holder, can be done within the chip. Data and functions can be protected such that access is only allowed after a successful presentation of the matching password. After a certain number (generally 3) of incorrect attempts, the password is blocked and the card can no longer be used. While one password is generally sufficient per card, a second password can be used to unblock the first password in case it has been invalidated.
2.3.3.2 Mutual Authentication
Access to data can be restricted such that prior cryptographic authentication of the external world is required. For this purpose the card generates a challenge (i.e. a random number) and presents it to the external world. The external world uses a secret cryptographic key to transform the challenge number received from the card into a cryptogram and presents the result to the card. The card internally verifies the cryptogram with the corresponding key, securely stored in the EEPROM of the chip. Based on the result, access to certain protected data is granted or denied.
2.3.4 Security functionality
2.3.4.1 Authentication
A smart card can be used as an authentication means using the internal authentication mechanism. The external world generates a random number (challenge) and sends it to the card. The card uses this random number and a secret key, securely stored within its EEPROM memory, to generate a cryptogram, and returns it to the external world. The external world verifies the cryptogram. If the result of the verification is positive, the external world can be sure that the card is authentic.
2.3.4.2 Digital signature
A digital signature allows the authenticity of information to be verified without the need for a shared secret key. Another advantage of digital signatures with public key algorithms is the "non-repudiation" function: the sender of a signed message cannot deny that he created the message. Since the signature is based on his private key, known only to him, nobody else could have created the message. A secret key, stored in the card EEPROM, is used to produce the card holder's signature.
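The challenge-response exchange of 2.3.3.2 and 2.3.4.1 can be sketched as follows. Real cards compute the cryptogram with DES or triple-DES inside the chip; the HMAC used here is only a stand-in so that the illustration runs with the Python standard library, and all names are hypothetical:

    import hashlib, hmac, os

    shared_key = os.urandom(16)      # secret stored in the card's EEPROM and
                                     # known to the verifying terminal

    def cryptogram(key, challenge):
        # stand-in for the card's DES-based computation
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # Internal authentication: the external world checks that the card is genuine.
    challenge = os.urandom(8)                            # random number sent to the card
    card_response = cryptogram(shared_key, challenge)    # computed inside the card
    expected = cryptogram(shared_key, challenge)         # computed by the terminal
    assert hmac.compare_digest(card_response, expected)  # card holds the right key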


2.3.4.3 Data integrity
Sometimes it is important to ensure that data exchanged with the smart card is authentic and remains unchanged when travelling through an uncontrolled and perhaps unsecured environment. This could be, for example, a terminal or a network like the Internet. To ensure the integrity and authenticity of sensitive data exchanged with the smart card, access conditions can be associated with the data stored on the card as described above, thus ensuring the integrity of the data when being exchanged with the external world.
Private Key (DES) Based Data Authentication
Data integrity can be achieved through message authentication. A Message Authentication Code (MAC) is calculated based on a secret key, a random number and the data which is to be exchanged. The MAC is appended to the data and transmitted together with the data. The random number is included to prevent replaying of messages: each transaction has a different MAC, even if the data are equivalent. The receiving partner performs the same computation based on the same parameters and compares the result. If both are equivalent, it is proven that the data
• were not modified during transmission, and
• were produced by a legitimate sender.
Public Key (RSA) Based Data Authentication
Generally, smart cards support public key algorithms, such as RSA, and may use digital signatures for data authentication and integrity services.
2.3.4.4 Data confidentiality
Secret data like cryptographic keys or passwords (Card Holder Verification values) can be protected on their way from the external world to the smart card through data encryption. A shared secret key (symmetric algorithm) is used.
2.3.4.5 Secure data storage and attacks on smart card security
Many applications or protocols need secret information to provide a secure service. A smart card can store such data with a high level of security. There are two ways of attacking smart cards:
• Destructive reverse engineering of the silicon circuit, including the contents of the ROMs.
• Discovering the memory contents by other means.
A well equipped laboratory can do both. Differential fault analysis is a powerful attack on crypto-systems embodied in devices such as smart cards. If the device can be made to deliver erroneous output under stress (heat, vibration, radiation, ...), then a cryptanalyst comparing correct and erroneous outputs has a


dangerous entry point. The weak point of a smart card system (in addition to theft...) is the connection between the card and the CAD (card reader), which can be eavesdropped on to pick up the CHV of the card.

2.4 "Individual" Security : OSF DCE Security Architecture From 1978, packets networks in connected mode ( X25 protocol ) provided some level of security by the Users Closed Group . An UCG is represented by lists of its members located in the network access switches. These ones refuse the access to all host not belonging of the UCG if it wants to communicate with one of its members. This access control can be bi-directional or limited to incoming or outgoing calls. It is possible, on private networks, to complete this mechanism of access control by an authentication of the calling systems. Works on the security of the packets networks have been initiated by request of the DoD (Department of Defense of the United States) and were developed in the frame of the Kerberos project in the beginning of years 1980 (basis document : 1985). These works have provided a basis to proposals of the OSF (Open System Facilities) for its Distributed Communication Environment : DCE. Other works have been led in the framework of banks to make their trades secure. They have given protocols as PSIT or ETEBAC 3 or 5. Finally some constructors, notably IBM, have developed their own tools (for example DES and the means to make exchanges secure). 2.4.1

Kerberos Architecture

Kerberos is based on a password ciphering system using DES secret keys. A hierarchical trusted server knows the user keys and the server keys. Kerberos establishes an asymmetrical end-to-end link between user and server. No password is transmitted on the network, but user passwords are used as ciphering keys for user or server keys. Originally a prototype, it has been the object of different "incarnations". It does not have a standardised interface and reveals some weaknesses.
The diagram hereafter illustrates the key exchange mechanisms that allow reciprocal authentication by an exchange of secure passwords. It requires a trusted key server. Exchanges between a client and an application server are secured by a key 3. This key is transmitted to the client in a "casket" closed by the client's private key 1. In this "casket" the key server also sends a "casket" destined for the application server, closed by the private key 2 of this server, and containing the key 3 used to secure the exchanges. The client transmits this "casket" to the application server in order to communicate the key 3 to it.
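The "casket" idea, a session key sealed once under the client's key and once under the application server's key by the trusted key server, can be paraphrased in a few lines. The sketch below uses Fernet from the third-party pyca/cryptography package as a convenient stand-in for the DES-based sealing of the real protocol; the names and the library choice are assumptions of the example:

    from cryptography.fernet import Fernet

    client_key = Fernet.generate_key()        # key 1, shared between client and key server
    app_server_key = Fernet.generate_key()    # key 2, shared between application server and key server

    # Trusted key server: create session key 3 and seal it twice.
    session_key = Fernet.generate_key()
    casket_for_server = Fernet(app_server_key).encrypt(session_key)
    casket_for_client = Fernet(client_key).encrypt(session_key + b"||" + casket_for_server)

    # Client: open its casket, keep key 3, forward the inner casket unopened.
    opened = Fernet(client_key).decrypt(casket_for_client)
    key3_at_client, forwarded = opened.split(b"||", 1)

    # Application server: open the forwarded casket and recover the same key 3.
    key3_at_server = Fernet(app_server_key).decrypt(forwarded)
    assert key3_at_client == key3_at_server   # both ends now share key 3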


The authentication protocol uses a secure Clients-Services database. The diagram hereafter shows the sequence of exchanges between the client, the Kerberos authentication server (KAS), the ticket granting server (TGS) and the application server:
1: initial authentication demand
2: ciphered authentication parameter, ciphered by the user's password
3: password used to decipher the ticket
4: request to the ticket service
5: reception of a ciphered ticket of service
6: dispatch of the ticket of service and of the parameter to the server
7: control of the ticket of service; return of the ciphered hour for authentication
2.4.2 OSF DCE Architecture
Client stations have secure access to servers thanks to an authentication and access control protocol making use of several servers collaborating to provide security. A Security server makes it possible to centralise passwords in a single place. During login it delivers a ticket. A Time server allows the use of time-stamps or a limited lifetime for tickets, to avoid them being replayed. The network can be divided into several cells: a cell name server concentrates access rights in a single place. Inter-cell security protocols allow a single login; they use a secure RPC (Remote Procedure Call) protocol.


The authentication is based on the delivery of tickets with a limited validity period:
TGT: Ticket-Granting Ticket
PAC: certificate of privilege attribution
CKi: time i (time stamp)
It follows the simplified functioning illustrated by the diagram (client, authentication server AS, privileges server PS, ticket granting server TGS, application server):
1: demand of a TGT (DCE login, without password)
2: TGT and CK1 (ciphered by the password); deciphering of the TGT and CK1 with the password
1*: ticket demand for the privileges server
2*: ticket for the privileges server and CK2
3: demand of a PTGT
4: TGT with PAC and CK3
5: demand of a ticket for a service
6: ticket of the service and CK4
7: presentation of the ticket (ticket + request)
8: exchange of random numbers


2.5 "Collective" Security: Firewalls and Virtual Private Networks
Enterprises generally wish to have private networks (intranets) interconnected by a private wide-area network, but also connected to the Internet, or interconnected via the Internet, so as to constitute a virtual private network (VPN). Internet technology was not conceived to be secure, and hardware or software equipment has to be introduced to improve security.
2.5.1 Firewalls
A firewall is a system, or a set of systems, that strengthens access control at the frontier between networks, generally between a secure private network and a public network such as the Internet, where it constitutes a choke point. Firewalls do not correspond to one precise piece of equipment but rather to a great variety of equipment resting on two basic mechanisms:
- blocking part of the traffic that crosses them;
- allowing messages of certain applications to cross.
In practice they are the implementation of an access-control policy that has to be very well defined: users must be able to access the services and data they need, but must not have more rights than necessary. Everything lies in this balance between needs and restrictions, and firewalls are only an implementation of this policy. The policy consists in restricting accesses from the exterior (of the secure network) to:
- all of the systems of the protected network, or only a part of its components;
- certain services: electronic mail (e-mail), Web, file transfer (FTP), remote access (Telnet, rlogin, finger, ...), for example;
but also in restricting services towards the exterior, to avoid the leakage of information, not only from users but also following intrusions from the exterior.
Some services are very insecure. For example the POP server (for electronic mail access) requires the identifier and the password of each client to be transmitted in clear at regular intervals. The network administration service SNMP (version 1) makes it easy to discover the composition and characteristics of a network and to act on its components from a distance; in each message the "Community" field, a kind of password that can exceed the rights of the "root" user, is transmitted in clear.

2.5.1.1 Against what does a firewall protect?

A firewall can protect against unauthorised accesses from the external world. It constitutes a choke point where security and audit can be imposed. On the other hand, firewalls do not protect against internal attacks or against attacks that do not pass through them. Nor do they protect against attacks transmitted via data introduced into the secure network: viruses, or indirect attacks by Java applets or ActiveX controls.


The arrival of these new objects thus created a security hole via Web servers, which until then had often been considered a reasonably safe medium. Implementing firewalls requires reducing the access points to the secure network by forbidding every access that does not pass through the firewall (backdoors: for example direct telephone lines or X.25 or Frame Relay networks): it is no use installing a reinforced door on a house made of paper or whose windows are open.

2.5.1.2 How does a firewall protect?

Two types of functions, providing more or less transparency or constraint to the user, can be implemented:
- IP packet filtering at the network level, in the entry routers (filtering routers or a subset of a firewall);
- message filtering at the application level, using proxy server(s). This system is more opaque and more restrictive: a proxy has to be installed for each type of application to protect, access is authorised only after identification or authentication, and after this control the proxy lets the user connect to the real service.
Firewalls can be classified into three types:
- Network-level firewalls block or permit traffic according to the source and destination addresses and ports of individual IP packets (a minimal filtering sketch is given after this list). They can provide Network Address Translation (NAT) to mask the real origin or destination of the traffic. Some are able to monitor and maintain information about the state of the connections passing through them, the contents of the data streams, etc. Most network-level firewalls route traffic in such a way that a validly assigned block of IP addresses is required. The main advantages of network-level firewalls are their speed and their transparency to the user.


- Application-level firewalls are typically hosts running proxy servers, permitting no direct traffic between networks and performing extensive monitoring and logging of traffic. Essentially, traffic goes in one way and out another. This allows application-level firewalls to be used as network address translators, as the traffic passes through an application which masks its origin. The proxies are specific to applications such as FTP or Telnet, or to protocols such as IIOP and SQL*Net. Application-level firewalls may affect overall performance and may require staff training; however, several of the more recent incarnations are entirely transparent. Their main advantages are the ability to provide detailed audit reports and to enforce more conservative security models than network-level firewalls.
- Stateful multilayer-inspection firewalls analyse all packet communication layers and extract the relevant communication and application state information. Instead of examining the contents of each packet, they compare the bit patterns to packets that are already known to be trusted. Stateful multilayer-inspection firewalls can be faster than application-level firewalls, the proxy mechanism being at a lower level, but they are also more complex; they can have some of the advantages and shortcomings of each of the previous two models.
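To make the packet-filtering idea concrete, the Python sketch below evaluates a small rule table against individual packets. It is a minimal illustration rather than a real firewall: rule ordering is simplistic, there is no stateful inspection or NAT, and the addresses and rules shown are invented for the example.

from ipaddress import ip_address, ip_network

# Each rule: (action, source network, destination network, destination port or None).
# The first matching rule wins; the final rule denies everything else.
RULES = [
    ("allow", "0.0.0.0/0", "192.0.2.0/24", 25),    # SMTP to the mail relay
    ("allow", "0.0.0.0/0", "192.0.2.0/24", 80),    # HTTP to the public Web server
    ("deny",  "0.0.0.0/0", "10.0.0.0/8",   None),  # nothing else reaches the intranet
    ("deny",  "0.0.0.0/0", "0.0.0.0/0",    None),  # default: deny
]

def filter_packet(src: str, dst: str, dport: int) -> str:
    """Return 'allow' or 'deny' for a single IP packet; first match wins."""
    for action, src_net, dst_net, port in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

print(filter_packet("198.51.100.7", "192.0.2.10", 25))   # allow
print(filter_packet("198.51.100.7", "10.1.2.3", 23))     # deny (Telnet into the intranet)

In a real filtering router the same decisions are expressed in the vendor's access-list or rule syntax, and the default rule should always be to deny.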

2.5.1.3 Some problems with existing application services

Some application services can pose problems, since the principle of network partitioning is to make the secure network opaque to external users. Name servers (DNS) in particular must not reveal the real addresses of the host systems. They will therefore generally be split into:

- an external public name server, placed outside the partitioned perimeter;
- an internal private name server containing the true addresses; it is not visible to external users. It resolves names directly for internal accesses and serves as a relay towards the external server when internal users need to resolve external names.

For an external user, the external name server provides a normal resolution; for the internal side the resolution is filtered and, in general, an address translation (IP NAT) is performed. This solution can pose problems for some anonymous FTP servers that want to know the caller's identity. To allow external access to e-mail, Web or file-transfer services, proxy servers can be used. Another, less secure, solution is to use internal servers with restrictions on the accessible ports. Generally, file transfer is done through a Web server and only Web accesses are allowed to read files. To put files on a Web server (uploading), a "Put" method can be implemented (as on the BSCW server): secure access can be provided by a form and an exchange of tickets.

2.5.1.4 Layout

Several layouts, more or less complex or restrictive, can be installed:
- Packet filtering. The firewall performs Internet access control by allowing only authorised IP packets to pass. Masking of the secure network by address translation (IP NAT) and advanced identification of users can be added. A security history and statistics are available.

- Dual-homed gateway. [Figure: dual-homed gateway between the secure network and the Internet - packet filtering, optional IP address translation, all applications crossing via a proxy server towards the information service.]

- Bastion host (screening host). More flexible and less secure than the preceding layout, it allows direct traffic between the interior network and the Internet; crossing through the proxy is mandatory only for some sensitive services.

[Figure: bastion host layout - packet filtering towards the Internet, some direct traffic allowed, a proxy giving access to the internal information servers.]

- Demilitarized zone (DMZ, screened subnetwork). Placed between the Internet and the interior network, this subnetwork is the only common place accessible from both interconnected networks. It masks the secure network and allows higher throughput than a screening host; some direct traffic can equally be allowed. The screened subnetwork hosts various servers: Web server, anonymous FTP server, public mail server, etc.

[Figure: DMZ layout - packet filtering on both sides, some direct traffic allowed, proxy server, e-mail and information servers in the screened subnetwork.]
2.5.1.5 Security analyser

Security analysis is a technique close to the firewall. A security analyser examines the traffic destined for certain servers and forces the disconnection of unauthorised users.

[Figure: security analysers, driven by a configuration file, protecting file servers and stations in secure domains; an intruder on the external network is detected and disconnected.]

2.5.2 Virtual Private Network (VPN)
2.5.2.1 Presentation
A network is called virtual when it does not rest on permanent physical connections but on temporary virtual links built up by routing packets between machines. A "Virtual Private Network" makes it possible to simulate a private network on a public network. Although realised on the public network, it is accessible only to authorised systems. A secure "private network" is thus created on an insecure public network, such as the Internet, which acts as the backbone of the extended private network. This network is private, which means that only authorised users have knowledge not only of the exchanged data but even of the existence of a privileged data exchange in a "private communication".
To transport data between two similar networks, or between a remote host and an interior network, over an intermediate public network, "tunnelling" technology is used. Tunnelling protocols wrap (or "encapsulate") one protocol in another protocol. Wrapping a point-to-point protocol in a routable protocol allows the point-to-point traffic to be transferred across a routed network such as the Internet. The result is that the user appears to have a point-to-point connection across a multi-hop network. Security mechanisms have to be established to achieve secure delivery.
Currently, the two most widely used tunnelling protocols are Microsoft's Point-to-Point Tunnelling Protocol (PPTP) and Cisco's Layer Two Forwarding (L2F). PPTP wraps packets in IP, the Internet layer-three protocol, while L2F uses layer-two protocols, such as Frame Relay and ATM, for tunnelling. The emerging tunnelling standard, Layer 2 Tunnelling Protocol (L2TP), combines the best points of PPTP and L2F.
2.5.2.2 Tunnelling
A tunnel can be "bored" between (local) networks or between a remote host and a Remote Access Server (RAS) in a network. In the latter case, the tunnel can be initiated by the client (voluntary tunnelling) or by the server (compulsory tunnelling). PPTP works with RAS. Security is provided because PPTP is an encrypted protocol. In addition, PPTP is able to encapsulate other protocols: this makes it possible to communicate via the Internet using NetBEUI and NWLink wrapped by PPTP, which is in turn wrapped by IP. L2TP is a protocol that tunnels PPP traffic over a variety of networks (e.g., IP, SONET, ATM).


Since the protocol encapsulates PPP, L2TP inherits PPP authentication, as well as the PPP Encryption Control Protocol (ECP) and the Compression Control Protocol (CCP). L2TP also includes support for tunnel authentication, which can be used to mutually authenticate the tunnel endpoints; however, L2TP does not define tunnel protection mechanisms. L2TP is a key tool in building virtual private networks (VPN) and represents the next generation of tunnelling protocols. L2TP tunnels PPP traffic over public or private networks, including the Internet. It provides authentication of the PPP connection using PAP, CHAP, etc., just as in a typical PPP session; this is intended to prevent unauthorised individuals from intercepting the communications, which can occur over normal Internet connections. L2TP may also use IPSec to provide strong tunnel authentication, privacy protection, integrity checking and replay protection. Both the voluntary and compulsory tunnelling cases are usable with IPSec.
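The encapsulation idea behind these tunnelling protocols can be shown with a toy Python example. The sketch below invents its own 10-byte outer header purely for illustration; it does not reproduce the real PPTP, L2F or L2TP frame formats.

import struct

def encapsulate(inner_frame: bytes, tunnel_src: bytes, tunnel_dst: bytes) -> bytes:
    """Wrap an inner (e.g. PPP) frame in an outer packet addressed to the tunnel endpoint."""
    header = tunnel_src + tunnel_dst + struct.pack("!H", len(inner_frame))
    return header + inner_frame

def decapsulate(outer_packet: bytes) -> bytes:
    """At the tunnel endpoint, strip the outer header and recover the inner frame."""
    length = struct.unpack("!H", outer_packet[8:10])[0]
    return outer_packet[10:10 + length]

ppp_frame = b"\x7e\xff\x03... point-to-point payload ..."
outer = encapsulate(ppp_frame,
                    tunnel_src=bytes([203, 0, 113, 5]),
                    tunnel_dst=bytes([198, 51, 100, 9]))
assert decapsulate(outer) == ppp_frame  # the remote end sees the original frame

PPTP, L2F and L2TP differ mainly in which layer carries the outer packet, as described above; the security of the tunnel itself comes from the encryption and authentication mechanisms added around this encapsulation.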


2.6 Security of Exchanges on the Internet
2.6.1 Risks

2.6.1.1 E-Mail

There are two main ways of providing e-mail security:
• S/MIME
• PGP
The most widely used mailers are Eudora, Microsoft Outlook and Outlook Express, Netscape Communicator (or Mozilla), and Lotus Notes. Recent versions of Eudora, Outlook and Netscape Communicator include S/MIME security services, generally using an RC2 key. PGP can be used by downloading plug-ins: plug-ins for Eudora 3.x, 4.x and Mac, Outlook 97 and 98, and Outlook Express 4 and 5 are included in PGP 6.0.2i, and a plug-in for Lotus Notes has been announced. The size of these plug-ins (pgpplugin.dll) is about 100 to 120 KB. A PGP plug-in for Netscape Communicator will probably never appear; it is only possible to prepare the encrypted text and copy it into the message via the clipboard.
2.6.1.2 Web browsers
Today, Web browsers are perhaps the most sophisticated and popular programs to be found on the Internet. Because of their complexity, the associated system security threat can make it easier for an intruder to gain unauthorised access to a browser user's computer by exploiting a security hole either in the browser or in the programs used in conjunction with it. Because a Web server is so easy to install and configure, it is also easy for hackers, even not very skilled ones, to set up a server aimed only at breaking into the computers of the browsers that connect to it. Often the attack can be carried out simply by persuading the user to view a particular Web document or to click on a certain hyperlink.
In general, the most dangerous sources of potential attacks are not security weaknesses of the browser itself but the external viewers. These are programs called by the browser in order to display a document of a particular type that the browser cannot handle by itself; some of them may themselves have security holes which can be exploited by a hacker. Another sort of vulnerability can be caused by extremely large objects that the browser is asked to handle. This kind of attack is a well-known source of bugs in many applications and operating systems, in particular UNIX, and is caused by buffers, zones of RAM used to store and manage data. These zones can often be filled beyond their capacity, so that the data overflow the bounds of the memory supposed to contain the buffer. A classic example of this phenomenon, found and fixed in many browsers recently, was a request with an extremely long URL, which caused a buffer overflow and made it possible to execute malicious program code.
Apart from textual and graphical contents, HTML pages can also contain executable programs that are dynamically downloaded to the users' stations, of two main types: applets and ActiveX controls. Note that these programs are different from the viewers described above, the viewers being applications already installed on the local station and launched to present a document of a particular type. Applets and ActiveX controls are instead actual applications that are downloaded with the page.


The concerns for this last kind of application are greater, because it can become more difficult for the user to control the programs being executed on the machine: every page can potentially cause the execution of malicious code. Applets in particular have been designed with the explicit goal of addressing these system security threats. Applets are written in Java, a language defined by Sun, and are executed in a very restricted environment in which untrusted code is executed securely: each instruction is checked a priori and is not allowed to execute any operation potentially dangerous for the local station. In particular, every access to local resources, such as the hard disk, is denied. However, since "regular" applications often need, for example, to access local files, several mechanisms have been devised to grant such rights to code. The first approach consists in requiring the user to explicitly set the permissions allowed to applets, relaxing the constraints otherwise fixed by the language. The user can grant different rights for each single operation, specifying in addition whether the right should be granted in any case to any applet or whether a user confirmation should be explicitly requested for each attempt. A better solution is the one based on signed applets. These applets are explicitly signed by the host that provides them; when a user decides to trust a host, any signed applet coming from it is executed in a privileged environment without further control. Thanks to the signature, this approach makes it possible not only to check the origin of the code but also to detect any modification, accidental or intentional, that the code may have undergone during its transmission.
2.6.1.3 Web servers
Apart from possible bugs in the executable code, the most serious security vulnerabilities of Web servers are those caused by CGI (Common Gateway Interface) programs. These are actual programs run on the server machine, which are supposed to carry out some operation in order to produce a suitable output, usually an HTML page built from the parameters specified in the CGI call. A typical example is the result of the queries commonly used to search for a particular subject in search engines (see for example www.altavista.digital.com or www.hotbot.com): the parameters of the query are the requested keywords, and the dynamically produced pages contain the list of items that match the query. The problem with CGIs is that, if CGI programs are not carefully written, they may allow malicious client-side attackers to execute commands on the Web server host. This risk also exists with all server-side active documents. To limit the impact of this problem, such active documents must be used only when needed, and the right to install them must be reserved to trusted persons only. In addition, it is critical to maintain the server configuration very carefully.
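The kind of mistake that makes a CGI program dangerous, and the corresponding defensive style, can be illustrated with the Python sketch below. The example is generic and hypothetical: the catalogue file, the use of grep and the whitelist rule are all invented for the illustration.

import subprocess
from urllib.parse import parse_qs

def search_unsafe(query_string: str) -> bytes:
    """DANGEROUS: the keyword is pasted into a shell command line.
    A keyword such as "foo'; cat /etc/passwd'" runs an arbitrary command."""
    keyword = parse_qs(query_string).get("q", [""])[0]
    return subprocess.run("grep -i '%s' /var/www/data/catalogue.txt" % keyword,
                          shell=True, capture_output=True).stdout

def search_safer(query_string: str) -> bytes:
    """Safer: no shell is involved, the keyword is passed as a single argument,
    and it is first validated against a restrictive whitelist."""
    keyword = parse_qs(query_string).get("q", [""])[0]
    if not keyword.isalnum() or len(keyword) > 40:
        return b"invalid keyword"
    return subprocess.run(["grep", "-i", keyword, "/var/www/data/catalogue.txt"],
                          shell=False, capture_output=True).stdout

The unsafe version lets a crafted query parameter run an arbitrary command with the privileges of the Web server; the safer version never invokes a shell and rejects anything but a short alphanumeric keyword.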

2.6.2 Security in Internet services
2.6.2.1 E-mail security
E-mail presents security problems at several levels. The most commonly considered are those concerning normal message exchanges. It is important to keep in mind that, by default, the mail system has no way to ensure the identity of the sender of a message: the information placed in the header of the message depends only on what the sender declared when configuring his mail client. On the other hand, it is possible to track the path followed by the message and to identify the originating machine (or at least its IP address), but there is no way to know who was using it.


This authentication of the sender can be achieved using public-key encryption systems, which can also provide privacy of the message content. Confirmation of reception is also a source of confusion: when the sender requests a "delivery report", he often means that he wants to know whether the receiver got his message. In fact he is only informed of the arrival of the message in the receiver's mailbox, and he gets no proof that the message was read.
Another level concerns attacks using the mail service on the network. They can be split into two categories:
• attacks on the mail servers;
• illegal or abnormal usage of the mail service.
The first category is simply part of the normal information-system security at each site connected to the network. It is important to watch the security alerts diffused by authorised organisations, and to install patches and software updates on a regular basis. The second category is more difficult to address. It needs specific tools to prevent abusive usage of the mail service, but the difficulty is to place the limit between normal and abnormal messages. Filters can be installed on the mail servers to check each incoming message, using a set of rules to decide whether the message can be accepted or must be rejected. These rules can apply to the header fields of the message, but also to the envelope, which is more reliable. It is not possible to give a full list of what can be considered an abnormal use of the mail system, because it depends on the site and evolves over time, but some points are critical.
A commonly used attack consists in sending an unacceptable message through a remote relay server, which then appears as the sender of the message. To do that, the sender needs only to set his "victim" as the outgoing mail server in his mail client configuration. This operation takes advantage of the mail-relaying capability of the server. This capability must absolutely be blocked; the best way to do so is to refuse to send outside the current domain any message coming from outside this domain.
Sending commercial or political advertisements to a huge list of persons is a newer activity which can be considered a form of attack: it is particularly difficult to sort the normal important messages from tens of large useless ones. This practice, called "spamming", is difficult to avoid: some servers on the network publish blacklists of spammers, which can be used to refuse mail coming from these addresses. It is also possible to analyse the recipient list and to reject messages sent to more than a certain number of persons.
As it is possible to track the path followed by a message and to get the IP address of the sending host, attacks frequently come from hosts not declared in the reverse DNS, in order to make the identification of the sending domain difficult. To avoid these attacks, many sites now refuse mail coming from such undeclared hosts. Unfortunately, some system administrators, mainly those of Internet providers, do not manage their reverse DNS properly. This means that some persons are unable to send messages to protected sites; and as they can send to some other places, it is very difficult to make them understand that the problem is on their side and not at the recipient's site.
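A few of the server-side checks mentioned above (blocking open relaying, rejecting very large recipient lists, refusing hosts without reverse DNS) can be expressed as a simple policy function. The Python sketch below is illustrative only: a real mail server (sendmail, Postfix, etc.) implements these rules in its own configuration language, and the local domain, the threshold and the blacklist used here are assumptions.

LOCAL_DOMAIN = "example.org"          # assumed local domain
MAX_RECIPIENTS = 50                   # arbitrary spam threshold
BLACKLISTED_SENDER_DOMAINS = {"spam.example.net"}

def accept_message(sender: str, recipients: list[str], client_has_reverse_dns: bool) -> bool:
    """Decide, from the SMTP envelope, whether an incoming message is accepted."""
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    local = [r for r in recipients if r.lower().endswith("@" + LOCAL_DOMAIN)]

    if sender_domain in BLACKLISTED_SENDER_DOMAINS:
        return False                  # known spamming domain
    if not client_has_reverse_dns:
        return False                  # host not declared in the reverse DNS
    if len(recipients) > MAX_RECIPIENTS:
        return False                  # probable spamming
    if not local and sender_domain != LOCAL_DOMAIN:
        return False                  # relay attempt: neither from nor to our domain
    return True

print(accept_message("user@elsewhere.example", ["victim@another.example"], True))  # False: relaying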
2.6.2.2 Web server security
Web servers are a potential door for entering a system and trying to obtain or modify private or system data on the host. The first problem concerns the configuration of the Web server software itself; the second concerns the execution of server-side active documents.


To avoid attacks on the data stored on the server host, the Web server software must run with the minimal set of privileges needed for its function. It must be allowed to distribute over the network only the documents placed in its own data disk space, and must be unable to distribute other files present on the disk of the host computer. In addition, the distribution of documents is meant to be controlled according to some criteria, for example the domain originating the request or the knowledge of an access code. These controls are realised through the server configuration file(s). It is critical to manage them with great care: any mistake in a configuration file can produce unpredictable effects on distribution control. Note that when access restrictions are placed on a part of the data directory tree, subsequent restrictions on a sub-tree can introduce conflicts in the hierarchy of rules. In this case, the order of the declarations in the configuration can be critical and must be checked with extreme care.
To reduce the risk due to the execution of active documents, it is recommended to remove all unused scripts from the server: a bug in an unused template script can open a wide access to the whole system. The scripts that are used must be developed and tested as completely as possible, including the tracking and suppression of potential buffer overflows, which are the most important source of trouble. It is also recommended to use a CGI wrapper to reduce the level of privileges applied to script execution, and mainly to restrict their access to only a part of the file systems of the server.
2.6.2.3 Security of other services
Any service on the network can constitute a potential risk for the machine hosting it. It is not possible to review all existing services, but a good practice is to deny access to all services, and then to allow only those which are useful, with a well-checked configuration. Users of these services must be aware of the potential risks associated with such practice. For example, it is important to know that user/password authentication is generally transmitted on the network in clear text, and that any computer connected to the network segments on which this information circulates can capture it with some very simple pieces of software. Because of that, critical accounts must never be used through the network without a specific authentication scheme such as challenge negotiation or single-use passwords.
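The challenge-negotiation alternative to clear-text passwords can be sketched in a few lines of Python. This is a generic HMAC-based challenge-response in the spirit of CHAP-like schemes, not a specific protocol; the shared secret is assumed to have been established beforehand.

import hmac, hashlib, os

SHARED_SECRET = b"long-lived secret known to client and server"

# Server side: issue a fresh random challenge for this login attempt.
challenge = os.urandom(16)

# Client side: prove knowledge of the secret without ever sending it.
response = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()

# Server side: recompute the expected answer and compare in constant time.
expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()
print("authenticated" if hmac.compare_digest(response, expected) else "rejected")

An eavesdropper sees only the challenge and the response; replaying the response later fails because the server issues a new challenge for every attempt.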

2.6.3 Internet Security Standards

2.6.3.1 Internet Security Architecture

As in the OSI Security Reference Model (ISO 7498-2), in the Internet security architecture the security mechanisms can be provided by adding fields or messages to non-secure communication protocols and by inserting security sub-layers between the communication layers. Standards are developed at the Network, Transport and Application levels:
Application level : PEM, MOSS, S/MIME, PGP/MIME, SHTTP
Transport level   : SASL, TLS (SSL)
Network level     : IPSec


These security services rest on standard encryption algorithms and message digests, and on the Internet X.509 Public Key Infrastructure (PKIX) (see §6.5 below).

2.6.3.2 IPSec

2.6.3.2.1 Goal and overview
IPsec is designed to provide interoperable, high-quality, cryptographically based security for IPv4 and IPv6 without affecting users, hosts, and other Internet components. IPsec provides integrity protection, authentication, and (optional) privacy and replay-protection services for IP traffic. IPsec packets are of two types:
- the Encapsulating Security Payload (ESP) format, which provides privacy, authenticity, and integrity;
- the Authentication Header (AH) format, which provides only integrity and authenticity for packets, not privacy.
IPsec can be used in two modes: transport mode, which secures an existing IP packet from source to destination, and tunnel mode, which puts an existing IP packet inside a new IP packet that is sent to a tunnel end-point in the IPsec format. Both transport and tunnel mode can be encapsulated in ESP or AH headers.
IPsec transport mode was designed to provide security for IP traffic end-to-end between two communicating systems, for example to secure a TCP connection or a UDP datagram. IPsec tunnel mode was designed primarily for network mid-points, routers, or gateways, to secure other IP traffic inside an IPsec tunnel that connects one private IP network to another private IP network over a public or untrusted IP network (for example, the Internet). In both cases, a complex security negotiation is performed between the two computers through the Internet Key Exchange (IKE), normally using PKI certificates for mutual authentication. The IETF RFC IPsec tunnel protocol specifications did not include mechanisms suitable for remote-access VPN clients; omitted features include user authentication options and client IP address configuration.
2.6.3.2.2 Security Associations
The management of security services is based on the concept of security associations. A Security Association (SA) is a simplex "connection" that affords security services to the traffic carried by it. It is uniquely identified by a triple consisting of a Security Parameter Index (SPI), an IP destination address, and a security protocol (AH or ESP) identifier. Two types of SA are defined: transport mode SAs and tunnel mode SAs. The former provide security services for higher-layer protocols; the latter provide security services for the IP protocol as well. For a tunnel mode SA, there is an "outer" IP header that specifies the IPsec processing destination, plus an "inner" IP header that specifies the (apparently) ultimate destination of the packet. The security protocol header appears after the outer IP header and before the inner IP header. Whenever either end of a security association is a security gateway, the SA must be a tunnel mode SA.
Sometimes a security policy may call for a combination of services for a particular traffic flow that is not achievable with a single SA.


The term "security association bundle" or "SA bundle" is applied to a sequence of SAs through which traffic must be processed to satisfy a security policy. The SAs comprising a bundle may terminate at different endpoints: for example, one SA may extend between a mobile host and a security gateway and a second, nested SA may extend to a host behind the gateway.
2.6.3.2.3 Basic concepts
Locating a security gateway. A host or security gateway must have an administrative interface that allows the user or administrator to configure the address of a security gateway for any set of destination addresses that requires its use. This includes the ability to configure the requisite information for locating and authenticating the security gateway (and backup gateways) and verifying its (their) authorisation to represent the destination host. Just as authentication and key exchange must be linked to provide assurance that the key is established with the authenticated party, SA establishment must be linked with the authentication and the key exchange protocol. The Internet Security Association and Key Management Protocol (ISAKMP) provides the support needed to establish a security association between negotiating entities, in order to secure the channel of communication for the subsequent ISAKMP messages. This channel of communication is an ISAKMP SA, and it is used to establish security associations for other security protocols. The ISAKMP SA is bi-directional.
Security association establishment. An SA establishment message consists of a single SA payload followed by at least one, and possibly many, Proposal payloads and at least one, and possibly many, Transform payloads associated with each Proposal payload. The SA payload contains the Domain of Interpretation (DOI) and the Situation for the proposed SA. A domain of interpretation defines payload formats, exchange types, and conventions for naming security-relevant information such as security policies or cryptographic algorithms and modes. A Situation is the set of information that will be used to determine the required security services.
The Proposal payload provides the initiating entity with the capability to present to the responding entity the security protocols and associated security mechanisms for use with the security association being negotiated. If the SA establishment negotiation is for a combined protection suite consisting of multiple protocols, there must be multiple Proposal payloads, each with the same Proposal number. These proposals must be considered as a unit and must not be separated by a proposal with a different proposal number. The Transform payload provides the initiating entity with the capability to present to the responding entity multiple mechanisms for a given protocol: the Proposal payload identifies a protocol for which services and mechanisms (transforms) are being negotiated, and the Transform payload allows the initiating entity to present several possible supported mechanisms for that protocol. There may be several transforms associated with a specific Proposal payload, each identified in a separate Transform payload. The receiving entity must select a single transform for each protocol in a proposal or reject the entire proposal.


Security association databases. The protection offered to IP traffic is based on requirements defined by a Security Policy Database (SPD) established and maintained by a user or system administrator, or by an application operating within constraints established by either of the above. The SPD specifies the policies that determine the disposition of all IP traffic inbound to or outbound from a host or a security gateway. Each entry indicates whether traffic matching the policy will be bypassed or subjected to IPsec processing. If no policy matching the packet is found in the SPD, the packet must be discarded. If IPsec processing is to be applied, the entry includes an SA (or SA bundle) specification, listing the IPsec protocols, modes, and algorithms to be employed, including any nesting requirements. If the application of the SAs in the bundle requires a specific order, the policy entry in the SPD must preserve these ordering requirements.
The Security Association Database (SAD) contains the parameters associated with each active security association. A security association has a lifetime after which the SA must be replaced with a new SA (and a new SPI) or terminated, plus an indication of which of these actions should occur.
IP traffic processing. For outbound processing, entries in the SAD are pointed to by entries in the SPD. If an SPD entry does not currently point to an SA that is appropriate for the packet, the implementation creates an appropriate SA (or SA bundle) and links the SPD entry to the SAD entry. For inbound processing, each entry in the SAD is indexed by a destination IP address (the outer header's destination IP address, if it is a tunnel mode SA), the IPsec protocol type, and the SPI (a 32-bit value used to distinguish among different SAs terminating at the same destination and using the same IPsec protocol). If the SA lookup fails, the packet must be dropped. If required, IP fragmentation occurs after IPsec processing or in routers en route; thus, if transport mode is applied, such fragments must be reassembled prior to IPsec processing at the receiver. In tunnel mode, a security gateway may apply tunnel mode to such fragments.
2.6.3.2.4 IPsec protocols
The two protocols used by IPsec are the Authentication Header (AH) and the Encapsulating Security Payload (ESP). If the specification of a service requires the use of AH and ESP, then at least two SAs must be created to afford protection to the traffic stream.
Authentication Header. The AH is used to provide connectionless integrity and data-origin authentication for IP datagrams, and to provide protection against replays. This latter, optional service may be selected by the receiver when a security association is established. The SPI, the Sequence Number (SN) and the Authentication Data (AD) are among the fields that make up the AH format.
The SN is an unsigned 32-bit field that contains a monotonically increasing counter value. It is mandatory and is always present, even if the receiver does not elect to enable the anti-replay service for a specific SA.


Processing of the SN field is at the discretion of the receiver. The sender's counter and the receiver's counter are initialised to 0 when an SA is established, and the first packet sent using a given SA has an SN of 1. If anti-replay is enabled (the default), the sender checks that the counter has not cycled before inserting the new value in the SN field; if the counter has cycled, the sender sets up a new SA and key. For each received packet, the receiver must verify that the packet contains an SN that does not duplicate the SN of any other packet received during the life of the SA. This should be the first AH check applied to a packet after it has been matched to an SA, to speed up the rejection of duplicate packets. Duplicates are rejected through the use of a sliding receive window. If anti-replay is disabled by the receiver at the time of SA establishment, the sender does not need to monitor or reset the counter; however, the sender still increments the counter and, when it reaches the maximum value, the counter rolls over back to zero.
The authentication algorithm employed for the ICV computation is specified by the SA. The AH Integrity Check Value (ICV) is computed over the upper-level protocol data, the AH header (with the AD set to zero) and the IP header fields that are either immutable in transit or predictable in value upon arrival at the endpoint of the AH SA. If the received packet falls within the window and is new, the receiver proceeds to ICV verification (computation and matching of the AD). If the ICV validation fails, the receiver must discard the IP datagram as invalid. The receive window is updated only if the ICV verification succeeds.
Encapsulating Security Payload. ESP is used to provide confidentiality, data-origin authentication, connectionless integrity, an anti-replay service (a form of partial sequence integrity), and limited traffic-flow confidentiality. The set of services provided depends on options selected at the time of SA establishment. Mechanisms similar to those described for the AH protocol provide the anti-replay and integrity services. The encryption algorithm employed is specified by the SA. ESP is designed for use with symmetric encryption algorithms. Because IP packets may arrive out of order, each packet must carry any data required to allow the receiver to establish cryptographic synchronisation for decryption.
In outbound packet processing, the sender:
- encapsulates into the ESP Payload field, for transport mode, just the original upper-layer protocol information or, for tunnel mode, the entire original IP datagram;
- adds any necessary padding;
- encrypts the result using the key, encryption algorithm and algorithm mode indicated by the SA, and the cryptographic synchronisation data (if any).
If authentication is selected, encryption is performed first, before the authentication, and the encryption does not encompass the Authentication Data field. This order of processing facilitates rapid detection and rejection of replayed packets, and it allows the receiver to decrypt packets in parallel with authentication.
In inbound packet processing, the receiver:
- decrypts the ESP Payload Data using the key, encryption algorithm, algorithm mode, and cryptographic synchronisation data (if any) indicated by the SA;


- processes any padding as specified in the encryption algorithm specification;
- reconstructs the original IP datagram from, for transport mode, the original IP header plus the original upper-layer protocol information in the ESP Payload field or, for tunnel mode, the tunnel IP header plus the entire IP datagram in the ESP Payload field.
If authentication has been selected, verification and decryption may be performed serially or in parallel. If performed serially, ICV verification should be performed first. If performed in parallel, verification must be completed before the decrypted packet is passed on for further processing.
Authentication algorithms. The authentication algorithms employed for the ICV computation are specified by the SA. For point-to-point communication, the use of the HMAC algorithm [RFC-2104] in conjunction with the MD5 algorithm [RFC-1321] or the SHA-1 algorithm [FIPS-180-1] provides data-origin authentication and integrity protection. For multicast communication, one-way hash algorithms combined with asymmetric signature algorithms are appropriate.
Encryption algorithms in ESP. The encryption algorithms employed are specified by the SA. Though both confidentiality and authentication are optional, at least one of these services must be selected, hence both algorithms must not be simultaneously "null". Because IP packets may arrive out of order, each packet must carry any data required to allow the receiver to establish cryptographic synchronisation for decryption. This synchronisation can be carried explicitly, by an Initialisation Vector (IV), or obtained by using an algorithm to derive data equivalent to an IV. In the DES-CBC algorithm, the IV immediately precedes the protected (encrypted) payload. The DES-CBC algorithm is described in [FIPS-46-2], [FIPS-74] and [FIPS-81].
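The receiver-side AH processing described above (anti-replay check first, then ICV verification, then window update) can be summarised in a short Python sketch. It is a simplified model: the window is kept as a set of recently seen sequence numbers, HMAC-SHA-1 truncated to 96 bits is used for the ICV, and packet parsing and the immutable-field rules are ignored.

import hmac, hashlib

WINDOW_SIZE = 64

class InboundSA:
    def __init__(self, key: bytes):
        self.key = key
        self.highest_sn = 0
        self.seen = set()          # sequence numbers already accepted inside the window

    def accept(self, sn: int, protected_data: bytes, icv: bytes) -> bool:
        # 1. Anti-replay check, done first to reject duplicates cheaply.
        if sn <= self.highest_sn - WINDOW_SIZE or sn in self.seen:
            return False
        # 2. ICV verification over the protected data.
        expected = hmac.new(self.key, protected_data, hashlib.sha1).digest()[:12]
        if not hmac.compare_digest(icv, expected):
            return False
        # 3. Update the receive window only after a successful ICV check.
        self.seen.add(sn)
        self.highest_sn = max(self.highest_sn, sn)
        self.seen = {s for s in self.seen if s > self.highest_sn - WINDOW_SIZE}
        return True

sa = InboundSA(key=b"negotiated-authentication-key")
pkt = b"protected AH payload"
icv = hmac.new(sa.key, pkt, hashlib.sha1).digest()[:12]
print(sa.accept(1, pkt, icv))   # True  - fresh packet
print(sa.accept(1, pkt, icv))   # False - replay of the same sequence number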

2.6.3.3 SASL

The Simple Authentication and Security Layer (SASL) (RFC 2222) is a method for adding authentication support to connection-based protocols. To use this specification, a protocol includes a command for identifying and authenticating a user to a server and for optionally negotiating a security layer for subsequent protocol interactions.
Anonymous SASL mechanism (RFC 2245): it is common practice on the Internet to permit anonymous access to various services (for example anonymous FTP). Traditionally, this has been done with a plain-text password mechanism using "anonymous" as the user name and optional trace information, such as an e-mail address, as the password. As plain-text login commands are not permitted in new IETF protocols, the Anonymous SASL Mechanism supplies a new way to provide anonymous login within the context of the SASL framework.
The One-Time Password (OTP) mechanism (RFC 2444) provides a useful authentication mechanism for situations where there is limited client or server trust. OTP is a good choice for usage scenarios where the client is untrusted (e.g., a kiosk client), as a one-time password gives the client only a single opportunity to act on behalf of the user. OTP is also a good choice for situations where interactive logins are permitted to the server.
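The one-time password idea behind OTP (the SASL mechanism of RFC 2444 builds on the OTP system of RFC 2289) rests on a hash chain: the server stores only the previous value, and each login reveals the next pre-image, which is useless for any later login. The Python sketch below is a simplified model, not the exact RFC 2289 folding and encoding; the secret and chain length are examples.

import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

# Initialisation: the client picks a secret and computes a chain of N hashes;
# the server stores only the last element of the chain.
secret, N = b"correct horse battery staple", 1000
chain_top = secret
for _ in range(N):
    chain_top = h(chain_top)
server_stored = chain_top            # value number N

def client_password(i: int) -> bytes:
    """The i-th one-time password is the secret hashed (N - i) times."""
    value = secret
    for _ in range(N - i):
        value = h(value)
    return value

def server_verify(otp: bytes) -> bool:
    """Accept if hashing the presented value once gives the stored value, then advance."""
    global server_stored
    if h(otp) == server_stored:
        server_stored = otp          # the next expected password is one step earlier in the chain
        return True
    return False

print(server_verify(client_password(1)))   # True  - first login
print(server_verify(client_password(1)))   # False - the same password cannot be replayed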


2.6.3.4 TLS - SSL
TLS, Transport Layer Security (RFC 2246), is an Internet standard built from the SSL de facto standard. The primary goal of the TLS protocol is to provide privacy and data integrity between two communicating applications. The TLS Record Protocol is used for the encapsulation of various higher-level protocols. One such encapsulated protocol, the TLS Handshake Protocol, allows the server and client to authenticate each other and to negotiate an encryption algorithm and cryptographic keys before the application protocol transmits or receives its first byte of data. One advantage of TLS is that it is independent of the application protocol: higher-level protocols can layer on top of the TLS protocol transparently. The TLS standard, however, does not specify how protocols add security with TLS; the decisions on how to initiate TLS handshaking and how to interpret the authentication certificates exchanged are left to the judgement of the designers and implementers of protocols which run on top of TLS.
SSL, Secure Socket Layer, is a protocol using different cryptographic algorithms to implement security with certificate-based authentication, session-key exchange algorithms, encryption and integrity checking. It is a common protocol, often used to ensure that the communication between a WWW server and a WWW client is safe and encrypted. SSL was developed by Netscape; the current version (v3.0) and TLS take into account opinions from several other industrial companies and developers. SSL v3.0 provides strong authentication. If the browser is inactive, or if the session lasts a very long time, the session keys (one key per direction) can be changed.
TLS or SSL is placed between the TCP/IP layer and the application layer. To set up an SSL (or TLS) connection, a TCP/IP connection must be established first, as SSL (or TLS) uses the primitives of TCP/IP. The SSL (or TLS) connection can be seen as a secure channel within the TCP/IP connection, in which all traffic between the application peers is encrypted. All the calls from the application layer to the TCP layer are replaced with calls to the SSL (or TLS) layer, and the SSL (or TLS) layer takes care of the communication with the TCP layer. The TLS and SSL protocols are composed of two layers, the TLS (or SSL) Record Protocol and the TLS (or SSL) Handshake Protocol, as shown below:
Application layer : SMTP (e-mail), HTTP (Web), FTP (file transfer), ...
Handshake protocol - Change Cipher Spec - Alert protocol
Record layer
TCP layer
IP layer
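The statement that calls to the TCP layer are replaced with calls to the SSL/TLS layer can be seen directly in code: an ordinary TCP socket is wrapped by the TLS layer and then used exactly as before. The sketch below uses Python's standard ssl module purely as an illustration; the host name is an example.

import socket, ssl

context = ssl.create_default_context()          # negotiates protocol version and cipher suite

# An ordinary TCP connection...
with socket.create_connection(("www.example.org", 443)) as tcp_sock:
    # ...wrapped by the TLS layer: handshake, certificate check, key negotiation.
    with context.wrap_socket(tcp_sock, server_hostname="www.example.org") as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher())
        # The application protocol (here HTTP) is carried unchanged inside the secure channel.
        tls_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: www.example.org\r\n\r\n")
        print(tls_sock.recv(200))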


The Record Protocol provides connection security that has two basic properties:

- The connection is private. Symmetric cryptography is used for data encryption (e.g., DES, RC4, etc.). The keys for this symmetric encryption are generated uniquely for each connection and are based on a secret negotiated by another protocol (such as the Handshake Protocol). The Record Protocol can also be used without encryption.


- The connection is reliable. Message transport includes a message integrity check using a keyed Message Authentication Code (MAC). Secure hash functions (e.g., SHA, MD5, etc.) are used for MAC computations. The Record Protocol can operate without a MAC, but it is generally used in this mode only while another protocol is using the Record Protocol as a transport for negotiating security parameters.

The Handshake Protocol provides connection security that has three basic properties:
- The peer's identity can be authenticated using asymmetric, or public-key, cryptography (e.g., RSA, DSS, etc.). This authentication can be made optional, but is generally required for at least one of the peers.

- The negotiation of a shared secret is secure: the negotiated secret is unavailable to eavesdroppers, and for any authenticated connection the secret cannot be obtained, even by an attacker who can place himself in the middle of the connection.


- The negotiation is reliable: no attacker can modify the negotiation communication without being detected by the parties to the communication.

The Change Cipher Spec protocol exists to signal transitions in ciphering strategies. Alert messages convey the severity of the message and a description of the alert; alert messages with a fatal level result in the immediate termination of the connection.

2.6.3.5 PEM - MOSS - Open PGP/MIME

2.6.3.5.1 PEM: Privacy Enhancement for Internet Electronic Mail
PEM (RFC 1421) defines message encryption and authentication procedures in order to provide privacy-enhanced mail (PEM) services for electronic mail transfer in the Internet. The procedures are intended to be compatible with a wide range of key-management approaches, including both symmetric (secret-key) and asymmetric (public-key) approaches for the encryption of data-encrypting keys. Privacy enhancement services (confidentiality, authentication, message integrity assurance, and non-repudiation of origin) are offered through the use of end-to-end cryptography between originator and recipient processes at or above the User Agent level. No special processing requirements are imposed on the message transfer endpoints or at intermediate relay sites. This approach allows privacy-enhancement facilities to be incorporated selectively on a site-by-site or user-by-user basis without impact on other Internet entities. Interoperability among heterogeneous components and mail transport facilities is supported.
RFC 1422 specifies supporting key-management mechanisms based on the use of public-key certificates.


RFC 1423 specifies algorithms, modes, and associated identifiers relevant to RFC 1421 and RFC 1422. RFC 1424 provides details of paper and electronic formats and procedures for the key-management infrastructure being established in support of these services.
How it works
In order to verify the public key of a user B (Bob), user A (Alice) must obtain the certificate for B issued by CA(B), the certificate for CA(B) issued by PCA(CA(B)) and the certificate for this PCA issued by the CA at the top of the hierarchy (TLCA). Because this certification path is so well defined, it can be provided by all users as a matter of course, allowing other users to perform the verification without needing to look up certificates in directories. User A is assumed to know the public key of the CA at the top of the hierarchy (it is assumed to be published in so many places that modifying all of them is not practical) and to trust that CA (which, after all, does nothing other than tie together the second level of the hierarchy).
To illustrate some aspects of PEM operation, assume that Alice (A) wishes to send a message to Bob (B), with whom she has never communicated, and that they both operate within the certification structure shown in figure (a) (the arrows represent a certification, such as PCA1 issuing a certificate for CA1's public key). When Alice composes her message, she appends the certification path between her and the IPRA, i.e. the certificates for herself, CA2, CA1 and PCA1. Note that she does not issue (i.e. sign) these certificates; she merely includes them in her message (she obtained them when she was issued her own certificate by CA2). She then signs the entire message, including the certification path (see figure (b)).
When Bob receives the message and wishes to validate it, he starts with the IPRA key and validates the certification path included in the message to obtain Alice's public key (checking not only that the included certificates have neither expired nor been revoked, but also that Distinguished Name subordination has been observed). If this is the first time that Bob has followed a certification path down PCA1's branch of the hierarchy, his PEM software is required to ask Bob to explicitly accept PCA1's certificate. This is to alert Bob that he is entering a new certification policy domain, and that he would be wise to review PCA1's policy (as well as the policies of any CAs subordinate to PCA1, depending on how much PCA1 allows its CAs to refine their own policies). This requirement for policy awareness helps Bob to determine whether the origin of the message agrees with its contents. For example, Bob might be suspicious of a purchase-order message requesting an educational discount that is certified beneath a PCA representing commercial organisations. However, it is entirely up to Bob to be aware of the policies of the various PCAs: PEM has no automatic policy-verification mechanism.
The need for all users to be familiar with the policies of each PCA they encounter resulted in PEM requiring that there be only a small number of PCAs. This meant that PEM would have had to describe every aspect of Internet communications with only a handful of policies.
On the other hand, if the number of PCAs grows enough to reflect real-world complexity, the problem becomes that PEM is not designed to select a trusted path between users who do not share a trusted top-level common point, because of the lack of cross-certified policy-verification mechanisms.
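The path validation that Bob performs can be modelled abstractly: walk the certification path from the trusted top-level key down to Alice, checking at each step the issuer, the expiry date, the revocation status and Distinguished Name subordination. The Python sketch below uses plain data objects rather than real X.509 parsing, omits the actual signature verification, and reduces the naming rule to a simple prefix test; the names and dates are invented.

from dataclasses import dataclass
from datetime import date

@dataclass
class Certificate:                 # toy model of an X.509 certificate
    subject: str
    issuer: str
    not_after: date
    revoked: bool = False

def validate_path(path: list[Certificate], trusted_root: str, today: date) -> bool:
    """Return True if the chain from the trusted root down to the end entity is acceptable."""
    expected_issuer = trusted_root
    for cert in path:                                  # root-issued certificate first
        if cert.issuer != expected_issuer:
            return False                               # broken chain
        if cert.revoked or cert.not_after < today:
            return False                               # revoked or expired
        if not cert.subject.startswith(cert.issuer):   # crude Distinguished Name subordination
            return False
        # A real implementation would also verify the signature on this certificate
        # using the public key obtained from the previous step.
        expected_issuer = cert.subject                 # this subject certifies the next entry
    return True

path = [
    Certificate("IPRA/PCA1",               "IPRA",              date(2001, 1, 1)),
    Certificate("IPRA/PCA1/CA1",           "IPRA/PCA1",         date(2000, 6, 1)),
    Certificate("IPRA/PCA1/CA1/CA2",       "IPRA/PCA1/CA1",     date(2000, 3, 1)),
    Certificate("IPRA/PCA1/CA1/CA2/Alice", "IPRA/PCA1/CA1/CA2", date(2000, 1, 1)),
]
print(validate_path(path, trusted_root="IPRA", today=date(1999, 10, 20)))   # True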


2.6.3.5.2 MOSS: MIME Object Security Services
MOSS (RFC 1847, RFC 1848 and RFC 2480) is based in large part on the PEM protocol. It provides authentication and integrity services via digital signature and a confidentiality service via encryption. The services are offered through the use of end-to-end cryptography between an originator and a recipient at the application layer. Asymmetric (public-key) cryptography is used in support of the digital signature service and encryption key management; symmetric (secret-key) cryptography is used in support of the encryption service. For this purpose MOSS provides two new security subtypes of the MIME multipart content type: signed and encrypted. The signed content type specifies how to support authentication and integrity services via digital signature; the security multiparts clearly separate the signed message body from the signature. The encrypted content type specifies how to support confidentiality via encryption. Whereas PEM only supports text-based electronic mail messages, whose text is required to be represented in the ASCII character set with "<CR><LF>" line delimiters, MOSS supports both textual and non-textual messages and does not require certificates: when using MOSS, users need only have a public/private key pair.
2.6.3.5.3 PGP/MIME
PGP/MIME (RFC 2015, RFC 2440) provides solutions to a number of problems in integrating PGP with MIME, the most significant of which is the inability to recover signed message bodies without parsing data structures specific to PGP. It uses the MOSS security multiparts for providing security and authentication. PGP/MIME defines three new content types for implementing security and privacy with PGP: application/pgp-encrypted, application/pgp-signature and application/pgp-keys.
2.6.3.6 S/MIME
S/MIME (Secure/Multipurpose Internet Mail Extensions) is a specification for secure electronic mail, designed to add security to e-mail messages in MIME format. The security services offered are authentication (using digital signatures) and privacy (using encryption). S/MIME melds proven cryptographic constructs with standard e-mail practices. More importantly, it was designed to be interoperable, so that any two packages that implement S/MIME can communicate securely. The S/MIME specification consists of two documents: the S/MIME Message Specification (RFC 2311) and S/MIME Certificate Handling (RFC 2312).
S/MIME uses a hybrid approach to providing security, often referred to as a "digital envelope": the bulk of the message is encrypted with a symmetric cipher, and a public-key algorithm is used for key exchange. A public-key algorithm is also used for digital signatures. S/MIME recommends three symmetric encryption algorithms: DES, Triple-DES, and RC2. The adjustable key size of the RC2 algorithm makes it especially useful for applications intended for export outside the U.S. RSA is the required public-key algorithm. S/MIME uses digital certificates; the X.509 format is used due to its wide acceptance as the standard for digital certificates.
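The digital-envelope construction can be made concrete with a toy Python example: a throw-away symmetric key encrypts the bulk of the message, and the recipient's public key seals that symmetric key. The sketch uses an XOR keystream in place of DES/Triple-DES/RC2 and deliberately tiny RSA parameters, purely to show the structure; none of it is cryptographically sound.

import hashlib, os

# Toy RSA key pair (small, insecure primes, for structure only).
p, q, e = 104729, 1299709, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))           # private exponent

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher standing in for DES / Triple-DES / RC2."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: encrypt the bulk message with a fresh symmetric key...
message = b"Quarterly figures attached."
session_key = os.urandom(4)                 # tiny, so it fits the toy RSA modulus
ciphertext = xor_cipher(session_key, message)
# ...then seal (envelope) the symmetric key with the recipient's public key.
sealed_key = pow(int.from_bytes(session_key, "big"), e, n)

# Recipient: open the envelope with the private key, then decrypt the message.
recovered_key = pow(sealed_key, d, n).to_bytes(4, "big")
print(xor_cipher(recovered_key, ciphertext))        # b'Quarterly figures attached.'

S/MIME performs the same two steps with real algorithms (Triple-DES or RC2 for the content, RSA for the key exchange) and carries the result in MIME body parts.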


2.6.3.7 SHTTP

Secure-HTTP (S-HTTP) describes a syntax for securing messages sent using the HTTP protocol. It tries to enable spontaneous commercial transactions by negotiating the different algorithms, modes and parameters needed for security. It provides independently applicable security services for transaction confidentiality and authenticity/integrity, and it allows a variety of key-management mechanisms, security policies and cryptographic algorithms (several cryptographic message-format standards may be used) by supporting option negotiation between the parties for each transaction.
S-HTTP is a secure message-oriented communications protocol designed for use in conjunction with HTTP. It is designed to co-exist with HTTP's message model and to be easily integrated with HTTP applications. S-HTTP provides a variety of security mechanisms to HTTP clients and servers, providing the security-service options appropriate to the wide range of potential uses of the WWW. S-HTTP deliberately mimics the format and style of HTTP to ease integration; however, certain headers are promoted to Secure HTTP headers. Message protection can be done in three ways: signature, authentication, and encryption. Any message may be signed, authenticated, encrypted, or any combination of these, and S-HTTP has features to allow all these facilities. S-HTTP also permits persistent connections between client/proxy and proxy/server pairs through the use of special headers.

2.6.3.8 PGP

"Pretty Good Privacy" (PGP) is primarily a system for providing secure email. PGP is a public key encryption program originally written by Phil Zimmermann in 1991. Over the past few years, PGP has gained thousands of supporters all over the globe and has become a de facto standard for encryption of email on the Internet. Two other products, "PGPfone" and "PGPdisk", use PGP encryption technology to secure phone calls and encrypt disk partitions, respectively.

2.6.3.8.1 PGP and Privacy

The primary aim of Phil Zimmermann was to guarantee the privacy of electronic communication. In 1991, the US government introduced Senate Bill 266, an omnibus anti-crime bill containing a measure that would have required all encryption software to include a back door. This bill prompted P. Zimmermann to write PGP. This is what he says in pgpdoc.txt in PGP 1.0:

Why Do You Need PGP? (extracts)
It's personal. It's private. And it's no one's business but yours. You may be planning a political campaign, discussing your taxes, or having an illicit affair. Or you may be doing something that you feel shouldn't be illegal, but is. Whatever it is, you don't want your private electronic mail (E-mail) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution.

……….
What if everyone believed that law-abiding citizens should use postcards for their mail? If some brave soul tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope.
………
Today, if the Government wants to violate the privacy of ordinary citizens, it has to expend a certain amount of expense and labour to intercept and steam open and read paper mail, and listen to and possibly transcribe spoken telephone conversation.
………
More and more of our private communications are being routed through electronic channels. Electronic mail is gradually replacing conventional paper mail. E-mail messages are just too easy to intercept and scan for interesting keywords. This can be done easily, routinely, automatically, and undetectably on a grand scale. International cablegrams are already scanned this way on a large scale by the NSA.
……….
The Government will protect our E-mail with Government-designed encryption protocols. Probably most people will acquiesce to that. But perhaps some people will prefer their own protective measures. Senate Bill 266, a 1991 omnibus anti-crime bill, had an unsettling measure buried in it.
……….
This measure was defeated after rigorous protest from civil libertarians and industry groups. In 1992, the FBI Digital Telephony wiretap proposal was introduced to Congress.
……….
Although it never attracted any sponsors in Congress in 1992 because of citizen opposition, it was reintroduced in 1994. Most alarming of all is the White House's bold new encryption policy initiative,
………..
The centrepiece of this initiative is a Government-built encryption device, called the "Clipper" chip, containing a new classified NSA encryption algorithm.
…….
The catch: At the time of manufacture, each Clipper chip will be loaded with its own unique key, and the Government gets to keep a copy, placed in escrow. Not to worry, though-- the Government promises that they will use these keys to read your traffic only when duly authorised by law.
……..
If privacy is outlawed, only outlaws will have privacy.
………
PGP empowers people to take their privacy into their own hands. There's a growing social need for it. That's why I wrote it.

2.6.3.8.2 PGP functionality

PGP was designed to assure:
- message authentication
- message non-repudiation
- confidentiality of the content of messages or documents.
For this purpose PGP uses encryption, digital signatures and message digests. PGP uses the IDEA symmetric block cipher to encrypt the body of the message, and asymmetric public-key cryptography (RSA or Diffie-Hellman) for signing messages and transmitting IDEA keys.

Nota: problems with the export of RSA cryptographic components from the USA are solved in "PGPi" (international) by the use of the Diffie-Hellman algorithm with the DSS standard for signing.

2.6.3.8.3 How it works

If user A wishes to communicate with user B, each sends the other their public key by insecure means. This key is packaged in a proprietary certificate format. It is at least self-signed, and may possibly be signed by any number of other people as well. Users A and B then compute a fingerprint value from their keys (a hash value or checksum) and communicate it to each other via a different insecure method than was used for communicating the public key ("different" here means the difference between email and telephone, rather than between email and finger). Users A and B then compare the fingerprints they have been sent with the fingerprints they have computed from the keys they have received and, if they match, they assume that the key is good. Each then signs the key of the other user with their own key and adds it to their personal security environment (known as a public keyring), so that further secure communication between users A and B will not need this preamble. User A can optionally send back to user B the key of user B signed by user A (and vice versa, of course). The users can then add this new signature to their public key certificate, and we will see shortly how this can be used. Here we have a purely out-of-band authentication system.
The above description bears a great deal of similarity to how two CA operators issuing X.509 certificates would cross-certify each other, or how a CA would issue a certificate for a user. In PGP's case, the cross-certification is done at the user level, the assurance of user identity is almost always either low or personal, and the protection against attack during key exchange is only as strong as the two distinct methods used to distribute the key and the fingerprint during the authentication process.
The reason that users get their keys signed by other users and add these signatures to their own certificates is the way the PGP trust model works. If user A has obtained a copy of user B's key that is known to be good, and if A knows B well enough to make an assessment of B's competence in security matters, then A can nominate B to be either completely trusted or semi-trusted. User A can then configure PGP to accept all certificates that are signed by a certain number of completely trusted people, or by a certain (presumably greater) number of semi-trusted people. So if user A wants to communicate with user C, and user C's certificate is signed by user B, then user A will trust the certificate of user C, since user B has vouched for the identity and key of user C. If user C has also declared user B to be trustworthy, then user C will trust the key of user A, since in the exchange described at the top of this section user B signed the key of user A, and user A added this to A's certificate. Note that if A tries to communicate with user D, and user D's certificate is signed only by a user that A has not nominated as trusted, then user A will not automatically trust that certificate: trust is not transitive.
The process of verifying a signed message is:
1. The message can be verified using public key xxxxxx, which claims to be Alice's.
2. Key xxxxxx is signed by user bob with key yyyyyy.
3. I trust user bob to be competent to vouch for the identity of others.
4. I believe that yyyyyy is the key of bob.
5. I require one trusted user in order to believe a signed certificate.
6. I trust that key xxxxxx belongs to Alice.

2.6.3.8.4 Confidentiality : Message body ciphering

Encryption
To send a confidential message X to a recipient whose public key is Ppub():
- PGP randomly generates a 128-bit key k
- IDEA is applied to X to produce a ciphered message IDEA(X,k)
- Ppub is applied to k to cipher the shared key k
- IDEA(X,k) and Ppub(k) are sent together to the recipient.
Since k is a random number, X will never be encrypted the same way twice. If X is sent to several recipients, the key k is ciphered with the public key of each recipient and these ciphered keys are joined to the IDEA-ciphered message.

Decryption
On reception of the message, PGP applies:
- its private key Ppriv() to Ppub(k) to retrieve k = Ppriv(Ppub(k))
- IDEA decryption with k to the ciphered message body to obtain X = IDEA-1(IDEA(X,k), k).

2.6.3.8.5 Authentication - Non-repudiation : Message signing

To sign a message X, MD5 is applied to X to obtain a message digest MD5(X). This digest is ciphered with the private key Spriv() of the sender to obtain the signature Spriv(MD5(X)) for the message X. X and Spriv(MD5(X)) are sent together.
The recipient, who has the sender's public key Spub(), applies that key to the signature to retrieve the message digest MD5(X) = Spub(Spriv(MD5(X))). He also applies MD5() to X to generate a message digest: if the transmitted and the generated message digests match, the message X is authenticated.
Encryption and signing can be used together. In this case IDEA is applied to the set (X, Spriv(MD5(X))). The transmitted message is then IDEA((X, Spriv(MD5(X))), k) together with Ppub(k).

2.6.3.8.6 Key management

PGP has sophisticated key management. There are many ways to get a public key out to other people. To use PGP for private correspondence with people who are already known, it is probably easiest to simply copy the PGP public key block into a message and send it to them as email. They would then copy it and paste it into a text editor to be saved as a keyfile. A public key can also be posted on a personal web page or FTP site, or put in a "finger" plan file on a server. It could also be distributed to recipients on a floppy disk.
To make a public key widely available, it can be added to a public key server's keyring. This is a site which allows people to download other people's public keys. There are many such sites, but let's go back to MIT: to submit a public key, go to http://www-swiss.ai.mit.edu/~bal/pkscommands.html
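The encryption and signing flows of 2.6.3.8.4 and 2.6.3.8.5 can be illustrated with a deliberately simplified Python sketch. It is not PGP: IDEA is replaced by a toy keystream cipher, RSA is shown as textbook modular exponentiation on tiny numbers, and the key pair is hard-coded, so every name and value below is illustrative only.

# Toy sketch of the PGP-style hybrid flow (NOT real PGP, NOT secure):
# - a random session key k encrypts the body (stand-in for IDEA)
# - the recipient's public key wraps k (stand-in for RSA key transport)
# - the sender's private key signs an MD5 digest of the body
import hashlib, secrets

def stream_cipher(data: bytes, key: bytes) -> bytes:
    # Toy keystream built by hashing key+counter; XOR makes it its own inverse.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.md5(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, out))

# Tiny textbook RSA parameters (p=61, q=53); real PGP keys are 384-4096 bits.
N, E, D = 3233, 17, 2753
def rsa_public(x):  return pow(x, E, N)    # plays the role of Ppub() / Spub()
def rsa_private(x): return pow(x, D, N)    # plays the role of Ppriv() / Spriv()

message = b"Meet at the usual place."

# Signing (2.6.3.8.5): cipher a (reduced, toy) MD5 digest with the private key.
digest = int.from_bytes(hashlib.md5(message).digest(), "big") % N
signature = rsa_private(digest)             # Spriv(MD5(X))

# Encryption (2.6.3.8.4): random session key k, cipher body, wrap k with Ppub.
k = secrets.randbelow(N - 2) + 2            # toy session key (real PGP: 128 bits)
ciphertext = stream_cipher(message, k.to_bytes(2, "big"))   # IDEA(X, k) stand-in
wrapped_key = rsa_public(k)                 # Ppub(k)

# Recipient side: recover k, decrypt the body, then verify the signature.
k2 = rsa_private(wrapped_key)               # k = Ppriv(Ppub(k))
plaintext = stream_cipher(ciphertext, k2.to_bytes(2, "big"))
assert plaintext == message
assert rsa_public(signature) == digest      # MD5(X) = Spub(Spriv(MD5(X)))
print("toy PGP-style round trip OK")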

2.6.3.8.7 PGP components

For confidentiality, authentication and non-repudiation, PGP uses the basic encryption and signing components described above. PGP also provides:
- data compression before encryption;
- generation of asymmetric key pairs:
  - RSA of three lengths, for low commercial grade (384 bits), high commercial grade (512 bits) and military grade (1024 bits),
  - Diffie-Hellman key pairs of lengths from 768 bits to 4096 bits;
- storage of public keys in a keyring;
- protection of the private keys in a file encrypted with a passphrase, which must not be guessable or breakable by a program designed to test words and phrases against the key. This file (secring.pgp) must always be backed up;
- optionally, a component (ARR: Additional Recipient Request) to allow enterprises to recover messages written by their employees;
- when a message is encrypted with the "wipe" option (-w), the original clear file is overwritten before being deleted, making it harder to undelete.
PGP has a good ergonomic design and its source code is free.
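The principle behind the passphrase protection of the private keyring can be pictured with a small sketch: a key is derived from the passphrase and used to encrypt the stored private key, so the file is useless without the passphrase. This is an illustration of the idea only; the derivation parameters and the toy XOR cipher are assumptions, not PGP's actual secring.pgp format.

# Sketch: passphrase-protecting a stored private key (principle only).
import hashlib, os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Slow, salted derivation so that word lists are expensive to test.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), salt, 100_000, dklen=32)

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration; a real tool would use IDEA/CAST/AES here.
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

private_key = b"-----toy private key material-----"
salt = os.urandom(8)
protected = xor(private_key, derive_key("correct horse battery staple", salt))

# Later: only the same passphrase (and salt) recovers the key.
assert xor(protected, derive_key("correct horse battery staple", salt)) == private_key
assert xor(protected, derive_key("guessed password", salt)) != private_key
print("private key recoverable only with the passphrase")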

3 Organisational, legal and juridical aspects of security

3.1 Organisational aspects

The organisational aspects are adapted from the AESOPIAN News Report "Best Practices in ICT", from the "Password Guidelines" of the University of Cincinnati, and from the "Guide de la sécurité des systèmes d'information à l'usage des directeurs" of the CNRS (Centre National de la Recherche Scientifique, France).

3.1.1 Security management organisation

3.1.1.1 Security management overview

Security risks are not dealt with only in a technical way. Technical measures are only part of the measures that can be taken to avoid or diminish security risks. Often, procedural arrangements have to be made (e.g. key management). Even when a technical or software approach is considered the most appropriate, organisational measures have to be taken to guarantee proper implementation. In many cases cheaper solutions for risk avoidance can be found by taking clear organisational and procedural measures instead of introducing a fully technical or software solution.
Not all information and information services are equally important to the organisation. It is the value of the information that has to be protected. The starting point is the proper organisation of information security: responsibilities, powers and duties must be clearly specified in decreasing levels of abstraction:
- Policy and/or codes of conduct: which objectives are being aimed for?
- Processes: what has to happen to achieve those objectives?
- Procedures: who does what and when?
- Work instructions: how is it done and when?
Management has to answer these questions on the basis of the available resources, i.e. people, time and money, and the required level of information security. Different organisational and management practices can be found depending on the type of business setting involved.
In principle the use of passwords, PIN codes, etc. is an important way to avoid unauthorised access to resources. However, the issuing of such passwords and codes, and their change, withdrawal, etc., needs some dedicated organisational and management contributions. Depending on the typical business setting, different organisational and management practices are known. Some basic practices and strategies, however, are more generally used and are applicable in different settings.
The role of Trusted Third Parties (TTPs) and electronic certificates is a special security management issue relevant for the development of "Information and Communication Technology" (ICT) systems and services, and the development of electronic commerce in particular. TTPs will not only play a role in networked environments to ensure that inter-organisational transactions can take place in a reliable way; because of their expertise, their role in the management and control of information and infrastructure within all types of organisations will also increase in the future.

3.1.1.2 Security organisation

The right level of security is the one beyond which any additional effort costs more than the expected benefits. The cost of prevention must therefore be weighed against the cost of a possible security incident. This evaluation requires a clear awareness of the damage that a malicious act can cause, compared with the advantages provided by a well-adapted organisation.

3.1.1.2.1 Security : a management function

The security of information systems is a transverse discipline that covers very varied aspects. It is therefore the responsibility of "non-specialised engineers" who have managed to acquire several specialities, specifically adapted to the equipment for which they are responsible. These engineers can carry out their mission efficiently only with the steadfast support of their director. The director has to be informed of the risks and vulnerabilities of his information system; he has to be aware of the risks in order to define, with the assistance of the experts, the security policy of the establishment and to have it applied. It is the director who decides. He must know that nobody other than him can take organisational decisions that affect security.
Security is part of operational safety (and reciprocally...). An information system that has not been correctly designed is impossible to secure properly. A methodological approach therefore has to be adopted from the design phase onwards; this approach is part of a quality process. It is indispensable to show each user the constraints that he has to respect and to make him sign a charter of "good usage of the computing resources".

3.1.1.2.2 Staff organisation

A possible model of organisation is the following. The personnel in charge of security are directed by a security manager who reports directly to the director of the establishment. They are divided into two groups: a functional structure and an operational structure, the latter being in charge of implementing the security of the information systems, the network, the premises, etc. Depending on the size of the enterprise and its distribution over one or several sites, these structures can be more or less important. They will consist of a team at the head office of the establishment and of correspondents at the other sites.
The functional structure is directed by the security manager; the people responsible for the operational structure belong to it. It is in charge of elaborating a model of the security policy and of the dashboard. It has to define what has to be protected, to estimate the costs of damage and of the security policy, to define the user rights, etc.
The operational structure implements the security policy defined by the functional structure: it installs and updates the protection systems, detects attacks, distributes alerts in case of intrusion or malicious damage, implements fallback procedures, and carries out security audits. It works in co-ordination with the system managers and the network manager.

3.1.1.2.3 Security actions

- Information and recommendations distribution
The distribution of information and recommendations can take several forms:

- a "computer security" bulletin, in paper or electronic (Web) form;
- information servers putting on-line official or general recommendations, courses, articles, and hyperlinks to servers specialised in security;
- closed distribution lists, or lists with access control, to distribute alerts or announce the availability of new tools or antivirus software;
- the organisation of awareness meetings or courses.
In particular, it is necessary to make known the instructions on what each person has to do if he is the victim of, or a witness to, a security incident. Antivirus software has to be installed or, at least, put at the disposal of users, and has to be updated regularly. Software tools for security tests or audits can also be distributed.
- Security awareness campaigns
With a methodology adapted to the different sites and services, the aim of these operations is to sensitise units, to help them take stock of their vulnerabilities, to improve and organise their security, and to propose corrective actions and security tools.
- Reaction to a security incident
A security incident is always, a priori, a serious event. If a user discovers traces suggesting malicious damage on a system, he has first of all to inform his director. It is also necessary to prevent the damage from spreading by warning those whose role is to maintain the security of the network. It is equally necessary to isolate the systems that have been violated, to assess the damage and to record everything that may help find the origin of the aggression. Only after that can repairs begin. In brief, during an incident, it is necessary:
1) to disconnect the suspected machines from the network, or to put in place a filter that prevents all access from the exterior;
2) to make a backup of the system to preserve traces of the incident;
3) to warn the person in charge of security in the enterprise and the nearest CERT;
4) to assess the damage; in particular it is necessary to check all machines of the network and to verify whether a sniffer is installed: in that case the passwords of all users on all stations must be changed;
5) to reinstall the system and user accounts;
6) to give no information on the incident to unauthorised third parties.

3.1.1.2.4 Data processing resources and network management

- Installation
In order to be able to re-establish a correct system after an incident, it is necessary to apply an installation procedure to all systems:
1) make a backup of the configuration files, and automate this procedure so that it can be redone after an update or the installation of patches;
2) keep up to date the list of software (operating system, applications, services) with their location, their supplier and their licence number; it is preferable that this list be centralised;
3) remove useless network services (daemons);
4) install the software necessary to provide logged and controlled accesses.
- Security patches
As soon as a security weakness is announced by a CERT, it is immediately exploited. It is therefore necessary to apply the corrections (patches) as early as possible, because it is then a pursuit race
between the system administrator, who has to keep his machines permanently up to date, and pirates who hope, by exploiting the latest weakness, to catch out a certain number of systems. However, the server must be chosen with the greatest prudence when downloading software. Some sites booby-trap the software that they offer (Trojan horses, logic bombs, etc.). It is therefore necessary to trust only recognised and well-known sites and to verify the "signatures". This precaution applies to all security tools, which must a priori be distrusted, and it is even more true for corrective software.
- User account management
1) Each account has to belong to a clearly identified user. All "disguised" access (guest, visitor, ...) must be proscribed.
2) For each new user, an entry procedure has to be put into service: signature of the charter, attribution of disc space, machine and account, allocation of resources.
3) Verify regularly that all open accounts are still current. Accounts unused for more than three months have to be closed.
4) The strength of passwords has to be verified regularly (with software such as "crack"). It is recommended that users change their passwords regularly (which implies a procedure to put in place and manage).
5) An exit procedure for users must be created: the system administrator has to be informed immediately of the departure of a user; temporary accounts (PhD students, trainees, visitors) cannot be left adrift.

3.1.1.2.5 Active protection

- Physical access control
Some files have a particular degree of confidentiality. This is the case of nominative files, for example medical files. The same particularity is found in industrial collaborations where confidentiality clauses are imposed by contract, or in laboratories that use special equipment. The person in charge then has to propose specific means to implement particular safety measures. These can be stricter access controls, or data and/or communications ciphering; it can also be the decision not to connect a "sensitive machine" to the network. Combined with access control, the isolation of a system is the surest known means of protecting the data it contains. While some servers have to be physically protected and placed in premises with secure access, it can be judicious to define different zones in the establishment: a zone with free access and a zone where access is controlled. More particularly, it must not be forgotten that a computer, especially a PC, is an attractive prey for thieves. There is hardly any other means to protect it than to lock the offices and to control the entrances of the establishment.
- Backup copies
It can never be said enough that backup copies must be carried out regularly. The best approach is to lay down precise rules that ensure they are suitable:
- which files are to be saved?
- what is the periodicity?
- what is to be recovered?
These rules also define where backup copies have to be stored, in such a way that:
- in case of disaster or theft, they are not lost with the machine(s). It is an obvious point, not always shared, that backup copies must not be kept near the system which they have
to protect;
- they are not within reach of the first person to come along, especially if they contain confidential files: there is no point in protecting a system well if its backups are easily accessible to everyone!
It is necessary to verify backup files. Experience shows that backup files that have never been tested in restoration can hold big surprises.
- Detection of attacks
An efficient protection cannot be limited to strengthening the "solidity" of systems. One has to accept that, sooner or later, the system will be attacked successfully, often in a totally unexpected manner and with unforeseen consequences. It is necessary to be able to detect these attacks, to contain them and to ensure that the damage is limited. Recording the abnormal activity of the network gives a first possibility of detection. It is necessary:
- to install devices for detecting intrusion attempts, both on entry equipment (alarms generated by filters on routers) and on stations (tcp-wrapper messages, for example);
- to examine these alarms daily.
On stations, it is necessary:
- to ensure that the accounting is effectively checked, so as to notice accounts that no longer consume resources and close them, as well as those whose consumption is unusual;
- to ensure that systems are checked regularly so as to detect abnormal modifications. These checks have to be carried out more attentively in periods of alert (announcements of intrusions on linked sites, for example).
- Alert procedure
It is necessary to define what course of action shall be taken in case of intrusion or malicious damage. A user who detects an "abnormal" fact has to warn the security correspondent of the service or of the site. The latter has to co-ordinate, with his director, the measures to take according to the recommendations described above (§ 3.1.1.2.3).
- Fallback and restart procedures
There is nothing more distressing than a security incident when you do not know what to do; this is why it is necessary to have thought in advance about the behaviour to adopt. It is good to have considered some typical incident consequences and to have studied possible answers. In some services, the consequences of a dysfunction can be too serious to be neglected: it is sometimes necessary to maintain a certain level of service, even degraded, during the phase of restoring the system to good working order. This question arises, for example, for vital servers such as file or messaging servers.
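The "detection of attacks" and daily examination of alarms described above can be partly automated. The sketch below scans an authentication log for repeated failed logins from the same source; the log file name, the message format and the threshold are assumptions made for the example, not features of any particular tool.

# Sketch: flag sources with repeated failed logins in an authentication log.
# The "FAILED LOGIN ... FROM <host>" format and the threshold are assumptions.
import re
from collections import Counter

THRESHOLD = 5
pattern = re.compile(r"FAILED LOGIN .* FROM (\S+)")

failures = Counter()
with open("auth.log") as log:            # hypothetical log file
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for host, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {host} - possible intrusion attempt")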

3.1.1.2.6 Methodological approach

The three phases of a methodological approach that matter here are:
- the elaboration of a model;
- the elaboration of a security policy;
- the elaboration of a dashboard.
All methods have their own particularities, but all derive from a model, which is a way to formulate the problem correctly. All end up stating what has to be done (the security policy) and give means to measure the gaps between what is wished for and what has really been obtained (the "dashboards").
- Model elaboration
Models define, at each moment of the life cycle of the information system, rules on the "resources", the "constraints", the "functions" and the "products". They express:
- what is to be protected and why,
- the needed level of protection,
- against what to protect,
- how to protect,
- the effort that one is prepared to make to ensure this protection.
This type of model building makes it possible to describe precisely "what one wants" with the means available within the existing constraints. It gives a "vision" or a measure of the threats, risks and vulnerabilities, from which criteria of "mastery of security" (risk management) can be inferred. A threat is a danger that exists in the environment of a system independently of that system. A vulnerability is a weakness of the system that makes it sensitive to a threat (bug, bad configuration, ...). The risk is the probability that a particular threat could exploit a given vulnerability of the system. To manage the risk is to take these threats and vulnerabilities into account. A piece of information presents a certain vulnerability; ensuring a given level of protection for it has a certain cost. The gap between the potential threat and its level of protection corresponds to the residual or accepted risk.
- Security policy elaboration
To determine a security policy is to define objectives (what has to be protected), procedures and an organisation according to the available means. The process is recursive: after a security problem, the policy is adjusted; procedures, means and sometimes the organisation are adapted; sometimes it is necessary to reduce the objectives. It is important to define the rules of the model correctly: what is authorised and what is not (it is forbidden to read one's neighbour's mail without being invited to, even if the neighbour has not managed to protect it correctly...). It is absurd, but often seen, to want to lock entries and to define prohibitions when no rules have been defined to which these actions could refer.
The security policy is designed from the security model:
- potential or real threat analysis;
- identification and analysis of vulnerabilities (audit, quality control, ...);
- evaluation of risks and determination of the admissible level of risk.
It is achieved by:
- the integration of tools and services for system or network security (audit, access control, identification, antivirus, expert systems, security kernel, ...);
- "software/system" validation (formal techniques, quality analysis, dynamic and static tests, etc.);
- the evaluation and certification of systems and products.
Security must not remain static, because all defences can be bypassed; this is why a good security policy always consists of two parts:
1) A priori security ("passive policy"): the "armour plating" of the system. It is characterised by the elaboration of an explicit security policy, an organisation adapted to this policy, procedures and working methods, techniques and tools...
2) A posteriori security ("active policy"): defence "in depth"... It consists, for example, in:

- supervising the protection means in order to control their efficiency (but also the efficiency of the security policy);
- detecting attacks and bad configurations by recording accesses to sensitive services, by deploying automated intrusion detection, etc.;
- replying with corrective actions: termination of sessions, dynamic reconfiguration of access control systems, logging of sessions;
- installing traps.
- Dashboard elaboration
It is necessary to design statistical indicators so as to constitute the "dashboards" that make it possible to evaluate the impact of the security policy on the quality of the work, the organisation and the management. These dashboards constitute a real metric of security.
- They measure the residual vulnerability of an information system and allow its evolution to be estimated.
- They evaluate the efficiency of the security policy.
- They indicate modifications of the environment.
- They warn of the appearance of new weaknesses.
This metric makes it possible to steer the security policy by revealing the adaptations that necessarily become apparent as time passes. The dashboards are also indispensable "decision supports". Indeed, they give the means to make real assessments of the soundness of the choices that have been made, and therefore to justify the investments that have been agreed compared with the gains realised. Without them, security can only be seen as a useless cost by the "decision-makers".

3.1.2 Password Management

Information handled by computer systems must be adequately protected against unauthorised modification, disclosure, or destruction. Effective controls for logical access to information resources minimise inadvertent employee error and negligence, and reduce opportunities for computer crime. Each user of a mission-critical automated system is assigned a unique personal identifier for user identification. User identification is authenticated before the system may grant access to automated information.

3.1.2.1 Password Selection

Passwords are used to authenticate a user's identity and to establish accountability. A password that is easily guessed is a bad password: it compromises the security and accountability of actions taken by the logon id that represents the user's identity.
Today, computer crackers are extremely sophisticated. Instead of typing each password by hand, crackers use personal computers to make phone calls to try the passwords, automatically re-dialing when they become disconnected. Instead of trying every combination of letters, starting with AAAAAA (or whatever), crackers use hit lists of common passwords such as WIZARD or DEMO. Even a modest home computer with a good password guessing program can try thousands of passwords in less than a day's time. Some hit lists used by crackers contain several hundred thousand words. Therefore, any password that anybody might guess to be a password is a bad choice. What are popular passwords? Your name, your spouse's name, or your parents' names. Other bad passwords are these names spelled backwards or followed by a single digit. Short passwords are also bad, because there are fewer of them; they are more easily
guessed. Especially bad are "magic words" from computer games, such as XYZZY. Other bad choices include phone numbers, characters from favourite movies or books, local landmark names, favourite drinks, or famous people.
Some rules for choosing a good password are:
- Use both uppercase and lowercase letters if the computer system considers an uppercase letter to be different from a lowercase letter when the password is entered.
- Include digits and punctuation characters as well as letters.
- Choose something easily remembered so it doesn't have to be written down.
- Use at least 8 characters. Password security is improved slightly by having long passwords.
- It should be easy to type quickly so someone cannot follow what was typed by watching the keyboard.
- Use two or three short words and combine them with a special character or a number, like "barc#stut" or she-wolf752roma.
- Put together an acronym that has special meaning to you, like NOTFSW (None Of This Fancy Stuff Works) or AVPEGCAN (All VAX Programmers Eat Green Cheese At Night).
Some organisations use randomised passwords, which are automatically generated. They are very difficult to guess, but also to memorise.
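These rules can be turned into a simple automated check, as sketched below; the rules encoded here and the tiny "hit list" are illustrative only, not an official policy or a complete cracker word list.

# Sketch: check a candidate password against the selection rules above.
import string

COMMON_WORDS = {"wizard", "demo", "xyzzy", "password"}   # stand-in for a cracker hit list

def password_problems(pw):
    """Return a list of reasons the password is weak (empty list = acceptable)."""
    problems = []
    if len(pw) < 8:
        problems.append("shorter than 8 characters")
    if pw.lower() == pw or pw.upper() == pw:
        problems.append("does not mix upper and lower case")
    if not any(c.isdigit() or c in string.punctuation for c in pw):
        problems.append("contains no digit or punctuation character")
    if pw.lower() in COMMON_WORDS:
        problems.append("appears in a common-password list")
    return problems

for candidate in ("WIZARD", "barc#stut", "she-wolf752Roma"):
    print(candidate, "->", password_problems(candidate) or "passes these checks")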

3.1.2.2 Password Handling

A standard admonishment is "never write down a password." You should not write your password on your desk calendar, on a Post-It label attached to your computer terminal, or on the pull-out drawer of your desk. A password you memorise is more secure than the same password written down, simply because there is less opportunity for other people to learn a memorised password. But a password that must be written down in order to be remembered is quite likely a password that is not going to be guessed easily. If you write a password in your wallet, the chances of somebody who steals your wallet using the password to break into your computer account are remote. If you must write down a password, follow a few precautions:
- Do not identify the password as being a password.
- Do not include the name of the account or the phone number of the computer on the same piece of paper.
- Do not attach the password to a terminal, keyboard, or any part of a computer.
- Mix some "noise" characters into the written version of the password, or scramble it in a way that you remember, so that the written version differs from the real password.
- Never record a password on-line and never send a password to another person via electronic mail.
Note: this information on passwords was adapted from the book Practical UNIX Security by Simson Garfinkel and Gene Spafford.

3.1.2.3 DCE architecture

In the DCE architecture, passwords are centralised, stored on a secured authentication server and never transmitted over the network. Users need only one password for all the application servers that they have to access. In this way, password management is simplified.

3.1.3 Risk analysis management

Systems that provide a 100 percent security guarantee are in general extremely expensive and are built only for life-critical applications. So one has to find a good balance between avoiding risks and not spending too much on measures that help to minimise security risks. To find this balance a risk analysis should be made at several stages within a project. Several methods can be followed for such a risk analysis, depending on the specific objects that the ICT (Information and Communication Technology) service/system consists of and the actual business processes that are being supported.
In practice there are three general goals for information security: confidentiality, integrity, and availability. In terms of measures, there are several best practices that are in fact business standards (back-up procedures, ring-structured networking, passwords, etc.). Several of these well-known best practices are mentioned below. Making these measures work and organising them well is, however, not always a trivial matter. For telematic systems and services in particular, risks concerning the secure and reliable transmission of messages, data and information, and access to resources by outsiders, are almost always a matter of concern. Several examples are included of approaches that have been used in practice to avoid risks related to the distributed nature of ICT systems and services.
Several tools exist to help organisations perform risk analysis. Such tools may be helpful in many cases and often provide many ideas for improving the level of security. It is important, however, to be careful not to follow the recommendations given by such tools too blindly, as the tools are often made with a specific type of organisation in mind (e.g. bank, government organisation, computing centre).

3.1.3.1 Incident cycle

Security measures focus on a specific moment in the incident cycle (event cycle). The following steps are distinguished in the incident cycle: firstly, there is a risk that something might occur; if it occurs, we speak of a security incident (violation); this may result in damage (to information or to assets) that has to be repaired. These steps have to be attended to with suitable security measures. The choice of measures will also depend on the importance attached to the information.
In the first place, preventive security measures are used to attempt to prevent an incident from occurring. The most well-known example of preventive measures is the allocation of access rights to a limited group of authorised people. The further requirements associated with this measure include the control of access rights
(granting, maintenance and withdrawal of rights), authorisation (who is allowed to access which information and using which tools), identification and authentication (who is seeking access), and access control (only authorised employees are allowed access).
Measures are also taken in advance to minimise any damage that occurs. These are reduction security measures. Familiar examples of reduction measures are making regular backups and the contingency plan.
If an incident occurs, it is important to discover it as soon as possible: detection. A familiar example of this is monitoring, linked to a warning. Repressive measures are then used to counteract any continuation or repetition of the incident. For example, an account or network address is temporarily blocked after numerous failed attempts to log on.
The damage is repaired as far as possible using corrective measures. For example, corrective measures include restoring the backup, or returning to other stable situations (roll back, back out). Fallback can also be seen as a temporary corrective measure.
In the case of serious incidents, an evaluation is necessary in due course, to determine what went wrong, what caused it and how it can be prevented in the future. Moreover, a reporting procedure for security measures, and for evaluating the effectiveness and efficiency of the present security measures, can be established on the basis of an insight into all incidents. This involves using logging files and audit files, and of course the records of incident management.

3.1.3.2 Performing risk analysis when developing an ICT system

ICT risk analysis should not take place on only one occasion, e.g. during the development of an ICT service or system, but at every stage of its further change, upgrading, extension or coupling to other systems or services. On any such occasion new security risks may occur. The first analysis, however, when a new system or service is being developed, carried out by an assessor, is usually the most time-consuming and requires the most resources. Subsequent analyses will (to some extent) be based on previous work and the time required will decrease as expertise is gained.
Unless we quantify the risk or impact and the probability of its occurrence, we have little basis for making an informed decision. Estimating the annual value of a loss provides the information necessary to determine whether security safeguards are needed and, if so, the cost that can be allocated to safeguarding that item. Additionally, estimating the annual loss associated with each risk provides a common denominator for determining the magnitude of each risk. An organisation may then develop safeguards against the high risks. After estimating the value of a loss, the risk analysis team is ready to identify alternative security safeguards and provide recommendations for cost-effective security solutions. Security measures designed during the development of a system are generally more
effective than those superimposed later. A risk analysis should be included in the conceptual analysis and design phases of every ICT system.

3.1.3.3 Multidisciplinary team to conduct the ICT system/service risk analysis

The success of a security and risk management programme depends on management involvement. This involvement is shown by endorsing the selection of the risk analysis team members, formally delegating authority and responsibility for the risk analysis task, explaining the purpose and scope of the risk analysis, and expressing support for information security and the risk analysis process to all levels of the organisation. Demonstrating commitment to the risk analysis process includes reviewing the findings produced by the risk analysis team. Milestones should be established at various points in the risk analysis process so that management can become meaningfully involved in reviewing progress and decision-making.
Risk analysis is best performed by a team of individuals representing the following disciplines:
1. data processing operations management
2. systems programming (operating systems)
3. systems analysis
4. applications programming
5. database administration
6. auditing and controlling
7. physical security
8. communication networks
9. legal experts
10. functional owners and line managers
11. administrative organisation and procedures
12. system users
13. (ICT) security
These disciplines should be represented on the team, or people from them should at least be available to provide input. Users can provide valuable input in terms of how a particular system is used and can identify vulnerabilities that may not be apparent to custodians or owners of information. Each participant should understand what is to be achieved.

3.1.4 Network security management

As information will travel across networks that are not under the control of the user, all types of security risks have to be dealt with. Connections may fail and the transfer may not take place at all. To provide reliable connections, several best practices in terms of ring-type networking, buffering, etc. are known. Not only are measures to avoid such problems important; problem identification procedures are also necessary, and corrective actions are sometimes needed to deal with connection failures and to make sure that an appropriate alternative connection is chosen and made available. Whether or not a message has arrived at the right destination can be dealt with by using several types of confirmations at application and/or network level. However, further measures are necessary when the origin of messages and information has to be ensured and when a possible malicious change in the information or message is being
suspected.
Tapping and intercepting of messages is a particularly important issue. Not only are there financial risks, when credit card numbers are monitored and used for fraud and swindle, but a serious infringement of personal privacy may also occur. Fraud may also occur when someone tries to masquerade as and impersonate another organisation. Authenticity of sources and information is also necessary to establish the credibility of the information highway as the backbone for electronic commerce on a large scale. In this respect the further emergence of trusted third parties providing reliability services is an important development.

3.1.4.1 Security measures for network services

Security measures for networks require in principle the same approach as managing the security of computer systems. The problems may be somewhat special in terms of the different nature of the threats (especially with regard to confidentiality and continuity), the shared use of the infrastructure (friend and foe share the same medium), the use of external services and the involvement of other people with commanding roles. Additional measures, such as encryption techniques, may be considered on the basis of a risk analysis.
It is necessary to take appropriate security measures if network services are used for exchanging information, such as EDI or e-mail. The necessary measures can be laid down in, for example, an Interchange Agreement (or, for internal use, guidelines). For internal use, groupware applications are becoming more common, including electronic office applications and the electronic agenda. These developments (and in fact all changes in the use of IT) should be assessed in terms of their impact on the nature of the threats, the dependencies, and how they are embedded in the current organisation.

3.1.4.2 Penetration team to test the on-line service security

Most ICT systems and services link their physically separated local area networks into a corporate wide area network. In addition, many ICT service providers provide remote access to their LANs over dial-in or Internet connections. Finally, many organisations connect their internal networks to public networks, most notably the Internet. To ensure that outsiders cannot penetrate private corporate systems, and either cause harm or gain unauthorised access to confidential data, each of these network connections should be protected by a secure gateway. In order to test the external access, an outside team could help to test the penetrability of the on-line service. Such tests may include:
- Remote Internet firewall penetration tests, including: IP address probing; TCP and UDP port probing; various protocol-based denial of service attacks (ICMP flooding and oversize ICMP packets, OOB data, port data flooding, DNS spoofing, `teardrop', `land' and `smurf' packets, etc.).
- Various service-based penetration tests (sendmail exploits, brute-force password attacks, WinNT ftp and SMB exploits, buffer overrun exploits, X-Windows and NFS exploits, etc.); mail bombs and other data floods.
- On-site firewall penetration tests, including: packet sniffing; IP address spoofing; source-routed packets; session hijacking; bogus ARP attacks.
- Telephony penetration tests, including: war-dialers; brute-force password attacks.
The on-site gateway could also be evaluated, including: firewall configuration validation; terminal server configuration validation; review of remote-access and Internet policies
and procedures. Operating system configuration review.
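TCP port probing, mentioned among the firewall tests above, can be illustrated with a very small sketch that simply tries to open connections to a handful of ports. The target host, the port list and the timeout are arbitrary example values, and such probes must of course only be run against systems one is authorised to test.

# Sketch: minimal TCP port probe of the kind used in firewall penetration tests.
import socket

TARGET = "host.example.org"            # example target; probe only with authorisation
PORTS = (21, 23, 25, 80, 110, 443)

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=2):
            print(f"port {port:5d} open")
    except OSError:
        print(f"port {port:5d} closed or filtered")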

3.2 Physical Security

It is a given in computer security that if the system itself is not physically secure, nothing else about the system can be considered secure. Networks, IT systems, buildings, assets and people are all vulnerable to physical harm. Damage may be the result of a deliberate attack, of an accidental (fire, lightning, water damage, ...) or environmental impact, or a consequence of an attack directed at others nearby. Magnetic fields can damage data on magnetic disks or tape; magnetic fields are produced by electric motors, cables and telephones as well as speakers. Excessive heat and moisture are also bad for disks and tapes. Dirt and smoke can also lead to data loss, and can damage the drive in which the tape or disk is placed. So where the data is stored is very important.
The risk associated with each element will vary throughout the business, and so the controls applied should be balanced against the risk presented. Elements critical to the operation of your network or to continued service to customers are clearly high priority and should be well protected. Other elements may present a higher risk than expected when an informed analysis of diversity, redundancy, contingency plans and resulting impact is undertaken. However, it is often fairly easy for someone to get access to systems they are not supposed to have access to, simply by walking up to a valid user's desk. This can be the cleaning staff or a disgruntled (ex-)employee making a visit. The subject of internal security is often overlooked. This is the easiest type of security to implement and should definitely be included in any security plan.
• Console security
Machines and consoles need to be secure. A person can simply turn off a computer if he has access to it. If intruders have access to the console, they can often interrupt the boot process to get access to the root prompt. If this doesn't work, they can keep guessing the root password in the hope of compromising the system. For these reasons (and more), the computers and associated consoles should be kept in a secure room. A limited number of people should have access to this room, of course with a limited number of keys. Some places actually have security guards let people into the computer rooms to guarantee secure access.

• Data Security
Companies that value their data need a detailed backup and recovery scheme. This includes on-site backups for the least amount of downtime, a copy of this data off site in case of computer room disasters, as well as contingency plans. Unfortunately, an easy way to get access to a company's data is to gain access to backup tapes and sensitive printouts. Hence, all sensitive information should be stored in locked cabinets. Backup tapes sent off site should be in locked containers. Old sensitive printouts and tapes should be destroyed.
To protect against computer damage from power outages (and spikes), be certain to have your computers on a UPS. This provides consistent power, protects against outages, and protects the computer from power spikes. For non-production systems, there should be an automatic way to shut down the computer if the power
has switched to the UPS for more than half the time the UPS is rated to supply. To prevent snooping, secure network cables from exposure.
• Users practice secure measures
Always have users lock their screen when away from their desk. It is best if they log off their terminal/workstation at night. There should be no written passwords or password hints on a user's desk. If users are using X, verify that they are using xauth/xhost to prevent others from reading their screen.
• NO welcome banner on site
Court cases have shown that initial banners must NOT say "welcome". Your banner should say something like: "Only authorised access allowed; violators will be prosecuted".
The Web site http://www.geocities.com/CollegePark/Center/8086/physical_security.htm gives a checklist for planning physical security. Its main points are listed below:
1) Types of controls and level of effectiveness in operations
2) Perimeter and barrier protection
3) Identification of importance of product, process, information, etc.
4) External planning and assessment factors: security environments
- Assessment of the business or facility in relation to the surrounding neighbourhood, business district, industrial park, and other related settings
- Assessment of factors pertaining to freedom of access and factors related to layout and design considerations
- Assessment of the potential for unauthorised entry to high risk or sensitive areas

5) Procedural security and policy formulation
- Identification of essential needs for a written security policy with well-defined procedures
- Procedures and rules specify operational areas
- Procedures, rules, and policies are clear-cut and understood with regard to all levels of operation in high risk locations within the company
- Procedure security planning
6) Sensitive document security planning considerations
7) Assignment of levels of responsibility for the security and protection of sensitive documents and papers
- Establishing internal controls, degree of security needed, and levels of responsibility
- Establishing internal controls and procedures
8) Basic physical security planning
9) What measures have been taken to protect the areas containing sensitive documents and papers?
10) Safes, vaults, and safe rooms
11) General considerations for safes and vaults
- Review and analyse current usage, design, security aspects, and related criteria concerning all safes and vaults
- Procedures for safeguarding safe and vault areas

12) Personnel control and security planning
- Screening and background investigations
- Supervision, monitoring, and evaluation
- Assess training, education, and other security needs
- What special security needs might be added to upgrade and enhance security of the area?
13) Computer-Related Natural Risks

3.3 Juridical and legal issues 3.3.1 Signatures A signature is not part of the substance of a transaction, but rather of its representation or form. Parties often represent their transactions in signed writings. Signing writings and other formalistic legal processes or customs serve the following general purposes: • Evidence, a signature identifies the signer with the signed document; by signing the signer marks the text in her own unique way and makes it attributable to her. • Ceremony, signing calls to the signer’s attention the legal significance of his act, and thereby helps prevent “inconsiderate engagements”. The act of signing may satisfy a human desire to mark and event. • Approval, in certain contexts defined by law or custom, a signature expresses the signer’s approval or authorisation of the writing and/or the signer’s intention that it have legal effect. Legal systems vary, both among themselves and over time, in the degree to which a particular form, including one or more signatures, is required for a legal transaction. If a particular form is required, legal systems also vary in prescribing consequences for failure to cast the transaction in the required form. The statute of frauds of the common law tradition (common law versus civil law latin tradition), for example, requires a signature, but does not render a transaction invalid for lack of one. Rather, it makes it unenforceable in court, and the persistent notion that the underlying transaction remained valid led case law to greatly limit the practical application of the statute. In general, the trend in most legal systems for at least this century has been toward reducing formal requirements in law, or toward minimising the consequences of failure to satisfy formal requirements. Nevertheless, sound practice remains to formalise a transaction in a manner that best assures the parties of its validity and enforceability. In current practice, that formalisation usually entails documenting the transaction and signing or authenticating the documentation. However, the centuries-old means of documenting transactions and creating signatures are changing fundamentally. Documents continue to be written on paper, but sometimes merely to satisfy the need for a legally recognised form. In many instances, the information exchanged to effect a transaction never takes paper form. It also no longer moves as paper does; it is not physically carried from place to place but rather streams along digital conduits at a speed impossible for paper. The computer-based information is also utilised differently than its paper counterpart. Paper documents can be read efficiently only by human eyes, but computers can also read digital information and take programmable


actions based on the information. The law has only begun to adapt to the new technological forms. The basic nature of the transaction has not changed; however, the transaction's form, the means by which it is represented and effected, is changing. Formal requirements in law need to be updated accordingly. The legal and business communities need to develop and adopt rules and practices which recognise in the new, computer-based technology the effects achieved or desired from the paper forms. To achieve the basic purposes of signatures outlined above, the following effects are needed:
• Signer authentication: to provide good evidence of who participated in a transaction, a signature should indicate by whom a document or message is signed and be difficult for any other person to produce without authorisation.
• Document authentication: to provide good evidence of the substance of the transaction, a signature should identify what is signed, and make it impracticable to falsify or alter, without detection, either the signed matter or the signature.
• Affirmative act: to serve the ceremonial and approval functions of a signature, a person should be able to create a signature to mark an event, indicate approval and authorisation, and establish the sense of having legally consummated a transaction.
• Efficiency: optimally, a signature and its creation and verification processes should provide the greatest possible assurance of authenticity and validity with the least possible expenditure of resources.
The concepts of signer authentication and document authentication comprise what is often called a "non-repudiation service" in technical documents. The non-repudiation service of information security "provides proof of the origin or delivery of data in order to protect the sender against false denial by the recipient that the data has been received, or to protect the recipient against false denial by the sender that the data has been sent." In other words, a non-repudiation service provides evidence to prevent a person from unilaterally modifying or terminating her legal obligations arising out of a transaction effected by computer-based means.
3.3.2 Digital signatures
Electronic or digital signatures present two fundamental difficulties from a legal point of view:
• The signature is a legal instrument defined by law. The question is: is a digital signature a legal signature in the eyes of the law? The answer differs from country to country. In most cases it is positive, but it is necessary to construct a legal demonstration to validate the use of an electronic signature.
• The weight of the electronic signature is different in common law countries and in civil law countries. There is, indeed, a problem with the dematerialisation of written documents: is it authorised or not? In common law countries, an electronic signature validates the whole process of dematerialisation. In civil law countries, the tendency is that if the written document has a signature, the electronic document has to have an electronic signature. But it is first necessary to be able to dematerialise the written document; sometimes this dematerialisation is forbidden by law, and in that case the digital signature serves no purpose.
3.3.2.1 Legal status of documents signed with digital signatures
If digital signatures are to replace hand-written signatures they must have the same legal


status as hand-written signatures, i.e., documents signed with digital signatures must be legally binding. Digital signatures have the potential to possess greater legal authority than hand-written signatures. If a ten-page contract is signed by hand on the tenth page, one cannot be sure that the first nine pages have not been altered. However, if the contract was signed with digital signatures, a third party can verify that not one byte of the contract has been altered.
In the USA, NIST has stated that its proposed Digital Signature Standard should be capable of "proving to a third party that data was actually signed by the generator of the signature." Furthermore, U.S. federal government purchase orders will be signed by any such standard; this implies that the government will support the legal authority of digital signatures in the courts. Some preliminary legal research has also resulted in the opinion that digital signatures would meet the requirements of legally binding signatures for most purposes, including commercial use. However, since the validity of documents with digital signatures has never been challenged in court, their legal status is not yet well-defined. Currently, if two people want to digitally sign a series of contracts, they might first sign a paper contract in which they agree to be bound in the future by any contracts digitally signed by them with a given signature method and minimum key size.
3.3.2.2 European Parliament and Council Directive on Electronic Signatures
The directive proposal was published during 1998 and is intended to harmonise the legal framework at the European level so as to avoid the development of obstacles to the functioning of the internal market. The Directive requires Member States to ensure that "electronic signatures" which are based on a "qualified certificate" issued by a certification service provider which fulfils certain requirements are, on the one hand, recognised as satisfying the legal requirement of a hand-written signature and, on the other, admissible as evidence in legal proceedings in the same manner as hand-written signatures. Ensuring legal recognition of electronic signatures and certification services across borders is regarded as the most important issue in this area. This involves clarifying the essential requirements for certification service providers, including their liability.
However, several consultations with trade associations and other groups have identified some issues in the directive that would be developed in either a European or a national context (that is, as a national implementation choice). The current directive therefore causes some controversial reactions from the different affected groups (notaries, trade companies, software vendors, chambers of commerce, tax officers…). The frictions reported in the news and related documents can mainly be grouped around the following items:
1. Identity vs. Authorisation Certificates.
2. Issuance of certificates.
3. Recognition, Accreditation and Non-discrimination.
4. Revocation.
5. Liability.
6. Data Protection.
7. Consumer rights.
8. Evidence.
9. Agency.
3.3.2.3 Digital signature bringing into play
A digital signature operates for electronic documents like a hand-written signature does for printed documents. The signature is an unforgeable piece of data that asserts that a named


person wrote or otherwise agreed to the document to which the signature is attached. A digital signature actually provides a greater degree of security than a hand-written signature. The recipient of a digitally signed message can verify both that the message originated from the person whose signature is attached and that the message has not been altered, either intentionally or accidentally, since it was signed. Furthermore, secure digital signatures cannot be repudiated; the signer of a document cannot later disown it by claiming the signature was forged. In other words, digital signatures enable "authentication" of digital messages, assuring the recipient of a digital message of both the identity of the sender and the integrity of the message.
3.3.2.3.1 How to sign and verify a message
First a message digest is created by applying a hash function to the message. The message digest serves as a "digital fingerprint" of the message; if any part of the message is modified, the hash function returns a different result. Then the message digest is encrypted with a secret key, generally the private key of the sender. This encrypted message digest is the digital signature for the message.
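To illustrate the "digital fingerprint" property, the following minimal Python sketch uses the standard hashlib module; the two example messages are purely illustrative.

    import hashlib

    # A SHA-256 message digest is a fixed-length fingerprint of the message.
    original = b"Pay 100 EUR to the supplier"
    tampered = b"Pay 900 EUR to the supplier"

    digest_original = hashlib.sha256(original).hexdigest()
    digest_tampered = hashlib.sha256(tampered).hexdigest()

    print(digest_original)  # 64 hexadecimal characters
    print(digest_tampered)  # a completely different value

    # Any modification of the message, however small, changes the digest,
    # so a signature computed over the digest also protects message integrity.
    assert digest_original != digest_tampered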

[Figure: signing and verification of a document. Sending: the document is hashed into a digest and the digest is signed with the sender's private key; the signed document is then sent. Receipt: the receiver hashes the received document, recovers the digest from the signature using the sender's public key, and compares the two digests.]
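The complete sign-and-verify flow pictured above can be sketched in Python with the third-party cryptography package; RSA with PSS padding and SHA-256 are chosen here only as an illustration and are not prescribed by these guidelines.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    # The signer owns an asymmetric key pair (generated here for the example).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"An example contract to be signed"

    # Signing: the message is hashed and the digest is signed with the private key.
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Verification: the receiver recomputes the digest and checks it against the
    # signature with the sender's public key; verify() raises on any mismatch.
    try:
        public_key.verify(
            signature,
            message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("Signature valid: the message is authentic and unaltered.")
    except InvalidSignature:
        print("Signature invalid: the message was altered or signed with another key.")

The library performs the hashing and the private-key operation in a single sign() call, which corresponds to the two steps (hashing, then encryption of the digest) described in this subsection; as noted below, the message itself is not encrypted by this operation.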

The message and its digital signature are sent together. The receiver decrypts the signature using the secret key or, more usually, the public key of the sender, thus revealing the message digest. To verify the message, he then hashes the message with the same hash function the sender used and compares the result to the message digest he received. If they are exactly equal, the receiver can be confident that the message did indeed come from the signatory and has not changed. If the message digests are not equal, the message either originated elsewhere or was altered after it was signed. Note that using a digital signature does not encrypt the message itself.
3.3.2.3.2 Temporal validity: the need for time-stamping
Normally, a key expires after some period of time, such as one year, and a document signed with an expired key should not be accepted. However, there are many cases where it is necessary for signed documents to be regarded as legally valid for much longer than two years; long-term leases and contracts or patents are examples. By registering the


contract with a digital time-stamping service at the time it is signed, the signature can be validated even after the key expires. If all parties to the contract keep a copy of the time-stamp, each can prove that the contract was signed with valid keys. In fact, the time-stamp can prove the validity of a contract even if one signer's key gets compromised at some point after the contract was signed. Any digitally signed document can be time-stamped, assuring that the validity of the signature can be verified after the key expires.
3.3.2.3.3 Digital time-stamping service
A digital time-stamping service (DTS) issues time-stamps which associate a date and time with a digital document in a cryptographically strong way. The use of a DTS would appear to be extremely important, if not essential, for maintaining the validity of documents over many years or decades. A message digest of the document is computed using a secure hash function and then sent (but not the document itself) to the DTS, which returns a digital time-stamp consisting of the message digest, the date and time it was received at the DTS, and the signature of the DTS. Since the message digest does not reveal any information about the content of the document, the DTS cannot eavesdrop on the documents it time-stamps. Later, a verifier can compute the message digest of the document, make sure it matches the digest in the time-stamp, and then verify the signature of the DTS on the time-stamp.
To be reliable, the time-stamps must not be forgeable. Consider the requirements for a DTS of the type just described:
- The DTS itself must have a long key if we want the time-stamps to be reliable for, say, several decades.
- The private key of the DTS must be stored with utmost security, as in a tamperproof box.
- The date and time must come from a clock, also inside the tamperproof box, which cannot be reset and which will keep accurate time for years or perhaps for decades.
- It must be infeasible to create time-stamps without using the apparatus in the tamperproof box.
3.3.2.3.4 Digital certificate
A digital certificate is used to prove, electronically, your identity or your right to access information or services on-line. Digital certificates, also known as Digital IDs, bind an identity to a pair of electronic keys that can be used to encrypt and sign digital information. A digital certificate makes it possible to verify someone's claim that they have the right to use a given key, helping to prevent people from using phoney keys to impersonate other users. Used in conjunction with encryption, digital certificates provide a more complete security solution, assuring the identity of all parties involved in a transaction.
A digital certificate is issued by a Certification Authority (CA) and signed with the Certification Authority's private key. It typically contains the:
- Owner's public key
- Owner's name
- Expiration date of the public key
- Name of the issuer (the CA that issued the digital certificate)


- Serial number of the digital certificate
- Digital signature of the issuer
The most widely accepted format for digital certificates is defined by the CCITT X.509 international standard; thus certificates can be read or written by any application complying with X.509. Digital certificates can be used for a variety of electronic transactions including e-mail, electronic commerce, groupware and electronic funds transfers. Digital certificates can replace password dialogs for information or services that require membership or restrict access to particular users. A digital certificate is signed by the Certification Authority which issued it. Multiple digital certificates can be attached to a message or transaction, forming a certification chain in which each digital certificate testifies to the authenticity of the previous one. The top-level certification authority must be independently known and trusted by the recipient. When a digitally signed message is received, the signer's digital certificate can be verified to determine that no forgery or false representation has occurred.
3.3.3 Trusted Third Parties (TTP) - Certification authority hierarchies
Trusted Third Parties (TTPs) first appeared in the EDI world: in the USA, in the Security group of the ABA; in Europe, in the FAST programme for Certification Authorities and in the EDI-RA report for Registration Authorities (naming and addressing in telecommunications). The TTP system is now migrating onto the Internet. At a technological level a TTP is linked to the electronic signature (as in the USA) or to cryptography (as in France). There are questions at the geographical level: within a country, is there one TTP or several? Administrations and enterprises are both concerned by a TTP, but will the administrations want a public TTP? How can several TTPs in several countries be harmonised? It is necessary
- to define technical interoperability, and
- to impose that interoperability on all TTPs.
At the technical level, the TTP issues electronic certificates (authentication and public keys). It will be necessary to guarantee interoperability and/or to obtain mutual approval between TTPs of different countries, or between TTPs in the "telecommunications" world (X.509, for the moment) and in the EDI world (KEYMAN message). TTPs are therefore present in EDI, soon (or already) in e-mail, and they will appear on the Web. "Financial TTPs" are already present to help users with electronic payment.
At the legal level, the fundamental question is: what is the legal basis for a TTP? The basis can be legal, when a text exists in a country concerning the electronic signature or cryptography. If there is no law, the legal basis is provided by the users: users give legitimacy to the TTP by choosing to use its services in a contract covering their secured electronic exchanges. The responsibility of the TTP naturally differs according to its legal legitimacy. It is again necessary to appreciate the validity of TTP services, from a legal point of view, for electronic exchanges. Users have to govern their electronic exchanges by a contract, for example an interchange agreement. The interchange agreement states what the legal effects of using a TTP (or TTPs) are.
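As an illustration of the certificate fields listed in 3.3.2.3.4 above, the following Python sketch creates a self-signed X.509 certificate and reads back its main fields. It uses the third-party cryptography package; the name, dates and key size are invented for the example, and a real certificate would be signed by the CA's private key rather than self-signed.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Key pair of the certificate owner.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.org")])
    certificate = (
        x509.CertificateBuilder()
        .subject_name(name)                              # owner's name
        .issuer_name(name)                               # issuer's name (self-signed here)
        .public_key(key.public_key())                    # owner's public key
        .serial_number(x509.random_serial_number())      # serial number of the certificate
        .not_valid_before(datetime.datetime(2024, 1, 1))
        .not_valid_after(datetime.datetime(2025, 1, 1))  # expiration date of the public key
        .sign(key, hashes.SHA256())                      # digital signature of the issuer
    )

    # Reading back the fields listed above.
    print(certificate.subject)
    print(certificate.issuer)
    print(certificate.serial_number)
    print(certificate.not_valid_after)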


3.3.3.1 The need for a legal framework for the TTP
Depending on the country, the law may or may not define the global TTP infrastructure (Public Key Infrastructure, or PKI), the TTP itself, its role, and its creation or accreditation. Many people think the reason that the commercial marketplace has not produced a viable certification authority industry is either legal uncertainty (TTPs are unable to determine their potential liability exposure because of a confusing array of applicable background law) or that existing law imposes too much liability on TTPs. On this view, legislation is necessary in order to provide certainty in the marketplace and allow a much-needed industry to emerge, as well as to address other issues such as the legal status of digitally signed documents.
Others argue that PKI legislation is not necessary, because it is far too soon to conclude that the market will not produce commercial TTPs, and they point to the increasing numbers of commercial TTPs emerging even in the absence of legislation. Time is solving the "uncertainty" problem, and the "too much liability" problem is the product of flawed business models, not a flawed legal system. The real danger is that a group of lawyers will impose a set of inappropriate rules that will fundamentally skew a dynamic infant marketplace and lock in a set of business models that the market would otherwise reject. The time for legislation and regulation is after identifiable problems exist in a mature industry, opponents say, not before an industry even exists. Existing legal mechanisms can address the issue of the legal status of digitally signed documents.
3.3.3.2 The legal TTP challenge: key escrow and key recovery
As of mid-1998 there is a wide range of government, industry, and academic efforts toward specifying, prototyping, and standardising key recovery systems that meet government specifications. Some of industry's efforts were stimulated by U.S. government policies that offer more favourable export treatment to companies that commit to designing key recovery features into future products, and by U.K. government moves to link the licensing of certification authorities to the use of key recovery software. Yet despite these incentives, and the intense interest and effort by research and development teams, neither industry nor government has yet produced a key recovery architecture that universally satisfies both the demands of government and the security and cost requirements of encryption users.
3.3.3.2.1 Key escrow challenge: brief summary
A variety of "key recovery", "key escrow", and "trusted third-party" encryption requirements have been suggested in recent years by government agencies seeking to conduct covert surveillance within the changing environments brought about by new technologies. The deployment of key-recovery-based encryption infrastructures to meet law enforcement's stated specifications will result in substantial sacrifices in security and greatly increased costs to the end-user. Building the secure computer-communication infrastructures necessary to provide the adequate technological underpinnings demanded by these requirements would be enormously complex and is far beyond the experience and current competency of the field. Even if such infrastructures could be built, the risks and costs of such an operating environment may ultimately prove unacceptable. In addition, these infrastructures would generally require extraordinary levels of human


trustworthiness. These difficulties are a function of the basic government access requirements proposed for key-recovery encryption systems. They exist regardless of the design of the recovery systems -- whether the systems use private-key cryptography or public-key cryptography; whether the databases are split with secret-sharing techniques or maintained in a single hardened secure facility; whether the recovery services provide private keys, session keys, or merely decrypt specific data as needed; and whether there is a single centralised infrastructure, many decentralised infrastructures, or a collection of different approaches. All key-recovery systems require the existence of a highly sensitive and highly available secret key or collection of keys that must be maintained in a secure manner over an extended time period. These systems must make decryption information quickly accessible to law enforcement agencies without notice to the key owners. These basic requirements make the problem of general key recovery difficult and expensive -- and potentially too insecure and too costly for many applications and many users. Attempts to force the widespread adoption of key-recovery encryption through export controls, import or domestic use regulations, or international standards should be considered in light of these factors. The public must carefully consider the costs and benefits of embracing government-access key recovery before imposing the new security risks and spending the huge investment required (potentially many billions of dollars, in direct and indirect costs) to deploy a global key recovery infrastructure.
3.3.3.2.2 Current situation: conclusion
The claims for key escrow enforcement are nowadays being withdrawn. For instance, a European champion of key escrow such as the U.K. government has only recently dropped its regulations. The intention to enforce a mandatory scheme for TTPs is evolving towards the promotion of at least a "voluntary" key recovery TTP scheme; evidence in this direction is, for example, the Dutch proposal promoting a national voluntary TTP Chamber. Because key recovery relates not only to data confidentiality but also to the privacy and civil rights of users, there is a strong technical and intellectual movement trying to limit the breadth of enforcement. However, current cryptographic law in many (not all) countries still pivots on concepts such as "lawful interception of the signal communications" for communications transfer and "lawful access to storage" for data storage. These matters raise complexity in international environments, and the use of TTPs is the subject of international controversy. There is no clear idea of how key escrow could be enforced across states. Because of this controversy, key escrow is, for the moment, mentioned as desirable but is still not a requirement in most current legal drafts managed by European governments. Most experience also shows that key material is generated and stored by end users, so key escrow remains a personal decision of the client; it is now evident that the lack of technical solutions, together with scale and economic problems, makes it impossible to implement a reliable and interoperable key escrow solution.
3.3.4 Privacy
Adapted from the February 1999 Aesopian Report and EC Directive 95/46/EC.
The American Electronic Privacy Information Centre (EPIC) reviewed 100 of the most frequently visited Web sites on the Internet.
They checked whether sites collected personal information, had established privacy policies, made use of cookies, and allowed people to visit without disclosing


their actual identity. EPIC found that few Web sites today have explicit privacy policies (only 17 out of 100). None of the top 100 Web sites meet basic standards for privacy protection. Many Web sites (49) collect personal information through on-line registrations, mailing lists, surveys, user profiles, and order fulfilment requirements. Only some of the sites do not collect any personally identifiable information.

Security measures regarding privacy can be taken by several actors:
• Users of ICT services and systems
• Providers of ICT services and systems
• Telecommunication providers
• Public authorities
It is of major importance for ICT system and service providers to gain the trust of both prospective users and public authorities; organisational and technical measures must be taken to incorporate their requirements in the developed system or service. Next to this, proper agreements must be made with all the telecommunications providers involved to ensure secure transfer of information. Government may use regulation and legislation instruments to ensure secure transmission and correct use of personal information.
Protecting privacy will be one of the greatest challenges for the Internet. Clear practices have to be implemented and good policies put in place. There are many different privacy policies, but all good policies share certain characteristics: they explain the responsibilities of the organisation that is collecting personal information and the rights of the individuals who provided the personal information. Typically, this means that an organisation will explain why information is being collected, how it will be used, and what steps will be taken to limit improper disclosure. It also means that individuals will be able to obtain their own data and make corrections if necessary. Users of web-based services and operators of web-based services have a common interest in promoting good privacy practices. Strong privacy standards provide assurance that personal information will not be misused, and should encourage the development of on-line commerce. We also believe it is a matter of basic fairness to inform web users when personal information is being collected and how it will be used.
3.3.4.1 The US approach
The US government approach towards privacy issues has, up to now, been based on industry-specific self-regulation. The business community has been asked to develop codes of conduct based on the widely accepted privacy principles developed by the OECD. The basis for a code of conduct is in fact an agreement between buyer and seller. The


seller must explain what data is wanted, for what reason, and for what purposes the data is going to be used. Buyers must be able to understand what the seller wants and how this affects their personal privacy. Buyers should also be able to have control over their personal information. This means that they should not only be able to know what it is used for and agree to that, but also have the possibility to check the accuracy of this information and to change it when it is incorrect.
3.3.4.2 EC directive
Recently the EC adopted a new directive related to the protection of personal data and privacy issues: the EC Data Protection Directive 95/46/EC. This directive addresses issues concerning the protection of individuals with regard to the processing of personal data and discusses the free movement of such data. A clear and open privacy policy, in line with the accepted regulations in this respect and in line with the requirements of users, is most important. This may involve a number of things:
1. Making sure that individual user agreements are being made; as there may be many users with different requirements, and thus different agreements, most difficulties occur in maintenance and administration.
2. Making sure that your privacy policy is presented properly to your customers; such a policy should at least be in line with the regulations of the countries or regions relevant for the ICT system or service.
3. Developing a code of conduct with a group of ICT service providers and communicating this to the (prospective) users of the service. Such a code of conduct does not exist yet and every organisation deals with this question in a different way.
4. Making sure that adequate security measures are taken to support the privacy policy.
3.3.5 Intellectual Property Rights
This part is adapted from electronic documents extracted from the EU/DGXIII IPR-Helpdesk website. Directorate General XIII/D/1 is mandated with the operational and strategic aspects of intellectual property. It has been implementing various measures concerning innovation protection systems in order to create awareness and to promote the importance of intellectual property rights issues in innovation processes.
3.3.5.1 Introduction

This section explains the basic principles of the legal protection of technical inventions, Trade Marks, Industrial design and copyrighted works and gives information on corresponding legislation, conventions and treaties. The legal instruments of these items of human creativity are covered by the generic term "Industrial Property", the first branch of Intellectual Property. Copyright is included in the second branch of Intellectual Property. This instrument serves to protect intellectual creations such as literature or works of art and software or data bases.


Industrial Property Rights ensure exclusive use in commerce and industry thanks to the following instruments. Patents or Utility Models can be obtained as a means of ensuring the protection of technical inventions; the subject of such an industrial property right could be, for example, a product or a manufacturing process. Industrial designs, on the other hand, protect the aesthetic qualities that are expressed in the particular shape or form of a certain product or device. Trade Marks deal with the protection of marketing assets such as brands or the names of firms. It is often the case that several different types of property rights are applicable to a single product: a Patent or a Utility Model, an Industrial design and a Trade Mark.
Rules on unfair competition ensuring the protection of entrepreneurial achievements also belong to the second branch of Intellectual Property. However, this legal domain has a different structure: it establishes general rules of conduct which individuals within an enterprise have to observe concerning their competitors. The protection of individual entrepreneurial achievement is secured by the observance of these rules.
Note. In common-law countries, such as the United Kingdom and Ireland within the EU as well as the United States and elsewhere, Copyright is the equivalent of Author's Rights. Although differences exist between Copyright and Author's Rights legislation, the general principles of literary and artistic property are respected by both systems. Although the World Intellectual Property Organisation and, more significantly, the Berne Convention, which contains the universal principles of literary and artistic property, use the term Author's Rights, we will use Copyright to refer to both Author's Rights and Copyright except where there is a clear need to distinguish between them.
3.3.5.2 Definitions and properties

Intellectual property covers two main areas: industrial property, covering inventions, Trade Marks, industrial designs, and protected designations of origin; Copyright, represented by literary, musical, artistic, photographic, audio-visual works, software and data bases. Intellectual property makes use of the following instruments: Patents; Utility Models; Industrial Design; Trade Marks; Semiconductor Chip protection; Plant Variety protection; Copyright. The scope of the protection obtained through intellectual property rights varies according to the type of instrument employed. The various instruments available have the following characteristics. Patents and Utility Models. Technical inventions can be protected by way of either type


of industrial property right. In general, inventions should be novel and comprise an inventive step. Furthermore, the invention must be capable of industrial application. A major difference between these two property rights lies in the duration of protection.
Industrial Design. The aesthetic appearance (design) of an object or specific shape (model), but not the technical invention as such, can be protected by means of a Registered Design. The subject matter of the industrial design may for instance be the external appearance of an object used in everyday life, but could also be the external form of machines or vehicles.
Trade Marks (Marks). A Mark is used to differentiate a product or a service. Trade Marks can be two- or three-dimensional and can be made up of words, pictures, colours, and/or sounds and so forth.
Semiconductor Chip protection. This instrument covers the protection of the geometrical structure or topography of a semiconductor product such as a microchip. In contrast to Patents or Utility Models, only the geometrical configuration of the microchip is protected, not its technical function nor its technological structure.
Plant Variety protection. This protection is granted for plant varieties which are new, distinct, uniform, and stable.
Copyright. Copyright covers works of art such as literature, pieces of music, paintings, drawings, films, construction works, and scientific and technical representations. In addition, computer programs, databases, and multimedia products fall under copyright protection. The author of a work of art owns the inherent rights of ownership to his work, and is entitled to exploit it. In contrast to the other types of protection mentioned above, it is not necessary to apply for registration of the work, as the protection arises solely through the act of creation.
The most important forms of protection are listed in the table below:


Intellectual Property - Industrial Property

Patents
- Object: technical inventions (new, inventive step, industrial application)
- Duration of protection: 20 years
- Instruments: National Patents; European Patents; International Application (PCT)
- Competent administrations for the application or the registration: National Patent Offices; EPO; WIPO

Utility models
- Object: technical inventions (new, inventive step, industrial application; protectable subject matter can be restricted)
- Duration of protection: 10 years
- Instruments: National Utility Models (not available in all EC Member States)
- Competent administrations: National Patent Offices

Marks
- Object: marks for products, services etc. (two- or three-dimensional words, pictures, colours, and/or sounds)
- Duration of protection: 10 years (renewable)
- Instruments: National Trade Marks; International registration of marks (IR Trade Mark); Community Trade Marks
- Competent administrations: National offices; WIPO

Industrial Designs
- Object: design (aesthetic configuration of an object or a specific shape)
- Duration of protection: 10 years (renewable)
- Instruments: National Industrial Designs; International deposit of industrial designs
- Competent administrations: National offices; WIPO

Copyright
- Object: creative works, such as literature, science, culture, or software
- Duration of protection: 70 years after the decease of the originator
- Instruments / registration: no registration; the protection arises merely through the act of creation

3.3.5.3 How does intellectual property work at the international level?

The legal relationships between the various nations are regulated by a large number of international agreements such as:
The European Patent Convention (EPC). According to this Convention, which regulates the European Patent, a single Patent Application can be filed for several contracting states. However, as soon as the European Patent is granted by the European Patent Office, it breaks down into national Patents for each of the states explicitly designated in the application.
The Paris Convention for the Protection of Industrial Property. This major international treaty is designed to help people of one country to obtain protection in other countries for their intellectual creations in the form of industrial property rights.
The Patent Co-operation Treaty (PCT). This treaty implements the concept of a single international Patent Application that is valid in many countries.
The Hague Agreement Concerning the International Deposit of Industrial Designs. This agreement gives the possibility of obtaining protection for an Industrial Design in several states by means of a single deposit. The states must be designated by the applicant at the time of deposit.
The Madrid Agreement Concerning the International Registration of Marks (IR Trade Mark). Under this system, protection for a Mark can be obtained in several states at once by a single international registration with the International Bureau of the World Intellectual Property Organisation (WIPO). The effect of such a registration is that the same protection is obtained in each of the states designated in the deposit as if the Mark had been deposited directly in every state.
The Council Regulation (EC) on the Community Trade Mark. The Community Trade Mark is a Mark registered with the Office for Harmonisation in the Internal Market (Trade Marks and Designs) (Alicante) or with a competent national administration. Either registration has effect in all the member countries of the European Union.
The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). This international agreement was concluded within the framework of the agreements establishing the World Trade Organisation (WTO) and was made in order to encourage and secure the effective and appropriate protection of industrial property rights. The States participating in this agreement have in particular granted equality of treatment to their respective citizens.
3.3.5.4 Why protect an invention, and how to find out if an invention is already protected?

The individual who protects his intellectual achievement through industrial property rights - either as a Patent or as a Utility Model - has the right for a limited period to stop any third party from making, using, or selling the object of the invention without his permission. Whether an invention is already protected by means of a Patent can be determined by carrying out a search in the relevant Patent literature. All the documents having a close link with the invention must be identified in databases or by means of a manual search in the literature collections of Patent Offices and libraries. The search is simplified by the International Patent Classification (IPC) which subdivides the whole field of technical


knowledge into symbols (from sections down to subgroups). With the help of these symbols, the location of a protected invention in a particular technical area is made easier. It is, of course, possible to carry out one's own search in accessible data banks, for instance via the Internet by using the esp@cenet service provided by the European Patent Organisation through the EPO and the national offices of its member states. This service allows access to Patent documents world-wide; however, esp@cenet should only be relied on for introductory searches. For deeper searches it is advisable to consult an intermediary such as a PATLIB centre, where search specialists can carry out a more thorough search for you on a commercial host.
3.3.5.5 Intellectual Property aspects of World Wide Web authoring

3.3.5.5.1 Objective
In addition to the copyright issues which arise from the content of a web page, there exists a range of intellectual property questions relating to the assemblage of the page itself, that is the technological aspects relating to the associative tools which give the Internet its novel character. The recent controversy over Domain Names and the law of Marks has highlighted one aspect of this field. This paper concerns itself with these issues, which are important to anyone involved in on-line delivery or publishing.
Note: wherever possible we have created links to useful resources. Many of the case summaries are maintained by Netlitigation. Other references are drawn from diverse sources listed at the end of the paper. As will become apparent, there are several areas which suffer from a lack of judicial attention in Civil law jurisdictions.
3.3.5.5.2 The Domain Name System
A domain name consists of a variable (name) linked to a domain category. Domain names are the monikers and shop fronts which organise the geography of the network. Each corresponds to an Internet Protocol address in numerical form. This process of pairing allows memorisation of an IP address in a manner similar to telephone mnemonics. As a system based on ease of identification, a Domain Name fulfils many of the functions traditionally served by trademarks. There exist seven Top Level Domains (TLDs) with wide global recognition; three of these operate open systems of registration: .com, .net, .org. Below these seven domains exist national domains with their regional suffixes, e.g. .uk (Britain), .ie (Ireland), .fr (France). Registration usually takes place on a "first-come, first-served" basis, although amendments to Federal Trade Mark law in the U.S. now give protection to those who have pre-registered a Federal Trade Mark. Nonetheless, more than one legal personality may use the same trademark provided they are operating in different market sectors. Problems have arisen due to the global character of the Internet, which erodes the ability to differentiate between different brands on the basis of territorial locality or product type. The coexistence of TLD suffixes such as .com (the commercial TLD) with national suffixes often leads to rival companies controlling the same mark under different domains and has created considerable litigation. For this reason an international structure has been established to administer domain allocation and to put in place a quick and efficient system of arbitration in case of conflict. This is proceeding with the involvement of the World Intellectual Property Organisation, which in December 1998 produced an interim report on the question: "The Management of Internet Names and Addresses: Intellectual Property Issues."


3.3.5.5.3 Trademarks
For the purposes of trademarks, goods and services are divided into 42 different classes, and one is only entitled to protection against manufacturers or services of the same or a closely related class. Originally conceived as an instrument of consumer protection, trademarks are protected by the courts to prevent consumer confusion due to the proximity of two marks. Likelihood of confusion is assessed through an evaluation of the following factors: the similarity of the marks visually or phonetically, the similarity of the products/services, the geographical area of use (distinct or overlapping), and the sophistication of the consumer group in question and its ability to distinguish correctly between similar items.
In 1996 US law was amended to prevent Trade Mark Dilution, that is: "…against another person's commercial use in commerce of a mark or trade name, if such use begins after the mark has become famous and causes dilution of the distinctive quality of the mark." This section may also be invoked where the use of a mark tarnishes or blurs the identity of the brand. Infringing actions can be broadly divided into three categories:
(a) Trademark infringement, involving the misappropriation or passing off of another's mark as your own, as exemplified in Playboy Enterprises, Inc. v Calvin Designer Label. In such instances, judges have responded severely, enjoining all further use of the mark, particularly where the usurper is an obvious parasite on a famous brand. This line of jurisprudence is well established also in Civil law jurisdictions, as illustrated in the French case of Sapeso et Atlantel v Icare et Reve, whose logic received recent confirmation in SFR v W3 System Inc.
(b) Trademark dilution comprises that which either tarnishes or demeans the quality of the mark, by associating it with distasteful or inappropriate products, or diminishes the distinctive quality of the mark in general. The latter category generally applies to famous marks and those of particular prestige. The French case of Loreal et al. v PLD Enterprises provides a good example.
(c) Device Mark Infringement: as is the case with copyright protection of visual designs, the owner of a mark with a creative logo also has protection over its use. Exempted from this are reproductions of everyday items and simple utilitarian forms.
As a result of legislative amendment in the United States, one can ensure domain name protection through prior registration as a Federal Trade Mark. Opportunistic registration of particular names (such as whitehouse.com) has also provoked a technical response from software developers concerned about the strategies of adult sites. The result is that new versions of browsers such as Netscape automatically default to the site which they perceive is sought; thus www.whitehouse.com will bring you first to the home page of the White House rather than the pornographic site of the same name. Names drawn from the public domain, such as the names of towns, have also been disallowed both in France and in Germany.
3.3.5.5.4 Metatags
Metatags are "hidden code in which the trademark of another entity is used on a web site in a way that is visually invisible to a human reader but is visible to search engines" (McCarthy on Trademarks, Section 25:69, page 25-107). Metatags are keywords identified by search engines which do not appear visibly on the web page.
They allow free riding on well-known products and services, and the practice of their use has been enjoined in US Courts as in Oppedahl & Larson v Advanced Concepts


where the name of the litigant's Patent law practice was integrated into the defendants' web pages so as to divert traffic attracted by their reputation for expertise in Intellectual Property. The legal status of this practice is ambiguous in Europe. Trademarks have traditionally been regarded as a guarantee for consumers based on their prior experiences, positive or negative, of a company's conduct. As metatags do not actually misrepresent, but rather piggyback on the mark-owner's goodwill, it is unclear what attitude courts will take. Playboy v Terri Welles is an example of a case where a defendant has successfully defended their use of metatags. In this instance a former Playboy model established her own site and used marks such as "Playboy" and "Playmate" as hidden metatags. The court found in her favour, as Playboy had bestowed these marks on her; furthermore, her site made it clear that there was no commercial relationship with Playboy, and indeed reference to their marks was minimised. Her utilisation of these marks thus satisfied fair use criteria as a 'fair and accurate' description of her. Prosecutions relating to metatags have depended heavily on anti-dilution and misappropriation principles. Alternative channels of protection are available through the laws of unfair competition, free-riding etc., which exist also in civil law (l'usage sans juste motif).
3.3.5.5.5 Hyperlinks: deep v surface
Hyperlinks allow non-linear browsing through hotlinks which can refer or transport the reader to related documents or other sites of interest. The hyperlinks may also include visual marks. Hyperlinks refer to Uniform Resource Locators (URLs) and, as facts, are not protected in themselves under copyright law. Surface links are simple links to the introductory page and are rarely the cause of controversy; on the contrary, these are the channels that assist a site owner in building up their traffic. Such links to other sites are generally taken as legitimate, i.e. by placing the information in a publicly available location there exists an invitation to use, and consequently an implied license. Deep links facilitate a site entry which bypasses the introductory page of a site, where normally the site owner will display heavy advertising and information on its site use policy, and extract content located in a sub-directory. Prosecutions relating to this practice are currently pending in the US, such as Ticketmaster v Microsoft. The situation differs, however, where the links take the form of verbatim reproductions of scripts, such as newspaper headlines, which may be entitled to copyright protection in themselves. These are the circumstances dealt with in Shetland Times Ltd. v Dr. Jonathan Wills and Another.
3.3.5.5.6 Frames
Framing describes the division of a web page into several sectors, which allow the viewer to visit other sites/documents whilst staying within the basic structure of the original site. The issues here are similar to those involved in deep hyperlinking, essentially the absorption of others' content into one's own presentation. Several actions have been taken (see The Washington Post Co. v Total News), principally on the basis of dilution of trademark. As all cases have so far been settled out of court, there is no clear legal precedent to draw on.
A concern commonly expressed by plaintiffs in such cases is that damage is done to their goodwill through the association of advertisements, in the defendant's frames, for companies which they do not wish to be seen to endorse, and which lead to loss of advertising


revenue on their own site through misappropriation. The holders of marks have invoked anti-dilution law to combat framing and metatagging. Many Civil law jurisdictions have also integrated anti-dilution principles into their national jurisprudence, such as Article 13 A.1 d) of the Uniform Benelux law.
3.3.5.5.7 Service Provider Liability
The question of liability for infringing or illegal content remains unsettled. A recent decision of the French courts (Estelle Hallyday / Valentin Lacambre) held a provider of free web space liable for materials placed there by an anonymous third party.
Compuserve and the Bavarian Court: In re Somm, Urteil des Amtsgericht München [local court], No. 8340 Ds 465 Js 173158/95 (May 28, 1998). In 1997 Felix Somm, the manager of the German subsidiary of Compuserve USA, was sentenced to a two-year suspended sentence for having been an accomplice to the distribution of pornographic writings in violation of German law. The images were stored on Compuserve USA's server in North America, and Compuserve failed to block access to the newsgroups distributing explicit sexual material.
Finally, a recent decision of the High Court (Queen's Bench Division) in England held the service provider Demon Internet liable for a defamatory posting to a newsgroup which was accessed through its server. Significantly, the judge treated the ISP akin to a publisher as opposed to a common carrier, thus incurring a high degree of responsibility for content accessed through its facility. These cases demonstrate the potential dangers for ISPs, particularly where they are requested to remove particular materials or to block access to certain newsgroups. In case of refusal they render themselves vulnerable to prosecution.
3.3.5.5.8 Rights to Industrial Drawings
Icons used to embellish or facilitate enjoy statutory protection of a shorter duration in Common law jurisdictions and are protected by Copyright under the Civil law (copyright will belong to the designer rather than the content provider, unless of course produced under a work-for-hire contract), provided there is an element of creativity involved; functional icons of a utilitarian type are not entitled to protection.
3.3.5.5.9 Protection of Typographical Arrangement
In the case of the reproduction of public domain materials not subject to copyright protection in themselves (such as those on which protection has been exhausted), publishers should be careful not to take this as a carte blanche for the imitation of another producer's formatting, choice of fonts, alignment with graphics etc.
3.3.5.5.10 Minimising the Threat of Prosecution
There are several steps which authors/publishers may take pre-emptively to deter legal proceedings. Before embarking on a domain name registration procedure one should run a search on existing registrations; this is easily accomplished through options on various search engines such as HotBot or at the site of the principal administrator of the TLDs: Network Solutions. Alta Vista also retains a selection of domain name resources. First, one can use a clearly displayed disclaimer, stating clearly that the use of others' marks or links to their pages is in no way an attempt to indicate or claim an association with them.


Secondly, one can state a willingness to remove any items on the page which the owner may find objectionable or prejudicial to the value of their mark. One can also contact those with whom one wishes to link; in certain cases this could allow the possibility of reciprocal linking, which may have other benefits as well. Records of permissions granted should be retained for future reference and in case of a later lawsuit. Finally, should one be employing a mark which resembles that of a well-established company, it would be wise to clarify the distinction on the front page, so as to minimise the risk of confusion.
3.3.5.6 Patentability of computer programs

3.3.5.6.1 Exclusion of patentability principle
Article 52-2c of the European Patent Convention explicitly states that computer programs are not to be considered as inventions and cannot be protected as such by way of a patent. At the same time, the Council Directive of 14 May 1991, called the "Computer Programs" Directive, states in Article 1 that Member States must protect computer programs by copyright. Hence the present European legislative context clearly excludes computer programs from the field of patentability. In reality, only computer programs "as such" are excluded from patentability. When they are incorporated into a machine or a process fulfilling the patentability requirements (novelty, inventive step and industrial application), these computer programs can be protected by patent. There is a certain amount of pressure, especially from companies operating in this sector, to widen the field of patent application to computer programs, as protection conferred by copyright alone is considered too risky. This attitude can easily be understood when considering the huge investments made in order to develop these programs.
3.3.5.6.2 Trend of Judicial Reasoning in Europe
In July 1986, the Technical Board of Appeal of the European Patent Office delivered a decision with major consequences in the "VICOM" case. In order to become patentable, a computer program must deliver a technical solution to a problem, that is to say it must have "technical effects". For a long period of time, algorithms, which are the basis of computer programs, were assimilated to mathematical theories and were therefore regarded as unable to produce any technical effects. The "VICOM" decision introduces a new distinction between "pure" mathematical algorithms and "applied" algorithms to be used in a process. As a result of this case, the Technical Board of Appeal of the EPO considered that a process cannot be excluded from patentability for the sole reason that it is based on an algorithm. This decision has led to the possibility that a process, even if it is made of non-patentable elements, may be considered both as making a contribution to the state of the art and as patentable, as long as it solves a technical problem. Other European precedents came later as confirmation that a computer program which produces a technical effect or solves a technical problem may be patented, provided such a program meets the requirements of novelty, inventive step and industrial application. On the occasion of two recent decisions (1 July 1998 and 4 February 1999), the Technical Board of Appeal of the EPO explained in more detail the notion of "technical effect".


These decisions confirm and develop some earlier precedents, such as the VICOM case, and indicate that a computer program, by the sole fact of being a computer program, does not produce any technical effects within the meaning of the exclusion principle of Article 52-2 of the European Patent Convention. Hence, the simple interaction between a program and a computer, that is to say the internal electrical changes in the computer induced through the execution of a program, is not sufficient to endow it with a "technical effect". The changes of electrical state of the computer components are merely a consequence induced by the execution of any program. Only computer programs producing effects going beyond that simple interaction with the computer can be considered as inventions. According to these precedents, computer programs, when they are a means, or the means, of solving a technical problem may be considered an invention and may be patented.
3.3.5.6.3 Situation in the United States and in Japan
There is no legal text excluding on principle the patentability of programs in the United States. Initially, the US Patent Office refused to grant patents for programs, because it considered such programs more a creation of the mind than a technical invention. Examination directives were then adopted, allowing access to patents for certain programs containing a technical effect, such as, for instance, the control of a machine. Since then, the USPTO requirements for granting a computer program patent have been regularly simplified. Since the publication of the examination directives in June 1995, examiners have had to concentrate their analysis on the novelty and usefulness of the program without considering the presence of a mathematical algorithm. The situation in Japan is approximately the same, although computer programs as such are still excluded from protection.
3.3.5.6.4 Advantages and Disadvantages of patentability of computer programs
Protection through copyright was chosen as a means of protection because a computer program "a priori" resembles a creation of the mind more than a technical invention. The main advantage of copyright lies in its flexibility; there is no need for registration or any other formality, because copyright protects a computer program as soon as it has been created. Copyright has the advantage of allowing small-sized companies and individuals to have their creations protected where they do not have the means or the desire to get involved in a long and expensive patenting procedure. Protection through copyright, however, is imperfect. Copyright may easily be bypassed by the method known as the "blind room". In reality, a computer program showing strong similarities with that of a competitor will not be considered counterfeit if the author can prove that his creation is independent. It is generally accepted by judicial doctrine that the "blind room" method enables programmers to produce an independent creation. The implementation of such a technique requires two teams of researchers. The first one dissects and analyses the competitors' products, as with reverse engineering, then transmits the results to a second team which develops a new product based on the first team's results. By contrast, in patent legislation, where the simple fact of achieving the same result is considered as counterfeiting, the notion of "independent creation" disappears (the only exception being the prior personal possession right).
3.3.5.6.5 Prospects
Article 27 of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) deals with patentable objects.


It outlines that a "patent can be obtained for any invention of a good or a process, in any technological field", provided that the classical criteria of novelty, inventive step and industrial application are met. In this way, computer programs are not "a priori" excluded from the patentability field. Some supporters of the protection of computer programs through patents consider that harmonising this text with European texts should lead to the modification of Article 52-2c of the European Patent Convention and remove the exclusion in principle of programs from the field of patent application. The Conference on patentability of computer programs organised by the British presidency of the European Council in March 1998 concluded that the situation in Europe was not favourable to European companies compared to the situation in the United States and in Japan. In a Communication about patents of 29 July 1998, the European Commission considered that a Community initiative is advisable in this field in order to clarify the rules governing patentability of computer programs in Europe.

3.3.5.7 Legal texts

3.3.5.7.1 Legal protection of computer programs: Council Directive 91/250/EEC of 14 May 1991
Other language versions: DE ES FR IT
Contents:
Recitals
Article 1 - Object of protection
Article 2 - Authorship of computer programs
Article 3 - Beneficiaries of protection
Article 4 - Restricted acts
Article 5 - Exceptions to the restricted acts
Article 6 - Decompilation
Article 7 - Special measures of protection
Article 8 - Term of protection
Article 9 - Continued application of other legal provisions
Article 10 - Final provisions
Article 11
3.3.5.7.2 Legal protection of databases: Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996
Other language versions: DE ES FR IT
Contents:
EU Databases Directive - Recitals
EU Databases Directive - text
CHAPTER I - SCOPE
Article 1 - Scope
Article 2 - Limitations on the scope
CHAPTER II - COPYRIGHT


Article 3 - Object of protection
Article 4 - Database authorship
Article 5 - Restricted acts
Article 6 - Exceptions to restricted acts
CHAPTER III - SUI GENERIS RIGHT
Article 7 - Object of protection
Article 8 - Rights and obligations of lawful users
Article 9 - Exceptions to the sui generis right
Article 10 - Term of protection
Article 11 - Beneficiaries of protection under the sui generis right
CHAPTER IV - COMMON PROVISIONS
Article 12 - Remedies
Article 13 - Continued application of other legal provisions
Article 14 - Application over time
Article 15 - Binding nature of certain provisions
Article 16 - Final provisions
Article 17
3.3.5.7.3 International Treaties and Conventions

Full texts of the treaties administered by the World Intellectual Property Organisation are available on their website:
The Berne Convention for the Protection of Literary and Artistic Works, Paris Act of July 24, 1971 (as amended on September 28, 1979)
The Convention Establishing the World Intellectual Property Organization (Stockholm, 1967)
The full text of the European Patent Convention (Munich, 1973)
The Madrid Agreement Concerning the International Registration of Marks (1891)
The Paris Convention for the Protection of Industrial Property (1883)
The Patent Cooperation Treaty (PCT) (Washington, 1970)
The Rome International Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations (done at Rome on October 26, 1961)
The Strasbourg Agreement Concerning the International Patent Classification (1971)
The Treaty on Intellectual Property in Respect of Integrated Circuits (Washington, 1989)
WIPO Copyright Treaty
WIPO Performances and Phonograms Treaty

3.3.5.7.4 European Directives
Links to the Legal Advisory Board (LAB) of the European Commission, which contains the text of the European directives on Author's Rights and Copyright:


Directive harmonising the term of protection of Copyright and certain related rights (93/98/EEC)
Directive on rental right and lending right and on certain rights related to Copyright in the …
Directive on the co-ordination of certain provisions laid down by law, regulation, or …
Directive on the co-ordination of certain rules concerning Copyright and rights related to …
Directive on the legal protection of computer programs ("Software", 91/250/EEC)
Directive on the legal protection of databases (96/9/EC)
Other European Directives

3.3.5.7.5 National Laws in EU Member States
- Austria: Austrian site giving access to miscellaneous Intellectual Property legal texts from Austria and Germany
- Belgium: Belgium Copyright Act (in Dutch)
- France: French site giving access to the French Intellectual Property Code (CPI)
- Germany: Text of the German Patent Act; list of German legal texts, including the German Copyright Act
- Italy: Text of the Italian Copyright Act
- Luxembourg: Luxembourg Ministry of Economy website; the Intellectual Property service section contains the Patent and Copyright Acts
- The Netherlands: Dutch version of the Dutch Copyright Act; Dutch legal site giving access to miscellaneous legal texts from almost 60 countries world-wide
- Spain: Spanish Copyright Act (in Spanish)

3.3.5.7.6 National Laws outside the EU
- Canada: Canadian Copyright Act (R.S.C. 1985, c. C-42)
- Switzerland: Swiss copyright
- USA: US Copyright Act (Title 17 of the United States Code)

3.3.5.8 Annexes

3.3.5.8.1 Services offered by the IPR-Helpdesk
The IPR-Helpdesk offers two main services:


General information highlighting the importance of IPR protection and exploitation. This is achieved mainly by means of the IPR web site. In addition, the Helpdesk offers a number of self-run tutorials on IPR and related subjects. This service is available to all interested parties free of charge.
A personalised telephone Helpline facility, which provides information to individual companies and research organisations, is also available. However, at present this can only be accessed by organisations involved in current or recently completed RTD projects sponsored by the European Commission, and by potential participants in Fifth Framework RTD Calls for Proposals. In order to make use of this facility, users must first register with the IPR-Helpdesk service. The role of this facility is to guide enquirers on how to obtain help in researching the IPR background in their area of interest, how to protect IPR assets, and who to approach in order to exploit them by way of licensing agreements. It will also help applicants to locate professional advisers and patent attorney associations who will be able to assist them further. The Helpdesk will frequently direct enquirers to the competent authorities in particular areas. For example, the IPR-Helpdesk does NOT:
- perform searches for specific topics
- submit applications for patents
- furnish legal advice in the domain of copyright, trademarks, design, or licensing
- provide any form of legal or technical advice, or support for a specific given idea or invention
The IPR-Helpdesk service is aimed primarily at participants in Commission-funded R&D projects. It can assist those working on projects currently in progress, as well as organisations whose projects have been completed within the previous six months. The service is free of charge for such applicants, as long as the enquiry relates to the subject and scope of the funded project in question.
3.3.5.8.2 Lexicon in English, French and German: http://www.ipr-helpdesk.org/en/i_007_en.asp
3.3.6 Statutory and legal framework of cryptology in France
Adapted from "Droit de l'Informatique, du Multimédia & des Réseaux" n°33, May-June 1998.
In France, the legal status of cryptology rests on law 96-659 of 26 July 1996, on the decrees of 24 February 1998 as modified by the decrees of 17 March 1999, and on the regulatory texts of 17 March 1999. This law has greatly softened the law of 29 December 1990 and the decree of 28 December 1992. It is now possible to know the French system with precision, although some practical implementing measures have yet to be put in place. The new legislation distinguishes between, on the one hand, using and, on the other hand, providing cryptographic products, and has created the role of a "trusted third party" (TTP).


A trusted third party (TTP) can be described as "an impartial organisation delivering business confidence, through commercial and technical security features, to an electronic transaction. It supplies technically and legally reliable means of carrying out, facilitating, producing independent evidence about and/or arbitrating during an electronic transaction. Its services are provided and underwritten by technical, legal, financial and/or structural means".

3.3.6.1 Using or importing cryptographic services or means

For authentication functions and/or integrity control
The user is exempt from any prior formality, provided the cryptographic functions are used only for passwords, identifiers, authentication codes, signatures, access control information, integrity keys, cryptographic keys or similar information.
For confidentiality functions
Use and importation are free if:
- the cryptographic key is shorter than 40 bits;
- the cryptographic key is shorter than 128 bits and exclusively reserved for an individual;
- the cryptographic key, longer than 40 bits and shorter than 128 bits, is registered with a "trusted third party"; or
- the provider already has the benefit of a general authorisation.
A request for authorisation to the SCSSI (Service Central de la Sécurité des Systèmes d'Information, 18 rue du Docteur Zamenhof, 92131 Issy-les-Moulineaux Cedex, France, phone: 33 (0)1 4146 3700, fax: 33 (0)1 4146 3701) is mandatory in any other case.
To read French laws and decrees: http://www.legifrance.gouv.fr/citoyen/officiels.ow
3.3.6.2 Providing cryptographic services or means
Three types of system shall be distinguished:
System of freedom. Decree 99-200 of 17 March 1999 sets up a list of cryptographic means or services for which provision, import (from outside the European Community) or export is free. Generally these techniques are secondary uses of cryptography, for example in mobile phones, DVDs or cash dispensers.
System of registration. Provision, import (from outside the European Community) or export of cryptographic means or services that provide an authentication function (decree 98-101 of 24 February 1998) or a confidentiality function with small keys (decree 99-199 of 17 March 1999) must be registered in advance. A particular regime applies to means for E-Commerce: a simplified procedure is used if the declarant asserts that he needs confidentiality functions…
System of prior authorisation. Any other case (than those quoted above) needs a prior authorisation. An exemption is provided for developing, validating or demonstrating cryptographic techniques; nevertheless, the provider must inform the SCSSI at least two weeks in advance.
Trusted Third Party
The Trusted Third Party (TTP) is dedicated to holding cryptographic secret keys. Through the TTP, the French State can control cryptographic means or services while allowing some freedom to companies.


The Trusted Third Party will be required to enter into a contract with the user and to take steps to preserve the user's security keys.


4 Business aspects
4.1 EDI (Electronic Document Interchange) on Internet
4.1.1 Non-Internet Based Transactions
Secured EDI (Electronic Data Interchange) transactions emerged in the early 1980s. At that time, the Transaction Automation industry was the only possible verification source. By the mid-1980s it had software-driven products, which were referred to as electronic point-of-sale payment systems. Verification in this automation was basically aimed at credit card transactions. These systems quickly gained popularity among point-of-sale personnel since they were affordable, reduced processing cost and time, decreased the incidence of fraud, and gathered accurate customer information.
4.1.2 Internet Based Transactions
Connection to the Internet will usually be by dial-up or leased line to a service provider, using Serial Line Internet Protocol (SLIP) or its successor, Point to Point Protocol (PPP). The service provider will be linked to other Internet networks by telecommunications lines, over which runs the Transmission Control Protocol/Internet Protocol (TCP/IP). The numeric addresses used by this protocol are mapped to the more familiar addresses (e.g. www.ris.at) by Domain Name Servers (DNS).
• Users can send/receive email using Simple Mail Transfer Protocol (SMTP) or Post Office Protocol (POP - not to be confused with Point of Presence). Data can be included as attachments to email using Multipurpose Internet Mail Extensions (MIME); a small sketch follows this list.
• Users can engage in real-time conversations using Internet Relay Chat (IRC).
• Files of data can be transferred using File Transfer Protocol (FTP).
• Browsing the information on the Internet is possible using programs like Gopher or Archie (archive), which present Internet files as a series of menus, containing other menus, directories or files - which may be downloaded.
• The most popular way of browsing is to use software like Netscape Navigator or Microsoft Internet Explorer, which allow access to the World Wide Web (WWW). This uses hypertext links, like Windows help files, which means the Internet can be navigated by simply pointing and clicking. The graphical interface can support graphics, movies and sound. Data is transferred using the HyperText Transfer Protocol (HTTP).
• It is also possible to use other computers on the network via applications such as Telnet.
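As a hedged illustration of the first bullet above, the sketch below packages a small EDI interchange as a MIME attachment and hands it to an SMTP server. It is only an example: the addresses, host name and payload are invented, and the Python standard library (smtplib, email) is assumed rather than anything prescribed by these guidelines.

```python
# Hypothetical sketch: sending an EDIFACT interchange as a MIME attachment
# over SMTP. All names and addresses below are placeholders.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

edifact_order = b"UNB+UNOA:1+SENDER+RECEIVER+991020:1200+1'UNH+1+ORDERS:D:96A:UN'..."

msg = MIMEMultipart()
msg["From"] = "orders@buyer.example"
msg["To"] = "edi-gateway@supplier.example"
msg["Subject"] = "EDIFACT ORDERS interchange"
msg.attach(MIMEText("EDI interchange attached.", "plain"))

# MIME encodes the payload so that no characters are corrupted in transit.
attachment = MIMEApplication(edifact_order, _subtype="octet-stream")
attachment.add_header("Content-Disposition", "attachment", filename="order.edi")
msg.attach(attachment)

with smtplib.SMTP("mail.buyer.example") as smtp:
    smtp.send_message(msg)
```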

4.1.3 EDI
So what has all this to do with EDI? Well, first of all the Internet is easy to connect to, on a global basis, with relatively low charges. For example, sending a 1000 byte message (e.g. a small EDIFACT ORDER message) anywhere in the world costs a few Euro-cents.


Some networks just have a fixed monthly cost of around a few pounds. The Internet can carry EDI messages in a standard way by email using MIME, or with the WWW using Web forms to input data. An area of concern for traditional EDI has been the relatively low take-up amongst small to medium sized enterprises (SMEs). One possible way of addressing this is to use "Lite EDI". This tends to be an umbrella term, covering various mechanisms for delivering EDI capability in a simple way and at low cost. Typical solutions utilise a web browser form where the user keys in data, such as an order, which is then transmitted to the recipient either directly as an EDI message, or via a third party service which can translate the information into an EDI format. Security can be applied at the EDI level, or at the Internet level, depending on the particular scenario.
There are already around 1000 times more users connected, though, of course, a significant proportion will be academic and private users. However, there is now growing usage for commercial purposes, such as marketing, sales, payments and information services. This is typically aimed at individuals rather than organisations. Traditionally, the Internet, because of its academic origins, has had no charging mechanisms, especially not cross-charging - it has been a "free" service. Of course, the providers make charges for the access they provide, and other end users, for example in universities, pay via their IT budgets. The lack of a cross-charging mechanism means that service providers cannot recover from the sender of a message the cost of passing it to a recipient. This resulted in one traditional Value Added Network charging recipients of Internet messages - previously largely unheard of either on the Internet or in telecommunications generally (though some EDI networks do this).
4.1.4 Security
Like many public networks, the Internet itself is not secure, whilst the VANS often used for EDI normally tend to be trusted. While your own service provider may have suitable controls, there is no guarantee over which networks your messages may flow - messages may be read or lost. There are confirmation of delivery and receipt options, which may help cover some threats. There are also various methods of providing additional security for Internet traffic. Applied to EDI, they can only protect whole interchanges, whilst EDI security can protect individual messages or transaction sets.
4.1.5 Security of web browsing
By default, web browsing over the Internet is an entirely insecure process. All data transported between client and server is sent in the clear, potentially leaving it open to interception. Although the risks of Internet data interception are frequently exaggerated, it is essential when conducting any dialogue where confidential information is involved to employ a more advanced form of security. The most widely used method of securing web sessions is by means of the Secure Sockets Layer (SSL); pages secured in this way can be identified by the URL scheme https: (http secure) and by the presence of a solid "key" or "lock" icon (usually at the bottom left-hand corner of the browser window).
4.1.5.1 Threats
Broad threats to data transmitted using the Internet are similar to those for EDI in transit generally. These are:


• Messages may be intercepted and modified
• Messages may be lost or replayed
• Messages may be read by a third party
• A third party may pretend to be one of the original two parties
• One of the parties may claim it never sent/received a particular message
For the Internet, some additional threats are:
• Network unavailability may be more of an issue
• The prospect of receiving unwanted advertising mail must also be viewed with concern - although the Internet is supposed to be self-regulating to mitigate such issues, there is no central enforcement.
4.1.5.2 Secure EDI over the Internet
EDI is considered to be the electronic exchange of messages between computer applications, without any human intervention, according to agreed conventions and standards. So, what we mean by secure EDI is that the data transmitted has:
• not been read by anyone except the intended recipient
• not been modified (either accidentally or maliciously)
• come from the claimed sender
• reached the intended recipient
The threats to the local systems differ depending on the type of access. Sending and receiving email should present no additional worries, provided there is no embedded automatically executed batch file, document macro (e.g. in spreadsheets or word processor documents) or program code. Using file transfer means that programs or document macros which may contain viruses can be downloaded. These will only become active if the program is run or the document is viewed. It is obviously a good idea to have some corporate policy on such activity by employees. Telnet probably presents the biggest danger if someone else accesses your computer system. The normal line of defence is the use of a firewall. A firewall, which filters data from the outside world and allows certain users predefined access levels, will help to address some of the local issues. There are three main types of firewall component:
• Packet filters - look at Internet addresses and port numbers, often related to the type of application, and make decisions regarding access accordingly.
• Circuit level gateways - proxy servers hold separate sessions with the two parties and determine who can do what. The proxy server can handle multiple applications, but modified clients are required.
• Application level gateways - need specific proxy servers in the gateway for each application, but no modification of clients, other than revised procedures to connect to the proxy server.
Firewalls need careful administration and testing, and should not promote a feeling of overconfidence in their security capability.
4.1.5.3 Privacy
If the privacy of data is vital, it must be encrypted. Encryption is the art of encoding information in such a way that only the holder of a secret key can decode and read it. It is based on two things: an algorithm and a key. The algorithm is a mathematical formula that can be applied to intelligible text to transform it into unintelligible text, or cipher-text as it is known.


The opposite transformation is called decryption. The key is used in conjunction with the algorithm to produce different results. If an algorithm requires a unique key to decrypt previously encrypted text, then it is considered secure. Text encrypted using such an algorithm can only be dishonestly decrypted by trying every key combination until the correct one is found. This is what is known as a 'brute force' attack.
4.1.5.4 Symmetric Encryption
Symmetric (or secret-key) encryption algorithms use the same key to encrypt and decrypt the text.
Advantages: simple and fast!
Disadvantages:
• an appropriate key has to be agreed, and secretly exchanged between trading partners
• where many trading partners are involved, this process would be repeated many times, which would no doubt lead to mistakes and possible security breaches
• the key cannot be used for proof of origination, as two or more trading partners can use the same key
Examples of symmetric encryption algorithms:
• Shift Cipher
• Substitution Cipher
• Affine Cipher
• Vigenere Cipher
• Hill Cipher
• Permutation Cipher
4.1.5.5 Asymmetric Encryption
Asymmetric (or public-key) encryption is based on a pair of keys. One key is distributed (or made public) to all trading partners and is used to encrypt the data, while the other (the private key) is kept secret and is used for decryption. As the private key is held by the recipient, only they can decrypt messages intended for them.
Advantage: the public key can be freely and easily distributed; since it is only used for encrypting the data, it is of no use in the decryption process. It also overcomes the shortcomings of symmetric encryption.
Disadvantage: it is complex and relatively slow (unsuitable for encrypting large EDI interchanges).
To resolve these shortcomings, a random symmetric key is generated for each EDI interchange and used to encrypt that interchange. The key is then itself encrypted, using the recipient's public key, and is sent with the interchange. Hence only the true recipient can obtain the symmetric key in order to decrypt the interchange. This process is effective since public key cryptography is only used on the symmetric key and not the entire interchange. (A code sketch of this hybrid approach follows the list of present trends below.)
Examples of asymmetric encryption algorithms:
• RSA Cryptosystem
• ElGamal Cryptosystem
4.1.5.6 Present trends
• IDEA (International Data Encryption Algorithm) is fairly new but is used by the popular email protection products.


• For public key encryption, RSA has become the de facto standard. This is also a method patented by RSA Data Security Inc.
• For authenticity of the key for a particular recipient, use of public directories that are controlled by Certification Authorities (CAs) or Trusted Third Parties (TTPs) is prevalent. CAs and TTPs control the issuing of - and access to - certificates which provide a reliable means of establishing a party's claim to a given public key. The most common standard for the distribution and control of public keys is X.509.
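To make the hybrid scheme of 4.1.5.5 concrete, here is a minimal sketch: the interchange is encrypted with a freshly generated symmetric key, and that key is then wrapped with the recipient's RSA public key. It is an illustration only, assuming the third-party Python "cryptography" package; the key and message names are invented for the example.

```python
# Minimal sketch of hybrid (symmetric + RSA) encryption; illustrative only.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair (in practice the public key would come from a certificate).
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

interchange = b"UNB+UNOA:1+SENDER+RECEIVER+991020:1200+1'..."  # the EDI payload

# 1. Encrypt the interchange with a random symmetric key (fast, any size).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(interchange)

# 2. Wrap the small symmetric key with the recipient's public key (slow, but tiny).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_public.encrypt(session_key, oaep)

# The recipient reverses the two steps with the private key.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == interchange
```

The design point is the one made in the text: the expensive public-key operation is applied only to the short session key, never to the whole interchange.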

4.1.5.7 Certification
Only one company is currently widely accepted as a CA: Verisign Inc., and as yet it only operates in the US. It offers four levels of certification: Level 1 guarantees an unambiguous name and e-mail address, while Level 2 requires a trusted third party to verify the applicant's identity. Levels 3 and 4 require personal interviews for verification, 3 for individuals and 4 for companies.
4.1.6 Content Integrity
Content integrity is ensuring that the data has not been altered in any way. There are two potential causes of information modification: accidental corruption and malicious modification.
4.1.6.1 Accidental Corruption
To see how we can maintain content integrity, we have to look at the two main methods of data transfer via the Internet: FTP and MIME (over SMTP). FTP (File Transfer Protocol) allows a file to be transferred between trading partners while they have an on-line connection to the Internet. It is fast and simple but has limitations related specifically to content integrity:
• due to the diversity of Internet routers, certain characters can be converted during the session
• each FTP session could be different for each trading partner due to differences in server characteristics
• FTP cannot be used while the recipient is off-line
• the FTP protocol has a feature known as port redirection which would enable a firewall server to be bypassed, thus enabling hackers to access systems behind any security firewall!
MIME (Multi-Purpose Internet Mail Extensions) overcomes most of these problems:
• it has a means of encoding the data for transfer (via SMTP) without any accidental character conversions
• it also encapsulates the data so that it can reside in a temporary location until the recipient comes on-line
• trading partners do not require any knowledge of each other's systems
4.1.6.2 Malicious Modification
Protection against malicious modification is handled differently. The data is vulnerable because, between source and destination, it is routed via a number of intermediate nodes, and these are potential security weak points. To ensure that the information is not modified in any way between sender and recipient, the sender must include an integrity control value.


This value is computed using the content of the message and a special algorithm known as a one-way hash function. The recipient of a protected message runs the same algorithm against the data and compares the resulting hash value with the one sent with the data. If the two are equal, then the data has not been tampered with. MD5 is a widely used hash algorithm for securing EDI data sent via MIME.
4.1.6.3 Non-repudiation of Origin
Guaranteeing that the message came from the claimed sender is known as non-repudiation of origin. This prevents what is known as spoofing, where a sender can appear to be a different trading partner and could possibly supply false information to the recipient. Non-repudiation of origin is accomplished using digital signatures. Digital signatures also use public key cryptography, but in reverse to what was previously discussed in the section on encryption. When encrypting data using public key cryptography, the public key is used to encrypt the data and only the private key can decrypt it; whereas with digital signatures the private key is used to encrypt a code within a message, while the public key is used to decrypt it. The fact that the private key never leaves the possession of the owner ensures the legal standing of a message signed in such a way. In other words, it proves beyond reasonable doubt which trading partner sent it, and non-repudiation of origin is achieved. (A short sketch of both the hashing and the signing mechanism is given after the summary below.)
4.1.7 Summary
The Internet can offer an easy, low-cost route for EDI traffic, and many other business opportunities. Its intrinsic lack of security can be overcome by the use of existing generic security techniques, as described in the annex. The use of EDI-specific security techniques is more flexible, since EDI information can then be protected even over non-Internet routes via gateways, and individual messages can be protected, rather than just whole interchanges.
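As a hedged illustration of the integrity check of 4.1.6.2 and the digital signature of 4.1.6.3, the sketch below hashes a message and signs it with the sender's private key; the recipient recomputes the hash and verifies the signature with the sender's public key. It assumes the Python "cryptography" package and uses SHA-256 rather than MD5, purely as an example; nothing here is prescribed by the guidelines themselves.

```python
# Illustrative only: one-way hash for integrity plus an RSA signature for
# non-repudiation of origin.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

message = b"UNH+1+ORDERS:D:96A:UN'...payload..."

# Integrity control value (4.1.6.2): a one-way hash sent along with the data.
digest = hashlib.sha256(message).hexdigest()

# Non-repudiation of origin (4.1.6.3): only the sender holds the private key.
sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = sender_private.sign(message, pss, hashes.SHA256())

# Recipient side: recompute the hash and verify the signature.
assert hashlib.sha256(message).hexdigest() == digest
try:
    sender_private.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid - message came from the claimed sender")
except InvalidSignature:
    print("signature invalid - message modified or not from the claimed sender")
```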

4.2 Public Key Infrastructure (PKI)
4.2.1.1 Introduction
The use of public key cryptography has characterised the latest developments in security technology. Threats posed by the use of open networks are removed when the main conceptual attributes of security are operative: authentication, confidentiality, integrity and non-repudiation. Public key cryptography permits a user to own a pair of related keys, called the private and public keys, and applications can apply these keys to provide the security attributes mentioned above. The issue is that these keys must have a different status: while the private key must be kept as a non-shareable secret, the public one must be "public". This sounds like a simple matter but is not. The problem is that the mechanisms commonly used for advertising public keys are themselves insecure: publishing a public key with the assurance that it will not be tampered with is a necessity if public key cryptography is to be operative. X.500 directories, finger files, WWW pages or the DNS are mechanisms that are currently considered or used. However, it is not possible merely to store a public key with these mechanisms, as the key can be modified in several ways and this breaches security. The solution to this situation is certification: rather than advertising a public key, one advertises a public certificate.


A public certificate is a package in which a subject identity, his public key and some other information (such as a validity period) are bound together and digitally signed by a third party (a certification authority, CA). If a user requiring someone else's public key receives a certificate signed by a CA that he knows, then he is able to determine the integrity and validity of the certificate and of the subject's public key. The certification mechanism introduces the extra problem of trust models:
a) What are individual users expected to know about certification authorities?
b) Whom do individual users have to trust in order to obtain useful results?
c) Which mechanisms are needed to allow two users who have never previously met to establish a useful level of assurance about each other's identity?
A practical and reliable means of publishing public keys is called a Public-Key Infrastructure, or PKI, and is based on a pre-established key trust model. As more information systems are conducted over intranets and the Internet, network application vendors and others (e.g., Web, messaging and groupware) have begun using public-key technology to provide end-to-end access control, integrity and privacy of corporate data. The need for end-to-end security has, in turn, placed a new burden on security administrators, who now must create and maintain a wide Public Key Infrastructure (PKI) to manage the public portion of the cryptographic keys associated with each end user. In the next sections we describe some common basics of public-key infrastructure before examining the peculiarities of trust models in common use and what each one can offer.
4.2.2 Public-key infrastructure concepts
Some knowledge of the basics of digital cryptography is assumed in the next sections. Concepts associated with public key cryptography are recommended prerequisite reading: public and private keys, hashing, digital signatures, key lengths and so on.
A public-key infrastructure (PKI) is the comprehensive system required to provide public-key ciphering and digital signature services. Some basic characteristics that shape a PKI trust model are described below. These characteristics may help to compare one model with another. All these issues are explained in much more detail for the X.509 PKI model in later sections.
A PKI is essential because only at the application layer can true end-to-end security be provided; by contrast, network-layer security stops at the network access device (e.g., firewall or router), even though today the authentication of many network devices is PKI-enabled. A PKI consists of the cryptographic support services - primarily certificate issuance and validation - needed to manage PKI clients. In most cases, this consists of certificates, directory servers and client software (e.g., Web browsers and server and messaging software) that are "certificate-aware".
4.2.2.1 Public-key-based trust relationships
One useful definition of the word "trust", from X.509, is: "Generally, an entity can be said to 'trust' a second entity when it (the first entity) makes the assumption that the second entity will behave exactly as the first entity expects".


Naturally, the first entity makes the assumption only about a relevant area of the second entity's behaviour, and so the trust between them is limited only to that area. In the realm of public-key technology, a necessary step towards establishing a trust relationship is for the first entity to import a public key from the second one and protect its integrity in storage or communication to other entities. The entity that imports the public key is known as the relying party, because it intends to rely upon the public key for protecting subsequent exchanges with the key holder (see the figure below).

[Figure: the relying party imports the key holder's public key and thereby places trust in the key holder.]

4.2.2.2 What is a Public-key Infrastructure
As mentioned above, in its most simple form a PKI is a system for publishing the public-key values used in public-key cryptography. The effective purpose of a public-key infrastructure is to manage keys and certificates. A PKI should enable the use of encryption and digital signature services across a wide variety of applications. The purpose of a PKI is to provide trusted and efficient key and certificate management, thus enabling the use of authentication, non-repudiation, and confidentiality. There are two basic operations common to all PKIs:
• Certification, which is the process of binding a public-key value to an individual, organisation or other entity, or even to some other piece of information such as a permission or credential.
• Validation, which is the process of verifying that a certification is still valid.
How these two operations are implemented defines the singularity of a PKI and its underlying trust model.
4.2.2.3 Certification
A certificate is the form in which a PKI communicates public-key values, or information about public keys, or both. In more traditional terms, a certificate is a collection of information that has been digitally signed by its issuer. The information contained in a certificate is a basic characteristic that distinguishes different PKIs. We call the certificate subject the entity whose public key is published by the certificate (a person, institution or company); this subject is also called the End Entity. The certificate user is an entity (a person, institution or company) who relies upon the information contained in a certificate. Very often subject and user are the same entity (i.e. we have an identity certificate), but this is not a requirement. As another example, we will see that in the PGP model the certificate subject and the CA are the same entity. The relationship between the CA issuing the certificate, the certificate user and the certificate subject is also a basic PKI characteristic of a trust model. As explained above, all three may be distinct entities, or any two (or all three) may be the same entity. Another aspect of certificates that may distinguish models is the kind of certificates the model is able to manage. An identity certificate simply identifies the certificate subject and lists the public key(s) for that entity (an individual, corporation, government or other organisation).


A credential certificate describes non-entities such as permissions or credentials; a credential certificate may or may not identify the entity to which the credential is attached. These credential certificates, also named attribute certificates, should not be confused with the ones referred to in the ANSI draft X9.45. Whether a PKI releases identity certificates, credential certificates or both is a basic characteristic of a PKI trust model.
4.2.2.4 CA arrangements
It is obviously impractical to have a single CA acting as the authority for the entire world. Therefore, most trust models permit CAs to certify other CAs; the certificates issued to individual users are called end-user certificates, while the certificates issued by a CA to another CA are called CA-certificates or cross-certificates. If an end-user of one CA has to obtain a trusted public key of an end-user of a different CA, it will have to validate a chain of certificates between them, in a process called certification path validation. How the CAs are arranged in a trust model is a basic PKI characteristic. The arrangement takes forms that range from strictly hierarchical (PEM) up to a "web of trust" (PGP); these structures are explained in the next sections. The CA relationships in a PKI also govern its scalability: if a trust model must operate globally, its functions and operations must scale up to millions of users while retaining their practicality.
4.2.2.5 Validation
A certificate user needs to be sure that the certificate's data is still valid, because information in the certificate can change over time. There are two methods for validating a certificate:
• On-line validation: the user can ask the CA directly for the validity every time the certificate is used.

• Off-line validation: a validity period specified in the certificate defines the range during which the information in the certificate can be considered valid.
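As a small, hedged illustration of off-line validation, the following sketch checks a certificate's validity period locally, without contacting the CA. It assumes the Python "cryptography" package and a PEM-encoded certificate file whose name is invented for the example.

```python
# Illustrative off-line validation: check the validity period embedded in the
# certificate itself ("cert.pem" is a placeholder file name).
from datetime import datetime, timezone
from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Certificate times are naive UTC in this library, so compare with naive UTC now.
now = datetime.now(timezone.utc).replace(tzinfo=None)
if cert.not_valid_before <= now <= cert.not_valid_after:
    print("certificate is within its validity period")
else:
    print("certificate is expired or not yet valid")
# A full validation would also verify the issuer's signature and consult a CRL.
```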

Closely related to the validation operation is certificate revocation. Certificate revocation is the process of letting users know when the information in a certificate becomes unexpectedly invalid; for instance, this may happen when a subject's private key becomes compromised. How a PKI trust model revokes certificates is a basic characteristic of it. If validation is on-line, then the revocation process becomes trivial. In the case of off-line validation, the most common revocation method uses a certificate revocation list (CRL): a list of revoked certificates that is signed and periodically issued by a CA. The CRL approach must face the so-called CRL time-granularity problem: what happens between the time a CA receives notification that a certificate should be revoked and the time the next CRL is published by the CA? How are responsibilities apportioned in such a case? Since the revoked certificate will only appear on the next CRL, any user checking his current CRL will assume that the certificate is still valid. Another concern is the size of the CRLs, which for some CAs can be expected to grow very large. This may also have an impact on the bandwidth and the processing time needed to check a certificate against the CRL. These problems have led to several refinements of the CRL approach: separate CRLs for different kinds of certificates or subjects, incremental or delta CRLs, and so on. The fact is that on-line revocation and validation methods and specifications are still at the draft stage and have yet to spread widely.


4.2.2.6 Authentication/Non-repudiability
When a CA issues an entity certificate and an end-user validates this certificate, the entity is said to have been authenticated by the end-user. The degree to which a user can trust the certificate information and its validity is a measure of the strength of the authentication made by the certificate issuer. Authentication made through a PKI is called in-band authentication, as opposed to authentication performed using traditional methods like telephone calls, physical meetings etc., which is called out-of-band authentication. The objective of a PKI must be to minimise out-of-band authentication, reducing it to the initial verification process needed to issue the certificate. It is clear that different CAs will manage this with different security levels, and therefore the "amount of trust" in the authentication will differ. At this point the split between certification and registration functions arises. Sometimes the initial process of authenticating a subject end entity, and other functions (like token distribution, reporting and so on), may be performed by what is called a registration authority. For example, a registration authority (RA) may be responsible for a first-time out-of-band identity authentication, leaving to the certification authority (CA) the technical aspects of the life cycle of the certificate. This is set out in more detail in the X.509 model review. In any case, the extent to which out-of-band authentication is required by a PKI trust model is related to the degree of "non-repudiability" or irrefutability a PKI trust model implementation can provide. So the degree of non-repudiation that a PKI trust model can provide is another basic PKI characteristic. However, legal, social and technical factors converge on this issue. In the end, the "contextual" legal framework in which a PKI model operates and the certificate practice statements of a CA implementation (that is, how well the CA does things) determine the amount of irrefutability the PKI model can provide.
4.2.2.7 Anonymity/Privacy
The degree of non-repudiation of a PKI trust model brings up another issue: anonymity, as another basic PKI characteristic. Ideally, a PKI should provide both strong, irrefutable authentication and a high degree of privacy through anonymity. For example, imagine a PKI that is set up to identify people in the certificates by their name, address, phone number, place of work, job title and so on. Such a PKI would be perfectly suitable when users are acting in the capacities of their jobs. However, the same users might be reluctant to use these PKI certificates to make routine purchases, because their identifying information will be available to the merchant. Credential authentication holds the promise of allowing a PKI to offer both traits.
4.2.2.8 Summary of basic PKI trust model characteristics
The following checklist of basic characteristics helps to determine the PKI trust model on which a CA is based.
• Certificate Information. What data is contained in the certificate? Is it predefined, or can the certificate hold any kind of information?
• CA Arrangement or trust model. Are the CAs arranged in a strict hierarchy or is the PKI unstructured? If unstructured, does the PKI use a web of trust or some other mechanism?


• CA/subject/end-user entities relationship. Are these three entities distinct? How strongly is each tied to the others? For example, is the subject an employee of the CA? Is that a requirement?
• CA/subject/end-user trust relationship. Where the three entities are distinct, what is the degree of trust they must place in each other? Who assumes the most liability?
• Certificate validation method. On-line or off-line validation? If off-line, how are the CRL issues addressed?
• Certificate revocation method. The CRL issues have to be addressed.
• Identity vs. credential certificates. Does the PKI serve only as a means of identifying the public keys of entities? Can it also be used for credentials such as permissions, authorisations and other non-entity attributes?
• Non-repudiation and strong authentication. Is it impossible to deny the digital signature of a subject?
• In-band vs. out-of-band authentication. How much out-of-band authentication is required for the operation of the PKI?
• Anonymity. Does the PKI provide its users with any anonymity? Are irrefutability and authentication diminished as a result?
4.2.3 Basic functions of a PKI
Described below are the formally defined functions of a PKI. These definitions help in understanding the concepts developed in the next sections.
4.2.3.1 Registration
The process whereby a subject first makes itself known to a CA (directly, or through an ORA, an organisational registration authority) prior to that CA issuing a certificate(s) to that subject. Registration involves the subject providing its name and other attributes to be put in the certificate, followed by verification by the CA (possibly with the help of the RA), in accordance with its CPS (Certification Practice Statement), that the name and other attributes are correct.
4.2.3.2 Initialisation
Initialisation occurs when the subject (e.g. the user or client system) gets the values needed to begin communicating with the PKI. For example, initialisation can involve providing the client system with the public key and certificate of a CA, or generating the client system's own public/private key pair.
4.2.3.3 Certification
This is the process in which a CA issues a certificate for a subject's public key and returns that certificate to the subject and/or posts that certificate in a repository.
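To make the registration/certification cycle above concrete, here is a hedged sketch in which a CA signs a certificate binding a subject name to the subject's public key. It assumes the Python "cryptography" package; the names, lifetime and serial-number handling are invented for the illustration and are not part of the guidelines.

```python
# Illustrative certification step: a CA binds a subject name and public key
# into an X.509 certificate and signs it. Names and validity are examples only.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Regional CA")])
subject_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Alice Example")])

now = datetime.datetime.utcnow()
certificate = (
    x509.CertificateBuilder()
    .subject_name(subject_name)            # who the certificate is about
    .issuer_name(ca_name)                  # who vouches for it
    .public_key(subject_key.public_key())  # the key being certified
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())          # the CA's signature binds it all
)
print(certificate.subject, certificate.serial_number)
```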


4.2.3.4 Key Pair Recovery
In some implementations, key exchange or encryption keys will be required by local policy to be "backed up", or recoverable in case the key is lost and access to previously-encrypted information is needed. Such implementations can include those where the private key exchange key is stored on a hardware token which can be lost or broken, or when a private key file is protected by a password which can be forgotten. Often, a company is concerned about being able to read mail encrypted by or for a particular employee when that employee is no longer available because he is ill or no longer works for the company. In these cases, the user's private key can be backed up by a CA or by a separate key backup system. If a user or his employer needs to recover these backed up key materials, the PKI must provide a system that permits the recovery WITHOUT presenting an unacceptable risk of compromise of the private key.
4.2.3.5 Key Generation
Depending on the CA's policy, the private/public key pair can either be generated by the user in his local environment, or generated by the CA. In the latter case, the key material may be distributed to the user in an encrypted file or on a physical token - e.g., a smart card or PCMCIA card.
4.2.3.6 Key Update
All key pairs need to be updated regularly, i.e., replaced with a new key pair, and new certificates issued. This will happen in two cases: normally, when a key has passed its maximum usable lifetime; and exceptionally, when a key has been compromised and must be replaced. In the normal case, a PKI needs to provide a facility to transition gracefully from a certificate with an existing key to a new certificate with a new key. This is particularly true when the key to be updated is that of a CA. Users will know in advance that the key will expire on a certain date; the PKI, working together with certificate-using applications, should allow for appropriate keys to work before and after the transition. In the case of a key compromise, the transition will not be "graceful" in that there will be an unplanned switch of certificates and keys; users will not have known in advance what was about to happen. Still, the PKI must support the ability to declare that the previous certificate is now invalid and shall not be used, and to announce the validity and availability of the new certificate. Note, however, that the compromise of a private key associated with a self-signed rootCA certificate is always catastrophic. That is, once the rootCA's private signature key has been compromised, there is no way to reliably convince users and subordinate CAs to accept a new key for the rootCA. If the key is compromised, any "update" message telling subordinates to switch to a new key could have come from an attacker in possession of the old key, and could point to a new public key for which the attacker already has the private key. When a rootCA's private signature key is compromised, the only option is dismantling the entire infrastructure subordinate to that rootCA and starting over again from scratch. It is possible to have anticipated this event, and "pre-placed" replacement rootCA keys with all relying parties, but some secure, out-of-band mechanism will have to be used to tell users to make the switch, and this will only help if the replacement key has not been compromised.


4.2.3.7 Cross-certification
A cross-certificate is a certificate issued by one CA to another CA which contains a public CA key associated with the private CA signature key used for issuing certificates. Typically, a cross-certificate is used to allow client systems/end entities in one administrative domain to communicate securely with client systems/end users in another administrative domain. Use of a cross-certificate issued from CA_1 to CA_2 allows user Alice, who trusts CA_1, to accept a certificate used by Bob, which was issued by CA_2. (Note: cross-certificates can also be issued from one CA to another CA in the same administrative domain, if required.)
4.2.3.8 Revocation
When a certificate is issued, it is expected to be in use for its entire validity period. However, various circumstances may cause a certificate to become invalid prior to the expiration of the validity period. Such circumstances include change of name, change of association between subject and CA (e.g., an employee terminates employment with an organisation), and compromise or suspected compromise of the corresponding private key. Under such circumstances, the CA needs to revoke the certificate. X.509 defines one method of certificate revocation: CRLs. A CRL is a time-stamped list identifying revoked certificates which is signed by a CA and made freely available in a public repository. Each revoked certificate is identified in a CRL by its certificate serial number. When a certificate-using system uses a certificate (e.g., for verifying a remote user's digital signature), that system not only checks the certificate signature and validity but also acquires a suitably-recent CRL and checks that the certificate serial number is not on that CRL (a small sketch of this check is given after section 4.2.3.9 below). The meaning of "suitably-recent" may vary with local policy, but it usually means the most recently-issued CRL. A CA issues a new CRL on a regular periodic basis (e.g., hourly, daily, or weekly). CAs may also issue CRLs aperiodically; e.g., if an important key is deemed compromised, the CA may issue a new CRL to expedite notification of that fact, even if the next CRL does not have to be issued for some time. (A problem of aperiodic CRL issuance is that end-entities may not know that a new CRL has been issued, and thus may not retrieve it from a repository.) An advantage of the CRL revocation method is that CRLs may be distributed by exactly the same means as certificates themselves, namely, via untrusted communications and server systems. One limitation of the CRL revocation method, using untrusted communications and servers, is that the time granularity of revocation is limited to the CRL issue period. For example, if a revocation is reported now, that revocation will not be reliably notified to certificate-using systems until the next CRL is issued -- this may be up to one hour, one day, or one week depending on the frequency with which the CA issues CRLs.
4.2.3.9 Certificate and Revocation Notice Distribution/Publication
As alluded to in the sections above, the PKI is responsible for the distribution of certificates and certificate revocation notices (whether in CRL form or in some other form) in the system. "Distribution" of certificates includes transmission of the certificate to its owner, and may also include publication of the certificate in a repository. "Distribution" of revocation notices may involve posting CRLs in a repository, transmitting them to end-entities, and/or forwarding them to on-line responders.
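The CRL check described in 4.2.3.8 can be sketched as follows. This is an illustration only, under the assumption that the Python "cryptography" package is available and that the certificate and CRL have already been obtained as PEM files; the file names are placeholders.

```python
# Illustrative CRL check: is the certificate's serial number on the CA's
# current revocation list? File names are placeholders for this sketch.
from cryptography import x509

with open("user_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("ca_crl.pem", "rb") as f:
    crl = x509.load_pem_x509_crl(f.read())

# A real relying party would first verify the CRL's signature against the CA
# certificate and check that the CRL is "suitably recent".
revoked_serials = {entry.serial_number for entry in crl}
if cert.serial_number in revoked_serials:
    print("certificate has been revoked - do not accept it")
else:
    print("certificate is not on this CRL")
```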

4.2.4 PKI Trust models review
Several approaches have been developed to solve the problem: "Given some information about a user (usually a name or a public key), how do we determine which certificates and CRLs (certificate revocation lists) are needed to obtain an authenticated public key for that user, and who do we have to trust?". We are going to introduce some significant and historic proposals as well as some newer attempts.
4.2.4.1 PGP
See § 2.6.3.8. Pretty Good Privacy is a public-key cryptography program created by Philip Zimmermann. It uses RSA and IDEA to encrypt and/or sign messages and was originally designed for use with Internet e-mail. The model of trust used in the Pretty Good Privacy program assumes that only individual users are competent to decide whom to trust, and consequently that all users are competent to decide the implications of trusting someone and to assess the levels of assurance that can be inferred.
PGP summary characteristics
The PGP approach to trust works well in very small groups of users, for example a small company where one person signs the certificates for people in the company and everyone declares that person to be trusted. Of course, what we have here is an organisational CA which signs everyone's keys and which people within the organisation trust.
• Certificate information
The PGP certificate is simple and rigid. It contains only a public key, an email address, and the degree-of-trust attribute. It is not extensible.
• CA arrangement
PGP CAs are arranged in a web of trust.
• CA Subject and User relationship
Each PGP user is his own root CA. Subjects may or may not be CAs.
• CA Subject and User trust relationships
Since each user is their own CA, the PGP user completely trusts her CA. The CAs can assign a degree of trust to their subjects (i.e. other CAs), but they have no way of preventing their trust from being infinitely extended.
• Certificate validation method
PGP uses neither online validation nor validity periods. Once a certificate is added to a user's keyring, it is considered valid until the user decides otherwise.
• Certificate revocation method
PGP relies on word-of-mouth to propagate information about revoked certificates. PGP does not use CRLs.
• Identity vs. credential certificates
PGP uses purely identity certificates. They have no provisions to include credentials.
• Irrefutability and strong authentication
PGP has very weak authentication.

The sole means of identifying a subject is with an Internet email address.
• In-band vs. out-of-band authentication
PGP relies almost entirely on out-of-band authentication.
• Anonymity
PGP does not provide for any direct anonymity. A degree of anonymity can be achieved by using a "fake" email address.

4.2.4.2 PEM/RFC-1422

See § 2.6.3.5. PEM was proposed in early 1993 to endow Internet email with the four attributes of security. The standard never caught on in the Internet community, for various reasons, one of which was that the proposed PKI trust model proved to be a poor fit for the Internet's peer-based structure.
The Privacy Enhanced Mail secure electronic mail protocol (RFCs 1421-4) defines a strict top-down hierarchy with three or more levels of CAs. At the top level of the hierarchy there is a single CA which merely ties together all of the CAs at the level below without making any statements about what they do. The Internet Society has set up the "Internet Policy Registration Authority" (IPRA) to fulfil this role of top-level certification authority (TLCA). However, other TLCAs have been constituted in other projects (e.g. EC PASSWORD), so there is not a single TLCA.
The second level of the RFC 1422 hierarchy comprises Policy CAs (PCAs). These are CAs which certify other CAs, ensuring that they only certify CAs that enforce a minimum level of assurance among the subjects that they certify (end users or other CAs). Policy CAs form certification islands (or certification domains, if we think of the subject names as distinguished names, DNs). These certification domains were typically expected to coincide with national boundaries, sensitive organisations or vertical market and business sectors.
At the third level of the RFC 1422 hierarchy there are the organisational CAs. These issue certificates for end users and possibly lower-level departmental CAs according to their stated policy for identifying users before issuing a certificate (authentication), which is enforced by the PCA which certifies the organisational CA. The policies for identifying users may be very different and offer a variable degree of trust. Some CAs may be high-assurance CAs which take enormous care to verify that a user is who he claims to be before signing his certificate, possibly including DNA fingerprinting or other techniques. Some organisational CAs may be medium assurance, requiring a visit to a particular office and a visual inspection by someone who checks the subject's identity out-of-band and is trusted by the CA. Some low-assurance CAs may just require a telephone call or an email message.
• PEM conclusions
The cumbersome nature of its PKI contributed to PEM's downfall as an Internet standard. The lesson learned from experience with PEM is that a strict, hierarchical trust model does not work well in the global Internet: there is not going to be a single hierarchy for the whole world.
• Certificate information
PEM certificates are X.509v1 certificates.

• CA arrangement
PEM uses a rigid, top-down CA hierarchy.
• CA Subject and User relationship
Users and subjects are distinct from CAs. No PEM user can be a CA.
• CA Subject and User trust relationships
All PEM users must place complete trust in the IPRA, as all certification paths start with the IPRA's key. A user must also trust the PCAs and CAs they encounter in a certification path. The user must be familiar with each PCA's policies, and trust that the PCA and the CAs have not deviated from those policies.
• Certificate validation method
Certificates are validated "online" using email. It is expected that the message originator would include the full certification path to his key in the message, which the recipient can validate using the IPRA's key. While performing this validation, the user must also verify that no certificates have been revoked or expired, and that DN subordination has been followed.
• Certificate revocation method
PEM uses X.509v1 CRLs, distributed via email to each user.
• Identity vs. credential certificates
PEM certificates are purely X.509v1 identity certificates.
• Irrefutability and strong authentication
The PEM architecture allows for strong authentication of users.
• In-band vs. out-of-band authentication
Each user needs to obtain the IPRA's key via some out-of-band means. Given that key, all other authentication can be performed in-band.
• Anonymity
PEM provides an anonymity mechanism through what it calls "PERSONA" CAs. A PERSONA CA is distinct from a regular PEM CA in that it explicitly does not vouch for the identity of its subjects.

4.2.4.3 ICE/TEL
The ICE-TEL project is a pan-European project that is building an Internet X.509-based certification infrastructure throughout Europe, plus several secure applications that will use it. The ICE-TEL trust model is based on a merging of and extensions to the existing Pretty Good Privacy (PGP) web of trust and Privacy Enhanced Mail (PEM) hierarchy of trust models, and is called a web of hierarchies trust model. The web of hierarchies model has significant advantages over both of the previous models.
4.2.4.3.1 ICE-TEL trust model requirements
Based on experience gained previously with other trust models, the following requirements can be listed for the ICE-TEL trust model:
1. The trust model shall be capable of operating without the use of certificates or CRLs, through trusted exchange of public keys.

2. Where certificates are used, the trust model shall require the use of the X.509 standard v3 certificates and v2 CRLs expected to be ratified in 1997. Earlier versions of the certificate and CRL may be supported by implementations while they remain in widespread use, though these may not provide access to all facilities of the trust model. Proprietary certificate and CRL formats shall not be supported.
3. The trust model shall allow for the creation of security domains encompassing:
• single users;
• multiple users (small/simple organisations);
• arbitrarily complicated organisations.
4. The trust model shall allow organic growth among the users and organisations; and shall allow security domains to grow, shrink and reorganise at any time with minimum inconvenience to the users of the domain and to people communicating with those users.
5. The trust model shall allow the administrator of a security domain to choose which other domains are to be trusted. There shall be no requirement that inter-domain trust is mutual, and there shall be no points that any domain is required to trust. Inter-domain trust shall not be transitive.
6. The operation of the trust model as a whole shall not depend on the existence and operation of any single part of the deployed infrastructure. In particular, there need not be a central top-level registration authority like the PEM IPRA.
4.2.4.3.2 Principles of the ICE-TEL PKI Trust Model
These are the principal components of the ICE-TEL trust model and their roles or responsibilities:
• The trust model comprises users and CAs. All CAs are equal in terms of what they are capable of doing (though, of course, some do it more securely than others).
• All users have a private key and a public key. The user may have generated their own key pair, or it may have been generated for them by either a certification authority or a third-party registration authority.
• All CAs have a private key and a public key. The CA will normally have generated its own key pair, though it may have been generated by a superior certification authority.
• Any public key may be signed by zero or more certification authorities.
• Any user can explicitly trust any other user by obtaining an out-of-band copy of the user's public key.
• Any user can explicitly trust any CA by obtaining an out-of-band copy of the CA's public key.
• Any CA can certify any user (provided the identity of the certified user is appropriate to the certification policy of the certifying CA) by publishing a signed copy of the user's public key.
• Any CA can certify any other CA (provided the identity and the certification policy of the subordinate CA is appropriate to the certification policy of the superior CA) by publishing a signed copy of the subordinate CA's public key.
• Any CA can cross-certify any other CA (provided the certification policy of the cross-certified CA is appropriate to the cross-certification policy of the cross-certifying CA) by publishing a signed copy of the CA's public key.
• Every CA must have a certification policy that describes the operation of the CA with respect to the entities that it certifies (whether they are subordinate CAs or users). This policy will be published, and a CA will usually expect its public key to be certified or cross-certified by other CAs who (a) find its policy acceptable and (b) trust it to implement the stated policy.

A CA will also have a cross-certification policy, which states the minimum policy that a CA requires another CA to implement in order to issue a cross-certificate. A crosscertificate means that any entity trusting the cross-certifying CA will be able to verify certificates signed by the cross-certified CA or by other CAs certified by the crosscertified CA. Note that this does not extend to the ability to verify certificates signed by CAs crosscertified by the cross-certified CA (in other words, cross-certification is not transitive). This is because a cross-certificate signifies one CA accepting the ability of another CA to implement a certification policy, but does not make any statement at all about the other CA’s cross-certification policy. This meets the requirement that the administrator of a security domain chooses which other domains are to be trusted by the users within his/her domain, and does not inherit other administrators’ choices. In general, there can be no single meaningful/simple picture of “the trust model”. Strict hierarchies are easy to draw, but general mechanisms quickly become too complex to comprehend. Instead, each user can be considered to have their own view of the world with: • themselves at the top of hierarchy • at the second level comes everything for which they hold an out-of-band (trusted) public key. For users who are part of one or more hierarchies, this will comprise the public keys of the root CA for each hierarchy. It will also comprise users and CAs who have been explicitly trusted • at the third level comes everything certified or cross-certified by the CAs at the second level • at the fourth level comes everything certified or cross-certified by the CAs at the third level…. and so on 4.2.4.3.3 ICE-TEL model scalability and flexibility: organic growth of CAs In order for a security infrastructure to become widely used, it is important that the level of effort required from users of the infrastructure is proportional to the use and benefit they expect to obtain from the infrastructure. In particular, small scale users with occasional security needs should have a small start-up cost. The three different types of security domain introduced by this trust model ensure that this requirement is met. For users with small scale security requirements, the model allows them to generate a key pair for themselves and exchange the public key with users with whom they want to communicate securely. Companies with medium security requirements can create a CA to certify users within the company and can cross-certify other companies they deal with as the need arises. Companies with large security requirements can create a hierarchy of different CAs with different policies and with different cross-certificates for each part of the hierarchy. However, it is also important that the model allows the security requirements of a user or an organisation to change over time, and that these changes do not require disproportionate amounts of work. Migration between the three types of security domain (users, CAs, hierarchies) should be well defined. In particular, the following transitions can be identified: 1. One or more uncertified users merge to form a simple CA 2. The users would previously have had self-issued key-pairs, and these would all be signed by the newly created CA according to its newly devised certification policy. The public key of the new CA would be distributed to users. If the users previously held

out-of-band public keys for other users or CAs then these can be retained. They could also be signed by the CA, according to its cross-certification policy, thereby becoming cross-certificates for all users of the CA. 3. One or more CAs merge to form a hierarchy (or hierarchies merge) 4. The public keys of the CAs would be signed by the newly created CA at the root of the new hierarchy, and the public key of the new root CA would be securely distributed to users. The new root CA would devise and advertise a certification policy and a crosscertification policy according to the weakest policies of the original CAs. Any crosscertificates issued by the original CAs can be retained by those CAs, or they can be resigned by the new root CA as long as the cross-certification policy permits. 5. A hierarchy divides into a collection of CAs (or other hierarchies) 6. Each subordinate CA signed by the CA at the root of the dividing hierarchy would pass its public key to its subordinate users by out-of-band mechanisms. It may also choose whether to sign the public keys in any cross-certificates signed by the former root. Users would discard the public key of the former root CA and any certificates signed with that key. 7. A CA divides into a collection of uncertified users 8. Each user would discard the public key of the CA and their certificate signed by that CA. The users may choose to add the public keys in the cross-certificates issued by the CA to their list of trusted public keys. 9. A CA subdivides into a collection of CAs 10. A new CA will be created (with a newly devised certification policy and crosscertification policy) which will sign the public keys of existing users (or can, of course, issue the users with new key pairs) and will securely pass its public key to the users. The users do not need to discard the public key or certificate of the old CA. 4.2.4.3.4 Managing Complexity A simplification to the trust model which tends to reduce the number of possible certification paths is to restrict the CAs that are allowed to issue cross-certificates and what they are allowed to certify. In general, the model mixes the “hierarchical trust” and “web of trust” models to form a “web of hierarchies” model; and specifically, three types of hierarchy can be identified: 1. Hierarchies with two or more levels correspond to existing hierarchies such as described in RFC1422. The root of the hierarchy will be a PCA which describes and advertises the minimum policy which subordinate CAs enforce and to which end entities adhere. This type of hierarchy is very suitable for large national or multinational companies with well-defined security needs. The security hierarchy can be organised on the basis of geographical distribution, company organisational distribution, distinct security needs of different groups of users or any other criteria. 2. Hierarchies with exactly one level. This corresponds to a simple CA issuing certificates for a collection of users. The CA must specify a security policy that it enforces. This arrangement is very suitable for small organisations or groups of users with a single set of security requirements. Of course, the direct trust CA may issue certificates for a very large number of users, making this model also suitable for a large company which does not wish to or need to replicate the certification process as required by a larger hierarchy. 3. Hierarchies with exactly zero levels. This corresponds to uncertified user making its public key available out-of-band. 
There will be no policy statement involved. This

arrangement, which corresponds closely to the PGP model, is very well suited for individual end users or for organisations with no overall security requirement where an individual user has a short term need for security. Each of these hierarchies has a “Trust Point” (TP) at its root (the root being the PCA in the first case, the simple CA in the second case, and the uncertified user itself in the third case) and each hierarchy defines a separate security domain. In a simplified trust model, we can specify that 1. a user may only be a member of one security domain; and 2. security domains can be interlinked only by cross-certification among the TPs. (of course, a user can be a member of multiple security domains if he/she possesses multiple certificates issued by different CAs within different security domains, but the trust model is not aware that these are the same user). A user within a security domain must trust the TP within that security domain and must have received an out-of-band copy of the public key of that TP. The effect of this is that a fixed certification path can be constructed between two users called user1 and user2: 1. user1 knows the trust point at the root of his security domain 2. user1 discovers (by unspecified means) the trust point at the root of user2’s security domain; and calculates a path from user2’s TP to user2 (this is simply a matter of traversing a hierarchy) 3a. if user1 and user2 are in the same security domain (i.e. share the same TP) then no further certificates are needed for authentication to take place 3b. if user 1 and user2 are in separate security domains then the cross-certificate from user1’s TP to user2’s TP needs to be retrieved and, if it exists, prepended to the certification path. Note that the first part of step 2 may be quite complicated, but can be assisted if this information is stored in a well-defined publicly accessible place (such as the user’s entry in some directory service) or is made available by user2’s CA in some well-defined way. The second part of step 2 may also be assisted if each user stores a full certification path from their TP in a well-defined place. In general, the more information that can be precalculated, the less needs to be done at authentication time. 4.2.4.4 SPKI, Simple Public Key Cryptography The SPKI Working Group has developed a standard form for digital certificates whose main purpose is authorisation rather than authentication. The current state of SPKI work is far from maturity. However, the strength of this new approach to certificate concepts and structures comes from a weakness in the classical identity certificates: these are not well suited to some applications and requirements specially in the process of dealing access control. SPKI is linked to name's SDSI theory and both are emerging with strength in the electronic certificate’s world. The main drawback of this new model approach to certificates is a general deployment of identity certificates in current applications and the perception, by so many, than identity certificates solve authorisation requirements in most applications thanks to extensions recently defined in x.509 v3. This fitness still have to be proved:

"A key-holder's name is one attribute of the key-holder, but rarely of security interest. A user of a certificate needs to know whether a given key-holder has been granted some specific authorisation …. The certificate holder should be able to release a minimum of information in order to prove his or her permission to act."
What is exposed below are some introductory concepts to the SPKI theory.
4.2.4.4.1 Antecedents
Certificates were originally viewed as having one function: binding names to keys or keys to names. This thought can be traced back to the paper by Diffie and Hellman introducing public key cryptography in 1976. Prior to that time, key management was risky, involved and costly, sometimes employing special couriers with briefcases handcuffed to their wrists. Diffie and Hellman thought they had radically solved this problem:
"Given a system of this kind, the problem of key distribution is vastly simplified. Each user generates a pair of inverse transformations, E and D, at his terminal. The deciphering transformation, D, must be kept secret but need never be communicated on any channel. The enciphering key, E, can be made public by placing it in a public directory along with the user's name and address. Anyone can then encrypt messages and send them to the user, but no one else can decipher messages intended for him." [DH]
This modified telephone book, fully public, took the place of the trusted courier. This directory could be put on-line and therefore be available on demand, world-wide. In considering that prospect, Loren Kohnfelder, in his 1978 bachelor's thesis in electrical engineering from MIT [KOHNFELDER], noted:
"Public-key communication works best when the encryption functions can reliably be shared among the communicants (by direct contact if possible). Yet when such a reliable exchange of functions is impossible the next best thing is to trust a third party. Diffie and Hellman introduce a central authority known as the Public File."
4.2.4.4.2 SPKI main goals
The SPKI is intended to provide mechanisms to support security in a wide range of Internet applications, including IPSEC protocols, encrypted electronic mail and WWW documents, payment protocols, and any other application which will require the use of public key certificates and the ability to access them. It is intended that the Simple Public Key Infrastructure will support a range of trust models.
4.2.4.4.3 Rethinking Global Names
The assumption that the job of a certificate was to bind a name to a key made sense when it was first published. In the 1970's, people operated in relatively small communities. Relationships formed face to face. Once you knew who someone was, you often knew enough to decide how to behave with that person. As a result, people have reduced this requirement to the simply stated: "know who you're dealing with". Names, in turn, are what we humans use as identifiers of persons. Therefore, it was natural for people to translate the need to know who the keyholder was into a need to know the keyholder's name.
Computer applications need to make decisions about key-holders. These decisions are almost never made strictly on the basis of a key-holder's name. There is some other fact

about the key-holder of interest to the application (or to the human being running that application). If a name functions at all, it is as an index into some database (or human memory) of that other information. The assumption that names are valid identifiers remains true in much of daily life, but is not true on a global scale. It is extremely unlikely that the name by which we know someone, a given name, would function as a unique identifier on the Internet. Given names continue to serve the social function of making the named person feel better when addressed by name, but they are almost never globally unique. Therefore they are inadequate as the identifiers envisioned by Diffie, Hellman and Kohnfelder. In the late 1990's with the explosion of the Internet, it is likely that one will encounter keyholders who are complete strangers in the physical world and will remain so. Contact will be made digitally and will remain digital for the duration of the relationship. In that case, a globally unique name would not succeed in accessing information about a person encountered for the first time on the net because there is no physical-world relationship giving information to be accessed and there is no digital relationship yet established through which cyberspace information can be gathered. One might remedy this situation by assigning everyone a globally unique and unchangeable name and then using that name to index a global database of facts about the person. This might bring us back to the mode of operation we were accustomed to in small towns. However, that solution would constitute a massive privacy violation and would probably be rejected as politically impossible. A globally unique ID might even fail when dealing with people we do know. Few of us know the full given names of people with whom we deal. A globally unique name for a person would be larger than the full given name (and probably contain it, out of deference to a person's fondness for his or her own name). It would therefore not be a name by which we know the person, barring a radical change in human behaviour. A globally unique ID that contains a person's given name poses a special danger. If a human being is part of the process (e.g., scanning a database of global IDs in order to find the ID of a specific person for the purpose of issuing an attribute certificate), then it is likely that the human operator would pay attention to the familiar portion of the ID (the common name) and pay less attention to the rest. Since the common name is not an adequate ID, this can lead to mistakes. Where there can be mistakes, there is an avenue for attack. Perhaps the best globally unique identifier is one that is uniform in appearance (such as a long number or random looking text string) so that it has no recognisable sub-field. It should also be large enough (from a sparse enough name space) that typographical errors would not yield another valid identifier. 4.2.4.4.4 SDSI 2.0 Names, Local Names and Global Identifiers • SDSI 2.0 Names A basic SDSI 2.0 name is an S-expression (non empty string of elements with the first element called “type” of the expression) with two elements: the word "name" and the chosen name. For example, (name fred) // george is a basic SDSI name, where the comment indicates that the name is the name space defined by george. If fred in turn defines a name, for example,

(name sam) // fred then one can refer to this same entity as (name fred sam) // george relative to george's name space. • Local Names and Global Identifiers The names we use are local names. These are the names we write in our personal address books or use as nicknames with e-mail agents. They can be IDs assigned by corporations (e.g., bank account numbers or employee numbers). Those names or IDs do not need to be globally unique. Rather, they need to be unique for the one entity that maintains that address book, e-mail alias file or list of accounts. Ron Rivest and Butler Lampson showed with SDSI 1.0 [SDSI] that one can not only use local names locally, one can use local names globally. The clear security advantage and operational simplicity of SDSI names caused in the SPKI group to adopt SDSI names as part of the SPKI standard. Even though humans use local names, computer systems often need globally unique identifiers. If we are using public key cryptography, we have a ready source of globally unique identifiers. When one creates a key pair, for use in public key cryptography, the private key is bound to its owner by good key safeguarding practice. If that private key gets loose from its owner, then a basic premise of public key cryptography has been violated and that key is no longer of interest. The private key is also globally unique. If it were not, then the key generation process would be seriously flawed and we would have to abandon this public key system implementation. The private key must be kept secret, so it is not a possible identifier, but each public key corresponds to one private key and therefore to one keyholder. The public key, viewed as a byte string, is therefore a global identifier for the keyholder. If there exists a collisionfree hash function, then a collision-free hash of every public key is also a globally unique identifier for every keyholder, and probably a shorter one than the public key. 4.2.4.4.5 SDSI Fully Qualified Names SDSI local names are of great value to their definer. Each local name maps to one or more public keys and therefore to the corresponding keyholder(s). Through SDSI's name chaining, these local names become useful potentially to the whole world. To a computer system making use of these names, the name string is not enough. One must identify the name space in which that byte string is defined. That name space can be identified globally by a public key. It is SDSI 1.0 convention, preserved in SPKI, that if a relative SDSI name occurs within a certificate, then the public key of the issuer is the identifier of the name space in which that name is defined. However, if a SDSI name is ever to occur outside of a certificate, the name space within which it is defined must be identified. This gives rise to the Fully Qualified SDSI Name. That name is a public key followed by one or more names relative to that key. If there are two or more names, then the string of names is a SDSI name chain. For example, (name (hash sha1 |TLCgPLFlGTzgUbcaYLW8kGTEnUk=|) jim therese) is a fully qualified SDSI name, using the SHA-1 hash of a public key as the global

identifier defining the name space and anchoring this name string. 4.2.4.4.6 SPKI Attribute certificates. • Authorisation Fully qualified SDSI names represent globally unique names, but at every step of their construction the local name used is presumably meaningful to the issuer. Therefore, with SDSI name certificates one can identify the keyholder by some name relevant to someone. However, what an application needs to do, when given a public key certificate or a set of them, is answer the question of whether the remote keyholder is permitted some access. That application must make a decision. The data needed for that decision is almost never the spelling of a keyholder's name. Instead, the application needs to know if the keyholder is authorised for some access. This is the primary job of a certificate, according to the members of the SPKI WG, and the SPKI certificate was designed to meet this need as simply and directly as possible. We realise that the world is not going to switch to SPKI certificates overnight. Therefore, we developed an authorisation computation process that can use certificates in any format. • Attribute certificates An Attribute Certificate, as defined in X9.57, binds an attribute that could be an authorisation to a Distinguished Name. For an application to use this information, it must combine an attribute certificate with an ID certificate, in order to get the full mapping: authorisation -> name -> key Presumably the two certificates involved came from different issuers, one an authority on the authorisation and the other an authority on names. However, if either of these issuers were subverted, then an attacker could obtain an authorisation improperly. Therefore, both the issuers need to be trusted with the authorisation decision. • X.509 extensions X.509v3 permits general extensions. These extensions can be used to carry authorisation information. This makes the certificate an instrument mapping both: authorisation -> key and name -> key In this case, there is only one issuer, who must be an authority on both the authorisation and the name. Some propose issuing a master X.509v3 certificate to an individual and letting extensions hold all the attributes or authorisations the individual would need. This would require the issuer to be an authority on all of those authorisations. In addition, this aggregation of attributes would result in shortening the lifetime of the certificate, since each attribute would have its own lifetime. Finally, aggregation of attributes amounts to the building of a dossier and represents a potential privacy violation. For all of these reasons, this is desirable that authorisations be limited to one per certificate. • SPKI attribute certificates A basic SPKI certificate defines a straight authorisation mapping: authorisation -> key If someone wants access to a keyholder's name, for logging purposes or even for punishment after wrong-doing, then one can map from key to location information (name,

address, phone, ...) to get:
authorisation -> key -> name
This mapping has an apparent security advantage over the attribute certificate mapping. In the mapping above, only the authorisation -> key mapping needs to be secure at the level required for the access control mechanism. The key -> name mapping (and the issuer of any certificates involved) needs to be secure enough to satisfy lawyers or private investigators, but a subversion of this mapping does not permit the attacker to defeat the access control. Presumably, therefore, the care with which these certificates (or database entries) are created is less critical than the care with which the authorisation certificate is issued.

4.2.5 X.509 PKI Overview
4.2.5.1 Introduction
X.509 is the authentication framework designed (in 1988) to support X.500 directory services. Both X.509 and X.500 are part of standards proposed by the ISO and ITU. The X.500 standard is designed to provide distributed directory services on large computer networks, and X.509 provides a PKI framework for authenticating X.500 directory services.
The experience gained in attempts to deploy PEM/RFC 1422 made it clear that the initial v1 and v2 certificate formats are deficient in several respects. Most importantly, more fields were needed to carry information which PEM design and implementation experience had proven necessary. In response to these new requirements, ISO/IEC/ITU and ANSI X9 developed the X.509 version 3 (v3) certificate format. The v3 format extends the v2 format by adding provision for additional extension fields. Particular extension field types may be specified in standards or may be defined and registered by any organisation or community. In June 1996, standardisation of the basic v3 format was completed [X.509].
In fact, X.509 has been widely adopted by many companies world-wide to produce X.509-based products, and the number is growing. Efforts are currently underway to make the X.509-based PKI support a global network such as the Internet. The X.509 description has produced a growing number of specifications around it, precisely because of its own success; this makes its description more extensive. In this section, after an overview of the X.509 trust model, some concepts used to describe aspects of X.509 will be introduced: the X.500 directory and OIDs. Other specifications such as OCSP or time stamp protocols will be treated at the end of the section.
4.2.5.2 How the X.509 trust model works
Let us suppose that user A wants secure communication with user B, so A wishes to know the public key of B. A therefore retrieves the certificate signed by B's CA: CA(B). This is not enough for A, because A does not know B's CA and so, for the moment, does not trust CA(B). The problem now is to find a certificate for the public key of CA(B). It is clear that the problem is recursive, CA(CA(B)), and so on, until A finds a certificate that was issued by a CA that A trusts. This normally means CA(A), the CA that issued A's own certificate. In this case, in order to be sure of the binding between the identity of B and the public key of B, user A only has to trust the CA that issued his own certificate, CA(A).
The list of CAs between A and B is known as a certification path. A certification path logically forms an unbroken chain of trusted points between two users wishing to communicate. User A may end up with a mechanism for determining that user B owns a particular public key, but extra information is needed in order to know how much assurance can be placed in this information: user A needs to know the policies of all the CAs that form the certification path. If one of the CAs has a policy that it will issue certificates to any entity who asks, without checking what this entity is, then the chain of trust is meaningless.
In general, establishing a certification path in the way described above, between two users who have never met, will be difficult unless each CA issues certificates for other CAs; these certificates are called cross-certificates. The existence of these cross-certificates is not a guarantee of the existence of a path between two users. On the other hand, if CAs do issue a lot of cross-certificates, then determining which to use at any time is not a trivial matter. If you also want to use only CAs which provide a minimum level of assurance as to the identity of the users they certify, then this complicates the process further. A useful simplification mentioned in the X.509 recommendations is to arrange CAs in a hierarchy, and this makes the certification path between two users much easier to identify.
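The path-building and path-validation logic just described is what the Java PKIX certification path API automates. The sketch below is a minimal, hedged example: it assumes the trusted root certificate (user A's trust point) and an ordered chain from the end entity up to, but not including, that root are already available as files (the file names are hypothetical), and it disables revocation checking for brevity, which a real system should not do.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.*;
import java.util.Arrays;
import java.util.Collections;

public class PathValidation {
    static X509Certificate load(CertificateFactory cf, String file) throws Exception {
        try (InputStream in = new FileInputStream(file)) {
            return (X509Certificate) cf.generateCertificate(in);
        }
    }

    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Hypothetical files: the trusted root CA(A), an intermediate CA, and Bob's certificate.
        X509Certificate root = load(cf, "root-ca.cer");
        X509Certificate intermediate = load(cf, "intermediate-ca.cer");
        X509Certificate bob = load(cf, "bob.cer");

        // The certification path is ordered from the end entity towards the trust anchor.
        CertPath path = cf.generateCertPath(Arrays.asList(bob, intermediate));

        // The trust anchor is the CA that user A trusts directly (typically CA(A)).
        TrustAnchor anchor = new TrustAnchor(root, null);
        PKIXParameters params = new PKIXParameters(Collections.singleton(anchor));
        params.setRevocationEnabled(false); // for brevity only; enable CRL/OCSP checking in practice

        CertPathValidator validator = CertPathValidator.getInstance("PKIX");
        PKIXCertPathValidatorResult result =
                (PKIXCertPathValidatorResult) validator.validate(path, params);
        System.out.println("Path is valid; anchor subject: "
                + result.getTrustAnchor().getTrustedCert().getSubjectX500Principal());
    }
}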

4.2.5.3 X.500 directory
Full understanding of the X.509 PKI requires some basic knowledge of the X.500 directory. X.500 is a standard for a Directory Service by the International Telecommunications Union (ITU). The same standard is also published by the ISO/IEC. The latest version of the standard is from 1993 [ITU (1993)]; however, most of the current implementations still follow the 1988 version of the standard.
A directory is a database oriented towards a "yellow pages" style. All information in the directory is stored in "entries", each of which belongs to at least one so-called "object class". Examples of these object classes are "country", "organisation", "organisational unit", "person", and so on. Every "entry" has attributes which correspond to the object classes the entry belongs to. For example, the object class "person" has attributes such as common name, e-mail address, etc., and these attributes appear in the entry. At least one attribute value is designated to specify a name for an entry. The entry of a "person" is usually named by the value of the attribute type "common name". An X.500 directory entry can represent any real-world entity: not just people but also computers, companies and, of course, a digital X.509 certificate.
To support looking up entries in the directory, each entry is assigned a globally unique name, called a distinguished name or DN. To ensure uniqueness, these names are assigned in a very specific way: a hierarchical structure called the Directory Information Tree or DIT. Except for the root vertex, each vertex in the tree has one parent and possibly some children. Each vertex except the root is assigned a relative distinguished name (RDN) that is unique among the vertex's siblings (nodes at the same level of the tree). The RDNs of a vertex's ancestors are concatenated with the vertex's own RDN to form the DN of this vertex. See the figure for an illustration of this process.
Although X.509 does not dictate a particular CA arrangement trust model, it describes the hierarchical model with cross-certificates and encourages its use.
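To make the DN construction concrete, the short sketch below builds a distinguished name from its RDNs using the standard Java X500Principal class. The name and organisation are invented for illustration; they echo the kind of DN quoted later in this section (e.g. c=ES, O=CIGESA, CN=Jordi Ruiz).

import javax.security.auth.x500.X500Principal;

public class DnExample {
    public static void main(String[] args) {
        // A DN is the concatenation of the RDNs on the path from the DIT root
        // down to the entry: country -> organisation -> person (illustrative values).
        X500Principal dn = new X500Principal("CN=Jordi Ruiz, O=CIGESA, C=ES");

        // RFC 2253 string form, as commonly exchanged between PKI components.
        System.out.println(dn.getName(X500Principal.RFC2253));
        // Prints: CN=Jordi Ruiz,O=CIGESA,C=ES
    }
}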

4.2.5.4 Object registration
Whenever X.509 needs to identify some object (a signature algorithm, a certification policy, a user-defined extension, etc.) it uses an internationally defined object identifier mechanism, the OID. An Object Identifier (OID) is a numeric value composed of a sequence of integers that is unique with respect to all other OIDs.
OIDs are assigned following a hierarchical structure of value-assigning authorities. In the example shown in the figure, the object identifier 2-16-840-1-45356-3-15 is the identifier of a certification policy from the CA registered as 2-16-840-1-45356. The first number, "2", indicates the branch of the hierarchy administered jointly by ISO and ITU.
[Figure: OID registration tree. Root arcs: 0 (itu-t), 1 (iso), 2 (joint-iso-itu-t). Example path: 2 (joint-iso-itu-t) -> 16 (country) -> 840 (US) -> 1 (organisation) -> 45356 (the CA) -> 3 (policies) -> 15 (the policy); the CA also administers arcs such as 1 (algorithms).]
So what appears as extensions on a certificate is a list of numerical structured values, each one referencing a registered object. As we will see, the X.509 registered objects have a common structure with three fields: type, criticality and value.

X.509 version 3, certificate format description

Description of the certificate fields:
• Certificate format version
The version field indicates version 1, 2 or 3 of the certificate format, with provision for future standards.
• Certificate serial number
This field specifies the unique numerical identifier of the certificate in the domain of all public key certificates issued by the CA. When a certificate is revoked, it is actually the certificate serial number that is posted in the CRL signed by the CA.
• Signature algorithm of the issuer CA
This field identifies the algorithm used by the CA to sign the certificate. The algorithm identifier is a numeric object identifier (OID) registered with an internationally recognised standards organisation (e.g. ISO), and this numerical reference binds both the public-key algorithm and the hashing algorithm (e.g. RSA with MD5) used by the CA to sign certificates.
• Issuer X.500 CA name
This field specifies the X.500 distinguished name (DN) of the CA that issued the certificate.
• Validity period
The validity period specifies dates and times for the start date and expiry date of the certificate.
• Subject X.500 name
This field specifies the X.500 distinguished name (DN) of the entity holding the private key corresponding to the public key in the certificate; for example, the DN c=ES, O=CIGESA, CN=Jordi Ruiz might be the distinguished name for the employee Jordi Ruiz of the CIGESA corporation.
• Subject public key information
This field carries two important pieces of information:
1. The value of the public key owned by the subject.
2. The identifier specifying the algorithm with which the public key is used. The identifier is a numeric value registered as an OID with an internationally recognised standards organisation and specifies both the public key algorithm and the hashing algorithm (e.g. DSA with SHA-1).
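As a hedged illustration of these fields, the sketch below reads the basic (version 1) fields out of an encoded certificate with the standard Java API; the input file name is hypothetical.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertificateFields {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (InputStream in = new FileInputStream("jordi-ruiz.cer")) {   // hypothetical file
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        System.out.println("Format version   : " + cert.getVersion());
        System.out.println("Serial number    : " + cert.getSerialNumber());
        System.out.println("Signature alg.   : " + cert.getSigAlgName()
                + " (OID " + cert.getSigAlgOID() + ")");
        System.out.println("Issuer DN        : " + cert.getIssuerX500Principal());
        System.out.println("Valid from/until : " + cert.getNotBefore() + " / " + cert.getNotAfter());
        System.out.println("Subject DN       : " + cert.getSubjectX500Principal());
        System.out.println("Public key alg.  : " + cert.getPublicKey().getAlgorithm());
    }
}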

4.2.5.5 Certificate extensions
X.509 version 3 introduced significant changes to X.509 with the concept of standard extensions. This is a mechanism whereby certificates can be "extended" in a standardised and generic way in order to include additional information. The term standard extension refers to the fact that version 3 defines some broadly applicable, common extensions on top of the version 2 certificate. However, certificates are not constrained to only the standard extensions, and anyone can register a numeric OID extension Type and its Value with the appropriate authorities (e.g. ISO). So the main idea is that extensions are a completely generic mechanism. This flexibility introduced by extensions makes interoperability between PKI components more difficult:
• The ITU defines a set of standard extensions for the ISO.
• ANSI X9.55 defines extensions for the banking community.
• The Secure Electronic Transactions (SET) specification defines an even longer list of extensions.

Each extension consists of three fields:
• Type: this field defines the type of the data (text string, numerical, graphic, etc.) in the extension value field. Values of the Type field are object identifiers (OIDs), so all data types must be registered in order to guarantee interoperability.
• Criticality: a single-bit flag with a default value of false; when an extension is flagged as critical, it indicates that the associated extension value contains information that cannot be ignored. If a certificate-using application cannot process a critical extension, the application should reject the certificate (see the sketch after this list).
• Value: this extension field contains the actual data for the extension. The format of the data is reflected in the Type field.
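The behaviour required for critical extensions can be checked programmatically. A minimal sketch, assuming a hypothetical application that only understands the keyUsage and basicConstraints extensions and a hypothetical input file, might look as follows: it lists the critical and non-critical extension OIDs and rejects the certificate if it carries a critical extension the application does not recognise.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class CriticalExtensionCheck {
    // OIDs of the critical extensions this (hypothetical) application knows how to process.
    private static final List<String> UNDERSTOOD =
            Arrays.asList("2.5.29.15", "2.5.29.19");   // keyUsage, basicConstraints

    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (InputStream in = new FileInputStream("user-cert.der")) {    // hypothetical file
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        System.out.println("Critical extensions     : " + cert.getCriticalExtensionOIDs());
        System.out.println("Non-critical extensions : " + cert.getNonCriticalExtensionOIDs());

        Set<String> critical = cert.getCriticalExtensionOIDs();   // null if none are present
        if (critical != null && !UNDERSTOOD.containsAll(critical)) {
            throw new SecurityException("Certificate rejected: unrecognised critical extension");
        }
    }
}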

There is an important distinction between a critical extension and required information in a certificate. A particular application may require that a certain extension be available in any certificate processed by the application. This, however, does not imply that the extension needs to be flagged as critical. Critical extensions are intended for data that must be understood by all applications. The vast majority of extensions are non-critical. Critical extensions should only be added

to a certificate after much consideration and with the understanding that doing so could create interoperability problems with other CA security domains and applications. The problem is that with non-critical extensions applications may choose whatever interpretation they like (that is, non-critical extensions have an advisory nature only). This means that, for example, if a certificate has a constraint extension which declares that a key pair is only to be used for digital signing, because this is not a critical extension it can nevertheless be used by a specific application for ciphering anyway.
Note: Implementation and spread of support for some extension fields is at an early stage.

4.2.5.6 The standard X.509v3 certificate extensions description

The standard extensions fall into four groups, described below.
4.2.5.6.1 Key information extensions
The fields in this group provide information about the intended uses of a certificate and the associated key pair; four fields carry the information (a usage sketch follows this list).
• Authority Key Identifier extension field
Identifies the unique key pair used by the CA to sign certificates. This extension aids in the process of verifying a certificate signature in the case where a CA has had to change its signing key pair.
• Subject Key Identifier extension field
This field plays the same role as the previous field, but for the subject key pair. It is useful when a user has updated his key pair during his existence in a security domain.
• Key Usage extension field
The key usage field specifies the intended use(s) of the key. Possible settings of this field include: certificate signing, CRL signing, symmetric key encryption for data transfer, and Diffie-Hellman key agreement.
• Private Key Usage Period extension field
This field carries the deadline validity date for the signing key pair. However, in most profiles for X.509 (profiles meaning implementations), the signing key pair is the same as the ciphering key pair.
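As a hedged sketch of how an application reads the Key Usage extension, the fragment below uses the standard Java accessor, which returns the key-usage bits as a boolean array in the order defined by X.509 (digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment, keyAgreement, keyCertSign, cRLSign, encipherOnly, decipherOnly). The certificate file name is hypothetical.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class KeyUsageCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (InputStream in = new FileInputStream("ca-cert.der")) {      // hypothetical file
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        boolean[] usage = cert.getKeyUsage();   // null if the extension is absent
        if (usage == null) {
            System.out.println("No Key Usage extension present");
            return;
        }
        System.out.println("digitalSignature : " + usage[0]);
        System.out.println("keyCertSign      : " + usage[5]);   // certificate signing
        System.out.println("cRLSign          : " + usage[6]);   // CRL signing
    }
}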

4.2.5.6.2 Policy Information extensions
The policy information extensions provide a mechanism for the CA to distribute information about the ways a certificate should be used and interpreted. CAs can include with the certificate a list of policies that were followed in creating the certificate. These policies are intended to help users decide if a certificate is suitable for a particular purpose. For instance, a policy might indicate that a certificate key can be used for casual email messages but not for financial transactions. A certificate policy indicates such things as CA security procedures, subject identification measures (out-of-band authentication), legal disclaimers or provisions, and others. Policy mapping allows a CA to indicate whether one of its policies is equivalent to another CA's policy. There is a more detailed explanation of these fields in the section on Certification Policy Statements.
• Certificate Policies extension field
Specifies the policies under which the certificate should be used and interpreted. The policies are represented by OID sequences, and several policies can be referenced in a certificate.
• Policy Mappings extension field
This field applies only to cross-certificates and provides a mechanism for the signing CA to map its policies to the policies of the certified CA, that is, the CA specified as subject in the cross-certificate. This policy mapping information is critical when an application is processing a certificate chain that crosses CA domain boundaries. An application uses mapping information to ensure that all certificates in a chain have a consistent and acceptable policy.
4.2.5.6.3 End entity (User and CA) attribute extensions
The three fields of this group provide additional mechanisms to specify identifying information for a user or a CA (the subject and the issuer). The purpose of subject/issuer additional names is to support applications like e-mail, where a user reference name for the application must be unique but is not the same as the certificate X.500 subject field.
• Subject Alternative Name
Permissible name forms are Internet e-mail address, Internet domain name, Internet IP address, Web uniform resource identifier and any other name type with a registered OID.
• Issuer Alternative Name
The same as the field above, but for the issuer (the CA).
• Subject Directory Attributes
Provides for additional X.500 directory attributes to be included in the subject's certificate: additional information beyond that provided in the subject X.500 name and subject alternative name fields.
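As an illustration of the alternative-name extensions, the hedged sketch below lists the Subject Alternative Name entries of a certificate with the standard Java accessor; each entry is returned as a pair of an integer name type (for example 1 for an e-mail address, 2 for a DNS name, 7 for an IP address) and the name value. The input file name is hypothetical.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

public class AltNames {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (InputStream in = new FileInputStream("user-cert.der")) {    // hypothetical file
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        Collection<List<?>> altNames = cert.getSubjectAlternativeNames(); // null if absent
        if (altNames == null) {
            System.out.println("No Subject Alternative Name extension present");
            return;
        }
        for (List<?> entry : altNames) {
            Integer nameType = (Integer) entry.get(0);  // e.g. 1 = rfc822Name (e-mail)
            Object value = entry.get(1);                // usually a String
            System.out.println("type " + nameType + " -> " + value);
        }
    }
}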

4.2.5.6.4 Certification path constraints extensions
The certification path constraints extensions provide mechanisms for a CA to control and limit third-party trust in a cross-certified environment. This mechanism avoids the uncontrolled growth of certification paths in the X.509 trust model. For example, the issuer CA (the superior CA) may trust the subordinate CA only for certification in a particular name space (within a given Internet domain), or only for certificates that follow a specific set of certification policies. This is an important group of extensions because it allows CAs to employ a progressive constraint trust model that prevents the formation of long paths.
1. Basic Constraints extension field
The basic constraints field simply indicates whether or not the subject of the certificate may act as a CA. If the subject is a CA, then the certificate is a cross-certificate. A cross-certificate may also specify the maximum length of the certificate chain beyond the cross-certificate. This specification is optional and may not be present in all implementations.
2. Name constraints extension field
Also used in cross-certificates, this field provides CAs with a mechanism to restrict the domain of trustworthy names in a cross-certified environment (remember X.500 DNs as certificate subject names). For example, suppose ZAFI SA and IFAZ SA are to cross-certify. Zafi wants to accept all certificates issued by Ifaz to its own employees, but not certificates issued by Ifaz to anyone outside Ifaz. To implement the certificate name space constraint, ZAFI could issue a cross-certificate to IFAZ with the name constraints field set to "o=IFAZ SA, c=ES", assuming that Ifaz is a Spanish company.
3. Policy constraints extension field
This field provides CA administrators with the capability to specify the set of acceptable policies in a certificate chain extending from a cross-certificate. The policy constraints field can specify whether or not all certificates in a chain must meet a specific policy, and whether or not to inhibit policy mapping when processing a chain.
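A hedged sketch of how an application reads the Basic Constraints extension with the standard Java accessor: the method returns -1 when the subject is not a CA, Integer.MAX_VALUE when it is a CA with no path-length limit, and otherwise the maximum number of CA certificates that may follow it in a path. The input file name is hypothetical.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class BasicConstraintsCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (InputStream in = new FileInputStream("cross-cert.der")) {   // hypothetical file
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        int pathLen = cert.getBasicConstraints();
        if (pathLen == -1) {
            System.out.println("Subject is an end entity (may not act as a CA)");
        } else if (pathLen == Integer.MAX_VALUE) {
            System.out.println("Subject is a CA with no path length limit");
        } else {
            System.out.println("Subject is a CA; at most " + pathLen
                    + " CA certificate(s) may follow in a chain");
        }
    }
}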

4.2.5.7 Summary of X.509 PKI Characteristics
• Certificate information
Fully extensible; can include any information bound to a public key.
• CA arrangement
Trust constraint mechanisms are provided. The general hierarchy with cross-certification is encouraged.
• CA Subject and User relationship
Issuers (CAs), subjects and users are distinct.
• CA Subject and User trust relationships
Each user is expected to fully trust at least one CA. CAs can constrain how their trust in subjects and other CAs is delegated.
• Certificate validation method
Offline with CRLs, but can be online through extensions: OCSP.
• Certificate revocation method
Sophisticated CRL mechanism. Online methods available via extensions: OCSP.
• Identity vs. credential certificates
Mainly identity certificates. Certain standard extensions provide some credential-like functionality and can be extended to provide full credential certification.
• Irrefutability and strong authentication
The CA and its policy are still responsible for certification accuracy.
• In-band vs. out-of-band authentication
Users must obtain at least one CA key out-of-band. Also, the extensive use of OIDs requires out-of-band communication whenever a new extension is defined.
• Anonymity
Attribute extensions can be used to provide a fully anonymous service.

X.509 PKI main protocols overview

4.2.5.8 X.509 PKI, Certificate Management Protocols (CMP)
This paragraph is adapted from the RFC 2510 standard.
4.2.5.8.1 PKI Management Overview
A PKI trust centre must be structured to be consistent with the types of individuals who must administer it. Providing such administrators with unbounded choices not only

complicates the software required but also increases the chances that a subtle mistake by an administrator or software developer will result in broader compromise. Similarly, restricting administrators with cumbersome mechanisms will cause them not to use the PKI. Management protocols are REQUIRED to support on-line interactions between Public Key Infrastructure (PKI) components. For example, a management protocol might be used between a Certification Authority (CA) and a client system with which a key pair is associated, or between two CAs that issue cross-certificates for each other. The architecture of a CA implementation and its interactions among PKI components must be defined in an early design phase of a trust centre project. Somebody must be in charge of the functional specifications within the model elected. To determine a concrete CA policy must be the next task in a CA project implementation, all that in the framework of the trust model assumed. What is exposed next lines are the certification management protocols that up-to-date, X.509v3 draft establishes. These protocols let two peers (CA, RA, end entity) communicate messages with a definite format and workflow in order to compliment a necessary PKI task (request a certificate, revoke… and so on). Special interest are the “optional” and “may” specifications, components or entities because they define the discriminant spots within the same trust model and, decisions will have to be taken about these issues in order to get an operative CA. We first define the entities involved in PKI management and their interactions (in terms of the PKI management functions required). We then group these functions in order to accommodate different identifiable types of end entities. Finally those mandatory management functions than need messages in order to communicate PKI components are described. 4.2.5.8.2 X.509 PKI Entities Definitions The entities involved in PKI management include the end entity (i.e., the entity to be named in the subject field of a certificate) and the certification authority (i.e., the entity named in the issuer field of a certificate). A registration authority MAY also be involved in PKI management. • Subjects and End Entities The term "subject" is used to refer to the entity named in the subject field of a certificate; when we wish to distinguish the tools and/or software used by the subject (e.g., a local certificate management module) we will use the term "subject equipment". In general, the term "end entity" (EE) rather than subject is preferred, in order to avoid confusion with the field name. It is important to note that the end entities will include not only human users of applications, but also applications themselves (e.g., for IP security). This factor influences the protocols which the PKI management operations use; for example, application software is far more likely to know exactly which certificate extensions are required than are human users. PKI management entities are also end entities in the sense that they are sometimes named in the subject field of a certificate or cross-certificate. Where appropriate, the term "endentity" will be used to refer to end entities who are not PKI management entities. All end entities require secure local access to some information -- at a minimum, their own name and private key, the name of a CA which is directly trusted by this entity and that CA's public key (or a fingerprint of the public key where a self-certified version is

Implementations MAY use secure local storage for more than this minimum (e.g., the end entity's own certificate or application-specific information). The form of storage will also vary -- from files to tamper-resistant cryptographic tokens. Such local trusted storage is referred to here as the end entity's Personal Security Environment (PSE). Though PSE formats are beyond the scope of this document (they are very dependent on equipment, et cetera), a generic interchange format for PSEs is defined here: a certification response message MAY be used.
• Certification Authority, CA
The certification authority (CA) MAY OR MAY NOT actually be a real "third party" from the end entity's point of view. Quite often, the CA will actually belong to the same organisation as the end entities it supports. Strictly, the term CA refers to the entity named in the issuer field of a certificate; when it is necessary to distinguish the software or hardware tools used by the CA, it is better to use the term "CA equipment". The CA equipment will often include both an "off-line" component and an "on-line" component, with the CA private key only available to the "off-line" component. This is, however, a matter for implementers (though it is also relevant as a policy issue).
The term "root CA" is used to indicate a CA that is directly trusted by an end entity; that is, securely acquiring the value of a root CA public key requires some out-of-band step(s). This term is not meant to imply that a root CA is necessarily at the top of any hierarchy, simply that the CA in question is trusted directly. A "subordinate CA" is one that is not a root CA for the end entity in question. Often, a subordinate CA will not be a root CA for any entity, but this is NOT MANDATORY.
• Registration Authority, RA
In addition to end entities and CAs, many environments call for the existence of a Registration Authority (RA) separate from the Certification Authority. The functions which the registration authority may carry out will vary from case to case but MAY include personal authentication, token distribution, revocation reporting, name assignment, key generation, archival of key pairs, et cetera. The RA is an OPTIONAL component: when it is not present, the CA is assumed to be able to carry out the RA's functions, so that the PKI management protocols are the same from the end entity's point of view. Again, we have to distinguish, where necessary, between the RA and the tools used (the "RA equipment"). Note that an RA is itself an end entity. It is also assumed that all RAs are in fact certified end entities and that RAs have private keys that are usable for signing. How a particular CA equipment identifies some end entities as RAs is an implementation issue (i.e., this document specifies no special RA certification operation). It is NOT MANDATORY that the RA is certified by the CA with which it is interacting at the moment (so one RA MAY work with more than one CA whilst only being certified once). In some circumstances end entities will communicate directly with a CA even where an RA is present. For example, for initial registration and/or certification the subject may use its RA, but communicate directly with the CA in order to refresh its certificate.
• Justifying the RA: reasons for the CA/RA split
The reasons which justify the presence of an RA fall into those which are due to technical factors and those which are organisational. The technical reasons include:

- If hardware tokens are going to be used, not all end entities will have the equipment needed to initialise these; the RA equipment can include the necessary functionality to do it (out-of-band loading). Of course, this is a matter of policy.
- Some end entities may not have the capability to publish certificates (we will see that this is a requirement), and again the RA can assume this functionality.
- The RA will be able to issue a signed revocation request on behalf of end entities that cannot do it themselves (because the key pair has been lost or compromised).
Some organisational reasons which argue for the presence of an RA are:
- It may be more cost-effective to concentrate functionality in the RA equipment than to supply complete functionality to all end entity equipment, especially if token initialisation equipment is going to be used.
- RAs may be better placed to identify people, so that they can be authenticated out-of-band for an identity certificate request.
- It may also happen that the initial end entity registration/authentication scheme requires the on-line end entity -> PKI messages to be authenticated.
- For many applications there will already be some administrative structure in place, so that candidates for the role of RA are easy to find (which is not true for the CA).
4.2.5.8.3 PKI Management Requirements
The X.509v3 model establishes the following requirements on PKI management protocols:
1. PKI management must conform to the directory services standard ISO 9594-8 and the associated amendments (certificate extensions).
2. It must be possible to regularly update any key pair without affecting any other key pair.
3. The use of confidentiality in PKI management protocols must be kept to a minimum (in order to ease regulatory problems).
4. PKI management protocols MUST allow the use of different industry-standard cryptographic algorithms (specifically including RSA, DSA, MD5, SHA-1) -- this means that any given CA, RA, or end entity MAY, in principle, use whichever algorithms suit it for its own key pair(s).
5. PKI management protocols MUST NOT preclude the generation of key pairs by the end-entity concerned, by an RA, or by a CA -- key generation may also occur elsewhere, but for the purposes of PKI management we can regard key generation as occurring wherever the key is first present at an end entity, RA, or CA.
6. PKI management protocols MUST support the publication of certificates by the end-entity concerned, by an RA, or by a CA. Different implementations and different environments may choose any of the above approaches.
7. PKI management protocols MUST support the production of Certificate Revocation Lists (CRLs) by allowing certified end entities to make requests for the revocation of certificates -- this must be done in such a way that the denial-of-service attacks which are possible are not made simpler.
8. PKI management protocols MUST be usable over a variety of "transport" mechanisms, specifically including mail, HTTP, TCP/IP and FTP.
9. Final authority for certificate creation rests with the CA.
10. No RA or end-entity equipment can assume that any certificate issued by a CA will contain what was requested. A CA may alter certificate field values or may add, delete or alter extensions according to its operating policy. In other words, all PKI entities (end entities, RAs and CAs) must be capable of handling responses to requests for certificates in which the actual certificate issued is different from that requested (for example, a CA may shorten the validity period requested).

Note that policy may dictate that the CA must not publish or otherwise distribute the certificate until the requesting entity has reviewed and accepted the newly-created certificate (typically through use of the PKIConfirm message).
11. A graceful, scheduled change-over from one non-compromised CA key pair to the next (CA key update) must be supported (note that if the CA key is compromised, re-initialisation must be performed for all entities in the domain of that CA). An end entity whose PSE contains a new CA public key (following a CA key update) must also be able to verify certificates verifiable using the old public key. End entities that directly trust the old CA key pair must also be able to verify certificates signed using the new CA private key. (This is required for situations where the old CA public key is "hardwired" into the end entity's cryptographic equipment.)
12. The functions of an RA MAY, in some implementations or environments, be carried out by the CA itself. The protocols must be designed so that end entities will use the same protocol (but, of course, not the same key!) regardless of whether the communication is with an RA or a CA.
13. Where an end entity requests a certificate containing a given public key value, the end entity must be ready to demonstrate possession of the corresponding private key value. This may be accomplished in various ways, depending on the type of certification request. See the discussion of "Proof of Possession (PoP) of Private Key" in 4.2.5.8.5 below for details of the in-band methods defined for the PKIX-CMP (i.e., Certificate Management Protocol) messages.
4.2.5.8.4 PKI (Certificate) Management Operations
The following diagram shows the relationship between the entities defined above in terms of the PKI management operations. The arrows in the diagram indicate "protocols" in the sense that a defined set of formatted PKI management messages can be sent along each of the lettered lines.
• Operations for management messages
At a high level, the set of operations for which management messages are necessary can be grouped as follows:
1. CA establishment. When establishing a new CA, certain steps are required (e.g., production of an initial CRL, export of the CA public key).
2. End entity initialisation. This includes importing a root CA public key and requesting information about the options supported by a PKI management entity.

[Figure: PKI Entities. The diagram distinguishes the PKI users (end entities) from the PKI management entities (an RA, a CA and a second CA, "CA-2") and the certificates/CRL repository. Lettered arrows represent the management exchanges: initial registration/certification, key pair recovery, key pair update, certificate update and revocation request between the end entity and the RA/CA; certificate publishing, CRL publishing and out-of-band publication towards the repository; out-of-band loading towards the end entity; and cross-certification / cross-certificate update between the two CAs.]
3. Certification. Various operations result in the creation of new certificates:
- Initial registration/certification. This is the process whereby an end entity first makes itself known to a CA or RA, prior to the CA issuing a certificate or certificates for that end entity. The end result of this process (when it is successful) is that a CA issues a certificate for an end entity's public key, and returns that certificate to the end entity and/or posts that certificate in a public repository. This process may, and typically will, involve multiple "steps", possibly including an initialisation of the end entity's equipment. For example, the end entity's equipment must be securely initialised with the public key of a CA, to be used in validating certificate paths. Furthermore, an end entity typically needs to be initialised with its own key pair(s).
- Key pair update. Every key pair needs to be updated regularly (i.e., replaced with a new key pair), and a new certificate needs to be issued.
- Certificate update. As certificates expire they may be "refreshed" if nothing relevant in the environment has changed.
- CA key pair update. As with end entities, CA key pairs need to be updated regularly; however, different mechanisms are required.
- Cross-certification request (one CA requests issuance of a cross-certificate from another CA).

A "cross-certificate" is a certificate in which the subject CA and the issuer CA are distinct and the field SubjectPublicKeyInfo contains a verification key (i.e., the certificate has been issued for the subject CA's signing key pair). When it is necessary to distinguish more finely, the following terms MAY be used: a cross-certificate is called an "inter-domain cross-certificate" if the subject and issuer CAs belong to different administrative domains; otherwise it is called an "intradomain cross-certificate". Notes: Note 1. The above definition of "cross-certificate" aligns with the defined term "CAcertificate" in X.509. Note that this term is not to be confused with the X.500 "cACertificate" attribute type, which is unrelated. Note 2. In many environments the term "cross-certificate", unless further qualified, will be understood to be synonymous with "inter-domain cross-certificate" as defined above. Note 3. Issuance of cross-certificates may be, but is not necessarily, mutual; that is, two CAs may issue cross-certificates for each other. - Cross-certificate update. •

Operations for certificate update

Similar to a normal certificate update but involving a cross-certificate : 1. Certificate/CRL discovery operations. Some PKI management operations result in the publication of certificates or CRLs: - Certificate publication. Having gone to the trouble of producing a certificate, some means for publishing it is needed. The "means" defined in PKIX MAY involve the messages formats specified in PKIX document “CMP - Data Structures” or MAY involve other methods (LDAP, for example) as described in the "Operational Protocols" documents of the PKIX series of specifications. - CRL publication. As for certificate publication. 2. Recovery operations. Some PKI management operations are used when an end entity has "lost" its PSE: - Key pair recovery. As an option, user client key materials (e.g., a user's private key used for decryption purposes) MAY be backed up by a CA, an RA, or a key backup system associated with a CA or RA. If an entity needs to recover these backed up key materials (e.g., as a result of a forgotten password or a lost key chain file), a protocol exchange MAY be needed to support such recovery. 3. Revocation operations. Some PKI operations result in the creation of new CRL entries and/or new CRLs. - Revocation request. An authorised person advises a CA of an abnormal situation requiring certificate revocation. 4. PSE operations. The definition of PSE operations (e.g., moving a PSE, changing a PIN, etc.) is not considered, but in X.509 formats it is defined a PKIMessage (CertRepMessage) which can Security Subgroup

Note that on-line protocols are not the only way of implementing the above operations. For all operations there are off-line methods of achieving the same result, and the X.509v3 specification does not mandate the use of on-line protocols. For example, when hardware tokens are used, many of the operations MAY be achieved as part of the physical token delivery. There are X.509 sections that define a set of standard messages supporting the above operations. The protocols for conveying these exchanges in different environments (file based, on-line, E-mail, and WWW) are also specified.
4.2.5.8.5 Assumptions and restrictions
• End entity initialisation. The first step for an end entity in dealing with PKI management entities is to request information about the PKI functions supported and to securely acquire a copy of the relevant root CA public key(s).
• Initial registration/certification. There are many schemes that can be used to achieve initial registration and certification of end entities. No one method is suitable for all situations, due to the range of policies that a CA may implement and the variation in the types of end entity which can occur. We can, however, classify the initial registration / certification schemes that are supported by the X.509v3 specification. Note that the word "initial", above, is crucial: we are dealing with the situation where the end entity in question has had no previous contact with the PKI. Where the end entity already possesses certified keys, some simplifications/alternatives are possible.
Having classified the schemes that are supported by the X.509 specification, some are specified as mandatory and some as optional. The goal is that the mandatory schemes cover a sufficient number of the cases which will arise in real use, whilst the optional schemes are available for special cases where it is necessary to give a special flavour to the implementation. In this way X.509 achieves a balance between flexibility and ease of implementation. We will now describe the classification of initial registration / certification schemes.
1. Criteria used
- Initiation of registration / certification.
In terms of the PKI messages which are produced, we can regard the initiation of the initial registration / certification exchanges as occurring wherever the first PKI message relating to the end entity is produced. Note that the real-world initiation of the registration/certification procedure may occur elsewhere (e.g., a personnel department may telephone an RA operator). The possible locations are at the end entity, an RA, or a CA.
- End entity message origin authentication.
The on-line messages produced by the end entity that requires a certificate may be authenticated or not. The requirement here is to authenticate the origin of any messages from the end entity to the PKI (CA/RA). In this specification, such authentication is achieved by the PKI (CA/RA) issuing the end entity with a secret value (initial authentication key) and a reference value (used to identify the transaction) via some out-of-band means. The initial authentication key can then be used to protect relevant PKI messages.

We can thus classify the initial registration/certification schemes according to whether or not the on-line end entity -> PKI messages are authenticated.
Note 1: We do not discuss the authentication of the PKI -> end entity messages here, as this is always REQUIRED. In any case, it can be achieved simply once the root-CA public key has been installed at the end entity's equipment, or it can be based on the initial authentication key.
Note 2: An initial registration / certification procedure can be secure where the messages from the end entity are authenticated via some out-of-band means (e.g., a subsequent visit).
- Location of key generation.
In this specification, "key generation" is regarded as occurring wherever either the public or private component of a key pair first occurs in a PKIMessage. Note that this does not preclude a centralised key generation service: the actual key pair MAY have been generated elsewhere and transported to the end entity, RA, or CA using a (proprietary or standardised) key generation request/response protocol (outside the scope of this specification). There are thus three possibilities for the location of "key generation": the end entity, an RA, or a CA.
- Confirmation of successful certification.
Following the creation of an initial certificate for an end entity, additional assurance can be gained by having the end entity explicitly confirm successful receipt of the message containing (or indicating the creation of) the certificate. Naturally, this confirmation message must be protected (based on the initial authentication key or other means). This gives two further possibilities: confirmed or not.
2. Mandatory/Optional schemes
The criteria above allow for a large number of initial registration / certification schemes. This specification mandates that conforming CA equipment, RA equipment, and EE equipment MUST support the second scheme listed below. Any entity MAY additionally support other schemes, if desired.
- Centralised scheme (OPTIONAL). In terms of the classification above, this scheme is, in some ways, the simplest possible, where:
- initiation occurs at the certifying CA;
- no on-line message authentication is required;
- "key generation" occurs at the certifying CA (see "Location of key generation" above);
- no confirmation message is required.
In terms of message flow, this scheme means that the only message required is sent from the CA to the end entity. The message must contain the entire PSE for the end entity. Some out-of-band means must be provided to allow the end entity to authenticate the message received and decrypt any encrypted values.
- Basic authenticated scheme (MANDATORY). In terms of the classification above, this scheme is where:
- initiation occurs at the end entity;
- message authentication is REQUIRED;
- "key generation" occurs at the end entity (see "Location of key generation" above);
- a confirmation message is REQUIRED.

In terms of message flow, the basic authenticated scheme is as follows. (Where verification of the confirmation message fails, the RA/CA MUST revoke the newly issued certificate if it has been published or otherwise made available.)

End entity                                     RA/CA
==========                                     =====
              <-- out-of-band distribution of the initial
                  authentication key (IAK) and reference value
Key generation
Creation of certification request
Protect request with IAK
              ---- certification request ---->
                                               Verify request
                                               Process request
                                               Create response
              <--- certification response ----
Handle response
Create confirmation
              ---- confirmation message ----->
                                               Verify confirmation
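As a rough illustration of how the initial authentication key can protect the end entity -> PKI messages, the sketch below derives a symmetric key from the out-of-band IAK and reference value and computes a message authentication code over the encoded request. This is a simplified stand-in for the PasswordBasedMac construction used by the actual CMP messages; the class and method names are illustrative only.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Simplified sketch: protect a certification request with a MAC derived
// from the out-of-band initial authentication key (IAK). The real CMP
// PasswordBasedMac algorithm (salt, iteration count, OID parameters) is
// deliberately omitted here.
public class IakProtectionSketch {

    // Derive fixed-length MAC key material from the IAK and the reference value.
    static byte[] deriveMacKey(byte[] iak, byte[] referenceValue) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(iak);
        sha1.update(referenceValue);
        return sha1.digest();                      // 20 bytes of key material
    }

    // Computed by the end entity over the encoded request; recomputed
    // and compared by the RA/CA on receipt.
    static byte[] protectRequest(byte[] encodedRequest, byte[] macKey) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA1"));
        return hmac.doFinal(encodedRequest);
    }

    static boolean verifyRequest(byte[] encodedRequest, byte[] receivedMac, byte[] macKey)
            throws Exception {
        return MessageDigest.isEqual(protectRequest(encodedRequest, macKey), receivedMac);
    }
}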

• Proof of Possession (PoP) of Private Key In order to prevent certain attacks and to allow a CA/RA to properly check the validity of the binding between an end entity and a key pair, the PKI management operations specified here make it possible for an end entity to prove that it has possession of (i.e., is able to use) the private key corresponding to the public key for which a certificate is requested. A given CA/RA is free to choose how to enforce PoP (e.g., out-of-band procedural means versus PKIX-CMP in-band messages) in its certification exchanges (i.e., this may be a policy issue). However, it is REQUIRED that CAs/RAs MUST enforce PoP by some means because there are currently many non-PKIX operational protocols in use (various electronic mail protocols are one example) that do not explicitly check the binding between the end entity and the private key. Until operational protocols that do verify the binding (for signature, encryption, and key agreement key pairs) exist, and are ubiquitous, this binding can only be assumed to have been verified by the CA/RA. Therefore, if the binding is not verified by the CA/RA, certificates in the Internet Public-Key Infrastructure end up being somewhat less meaningful. POP is accomplished in different ways depending upon the type of key for which a certificate is requested. If a key can be used for multiple purposes (e.g., an RSA key) then any appropriate method MAY be used (e.g., a key which may be used for signing, as well as other purposes, SHOULD NOT be sent to the CA/RA in order to prove possession). This specification explicitly allows for cases where an end entity supplies the relevant proof to an RA and the RA subsequently attests to the CA that the required proof has been received (and validated!). For example, an end entity wishing to have a signing key certified could send the appropriate signature to the RA which then simply notifies the relevant CA that the end entity has supplied the required proof. Of course, such a situation

may be disallowed by some policies (e.g., CAs may be the only entities permitted to verify PoP during certification).

- Signature Keys

For signature keys, the end entity can sign a value to prove possession of the private key.
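A minimal sketch of the signature-key case: the end entity signs a value (here, the encoded certification request itself) with the private key, and the CA/RA verifies that signature with the public key contained in the request. The helper names are ours; the exact value to be signed and its encoding are defined by the CMP message formats, not by this sketch.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.Signature;

// Proof of possession for a signing key: sign a value with the private key,
// verify with the public key that is being submitted for certification.
public class SignaturePopSketch {

    // End entity side: sign the encoded request with the private key.
    static byte[] signRequest(KeyPair keyPair, byte[] encodedRequest) throws Exception {
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(encodedRequest);
        return signer.sign();
    }

    // CA/RA side: verify the proof using the public key taken from the request.
    static boolean verifyPop(PublicKey requestedKey, byte[] encodedRequest, byte[] pop)
            throws Exception {
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(requestedKey);
        verifier.update(encodedRequest);
        return verifier.verify(pop);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(1024);                      // key size typical for the period
        KeyPair endEntityKeys = generator.generateKeyPair();

        byte[] request = "example certification request".getBytes("UTF-8");
        byte[] pop = signRequest(endEntityKeys, request);
        System.out.println("PoP verifies: " + verifyPop(endEntityKeys.getPublic(), request, pop));
    }
}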

- Encryption Keys

For encryption keys, the end entity can provide the private key to the CA/RA, or can be required to decrypt a value in order to prove possession of the private key. Decrypting a value can be achieved either directly or indirectly. The direct method is for the RA/CA to issue a random challenge to which an immediate response by the EE is required. The indirect method is to issue a certificate which is encrypted for the end entity (and have the end entity demonstrate its ability to decrypt this certificate in the confirmation message). This allows a CA to issue a certificate in a form which can only be used by the intended end entity. This specification encourages use of the indirect method because this requires no extra messages to be sent (i.e., the proof can be demonstrated using the {request, response, confirmation} triple of messages). - Key Agreement Keys For key agreement keys, the end entity and the PKI management entity (i.e., CA or RA) must establish a shared secret key in order to prove that the end entity has possession of the private key. Note that this need not impose any restrictions on the keys that can be certified by a given CA -- in particular, for Diffie-Hellman keys the end entity may freely choose its algorithm parameters -- provided that the CA can generate a short-term (or one-time) key pair with the appropriate parameters when necessary. • Root CA key update This discussion only applies to CAs that are a root CA for some end entities. The basis of the procedure described here is that the CA protects its new public key using its previous private key and vice versa. Thus when a CA updates its key pair it must generate two extra cACertificate attribute values if certificates are made available using an X.500 directory (for a total of four: OldWithOld; OldWithNew; NewWithOld;and NewWithNew). When a CA changes its key pair those entities who have acquired the old CA public key via "out-of-band" means are most affected. It is these end entities who will need access to the new CA public key protected with the old CA private key. However, they will only require this for a limited period (until they have acquired the new CA public key via the "out-of-band" mechanism). This will typically be easily achieved when these end entities' certificates expire. The data structure used to protect the new and old CA public keys is a standard certificate (which may also contain extensions). There are no new data structures required. Note 1. This scheme does not make use of any of the X.509 v3 extensions as it must be able to work even for version 1 certificates. The presence of the KeyIdentifier extension would make for efficiency improvements. Note 2. While the scheme could be generalised to cover cases where the CA updates its key pair more than once during the validity period of one of its end entities' certificates, this generalisation seems of dubious value. Not having this generalisation simply means that the validity period of a CA key pair must be greater than the validity period of any certificate issued by that CA using that key pair.

Note 3. This scheme forces end entities to acquire the new CA public key on the expiry of the last certificate they owned that was signed with the old CA private key (via the "out-of-band" means). Certificate and/or key update operations occurring at other times do not necessarily require this (depending on the end entity's equipment).
- CA Operator actions
To change the key of the CA, the CA operator does the following:
1. Generate a new key pair;
2. Create a certificate containing the old CA public key signed with the new private key (the "old with new" certificate);
3. Create a certificate containing the new CA public key signed with the old private key (the "new with old" certificate);
4. Create a certificate containing the new CA public key signed with the new private key (the "new with new" certificate);
5. Publish these new certificates via the directory and/or other means (perhaps using a CAKeyUpdAnn message);
6. Export the new CA public key so that end entities may acquire it using the "out-of-band" mechanism (if required).
The old CA private key is then no longer required. The old CA public key will however remain in use for some time. The old CA public key is no longer required (other than for non-repudiation) when all end entities of this CA have securely acquired the new CA public key.
The "old with new" certificate must have a validity period starting at the generation time of the old key pair and ending at the expiry date of the old public key.
The "new with old" certificate must have a validity period starting at the generation time of the new key pair and ending at the time by which all end entities of this CA will securely possess the new CA public key (at the latest, the expiry date of the old public key).
The "new with new" certificate must have a validity period starting at the generation time of the new key pair and ending at the time by which the CA will next update its key pair.
• Verifying Certificates
Normally when verifying a signature, the verifier verifies (among other things) the certificate containing the public key of the signer. However, once a CA is allowed to update its key, there is a range of new possibilities. These are summarised in the table below, rebuilt here as a list of cases:
Directory server contains the NEW and OLD public keys, and the verifier's PSE contains the NEW public key:
- Case 1 (signer's certificate signed with the NEW key): this is the standard case; the verifier can verify the certificate directly without using the directory.
- Case 2 (signer's certificate signed with the OLD key): the verifier must access the directory in order to get the value of the OLD public key.
Directory server contains the NEW and OLD public keys, and the verifier's PSE contains the OLD public key:
- Case 3 (signer's certificate signed with the NEW key): the verifier must access the directory in order to get the value of the NEW public key.
- Case 4 (signer's certificate signed with the OLD key): the verifier can verify the certificate directly without using the directory.
Directory server contains only the OLD public key, and the verifier's PSE contains the NEW public key:
- Case 5 (signer's certificate signed with the NEW key): although the CA operator has not updated the directory, the verifier can verify the certificate directly, as in case 1.
- Case 6 (signer's certificate signed with the OLD key): the verifier thinks this is the situation of case 2 and will access the directory; however, the verification will fail.
Directory server contains only the OLD public key, and the verifier's PSE contains the OLD public key:
- Case 7 (signer's certificate signed with the NEW key): the CA operator has not updated the directory and so the verification will fail.
- Case 8 (signer's certificate signed with the OLD key): although the CA operator has not updated the directory, the verifier can verify the certificate directly, as in case 4.
Ø Verification in cases 1, 4, 5 and 8.
In these cases the verifier has a local copy of the CA public key which can be used to verify the certificate directly. This is the same as the situation where no key change has occurred. Note that case 8 may arise between the time when the CA operator has generated the new key pair and the time when the CA operator stores the updated attributes in the directory. Case 5 can only arise if the CA operator has issued both the signer's and the verifier's certificates during this "gap" (the CA operator SHOULD avoid this as it leads to the failure cases described below).
Ø Verification in case 2.
In case 2 the verifier must get access to the old public key of the CA. The verifier does the following:
1. Look up the caCertificate attribute in the directory and pick the OldWithNew certificate (determined based on validity periods);
2. Verify that this is correct using the new CA key (which the verifier has locally);
3. If correct, check the signer's certificate using the old CA key.
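A rough sketch of the case 2 procedure in Java: the verifier holds the new CA key locally, obtains the OldWithNew certificate, checks it with the new key and then checks the signer's certificate with the old key recovered from it. How the OldWithNew certificate is actually looked up in the directory is outside this sketch, so it is passed in as a parameter; all names are illustrative.

import java.security.PublicKey;
import java.security.cert.X509Certificate;

// Case 2 of the CA key update discussion: the verifier's PSE holds the NEW CA
// public key, but the signer's certificate was signed with the OLD CA key.
public class CaKeyUpdateCase2Sketch {

    static boolean verifySignerCertificate(X509Certificate signerCert,
                                           X509Certificate oldWithNewCert,
                                           PublicKey newCaKey) {
        try {
            // 1. The OldWithNew certificate (looked up via the caCertificate
            //    attribute, selected on validity periods) is verified with the
            //    NEW CA key held locally.
            oldWithNewCert.checkValidity();
            oldWithNewCert.verify(newCaKey);

            // 2. The OLD CA public key is the subject key of OldWithNew.
            PublicKey oldCaKey = oldWithNewCert.getPublicKey();

            // 3. The signer's certificate is then checked with the OLD CA key.
            signerCert.checkValidity();
            signerCert.verify(oldCaKey);
            return true;
        } catch (Exception e) {
            // Any signature or validity failure means the chain is not trusted.
            return false;
        }
    }
}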

Case 2 will arise when the CA operator has issued the signer's certificate, then changed key, and then issued the verifier's certificate, so it is quite a typical case.
Ø Verification in case 3.
In case 3 the verifier must get access to the new public key of the CA. The verifier does the following:
1. Look up the caCertificate attribute in the directory and pick the NewWithOld certificate (determined based on validity periods);
2. Verify that this is correct using the old CA key (which the verifier has stored locally);
3. If correct, check the signer's certificate using the new CA key.
Case 3 will arise when the CA operator has issued the verifier's certificate, then changed key, and then issued the signer's certificate, so it is also quite a typical case.
Ø Failure of verification in case 6.
In this case the CA has issued the verifier's PSE containing the new key without updating the directory attributes. This means that the verifier has no means to get a trustworthy version of the CA's old key, and so verification fails. Note that the failure is the CA operator's fault.

Ø Failure of verification in case 7.
In this case the CA has issued the signer's certificate protected with the new key without updating the directory attributes. This means that the verifier has no means to get a trustworthy version of the CA's new key, and so verification fails. Note that the failure is again the CA operator's fault.
- Revocation - Change of CA key
As we saw above, the verification of a certificate becomes more complex once the CA is allowed to change its key. This is also true for revocation checks, as the CA may have signed the CRL using a newer private key than the one that is within the user's PSE. The analysis of the alternatives is as for certificate verification.
4.2.5.8.6 PKI Mandatory Management functions
The PKI management functions outlined in 4.2.5.8.4 above are described in this section. This section deals with functions that are "mandatory" in the sense that all end entity and CA/RA implementations MUST be able to provide the functionality described (perhaps via one of the transport mechanisms defined at the end). This description is effectively a profile of the PKI management functionality that MUST be supported. Note that not all PKI management functions result in the creation of a PKI message.
• Root CA initialisation
(See 4.2.5.8.2 above for the definition of "root CA".) A newly created root CA must produce a "self-certificate", which is a Certificate structure with the profile defined for the "newWithNew" certificates (see 4.2.5.8.7 below) issued following a root CA key update: in other words, a certificate with the new CA public key signed with the new CA private key. In order to make the CA's self-certificate useful to end entities that do not acquire the self-certificate via "out-of-band" means, the CA must also produce a fingerprint for its public key. End entities that acquire this fingerprint securely via some "out-of-band" means can then verify the CA's self-certificate and hence the other attributes contained therein. The data structure used to carry the fingerprint is the OOBCertHash.
4.2.5.8.7 Root CA key update
CA keys (as all other keys) have a finite lifetime and will have to be updated on a periodic basis. The certificates NewWithNew, NewWithOld, and OldWithNew (see the "Root CA key update" discussion in 4.2.5.8.5 above) are issued by the CA to aid existing end entities who hold the current self-signed CA certificate (OldWithOld) to transition securely to the new self-signed CA certificate (NewWithNew), and to aid new end entities who will hold NewWithNew to acquire OldWithOld securely for verification of existing data.
4.2.5.8.8 Subordinate CA initialisation
[See 4.2.5.8.2 above for this document's definition of "subordinate CA".] From the perspective of PKI management protocols, the initialisation of a subordinate CA is the same as the initialisation of an end entity. The only difference is that the subordinate CA must also produce an initial revocation list.
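The fingerprint mentioned under "Root CA initialisation" above is simply a hash over the encoded self-certificate that can be read out over the telephone or printed on paper for out-of-band comparison. A minimal sketch, assuming the certificate is available as an encoded file; SHA-1 is used here only as a representative digest, since the OOBCertHash structure carries its own algorithm identification.

import java.io.FileInputStream;
import java.security.MessageDigest;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Compute a fingerprint of a (root) CA self-certificate for out-of-band
// verification, e.g. comparison against a value published on paper.
public class CertFingerprintSketch {

    static String fingerprint(X509Certificate cert) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
        StringBuffer hex = new StringBuffer();
        for (int i = 0; i < digest.length; i++) {
            if (i > 0) hex.append(':');
            String b = Integer.toHexString(digest[i] & 0xff).toUpperCase();
            hex.append(b.length() == 1 ? "0" + b : b);
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        X509Certificate caCert =
            (X509Certificate) factory.generateCertificate(new FileInputStream(args[0]));
        System.out.println("CA self-certificate fingerprint: " + fingerprint(caCert));
    }
}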

4.2.5.8.9 CRL production
Before issuing any certificates, a newly established CA (which issues CRLs) must produce "empty" versions of each CRL which is to be periodically produced.
4.2.5.8.10 PKI information request
When a PKI entity (CA, RA, or EE) wishes to acquire information about the current status of a CA, it MAY send that CA a request for such information. The CA must respond to the request by providing (at least) all of the information requested by the requester. If some of the information cannot be provided, then an error must be conveyed to the requester. If PKIMessages are used to request and supply this PKI information, then the request must be the GenMsg message, the response must be the GenRep message, and the error must be the Error message. These messages are protected using a MAC based on shared secret information (i.e., PasswordBasedMAC) or any other authenticated means (if the end entity has an existing certificate).
• Cross certification
The requester CA is the CA that will become the subject of the cross-certificate; the responder CA will become the issuer of the cross-certificate. The requester CA must be "up and running" before initiating the cross-certification operation. The cross-certification scheme is essentially a one-way operation; that is, when successful, this operation results in the creation of one new cross-certificate. If the requirement is that cross-certificates be created in "both directions", then each CA in turn must initiate a cross-certification operation (or use another scheme). This scheme is suitable where the two CAs in question can already verify each other's signatures (they have some common points of trust) or where there is an out-of-band verification of the origin of the certification request.
Detailed description:
1. Cross certification is initiated at one CA, known as the responder. The CA administrator for the responder identifies the CA it wants to cross-certify and the responder CA equipment generates an authorisation code. The responder CA administrator passes this authorisation code by out-of-band means to the requester CA administrator. The requester CA administrator enters the authorisation code at the requester CA in order to initiate the on-line exchange.
2. The authorisation code is used for authentication and integrity purposes. This is done by generating a symmetric key based on the authorisation code and using the symmetric key for generating Message Authentication Codes (MACs) on all messages exchanged.
3. The requester CA initiates the exchange by generating a random number (requester random number). The requester CA then sends to the responder CA the cross certification request (ccr) message. The fields in this message are protected from modification with a MAC based on the authorisation code.
4. Upon receipt of the ccr message, the responder CA checks the protocol version, saves the requester random number, generates its own random number (responder random number) and validates the MAC. It then generates (and archives, if desired) a new requester certificate that contains the requester CA public key and is signed with the responder CA signature private key.

The responder CA responds with the cross certification response (ccp) message. The fields in this message are protected from modification with a MAC based on the authorisation code.
5. Upon receipt of the ccp message, the requester CA checks that its own system time is close to the responder CA system time, checks the received random numbers and validates the MAC. The requester CA responds with the PKIConfirm message. The fields in this message are protected from modification with a MAC based on the authorisation code. The requester CA writes the requester certificate to the Repository.
6. Upon receipt of the PKIConfirm message, the responder CA checks the random numbers and validates the MAC.
Notes:
1. The ccr message must contain a "complete" certification request, that is, all fields (including, e.g., a BasicConstraints extension) must be specified by the requester CA.
2. The ccp message SHOULD contain the verification certificate of the responder CA; if present, the requester CA must then verify this certificate (for example, via the "out-of-band" mechanism).
• End entity initialisation
As with CAs, end entities must be initialised. Initialisation of end entities requires at least two steps (other possible steps include the retrieval of trust condition information and/or out-of-band verification of other CA public keys).
1. Acquisition of PKI information. The information REQUIRED is:
- the current root-CA public key;
- (if the certifying CA is not a root-CA) the certification path from the root CA to the certifying CA, together with appropriate revocation lists;
- the algorithms and algorithm parameters which the certifying CA supports for each relevant usage.
Additional information could be required (e.g., supported extensions or CA policy information) in order to produce a certification request which will be successful. However, it is NOT MANDATORY that the end entity acquires this information via the PKI messages. The end result is simply that some certification requests may fail (e.g., if the end entity wants to generate its own encryption key but the CA does not allow that).
2. Out-of-band verification of the root-CA key. An end entity must securely possess the public key of its root CA. One method to achieve this is to provide the end entity with the CA's self-certificate fingerprint via some secure "out-of-band" means. The end entity can then securely use the CA's self-certificate.
• Certificate Request
An initialised end entity MAY request a certificate at any time (as part of an update procedure, or for any other purpose). This request will be made using the certification request (cr) message. If the end entity already possesses a signing key pair (with a corresponding verification certificate), then this cr message will typically be protected by the entity's digital signature. The CA returns the new certificate (if the request is successful) in a CertRepMessage.
• Key Update

When a key pair is due to expire, the relevant end entity MAY request a key update - that is, it MAY request that the CA issue a new certificate for a new key pair. The request is made using a key update request (kur) message. If the end entity already possesses a signing key pair (with a corresponding verification certificate), then this message will typically be protected by the entity's digital signature. The CA returns the new certificate (if the request is successful) in a key update response (kup) message, which is syntactically identical to a CertRepMessage.
4.2.5.9 X.509 OCSP protocol overview
These X.509 specifications address the challenge of implementing On-line Certificate Status Protocol (OCSP) validation, that is, how to determine the current status of a digital certificate without requiring a CRL. Applications that need to reduce or eliminate the CRL time-granularity problem, such as high-value funds transfers or large stock trades, will take advantage of this specification. An OCSP client issues a status request to an OCSP responder and suspends acceptance of the certificate in question until the responder provides a response. The protocol specifies the format of the messages between an application checking the status of a certificate and the server providing this status.
1. The OCSP request. The request must contain at least: the protocol version, the service request, the target certificate identifier, and optional extensions which may be processed by the responder. Upon receipt of a request, an OCSP responder determines whether the message is well formed and whether it has all the information needed.
2. The OCSP response. OCSP responses can be of various types; however, there is one basic response type that MUST be supported by all OCSP servers and clients. A response consists of a response type and the bytes of the actual response. All definitive response messages MUST be digitally signed, and the key used to sign them must belong to one of:
- the CA who issued the certificate in question;
- a Trusted Responder whose public key is trusted by the requester;
- a CA Designated Responder (CA/DR, or Authorised Responder) who holds a special certificate, issued by the CA, indicating that the CA/DR may issue OCSP responses for that CA.
A definitive response message consists of:
- the version of the response syntax;
- the name of the responder;
- responses for each certificate in the request;
- optional extensions;
- the signature algorithm OID.
The response for each certificate in a request consists of:
- the target certificate identifier;
- the certificate status value;
- the response validity interval;
- optional extensions.
The specification defines the following definitive response indicators for use in the certificate status:
- good
- revoked
- unknown
The good state is a positive response indicating that, at least, the certificate is not revoked; it does not necessarily mean that the time at which the response is produced lies within the certificate's validity interval. The revoked and unknown responses mean exactly what is expected.
Prior to accepting a signed response as valid, an OCSP client MUST confirm that:
1. the certificate identified in the received response corresponds to the one identified in the corresponding request;
2. the signature of the response is valid;
3. the identity of the signer matches the intended recipient of the request;
4. the signer is currently authorised to sign the response;
5. the time at which the status being indicated is known to be correct is sufficiently recent;
6. when available, the time at or before which newer information about the status of the certificate will be available is greater than the current time.
There are three time fields in a response message that inform about the validity of the status reported:
- thisUpdate, the time at which the status being indicated is known to be correct;
- nextUpdate, the time at or before which newer information will be available about the status of the certificate;
- producedAt, the time at which the OCSP responder signs the response.
If nextUpdate is not set, the responder is indicating that newer revocation information is available all the time.
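A sketch of the client-side acceptance checks listed above. Java of this period has no OCSP API, so the response is modelled with a small hypothetical holder class; only the signature, matching and freshness logic is meant to be illustrative. Checks 3 and 4 (matching the signer to the intended responder and checking its authorisation) are assumed to have already selected the trusted responder key passed in.

import java.security.PublicKey;
import java.security.Signature;
import java.util.Date;

// Hypothetical, simplified OCSP single response: certificate identifier,
// status, time fields and the responder's signature over the encoded body.
class SimpleOcspResponse {
    String certId;            // identifier of the certificate the status refers to
    String status;            // "good", "revoked" or "unknown"
    Date thisUpdate;          // time at which the status is known to be correct
    Date nextUpdate;          // may be null: newer information always available
    byte[] encodedBody;       // the signed portion of the response
    byte[] signature;         // signature over encodedBody
}

public class OcspClientChecksSketch {

    // maxAge: how old (in milliseconds) a thisUpdate value we are willing to accept.
    static boolean acceptResponse(SimpleOcspResponse resp, String requestedCertId,
                                  PublicKey trustedResponderKey, long maxAge) {
        try {
            // 1. Response must refer to the certificate we asked about.
            if (!resp.certId.equals(requestedCertId)) return false;

            // 2. Validate the signature with a key we trust (issuing CA,
            //    trusted responder, or CA-designated responder).
            Signature sig = Signature.getInstance("SHA1withRSA");
            sig.initVerify(trustedResponderKey);
            sig.update(resp.encodedBody);
            if (!sig.verify(resp.signature)) return false;

            // 5. thisUpdate must be sufficiently recent.
            Date now = new Date();
            if (now.getTime() - resp.thisUpdate.getTime() > maxAge) return false;

            // 6. If nextUpdate is present it must still lie in the future.
            if (resp.nextUpdate != null && resp.nextUpdate.before(now)) return false;

            // Only "good" allows the certificate to be accepted.
            return "good".equals(resp.status);
        } catch (Exception e) {
            return false;
        }
    }
}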

4.2.5.10 X.509 Time Stamp protocol
4.2.5.10.1 TSA overview
This specification describes the data formats and the protocols used in communicating with a Time Stamp Authority (TSA).

The time stamping service can be used as a TTP, as one component in building reliable non-repudiation services. The X.509 protocols also introduce the optional Temporal Data Authority (TDA) in order to reduce the amount of trust required of a TSA.
In order to associate a message with a particular point in time, a TSA may need to be used. The TSA provides proof of the existence of a particular message at an instant in time. A TSA may also be needed when a trusted unique time reference is required and the locally available clock cannot be trusted by all parties. The TSA applies a time stamp to a message, attesting that the message existed before the indicated time. This can be used, for example:
- to verify that a digital signature was applied before a certificate was revoked, thus allowing a revoked public key certificate to be used for verifying signatures created prior to the time of revocation (this is a very important PKI operation);
- to indicate the time of a submission when a deadline is critical;
- to indicate the time of a transaction for entries in a log.
These examples show that applications of this protocol can be very important. In particular, the time stamp feature can be one of the keystones needed to define a common transactional framework for the legal validity of digital signatures in court.
In order to reduce the amount of trust required of a TSA, the concept of the TDA (Temporal Data Authority) is introduced. A TDA is a TTP that creates a temporal data token whose function is to provide further corroborating evidence of the time contained in the TSA token. For example, a TDA could associate, as proof of the time stamped in the token, the most recent stock market closing values, sports or lottery results, et cetera. TDAs and TSAs have a specific message interchange protocol to accomplish the token request/delivery functions and the validations necessary for the time stamp service. The TSA also has to validate the status of the TDA certificate.
Placing a signature at a particular point in time makes it possible to accept as valid some signatures whose public key certificate has been revoked by the time the validation is done. A digital document signed with a time stamp also provides a guarantee for liabilities derived from the content of the signed document when that content is time dependent.
4.2.5.10.2 TSA functional assumptions and requirements
The X.509 TSA is required to meet the following functional specifications:
1. To provide a trusted source of time.
2. Not to include any identification of the requesting entity in the time stamp token.
3. Not to examine the imprint being time stamped in any way.
4. To include a monotonically incrementing value of the time of day in its time stamp token.
5. To produce a time stamp token upon receiving a valid request from the requester.
6. To include within each time stamp token an identifier to uniquely indicate the trust and validation policy under which the token was created.
7. To only time stamp a hash representation of the message.
8. To examine the OID of the one-way collision-resistant hash function and verify that this function is sufficient.
9. To sign each time stamp token using a key generated exclusively for this purpose, and to have this property indicated on the corresponding certificate with the extended key usage field extension. This certificate extension MUST be critical.
10. To include supplementary temporal information in the time stamp token (from TDAs) if asked by the requester. If this is not possible, the TSA will respond with an error message.
11. To provide a signed receipt to the requester (i.e., in the form of an appropriately defined time stamp token), where "appropriate" is defined by the TSA policy.
4.2.5.10.3 TSA security considerations
When designing a TSA/TDA service, the following considerations have been identified in the draft that have an impact upon the validity of, or "trust" in, the time stamp token.
1. When there is a reason to believe that the TSA can no longer be trusted, the authority's certificate must be revoked and placed on the appropriate CRL. Thus, at any future time the tokens signed with the corresponding key will not be valid.
2. The TSA private key may be compromised and the corresponding certificate revoked. In this case, any token signed by the TSA using that private key cannot be trusted. For this reason, it is imperative that the TSA's private key be guarded with proper security and controls in order to minimise the possibility of compromise. In case the private key does become compromised, an audit trail of all tokens generated by the TSA may provide a means to discriminate between genuine and false tokens.
3. The TSA signing key must be of a sufficient length to allow for a sufficiently long lifetime. Even if this is done, the key will have a finite lifetime. Thus, any token signed by the TSA should be time stamped again (if authentic copies of old CRLs are available) or notarised (if they are not) at a later date to renew the trust that exists in the TSA's signature. Time stamp tokens could also be kept with an Evidence Recording Authority to maintain this trust.
4. An application using the TSA service should be concerned about the amount of time it is willing to wait for a response. A "man-in-the-middle" attack can introduce delays. Thus, any time stamp token that takes more than an acceptable period of time should be considered suspect.
5. In certain circumstances, a TSA may not be able to produce a valid response to a request (for example, if it is unable to compute signatures for a period of time). In these situations the TSA must wait until it is again able to produce a valid response before responding, if this is possible. If this is not possible, it must ignore the requests and not respond. Under no circumstances shall a TSA produce an unsigned response to a request.

6. A CA shall normally conduct a test of proof of possession for each user's signing private key (including the TSA signing private key). However, in some environments, the CA might not perform a proof of possession of the private key when issuing certificates. In these instances, in order to prevent certain attacks and to properly check the validity of the binding between an end entity and a key pair, a certificate identifier of the TSA shall be included as a signed attribute.
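A minimal sketch of the client side of such a service, reflecting requirement 7 above (only a hash "imprint" of the document is sent to the TSA) and the later verification of the token signature against the TSA certificate. The token layout used here is a placeholder, not the actual X.509/PKIX time stamp token encoding.

import java.security.MessageDigest;
import java.security.Signature;
import java.security.cert.X509Certificate;

// Client-side view of a time stamping exchange: hash the document locally,
// send only the imprint to the TSA, and later verify the signed token.
public class TimeStampClientSketch {

    // Requirement 7: the TSA only ever sees a hash of the message, never the
    // message itself. SHA-1 stands in for whatever one-way hash the TSA accepts.
    static byte[] imprint(byte[] document) throws Exception {
        return MessageDigest.getInstance("SHA-1").digest(document);
    }

    // Verification of a received token: the signature must verify under the
    // TSA certificate, and the token must cover the same imprint we sent.
    static boolean verifyToken(byte[] signedTokenBytes, byte[] tokenSignature,
                               byte[] expectedImprint, byte[] imprintInToken,
                               X509Certificate tsaCert) {
        try {
            // The TSA certificate itself should carry the (critical) extended
            // key usage marking it as a time stamping key - not checked here.
            tsaCert.checkValidity();

            Signature sig = Signature.getInstance("SHA1withRSA");
            sig.initVerify(tsaCert.getPublicKey());
            sig.update(signedTokenBytes);
            if (!sig.verify(tokenSignature)) return false;

            return MessageDigest.isEqual(expectedImprint, imprintInToken);
        } catch (Exception e) {
            return false;
        }
    }
}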

4.2.6 PKI Certification Policy and CPS (Certification Practice Statements)
We know that a certificate binds a public-key value to an entity (a person, an organisation, a site…). This entity is known as the subject of the certificate. A certificate is used by a "certificate user" or relying party that needs to use, and to rely upon the accuracy of, the public key distributed via that certificate. The degree to which a certificate user can trust the binding embodied in a certificate depends on several factors:
- the practices followed by the certification authority (CA) in authenticating the subject;
- the CA's operating policy, procedures and security controls;
- the subject's obligations (for example in protecting the private key);
- the stated undertakings and legal obligations of the CA (for example, warranties and limitations of liability).
A version 3 X.509 certificate may contain a field declaring that one or more specific certificate policies apply to that certificate. A certificate policy is a named set of rules that indicates the applicability of a certificate to a particular community and/or class of application with common security requirements. So, a certificate policy may be used by a certificate user to help in deciding whether a certificate is sufficiently trustworthy for a particular application. A detailed description of the practices followed by a CA in issuing and otherwise managing certificates may be contained in a Certification Practice Statement (CPS) published and referenced by the CA. We are going to establish the relationship between certificate policies and CPSs.
4.2.6.1 Definitions and Certificate Policies Concepts
Some introductory definitions to fix the terms used:
- Activation data: data values, other than keys, that are required to operate cryptographic modules and that need to be protected (e.g., a PIN, a passphrase, a manually held key share).
- Set of provisions: a collection of practice and/or policy statements, spanning a range of standard topics, for use in expressing a CPS.
- Relying party: interchangeable with certificate user.
- Policy qualifier: policy-dependent information that accompanies a certificate policy identifier in a certificate.
- Subject CA: in the context of a particular certificate, the CA whose public key is bound into the certificate.

4.2.6.1.1 Certificate Policies
We already know what a certificate policy is. The mutual recognition of a certificate policy by both the issuer and the user of a certificate is represented in the certificate by a unique, registered OID. The registration process follows procedures specified in ISO/IEC and ITU standards. The party that registers the object identifier also publishes a textual specification of the certificate policy, for examination by certificate users. Any one certificate will typically declare a single certificate policy or, possibly, be issued consistent with a small number of different policies.
Certificate policies also constitute a basis for the accreditation of CAs. Each CA is accredited against one or more certificate policies which it is recognised as implementing. When one CA issues a CA-certificate for another CA, the issuing CA must assess the set of certificate policies for which it trusts the subject CA. The assessed set of certificate policies is then indicated by the issuing CA in the CA-certificate. The X.509 certification path processing logic employs these certificate policy indications in its well-defined trust model.
4.2.6.1.2 Certificate policy examples
For example purposes, suppose that IATA undertakes to define some certificate policies for use throughout the airline industry, in a public-key infrastructure operated by IATA in combination with public-key infrastructures operated by individual airlines. Two certificate policies are defined: the IATA General-Purpose policy and the IATA Commercial-Grade policy.
The IATA General-Purpose policy is intended for use by industry personnel for protecting routine information (e.g., casual electronic mail) and for authenticating connections from World Wide Web browsers to servers for general information retrieval purposes. The key pairs may be generated, stored, and managed using low-cost, software-based systems, such as commercial browsers. Under this policy, a certificate may be automatically issued to anybody listed as an employee in the corporate directory of IATA or any member airline who submits a signed certificate request form to a network administrator in his or her organisation.
The IATA Commercial-Grade policy is used to protect financial transactions or binding contractual exchanges between airlines. Under this policy, IATA requires that certified key pairs be generated and stored in approved cryptographic hardware tokens. Certificates and tokens are provided to airline employees with disbursement authority. These authorised individuals are required to present themselves to the corporate security office, show a valid identification badge, and sign an undertaking to protect the token and use it only for authorised purposes, before a token and a certificate are issued.
4.2.6.1.3 X.509 Certificate Policy Fields
The following extension fields are used in X.509 to support certificate policies:
• Certificate policies extension field
The Certificate Policies extension has two variants: one with the field flagged non-critical and one with the field flagged critical. The purpose of the field is different in the two cases. A non-critical Certificate Policies field lists certificate policies that the certification authority declares are applicable. However, use of the certificate is not restricted to the purposes indicated by the applicable policies.

General-Purpose and Commercial-Grade policies defined in section 4.2.6.1.2, the certificates issued to regular airline employees will contain the object identifier for the General-Purpose policy. The certificates issued to employees with disbursement authority will contain the object identifiers for both the General-Purpose policy and the Commercial-Grade policy. The Certificate Policies field may also optionally convey qualifier values for each identified policy; the use of qualifiers is discussed under Policy Qualifiers below.
The non-critical Certificate Policies field is designed to be used by applications as follows. Each application is pre-configured to know what policy it requires. Using the example of section 4.2.6.1.2, electronic mail applications and Web servers will be configured to require the General-Purpose policy, whereas an airline's financial applications will be configured to require the Commercial-Grade policy for validating financial transactions over a certain dollar value. When processing a certification path, a certificate policy that is acceptable to the certificate-using application must be present in every certificate in the path, i.e., in CA-certificates as well as end-entity certificates. Validating the policy of every certificate in the chain is the responsibility of the application's certification path processing.
If the Certificate Policies field is flagged critical, it serves the same purpose as described above but also has an additional role: it indicates that use of the certificate is restricted to one of the identified policies, i.e., the certification authority is declaring that the certificate must only be used in accordance with the provisions of one of the listed certificate policies. This field is intended to protect the certification authority against damage claims by a relying party who has used the certificate for an inappropriate purpose or in an inappropriate manner, i.e., one not provided for in the applicable certificate policy definition.
For example, the Internal Revenue Service might issue certificates to taxpayers for the purpose of protecting tax filings. The Internal Revenue Service understands and can accommodate the risks of accidentally issuing a bad certificate, e.g., to a wrongly-authenticated person. However, suppose someone used an Internal Revenue Service tax-filing certificate as the basis for encrypting multi-million-dollar-value proprietary secrets, which subsequently fell into the wrong hands because of an error in issuing the certificate. The Internal Revenue Service may want to protect itself against claims for damages in such circumstances. The critical-flagged Certificate Policies extension is intended to mitigate the risk to the certificate issuer in such situations.
• Policy mappings extension field
The Policy Mappings extension may only be used in CA-certificates. This field allows a certification authority to indicate that certain policies in its own domain can be considered equivalent to certain other policies in the subject certification authority's domain. For example, suppose the ACE Corporation establishes an agreement with the ABC Corporation to cross-certify each other's public-key infrastructures for the purposes of mutually protecting electronic data interchange (EDI). Further, suppose that both companies have pre-existing financial transaction protection policies called ace-e-commerce and abc-e-commerce, respectively.
One can see that simply generating cross-certificates between the two domains will not provide the necessary interoperability, as the two companies' applications are configured with, and employee certificates are populated with, their respective certificate policies. One possible
solution is to reconfigure all of the financial applications to require either policy and to reissue all the certificates with both policies. Another solution, which may be easier to administer, uses the Policy Mappings field. If this field is included in a cross-certificate for the ABC Corporation certification authority issued by the ACE Corporation certification authority, it can provide a statement that ABC's financial transaction protection policy (i.e., abc-e-commerce) can be considered equivalent to that of the ACE Corporation (i.e., ace-e-commerce).
• Policy constraints extension field
The Policy Constraints extension supports two optional features. The first is the ability for a certification authority to require that explicit certificate policy indications be present in all subsequent certificates in a certification path. Certificates at the start of a certification path may be considered by a certificate user to be part of a trusted domain, i.e., the certification authorities are trusted for all purposes, so no particular certificate policy is needed in the Certificate Policies extension. Such certificates need not contain explicit indications of certificate policy. However, when a certification authority in the trusted domain certifies outside the domain, it can activate the requirement for explicit certificate policy in subsequent certificates in the certification path.
The other optional feature in the Policy Constraints field is the ability for a certification authority to disable policy mapping by subsequent certification authorities in a certification path. It may be prudent to disable policy mapping when certifying outside the domain. This can assist in controlling risks due to transitive trust, e.g., domain A trusts domain B, domain B trusts domain C, but domain A does not want to be forced to trust domain C.
• Policy Qualifiers
The Certificate Policies extension field described above has a provision for conveying, along with each certificate policy identifier, additional policy-dependent information in a qualifier field. The X.509 standard does not prescribe a syntax for this field, nor does it mandate any purpose for which the field is to be used.
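The following sketch illustrates, in Java, how a certificate-using application can be pre-configured with the certificate policy it requires and have that policy enforced during certification path processing. It is only an outline under stated assumptions: the policy object identifier, the certificate file names and the use of the standard Java PKIX path validation API are illustrative choices made here, not values or mechanisms defined by this document.

import java.io.FileInputStream;
import java.security.cert.*;
import java.util.*;

/*
 * Minimal sketch: validate a certification path while requiring a given
 * certificate policy in every certificate of the path. All file names and
 * the policy OID are hypothetical placeholders.
 */
public class PolicyValidationSketch {

    // Hypothetical OID standing in for a "Commercial-Grade"-style policy.
    private static final String REQUIRED_POLICY_OID = "1.2.3.4.5.6.7.1";

    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Trust anchor: the root CA certificate of the trusted domain.
        X509Certificate rootCa;
        try (FileInputStream in = new FileInputStream("root-ca.cer")) {
            rootCa = (X509Certificate) cf.generateCertificate(in);
        }
        Set<TrustAnchor> anchors =
                Collections.singleton(new TrustAnchor(rootCa, null));

        // Certification path: end-entity certificate first, then intermediate CAs.
        List<X509Certificate> chain = new ArrayList<>();
        for (String name : new String[] {"end-entity.cer", "intermediate-ca.cer"}) {
            try (FileInputStream in = new FileInputStream(name)) {
                chain.add((X509Certificate) cf.generateCertificate(in));
            }
        }
        CertPath path = cf.generateCertPath(chain);

        PKIXParameters params = new PKIXParameters(anchors);
        // The relying application is pre-configured with the policy it requires.
        params.setInitialPolicies(Collections.singleton(REQUIRED_POLICY_OID));
        // Require an acceptable policy in every certificate of the path.
        params.setExplicitPolicyRequired(true);
        // Disable policy mapping by subsequent CAs (transitive-trust control).
        params.setPolicyMappingInhibited(true);
        // Revocation checking is disabled here only to keep the sketch short;
        // a real relying party must check revocation status.
        params.setRevocationEnabled(false);

        CertPathValidator validator = CertPathValidator.getInstance("PKIX");
        PKIXCertPathValidatorResult result =
                (PKIXCertPathValidatorResult) validator.validate(path, params);
        System.out.println("Valid policy tree: " + result.getPolicyTree());
    }
}

The three parameters set above correspond directly to the extension fields described in this section: the initial policy set plays the role of the application's required policy, while the explicit-policy and mapping-inhibition settings mirror the two optional features of the Policy Constraints extension.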

4.2.6.2 CPS: Certification Practice Statement
A certification practice statement may take the form of a declaration by the certification authority of the details of its trustworthy system and the practices it employs in its operations. It may also be a statute or regulation applicable to the certification authority and covering similar subject matter. A CPS may also be part of the contract between the certification authority and the subscriber. Finally, a certification practice statement may be comprised of multiple documents: a combination of public law, private contract, and/or declaration.
4.2.6.2.1 Relationship between certificate policy and certification practice statement
The concepts of certificate policy and CPS come from different sources and were developed for different reasons. A certification practice statement is a detailed statement by a certification authority about its practices, which potentially needs to be understood and consulted by subscribers and certificate users (relying parties). The CPS is generally more detailed than certificate policy definitions. Indeed, CPSs may be quite comprehensive, robust documents providing a description of the precise service offerings, detailed procedures of the life-cycle management of certificates and other details that characterise the implementation of the services. This detail may be indispensable for adequately disclosing, or making a full assessment of, trustworthiness in the absence of accreditation or other recognised quality metrics, but a detailed CPS does not form a suitable basis for interoperability between CAs operated by different organisations. Rather, certificate policies best serve as the vehicle on which to base common interoperability standards and common assurance criteria on an industry-wide basis. A CA with a single CPS may support multiple certificate policies, used for different application purposes and/or by different certificate user communities. Also, different CAs with different CPSs may support the same specific certificate policy.
The differences can then be summarised as follows:
1. Most organisations that operate public or international certification authorities will document their own practices in CPSs. The CPS is one of the organisation's means of protecting itself and positioning its business relationships with subscribers and other entities.
2. There is a strong incentive for a certificate policy to apply more broadly than to just a single organisation. If a particular certificate policy is widely recognised and imitated, it has great potential as a basis for automated certificate acceptance in many systems.
4.2.6.2.2 Set of provisions
A set of provisions is a collection of practice and/or policy statements, spanning a range of standard topics, for use in expressing a certificate policy definition or CPS employing the approach described in this framework.
A certificate policy can be expressed as a single set of provisions. A CPS can be expressed as a single set of provisions, with each component addressing the requirements of one or more certificate policies, or, alternatively, as an organised collection of sets of provisions. For example, a CPS could be expressed as a combination of the following:
1. a list of certificate policies supported by the CPS;
2. for each certificate policy in (1), a set of provisions which contains statements refining that certificate policy by filling in details not stipulated in that policy or expressly left to the discretion of the CPS; such statements serve to state how this particular CPS implements the requirements of the particular certificate policy;
3. a set of provisions that contains statements regarding the certification practices of the CA, regardless of certificate policy.
4.2.6.3 Set of provisions contents checklist
This section describes a list of topics for the content of a set of provisions; these are therefore candidate topics for inclusion in a certificate policy or in a CPS. It is recommended that each component and sub-component be included in a certificate policy or CPS, even if there is "no stipulation" for it; this protects against inadvertent omission of a topic, while facilitating comparison of different certificate policies or CPSs.
4.2.6.3.1 Introduction
This component identifies and introduces the set of provisions, and indicates the types of
entities and applications for which the specification is targeted.
• Overview
This sub-component provides a general introduction to and description of the specification.
• Identification
This sub-component provides any applicable names or other identifiers, including ASN.1 object identifiers, for the set of provisions.
• Community and Applicability
This sub-component describes the types of entities that issue certificates or that are certified as subject CAs, the types of entities that perform RA functions, and the types of entities that are certified as subject end entities or subscribers. It also contains:
- A list of applications for which the issued certificates are suitable. Examples of applications are: e-mail, contracts, travel orders, ...
- A list of applications for which use of the issued certificates is restricted.
- A list of applications for which use of the issued certificates is prohibited.
• Contact Details
This sub-component includes the name and mailing address of the authority that is responsible for the registration, maintenance, and interpretation of the certificate policy or CPS. It also includes the name, e-mail address, telephone number and fax number of a contact person.
4.2.6.3.2 General Provisions
This component specifies applicable presumptions on a range of legal and general practice topics.
• Obligations
This sub-component contains, for each entity type, any applicable provisions regarding the entity's obligations to other entities. Such provisions may include:
- CA and/or RA obligations: notification of issuance of a certificate to the subscriber who is the subject of the certificate being issued; notification of issuance of a certificate to others than the subject of the certificate; notification of revocation or suspension of a certificate to the subscriber whose certificate is being revoked or suspended; notification of revocation or suspension of a certificate to others than the subject whose certificate is being revoked or suspended.
- Subscriber obligations: accuracy of representations in the certificate application; protection of the entity's private key; restrictions on private key and certificate use; notification upon private key compromise.
- Relying party obligations: purposes for which the certificate is used; digital signature verification responsibilities; revocation and suspension checking responsibilities;
acknowledgement of applicable liability caps and warranties.
- Repository obligations: timely publication of certificates and revocation information.
• Liability
This sub-component contains applicable provisions regarding apportionment of liability:
- Warranties and limitations;
- Kinds of damage covered (e.g., indirect, special, incidental, ...) and disclaimers;
- Loss limitations (caps) per certificate or per transaction;
- Other exclusions (e.g., other party responsibilities).
• Financial responsibility
This sub-component contains, for CAs, repositories and RAs, any applicable provisions regarding financial responsibilities:
- Indemnification of the CA and RA by relying parties;
- Fiduciary relationships (or lack thereof) between the various entities;
- Administrative processes (e.g., accounting, auditing).
• Interpretation and Enforcement
This sub-component contains provisions regarding interpretation and enforcement of the certificate policy or CPS:
- Governing law;
- Severability of provisions, survival, merger, and notice;
- Dispute resolution procedures.
• Fees
This sub-component contains provisions regarding fees charged by CAs, repositories and RAs:
- Certificate issuance or renewal fees;
- Certificate access fees;
- Revocation or status information fees;
- Fees for other services such as policy information;
- Refund policy.
• Publication and Repositories
This sub-component contains provisions regarding publication of CA information:
- A CA obligation to publish information regarding its practices, its certificates, and the current status of such certificates;
- Frequency of publication;
- Access controls on published information objects, including certificate policy definitions, CPSs, certificates, certificate status, and CRLs;
- Requirements pertaining to the use of repositories operated by CAs or by other independent parties.
• Compliance audit
This sub-component contains provisions regarding compliance audit for entities operating under the
certificate policy or CPS:
- Frequency of compliance audit for each entity;
- Identity/qualifications of the auditor;
- The auditor's relationship to the entity being audited;
- List of topics covered under the compliance audit;
- Actions taken as a result of a deficiency found during a compliance audit;
- Compliance audit results: with whom they are shared (e.g., CA, RA, end entities), who provides them (entity being audited or auditor), and how they are communicated.
• Confidentiality Policy
This sub-component addresses the following:
- Types of information that must be kept confidential by the CA or RA;
- Types of information that are not considered confidential;
- Who is entitled to be informed of reasons for revocation and suspension of certificates;
- Policy on release of information to law enforcement officials;
- Information that can be revealed as part of civil discovery;
- Conditions under which the CA or RA may disclose information upon the owner's request; and
- Any other circumstances under which confidential information may be disclosed.
• Intellectual Property Rights
This sub-component addresses ownership rights of certificates, practice/policy specifications, names, and keys.
4.2.6.3.3 Identification and Authentication
This component describes the procedures used to authenticate a certificate applicant to a CA or RA prior to certificate issuance. It also describes how parties requesting rekey or revocation are authenticated. Naming practices may also be addressed, including name ownership recognition and name dispute resolution.
• Initial Registration
This sub-component includes the following elements regarding identification and authentication procedures during entity registration or certificate issuance:
- Types of names assigned to the subject;
- Whether names have to be meaningful or not;
- Rules for interpreting various name forms;
- Whether names have to be unique;
- How name claim disputes are resolved;
- Recognition, authentication and role of trademarks;
- If and how the subject must prove possession of the companion private key for the public key being registered;
- Authentication requirements for a person acting on behalf of a subject (CA, RA, end entity);
- Authentication requirements for the organisational identity of a subject (CA, RA, end entity).
• Routine Rekey
This sub-component describes the identification and authentication procedures for routine rekey for each subject type (CA, RA, end entity).
• Rekey after Revocation -- No Key Compromise
This sub-component describes the identification and authentication procedures for rekey for each subject type (CA, RA, end entity) after the subject's certificate has been revoked.
• Revocation Request
This sub-component describes the identification and authentication procedures for a revocation request by each subject type (CA, RA, end entity).
4.2.6.3.4 Operational Requirements
This component is used to specify requirements imposed upon the issuing CA, subject CAs, RAs, or end entities with respect to various operational activities.
• Certificate Application
This sub-component is used to state requirements regarding subject enrolment and requests for certificate issuance.
• Certificate Issuance
This sub-component is used to state requirements regarding issuance of a certificate and notification to the applicant of such issuance.
• Certificate Acceptance
This sub-component is used to state requirements regarding acceptance of an issued certificate and the consequent publication of certificates.
• Certificate Suspension and Revocation
This sub-component addresses the following issues:
- Circumstances under which a certificate may be revoked or suspended;
- Who can request the revocation or suspension of an entity's certificate;
- Procedures used for a certificate revocation/suspension request;
- Revocation/suspension request grace period available to the subject;
- How long a suspension may last;
- If a CRL mechanism is used, the issuance frequency;
- On-line revocation status checking availability;
- Requirements on relying parties to perform on-line revocation status checks;
- Other forms of revocation advertisement available;
- Requirements on relying parties to perform other forms of revocation status checks;
- Any variations on the above stipulations when the suspension or revocation is the result of private key compromise.
• Security Audit Procedures
This sub-component is used to describe the event logging and audit systems implemented for the purpose of maintaining a secure environment. Elements include the following:
- Types of events recorded;
- Frequency with which audit logs are processed or audited;
- Period for which audit logs are kept;
- Protection of audit logs:
* Who can view audit logs;
* Protection against modification of audit logs;
* Protection against deletion of audit logs.
- Audit log backup procedures;
- Whether the audit log accumulation system is internal or external to the entity;
- Whether the subject who caused an audit event to occur is notified of the audit action;
- Vulnerability assessments.

• Records Archival
This sub-component is used to describe general records archival (or records retention) policies, including the following:
- Types of events recorded;
- Retention period for the archive;
- Protection of the archive:
* Who can view the archive;
* Protection against modification of the archive;
* Protection against deletion of the archive.
- Archive backup procedures;
- Requirements for time-stamping of records;
- Whether the archive collection system is internal or external;
- Procedures to obtain and verify archive information.
• Key changeover
This sub-component describes the procedures used to provide a new public key to a CA's users.
• Compromise and Disaster Recovery
This sub-component describes requirements relating to notification and recovery procedures in the event of compromise or disaster. Each of the following circumstances may need to be addressed separately:
- The recovery procedures used if computing resources, software, and/or data are corrupted or suspected to be corrupted. These procedures describe how a secure environment is re-established, which certificates are revoked, whether the entity key is revoked, how the new entity public key is provided to the users, and how the subjects are re-certified.
- The recovery procedures used if the entity key is revoked. These procedures describe how a secure environment is re-established, how the new entity public key is provided to the users, and how the subjects are re-certified.
- The recovery procedures used if the entity key is compromised. These procedures describe how a secure environment is re-established, how the new entity public key is provided to the users, and how the subjects are re-certified.
- The CA's procedures for securing its facility during the period of time following a
natural or other disaster and before a secure environment is re-established, either at the original site or at a remote hot site; for example, procedures to protect against theft of sensitive materials from an earthquake-damaged site.
• CA Termination
This sub-component describes requirements relating to procedures for termination of a CA or RA and for notification of such termination, including the identity of the custodian of the CA and RA archival records.
4.2.6.3.5 Physical, Procedural, and Personnel Security Controls
This component describes the non-technical security controls (that is, physical, procedural, and personnel controls) used by the issuing CA to perform securely the functions of key generation, subject authentication, certificate issuance, certificate revocation, audit, and archival. This component can also be used to define non-technical security controls on the repository, subject CAs, RAs, and end entities. The non-technical security controls for the subject CAs, RAs, and end entities could be the same, similar, or very different. These non-technical security controls are critical to trusting the certificates, since a lack of security may compromise CA operations, resulting for example in the creation of certificates or CRLs with erroneous information or in the compromise of the CA private key.
4.2.6.3.6 Physical Controls
In this sub-component, the physical controls on the facility housing the entity's systems are described. (21) Topics addressed may include:
- Site location and construction;
- Physical access;
- Power and air conditioning;
- Water exposures;
- Fire prevention and protection;
- Media storage;
- Waste disposal; and
- Off-site backup.
• Procedural Controls
In this sub-component, the requirements for recognising trusted roles are described, together with the responsibilities for each role. For each task identified for each role, it should also be stated how many individuals are required to perform the task (n out of m rule). Identification and authentication requirements for each role may also be defined.
• Personnel Controls
This sub-component may address the following:
- Background checks and clearance procedures required for the personnel filling the trusted roles;
- Background check and clearance procedure requirements for other personnel, including janitorial staff;
- Training requirements and training procedures for each role;
- Any retraining period and retraining procedures for each role;
- Frequency and sequence for job rotation among various roles;
- Sanctions against personnel for unauthorised actions, unauthorised use of authority, and unauthorised use of entity systems;
- Controls on contracting personnel, including:
* Bonding requirements on contract personnel;
* Contractual requirements, including indemnification for damages due to the actions of the contractor personnel;
* Audit and monitoring of contractor personnel; and
* Other controls on contracting personnel.
- Documentation to be supplied to personnel.

4.2.6.3.7 Technical Security Controls
This component is used to define the security measures taken by the issuing CA to protect its cryptographic keys and activation data (e.g., PINs, passwords, or manually-held key shares). This component may also be used to impose constraints on repositories, subject CAs and end entities to protect their cryptographic keys and critical security parameters. Secure key management is critical to ensure that all secret and private keys and activation data are protected and used only by authorised personnel.
This component also describes other technical security controls used by the issuing CA to perform securely the functions of key generation, user authentication, certificate registration, certificate revocation, audit, and archival. Technical controls include life-cycle security controls (including software development environment security and trusted software development methodology) and operational security controls. This component can also be used to define other technical security controls on repositories, subject CAs, RAs, and end entities.
• Key Pair Generation and Installation
Key pair generation and installation need to be considered for the issuing CA, repositories, subject CAs, RAs, and subject end entities. For each of these types of entity, the following questions potentially need to be answered (a minimal illustration in code is given after the Activation Data sub-component below):
1. Who generates the entity's public/private key pair?
2. How is the private key provided securely to the entity?
3. How is the entity's public key provided securely to the certificate issuer?
4. If the entity is a CA (issuing or subject), how is the entity's public key provided securely to the users?
5. What are the key sizes?
6. Who generates the public key parameters?
7. Is the quality of the parameters checked during key generation?
8. Is the key generation performed in hardware or in software?
9. For what purposes may the key be used, or for what purposes should usage of the key be restricted (for X.509 certificates, these purposes should map to the key usage flags in Version 3 X.509 certificates)?
• Private Key Protection
Requirements for private key protection need to be considered for the issuing CA, repositories, subject CAs, RAs, and subject end entities. For each of these types of entity,
the following questions potentially need to be answered:
1. Which standards, if any, are required for the module used to generate the keys? For example, are the keys certified by the infrastructure required to be generated using modules compliant with US FIPS 140-1? If so, what is the required FIPS 140-1 level of the module?
2. Is the private key under n out of m multi-person control? (18) If yes, provide n and m (two-person control is a special case of n out of m, where n = m = 2).
3. Is the private key escrowed? (19) If so, who is the escrow agent, in what form is the key escrowed (examples include plaintext, encrypted, split key), and what are the security controls on the escrow system?
4. Is the private key backed up? If so, who is the backup agent, in what form is the key backed up (examples include plaintext, encrypted, split key), and what are the security controls on the backup system?
5. Is the private key archived? If so, who is the archival agent, in what form is the key archived (examples include plaintext, encrypted, split key), and what are the security controls on the archival system?
6. Who enters the private key in the cryptographic module? In what form (i.e., plaintext, encrypted, or split key)? How is the private key stored in the module (i.e., plaintext, encrypted, or split key)?
7. Who can activate (use) the private key? What actions must be performed to activate the private key (e.g., login, power on, supply PIN, insert token/key, automatic)? Once the key is activated, is it active for an indefinite period, active for one time, or active for a defined time period?
8. Who can deactivate the private key and how? Examples include logout, power off, removal of the token/key, automatic deactivation, or time expiration.
9. Who can destroy the private key and how? Examples include token surrender, token destruction, or key overwrite.
• Other Aspects of Key Pair Management
Other aspects of key management need to be considered for the issuing CA, repositories, subject CAs, RAs, and subject end entities. For each of these types of entity, the following questions potentially need to be answered:
1. Is the public key archived? If so, who is the archival agent and what are the security controls on the archival system? The archival system should provide integrity controls other than digital signatures, since the archival period may be greater than the cryptanalysis period for the key and the archive requires tamper protection, which is not provided by digital signatures.
2. What are the usage periods, or active lifetimes, for the public and the private key respectively?
• Activation Data
Activation data refers to data values, other than keys, that are required to operate cryptographic modules and that need to be protected. (20) Protection of activation data potentially needs to be considered for the issuing CA, subject CAs, RAs, and end entities. Such consideration potentially needs to address the entire life-cycle of the activation data, from generation through archival and destruction. For each of the entity types (issuing CA, repository, subject CA, RA, and end entity), the questions listed under Key Pair Generation and Installation, Private Key Protection and Other Aspects of Key Pair Management above potentially need to be answered with respect to activation data rather than with respect to keys.
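As a minimal illustration of some of the questions above (software key generation, key size, and protection of the private key by activation data), the following Java sketch generates a key pair and stores the private key in a password-protected PKCS#12 keystore. The algorithm, key size, alias, file names and password are placeholders chosen for the example; a policy of the Commercial-Grade kind described earlier would typically require generation and storage in a hardware token instead.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

/*
 * Minimal sketch: software key pair generation and private key protection
 * under activation data (a password). File names, alias, algorithm, key
 * size and password are hypothetical placeholders.
 */
public class KeyProtectionSketch {
    public static void main(String[] args) throws Exception {
        // Questions 1 and 8: the subscriber generates the key pair, in software.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);                   // question 5: key size
        KeyPair keyPair = generator.generateKeyPair();

        // In a real enrolment the public key is sent to the CA, which returns
        // a certificate for it; here that issued certificate is read from a file.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate issued;
        try (FileInputStream in = new FileInputStream("issued-cert.cer")) {
            issued = cf.generateCertificate(in);
        }

        // Question 7: activation data (a password/PIN) guards use of the key.
        char[] activationData = "change-me".toCharArray();

        KeyStore store = KeyStore.getInstance("PKCS12");
        store.load(null, null);                       // start with an empty keystore
        store.setKeyEntry("subscriber-key", keyPair.getPrivate(),
                activationData, new Certificate[] { issued });

        try (FileOutputStream out = new FileOutputStream("subscriber.p12")) {
            store.store(out, activationData);         // private key protected at rest
        }
    }
}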

• Computer Security Controls
This sub-component is used to describe computer security controls such as: use of the trusted computing base concept, discretionary access control, labels, mandatory access controls, object re-use, audit, identification and authentication, trusted path, security testing, and penetration testing. Product assurance may also be addressed.
A computer security rating for computer systems may be required. The rating could be based, for example, on the Trusted Computer System Evaluation Criteria (TCSEC), the Canadian Trusted Computer Product Evaluation Criteria, the European Information Technology Security Evaluation Criteria (ITSEC), or the Common Criteria. This sub-component can also address requirements for product evaluation analysis, testing, profiling, product certification, and/or product accreditation activities undertaken.
• Life Cycle Technical Controls
This sub-component addresses system development controls and security management controls.
System development controls include development environment security, development personnel security, configuration management security during product maintenance, software engineering practices, software development methodology, modularity, layering, use of fail-safe design and implementation techniques (e.g., defensive programming) and development facility security.
Security management controls include the execution of tools and procedures to ensure that the operational systems and networks adhere to the configured security. These tools and procedures include checking the integrity of the security software, firmware, and hardware to ensure their correct operation.
This sub-component can also address life-cycle security ratings based, for example, on the Trusted Software Development Methodology (TSDM) levels IV and V, independent life-cycle security controls audit, and the Software Engineering Institute's Capability Maturity Model (SEI-CMM).
• Network Security Controls
This sub-component addresses network security related controls, including firewalls.
• Cryptographic Module Engineering Controls
This sub-component addresses the following aspects of a cryptographic module: identification of the cryptographic module boundary, input/output, roles and services, finite state machine, physical security, software security, operating system security, algorithm compliance, electromagnetic compatibility, and self-tests. Requirements may be expressed through reference to a standard such as US FIPS 140-1.
4.2.6.3.8 Certificate and CRL profiles
This component is used to specify the certificate format and, if CRLs are used, the CRL format. Assuming use of the X.509 certificate and CRL formats, this includes information on the profiles, versions, and extensions used.
• Certificate Profile
This sub-component addresses such topics as the following (potentially by reference to a separate profile definition, such as the PKIX Part 1 profile):
- Version number(s) supported;
- Certificate extensions populated and their criticality;
- Cryptographic algorithm object identifiers;
- Name forms used for the CA, RA, and end entity names;
- Name constraints used and the name forms used in the name constraints;
- Applicable certificate policy Object Identifier(s);
- Usage of the policy constraints extension;
- Policy qualifiers syntax and semantics; and
- Processing semantics for the critical certificate policy extension.
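A relying application or an auditor checking conformance to a certificate profile can inspect most of the fields listed above programmatically. The following Java sketch is a minimal illustration only; the file name is a placeholder, and full decoding of the Certificate Policies extension value would require an ASN.1 decoder.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

/*
 * Minimal sketch: inspect profile-related fields of a certificate
 * (version, signature algorithm OID, populated extensions and their
 * criticality). The certificate file name is a hypothetical placeholder.
 */
public class CertificateProfileSketch {
    // Object identifier of the Certificate Policies extension (id-ce 32).
    private static final String CERTIFICATE_POLICIES_OID = "2.5.29.32";

    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream("some-cert.cer")) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        System.out.println("Version: " + cert.getVersion());
        System.out.println("Signature algorithm OID: " + cert.getSigAlgOID());
        System.out.println("Critical extensions: " + cert.getCriticalExtensionOIDs());
        System.out.println("Non-critical extensions: " + cert.getNonCriticalExtensionOIDs());

        // DER-encoded value of the Certificate Policies extension, if present.
        byte[] policies = cert.getExtensionValue(CERTIFICATE_POLICIES_OID);
        System.out.println("Certificate Policies extension present: " + (policies != null));
    }
}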

• CRL Profile
This sub-component addresses such topics as the following (potentially by reference to a separate profile definition, such as the PKIX Part 1 profile):
- Version numbers supported for CRLs; and
- CRL and CRL entry extensions populated and their criticality.
• Specification Administration
This component is used to specify how this particular certificate policy definition or CPS will be maintained.
• Specification change procedures
It will occasionally be necessary to change certificate policies and Certification Practice Statements. Some of these changes will not materially reduce the assurance that a certificate policy or its implementation provides, and will be judged by the policy administrator as not changing the acceptability of certificates asserting the policy for the purposes for which they have been used. Such changes to certificate policies and Certification Practice Statements need not require a change in the certificate policy Object Identifier or the CPS pointer (URL). Other changes to a specification will change the acceptability of certificates for specific purposes, and these changes will require changes to the certificate policy Object Identifier or CPS pointer (URL).
This sub-component contains the following information:
- A list of specification components, sub-components, and/or elements thereof that can be changed without notification and without changes to the certificate policy Object Identifier or CPS pointer (URL).
- A list of specification components, sub-components, and/or elements thereof that may change following a notification period without changing the certificate policy Object Identifier or CPS pointer (URL). The procedures used to notify interested parties (relying parties, certification authorities, etc.) of the certificate policy or CPS changes are described. The description of notification procedures includes the notification mechanism, the notification period for comments, the mechanism to receive, review and incorporate the comments, the mechanism for final changes to the policy, and the period before final changes become effective.
- A list of specification components, sub-components, and/or elements, changes to which require a change in the certificate policy Object Identifier or CPS pointer (URL).
• Publication and notification policies
This sub-component contains the following elements:
Page 184 of 214

Security Subgroup

-

Security Guidelines for Inter-Regional Applications

20 October 1999

- A list of components, sub-components, and elements thereof that exist but that are not made publicly available;
- Descriptions of the mechanisms used to distribute the certificate policy definition or CPS, including access controls on such distribution.

• CPS approval procedures
In a certificate policy definition, this sub-component describes how the compliance of a specific CPS with the certificate policy can be determined.
4.2.6.4 Legal Issues: digital signatures and TTPs
Public-key cryptography-based products are mature enough to enable network administrators to meet both business and security needs; PKI is ready for controlled corporate environments such as intranets and exchanges with selected, known business partners. However, there are still many political hurdles and legal issues dealing with cross-certification, the legal standing of digital signatures, and state and international requirements, all of which limit PKI for larger open environments, at least for the time being. Open issues include:
• Directives and common rules to assign legal validity to electronic documents;
• Specification of the procedures and transactions necessary to guarantee trust in the electronic certification of personal identities;
• Regional models (schemata) of Certification Authorities, hierarchies and cross-certifications with other regions (i.e. which conditions are necessary so that the certification of identity of a citizen in Lombardia can be recognised in Catalunya, and vice versa);
• Certification of the security provided in equipment, software, installations and Certification Authorities.


5 Security evaluation
This chapter has been adapted from the Common Criteria project. The "Common Criteria" work is an international initiative by BSI (Germany), CESG (UK), CSE (Canada), NIST (USA), NLNCSA (Netherlands), NSA (USA) and SCSSI (France) to combine the best aspects of the existing criteria for the security evaluation of information technology systems and products: the US TCSEC (Orange Book, 1985), ITSEC (Europe, 1991) and CTCPEC (Canada). Version 1.0 was published in January 1996 and version 2.0 in May 1998. A release CC/FDIS was submitted to ISO (Final Draft IS 15408) for standardisation in November 1998.
See: http://csrc.nist.gov/cc and http://www.cesg.gov.uk/cchtml

5.1 Introduction
Information Technology (IT) has come to play an important, and often vital, role in almost all sectors of organised societies. As a consequence, security has become an essential aspect of Information Technology. In this context, IT security means: confidentiality (prevention of the unauthorised disclosure of information), integrity (prevention of the unauthorised modification of information) and availability (prevention of the unauthorised withholding of information or resources).
An IT system or product will have its own requirements for the maintenance of confidentiality, integrity and availability. In order to meet these requirements it will implement a number of technical security measures (for example access control, auditing, and error recovery). Appropriate confidence in these functions is needed: this is referred to as assurance, whether it is confidence in the correctness of the security-enforcing functions (from both the development and the operational points of view) or confidence in the effectiveness of those functions.
Users of systems need confidence in the security of the systems they are using. They also need a yardstick to compare the security capabilities of IT products they are thinking of purchasing. Although users could rely upon the word of the manufacturers or vendors of the systems and products in question, or could test them themselves, many users will prefer to rely on the results of some form of impartial assessment by an independent body. Such an evaluation of a system or product requires objective and well-defined security evaluation criteria and the existence of a certification body that can confirm that the evaluation has been properly conducted.
System security targets will be specific to the particular needs of the users of the system in question, whereas product security targets will be more general, so that products that meet them can be incorporated into many systems (with similar but not necessarily identical security requirements). For a system, an evaluation of its security capabilities can be viewed as part of a more formal procedure for accepting an IT system for use within a particular environment: the accreditation.
An important and precursory work on security evaluation was the Trusted Computer System Evaluation Criteria [TCSEC], commonly known as the "Orange Book", published in 1985 and used for product evaluation by the US Department of Defense. Other countries, mostly European, also have significant experience in IT security evaluation and have developed their own IT security criteria: the "Green Book" [DTIEC]
in the UK, [ZSIEC] in Germany and the "Blue-White-Red Book" [SCSSI] in France.
In May 1990, France, Germany, the Netherlands and the United Kingdom recognised that this work needed to be approached in a concerted way, and that common, harmonised IT security criteria, the ITSEC (Information Technology Security Evaluation Criteria), should be put forward. Maximum applicability and compatibility with existing work, most notably the US TCSEC, was a constant consideration in this process. Though it was initially felt that the work would be limited to harmonisation of existing criteria, it has sometimes been necessary to extend what already existed. One reason for producing these internationally harmonised criteria is to provide a compatible basis for certification by the national certification bodies within the four co-operating countries, with an eventual objective of permitting international mutual recognition of evaluation results. The Information Technology Security Evaluation Manual (ITSEM) completed this work in September 1993.
See: http://www.cordis.lu/infosec/src/crit.htm
The Common Criteria (CC), occasionally (and somewhat incorrectly) referred to as the Harmonised Criteria, is a multinational effort to write a successor to the TCSEC and ITSEC that combines the best aspects of both. An initial version (V 1.0) was released in January 1996. The CC has a structure closer to the ITSEC than to the TCSEC and includes the concept of a "profile" to collect requirements into easily specified and compared sets. Version 2.0 takes account of extensive review and trials in 1996 and 1997 and has been available since May 1998. The CC presents requirements for the IT security of a product or system under the distinct categories of functional requirements and assurance requirements. This chapter is based on the Common Criteria.
For document downloads, see:
http://csrc.nist.gov/cc/ccv20/ccv2list.htm
http://www.cesg.gov.uk/cchtml

5.2 CC Key concepts
5.2.1 Functional requirements
Functional requirements define the desired security behaviour. Assurance requirements are the basis for gaining confidence that the claimed security measures are effective and implemented correctly. Confidence in IT security can be gained through actions that may be taken during the processes of development, evaluation and operation.
5.2.2 Target of Evaluation
Target of Evaluation (TOE) is the term used to refer to a product or system to be evaluated. A TOE can be constructed from several components. Some components will contribute to satisfying the security objectives of the TOE; these components are called security enforcing. There may also be some components that are not security enforcing but must nonetheless operate correctly for the TOE to enforce security; these are called security relevant. The combination of the security enforcing components and the security relevant components of a TOE is often referred to as a Trusted Computing Base (TCB).
5.2.3 Protection profile
A Protection Profile (PP) defines an implementation-independent set of security requirements and objectives for a category of products or systems which meet similar consumer needs for IT security. It is intended to be reusable. Protection Profiles are under development for firewalls, relational databases, etc., and to enable backward compatibility with the TCSEC or ITSEC.


5.2.4 Security Target
A Security Target (ST) contains the IT security objectives and requirements of a specific identified TOE (Target of Evaluation) and defines the functional and assurance measures offered by that TOE to meet the stated requirements. The Security Target may claim conformance to one or more Protection Profiles and forms the basis for an evaluation.
5.2.5 Components
Security functional components are used to express a wide range of security functional requirements within Protection Profiles and Security Targets. Families are groups of components that share security objectives (e.g. Security Audit Trail Protection). Classes are groups of families that share a common intent (e.g. Audit). The Common Criteria defines a set of constructs which classify security components into related sets:
• Component Operations
Each component identifies and defines any permitted operations, the circumstances under which they may be applied and the result of the application. Permitted operations are: assignment, selection and refinement.
• Component Dependencies
Dependencies may exist between components. They arise when a component is not self-sufficient and relies upon the presence of another component. Dependencies may exist between functional components, between assurance components and, rarely, between functional and assurance components.
• Component Naming Convention
In defining the security requirements for a system, the user/developer needs to consider the threats to the IT environment. The CC contains a pool of components that the developers of PPs and STs can collate to form the security requirements definition of a trusted product or system. The organisation of these components into a hierarchy helps the user to locate the right components to combat threats. The class name is 3 characters in length (e.g. Audit = FAU). The prefix "F" indicates a functional class and the prefix "A" an assurance class. Families within each class are named by the addition of an underscore and a further 3 characters (e.g. Audit Automatic Response = FAU_ARP). Components within families are numbered (e.g. FAU_ARP.1), and individual elements within a component add a further number (e.g. FAU_ARP.1.1).
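As an illustration of this naming convention only, the following Java sketch decomposes a requirement identifier of the form class_family.component.element; the identifier shown merely exemplifies the syntax and is not a reference to a particular catalogue entry.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

/*
 * Minimal sketch: parse a CC-style requirement identifier into its class,
 * family, component and element parts.
 */
public class CcNameSketch {
    private static final Pattern NAME =
            Pattern.compile("([FA][A-Z]{2})_([A-Z]{3})\\.(\\d+)(?:\\.(\\d+))?");

    public static void main(String[] args) {
        Matcher m = NAME.matcher("FAU_ARP.1.1");
        if (m.matches()) {
            String kind = m.group(1).startsWith("F") ? "functional" : "assurance";
            System.out.println("Class:     " + m.group(1) + " (" + kind + ")");
            System.out.println("Family:    " + m.group(1) + "_" + m.group(2));
            System.out.println("Component: " + m.group(3));
            System.out.println("Element:   " + (m.group(4) != null ? m.group(4) : "(none)"));
        }
    }
}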

5.3 Security functionality
Note: TSF = TOE security functions.
5.3.1 Functionality classes (11)
5.3.1.1 Security audit (FAU)
Security auditing involves recognising, recording, storing and analysing information related to security-relevant activities. Audit records can be examined to determine their security relevance. This class is made up of 6 families:
- Security audit automatic response (FAU_ARP)

[Figure: decomposition of the Security Audit (FAU) class into families and numbered components, e.g. FAU_ARP Automatic Response.]

- Security audit data generation (FAU_GEN)
- Security audit analysis (FAU_SAA)
- Security audit review (FAU_SAR)
- Security audit event selection (FAU_SEL)
- Security audit event storage (FAU_STG)

5.3.1.2 Communications (FCO)
The communications class provides 2 families concerned with non-repudiation by the originator and by the recipient of data:
- Non-repudiation of origin (FCO_NRO)
- Non-repudiation of receipt (FCO_NRR)
5.3.1.3 Cryptographic support (FCS)
The TSF may employ cryptographic functionality to help satisfy several high-level security objectives. These include (but are not limited to): identification, authentication, non-repudiation, trusted path, trusted channel and data separation. Implementation of the cryptographic functions could be in hardware, firmware and/or software. This class is made up of 2 families:
- Cryptographic key management (FCS_CKM)
- Cryptographic operation (FCS_COP)
5.3.1.4 User data protection (FDP)
These 12 families specify requirements relating to the protection of user data within the TOE during import, export and storage, in addition to the related security attributes:
- Access control policy (FDP_ACC)
- Access control functions (FDP_ACF)
- Data authentication (FDP_DAU)
- Export to outside TSF control (FDP_ETC)
- Information flow control policy (FDP_IFC)
- Information flow control functions (FDP_IFF)
- Internal TOE transfer (FDP_ITT)
- Residual information protection (FDP_RIP)
- Rollback (FDP_ROL)
- Stored data integrity (FDP_SDI)
- Inter-TSF user data confidentiality transfer protection (FDP_UCT)
- Inter-TSF user data integrity transfer protection (FDP_UIT)
5.3.1.5 Identification and authentication (FIA)
The requirements for identification and authentication ensure the unambiguous identification of authorised users and the correct association of security attributes with users and subjects. Families in this class deal with determining and verifying user identity, determining the user's authority to interact with the TOE, and with the correct association of security attributes with the authorised user:
- Authentication failures (FIA_AFL)
- User attribute definition (FIA_ATD)
- Specification of secrets (FIA_SOS)
- User authentication (FIA_UAU)
- User identification (FIA_UID)
- User-subject binding (FIA_USB)

5.3.1.6 Security management (FMT)
This class is intended to specify the management of several aspects of the TSF: security attributes, TSF data and functions. The different management roles and their interaction, such as separation of capability, can be specified. This class is made up of 6 families:
- Management of functions in TSF (FMT_MOF)
- Management of security attributes (FMT_MSA)
- Management of TSF data (FMT_MTD)
- Revocation (FMT_REV)
- Security attribute expiration (FMT_SAE)
- Security management roles (FMT_SMR)
5.3.1.7 Privacy (FPR)
Privacy requirements provide a user with protection against discovery and misuse of his identity by other users. The 4 families in this class are concerned with anonymity, pseudonymity, unlinkability and unobservability:
- Anonymity (FPR_ANO)
- Pseudonymity (FPR_PSE)
- Unlinkability (FPR_UNL)
- Unobservability (FPR_UNO)
5.3.1.8 Protection of the TOE Security Functions (FPT)
This class focuses on the protection of TSF data, rather than of user data. It relates to the integrity and management of the TSF mechanisms and data. It is made up of 15 families:
- Underlying abstract machine test (FPT_AMT)
- Fail secure (FPT_FLS)
- Availability of exported TSF data (FPT_ITA)
- Integrity of exported TSF data (FPT_ITI)
- Internal TOE TSF data transfer (FPT_ITT)
- TSF physical protection (FPT_PHP)
- Trusted recovery (FPT_RCV)
- Replay detection (FPT_RPL)
- Reference mediation (FPT_RVM)
- Domain separation (FPT_SEP)
- State synchrony protocol (FPT_SSP)
- Time stamps (FPT_STM)
- Inter-TSF TSF data consistency (FPT_TDC)
- Internal TOE TSF data replication consistency (FPT_TRC)
- TSF self test (FPT_TST)

5.3.1.9 Resource utilisation (FRU)
Resource utilisation provides 3 families that support the availability of required resources, such as processing capability and storage capacity:
- Fault tolerance (FRU_FLT)
- Priority of service (FRU_PRS)
- Resource allocation (FRU_RSA)
5.3.1.10 TOE Access (FTA)
This class specifies functional requirements, in addition to those specified for identification and authentication, for controlling the establishment of a user's session. The requirements for TOE access govern such things as limiting the number and scope of user sessions, displaying the access history and the modification of access parameters. This class is made up of 6 families:
- Limitation on scope of selectable attributes (FTA_LSA)
- Limitation on multiple concurrent sessions (FTA_MCS)
- Session locking (FTA_SSL)
- TOE access banners (FTA_TAB)
- TOE access history (FTA_TAH)
- TOE session establishment (FTA_TSE)
5.3.1.11 Trusted path/channels (FTP)
This class is concerned with trusted communication paths between the users and the TSF, and between TSFs. Trusted paths are constructed from trusted channels, which exist for inter-TSF communications; this provides a means for users to perform functions through a direct interaction with the TSF. The user or the TSF can initiate the exchange, which is guaranteed to be protected from modification by untrusted applications. This class is made up of only 2 families:
- Inter-TSF trusted channel (FTP_ITC)
- Trusted path (FTP_TRP)
5.3.2 Assurance classes
The CC philosophy is to provide assurance based upon an evaluation (active investigation) of the IT product or system that is to be trusted. The CC proposes measuring the validity of the documentation and of the resulting IT product or system by expert evaluation, with increasing emphasis on scope, depth, and rigour. Active investigation is an evaluation of an IT product or system in order to determine its security properties. Classes and families are used to provide a taxonomy for classifying assurance requirements, while components are used to specify assurance requirements in a Protection Profile or a Security Target. Seven assurance classes and two evaluation assurance classes are defined:

5.3.2.1 Configuration Management (ACM)
Configuration management requires that the integrity of the TOE is adequately preserved. Specifically, configuration management provides confidence that the TOE and the documentation used for evaluation are the ones prepared for distribution. The families in this class are concerned with the capabilities of the configuration management, its scope and its automation:
- CM automation (ACM_AUT)
- CM capabilities (ACM_CAP)
- CM scope (ACM_SCP)
5.3.2.2 Delivery and Operation (ADO)
This class provides 2 families concerned with the measures, procedures and standards for secure delivery, installation and operational use of the TOE, to ensure that the security protection offered by the TOE is not compromised during these events:
- Delivery (ADO_DEL)
- Installation, generation and start-up (ADO_IGS)
5.3.2.3 Development (ADV)
The 7 families of this class are concerned with the refinement of the TSF from the specification defined in the Security Target down to the implementation, and with the mapping from the requirements to the lowest level representation:
- Functional specification (ADV_FSP)
- High-level design (ADV_HLD)
- Implementation representation (ADV_IMP)
- TSF internals (ADV_INT)
- Low-level design (ADV_LLD)
- Representation correspondence (ADV_RCR)
- Security policy modelling (ADV_SPM)
Guidance Documents (AGD)
Guidance documents are concerned with the secure operational use of the TOE by the users and administrators. This class comprises 2 families:
- Administrator guidance (AGD_ADM)
- User guidance (AGD_USR)
5.3.2.4 Life Cycle Support (ALC)
The requirements of the ALC families include life-cycle definition, tools and techniques, the developer's security and the remediation of flaws found by TOE consumers. This class is made up of 4 families:
- Development security (ALC_DVS)
- Flaw remediation (ALC_FLR)
- Life cycle definition (ALC_LCD)
- Tools and techniques (ALC_TAT)
5.3.2.5 Maintenance of Assurance (AMA)
The maintenance of assurance class provides requirements that are intended to be applied

The maintenance of assurance class provides requirements that are intended to be applied after a TOE has been certified against the CC. These requirements are aimed at assuring that the TOE will continue to meet its security target as changes are made to the TOE or its environment. Such changes include the discovery of new threats or vulnerabilities, changes in user requirements, and the correction of bugs found in the certified TOE. This class comprises 4 families:
- Assurance maintenance plan (AMA_AMP)
- TOE components categorisation report (AMA_CAT)
- Evidence of assurance maintenance (AMA_EVD)
- Security impact analysis (AMA_SIA)

5.3.2.6 Tests (ATE)
This class is concerned with demonstrating that the TOE meets its functional requirements. The 4 families address issues of coverage, depth, functional tests and independent testing:
- Coverage (ATE_COV)
- Depth (ATE_DPT)
- Functional tests (ATE_FUN)
- Independent testing (ATE_IND)

5.3.2.7 Vulnerability Assessment (AVA)
This class defines requirements directed at the identification of exploitable vulnerabilities, which could be introduced by construction, operation, misuse or incorrect configuration of the TOE. Its families are concerned with identifying vulnerabilities through covert channel analysis, analysis of the configuration of the TOE, examining the strength of mechanisms of the security functions, and identifying flaws introduced during development of the TOE. This class is made up of 4 families:
- Covert channel analysis (AVA_CCA)
- Misuse (AVA_MSU)
- Strength of TOE security functions (AVA_SOF)
- Vulnerability analysis (AVA_VLA)

5.3.3 Evaluation assurance classes
Two assurance classes are provided for the evaluation of Protection Profiles and Security Targets. All of the requirements in the relevant class need to be applied for a Protection Profile or Security Target evaluation. The criteria need to be applied in order to determine whether the Protection Profile or the Security Target is a meaningful basis for a TOE evaluation.

5.3.3.1 Protection Profile Evaluation (APE)
The goal here is to demonstrate that the Protection Profile is complete, consistent and technically valid. Further, the Protection Profile needs to be a statement of the requirements for an evaluatable TOE. An evaluated Protection Profile is suitable for use as the basis for the development of Security Targets. The families in this class are concerned with the Security Environment, the Security Objectives and the TOE Security Requirements. This class is made up of 6 families:


- TOE description (APE_DES)
- Security environment (APE_ENV)
- Protection Profile introduction (APE_INT)
- Security objectives (APE_OBJ)
- IT security requirements (APE_REQ)
- Explicitly stated IT security requirements (APE_SRE)

5.3.3.2 Security Target Evaluation (ASE)
The goal here is to demonstrate that the Security Target is complete, consistent and technically valid, and is a suitable basis for the TOE evaluation. The families in this class are concerned with the Security Environment, the Security Objectives, any Protection Profile Claims, the TOE Security Requirements and the TOE Summary Specification. This class comprises 8 families:
- TOE description (ASE_DES)
- Security environment (ASE_ENV)
- Security Target introduction (ASE_INT)
- Security objectives (ASE_OBJ)
- Protection Profile claims (ASE_PPC)
- IT security requirements (ASE_REQ)
- Explicitly stated IT security requirements (ASE_SRE)
- TOE summary specification (ASE_TSS)

5.3.4 Inter-class dependencies
AMA internal dependencies are shown in the table below; each entry names the AMA component on which the component in the left column depends (direct dependencies in bold in the original, indirect ones in italics):

AMA Component    Depends on
AMA_AMP.1        AMA_CAT.1
AMA_CAT.1        -
AMA_EVD.1        AMA_AMP.1 (direct), AMA_CAT.1 (indirect)
AMA_SIA.1-2      AMA_CAT.1

In the next table, the left column represents groupings of specific components (using only the last three digits of the component name and an indicator of component number or range of numbers). Each non-empty box in the table indicates a specific component, identified by its name at the top of the column and the number in the box, on which the component in the left column is dependent. Bold numbers represent direct dependencies. Italicised numbers represent indirect dependencies. Dependencies from AMA components to assurance components are included in this table.


[Dependency matrix: each row lists an assurance component, from ACM_AUT.1-2 through AMA_SIA.1-2, and marks the functional-specification, design, guidance, life-cycle, test and vulnerability-analysis components on which it depends, with bold numbers for direct dependencies and italicised numbers for indirect dependencies, as explained above.]
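The practical use of such a dependency matrix is mechanical: an assurance package is only well-formed if it contains, directly or transitively, every component its members depend on. The following minimal sketch illustrates that check; it is not part of the CC, and the dependency map it encodes is a small illustrative excerpt rather than the full matrix.

```python
# Minimal sketch: checking that an assurance package satisfies component
# dependencies. Direct dependencies are declared in the map; indirect ones
# follow transitively. The map below is a small illustrative excerpt only.

DEPENDS_ON = {
    "ADV_LLD.1": ["ADV_HLD.2"],
    "ADV_HLD.2": ["ADV_FSP.1"],
    "ADV_FSP.1": [],
    "AMA_EVD.1": ["AMA_AMP.1"],
    "AMA_AMP.1": ["AMA_CAT.1"],
    "AMA_CAT.1": [],
}

def transitive_dependencies(component):
    """Return the set of direct and indirect dependencies of a component."""
    closure, stack = set(), [component]
    while stack:
        for dep in DEPENDS_ON.get(stack.pop(), []):
            if dep not in closure:
                closure.add(dep)
                stack.append(dep)
    return closure

def missing_dependencies(package):
    """List (component, missing dependency) pairs for an assurance package."""
    package = set(package)
    return [(c, d) for c in package
            for d in sorted(transitive_dependencies(c)) if d not in package]

if __name__ == "__main__":
    claimed = ["ADV_LLD.1", "ADV_HLD.2"]          # ADV_FSP.1 is missing
    for comp, dep in missing_dependencies(claimed):
        print(f"{comp} requires {dep}, which is not in the package")
```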


5.4 Security assurance
The CC contains a set of defined assurance levels constructed using components from the assurance families. These levels are intended partly to provide backward compatibility with the source criteria and to provide internally consistent, general-purpose assurance packages. Other groups of components are not excluded: to meet specific objectives, an assurance level can be augmented by one or more additional components. Assurance levels define a scale for measuring the criteria for the evaluation of Protection Profiles and Security Targets. Evaluation Assurance Levels (EAL) are constructed from the assurance components. Every assurance family (except ALC_FLR) contributes directly to the assurance that a TOE meets its security claims. Evaluation Assurance Levels provide a uniformly increasing scale that balances the level of assurance obtained with the cost and feasibility of acquiring that degree of assurance. The increase in assurance across the levels is accomplished by substituting hierarchically higher assurance components from the same assurance family, and by the addition of assurance components from other assurance families.

5.4.1 Evaluation assurance level (Security level)

CC level                                           TCSEC level                             ITSEC level
EAL0                                               D: Minimal protection                   E0
EAL1  Functionally tested                          -                                       -
EAL2  Structurally tested                          C1: Discretionary Security Protection   F-C1,E1
EAL3  Methodically tested and checked              C2: Controlled access protection        F-C2,E2
EAL4  Methodically designed, tested and reviewed   B1: Labeled Security protection         F-B1,E3
EAL5  Semiformally designed and tested             B2: Structured protection               F-B2,E4
EAL6  Semiformally verified design and tested      B3: Security Domains                    F-B3,E5
EAL7  Formally verified design and tested          A1: Verified design                     F-A1,E6

The table below describes the relationship between the evaluation assurance levels and the assurance classes, families and components.


Assurance Components by Evaluation Assurance Level

Assurance Class            Assurance family   EAL1  EAL2  EAL3  EAL4  EAL5  EAL6  EAL7
Configuration Management   ACM_AUT             -     -     -     1     1     2     2
                           ACM_CAP             1     2     3     4     4     5     5
                           ACM_SCP             -     -     1     2     3     3     3
Delivery and Operation     ADO_DEL             -     1     1     2     2     2     3
                           ADO_IGS             1     1     1     1     1     1     1
Development                ADV_FSP             1     1     1     2     3     3     4
                           ADV_HLD             -     1     2     2     3     4     5
                           ADV_IMP             -     -     -     1     2     3     3
                           ADV_INT             -     -     -     -     1     2     3
                           ADV_LLD             -     -     -     1     1     2     2
                           ADV_RCR             1     1     1     1     2     2     3
                           ADV_SPM             -     -     -     1     3     3     3
Guidance documents         AGD_ADM             1     1     1     1     1     1     1
                           AGD_USR             1     1     1     1     1     1     1
Life cycle support         ALC_DVS             -     -     1     1     1     2     2
                           ALC_FLR             -     -     -     -     -     -     -
                           ALC_LCD             -     -     -     1     2     2     3
                           ALC_TAT             -     -     -     1     2     3     3
Tests                      ATE_COV             -     1     2     2     2     3     3
                           ATE_DPT             -     -     1     1     2     2     3
                           ATE_FUN             -     1     1     1     1     2     2
                           ATE_IND             1     2     2     2     2     2     3
Vulnerability assessment   AVA_CCA             -     -     -     -     1     2     2
                           AVA_MSU             -     -     1     2     2     3     3
                           AVA_SOF             -     1     1     1     1     1     1
                           AVA_VLA             -     1     1     2     3     4     4
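Read column-wise, each EAL in the table is simply a package of assurance components; augmentation adds further components or substitutes hierarchically higher ones from the same family. The sketch below is an illustration only: it encodes just the EAL2 column from the table above and assumes, for simplicity, that components within a family are hierarchical, in order to check a claimed set of components against the baseline.

```python
# Minimal sketch: treating an EAL as a package of assurance components and
# checking whether a claimed package meets it (possibly with augmentation).
# Only the EAL2 baseline from the table above is encoded here.

EAL2 = {
    "ACM_CAP": 2, "ADO_DEL": 1, "ADO_IGS": 1,
    "ADV_FSP": 1, "ADV_HLD": 1, "ADV_RCR": 1,
    "AGD_ADM": 1, "AGD_USR": 1,
    "ATE_COV": 1, "ATE_FUN": 1, "ATE_IND": 2,
    "AVA_SOF": 1, "AVA_VLA": 1,
}

def parse(component):
    """Split e.g. 'ATE_IND.2' into ('ATE_IND', 2)."""
    family, level = component.split(".")
    return family, int(level)

def meets(claimed, baseline=EAL2):
    """True if every baseline family is present at an equal or higher level.

    Components are assumed hierarchical within a family (a higher number
    subsumes a lower one); extra families count as augmentation.
    """
    levels = {}
    for c in claimed:
        family, level = parse(c)
        levels[family] = max(level, levels.get(family, 0))
    return all(levels.get(f, 0) >= lvl for f, lvl in baseline.items())

if __name__ == "__main__":
    # EAL2 augmented with flaw remediation and a deeper vulnerability analysis.
    st_components = ["ACM_CAP.2", "ADO_DEL.1", "ADO_IGS.1", "ADV_FSP.1",
                     "ADV_HLD.1", "ADV_RCR.1", "AGD_ADM.1", "AGD_USR.1",
                     "ATE_COV.1", "ATE_FUN.1", "ATE_IND.2", "AVA_SOF.1",
                     "AVA_VLA.2", "ALC_FLR.1"]
    print("Meets EAL2 (augmented):", meets(st_components))
```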

5.4.2 EAL1 – Functionally tested
EAL1 is the lowest assurance level for which evaluation is meaningful and economically justified. EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. It is intended that an EAL1 evaluation could be successfully conducted without assistance from the developer of the TOE, and for minimal outlay.


An EAL1 evaluation provides analysis of the security functions, using a functional and interface specification of the TOE, to understand the TOE’s security behaviour.

Assurance class            Assurance components
Configuration management   ACM_CAP.1 Version numbers
Delivery and operation     ADO_IGS.1 Installation, generation, and start-up procedures
Development                ADV_FSP.1 Informal functional specification
                           ADV_RCR.1 Informal correspondence demonstration
Guidance documents         AGD_ADM.1 Administrator guidance
                           AGD_USR.1 User guidance
Tests                      ATE_IND.1 Independent testing - conformance

5.4.3 EAL2 – Structurally tested
EAL2 is the highest assurance level that can be reached without imposing more than minimal additional tasks on the developer. If the developer applies reasonable standards of care, EAL2 may be feasible with no developer involvement other than support for security functional testing. EAL2 is therefore applicable in those circumstances where developers or users require a low to moderate level of independently assured security in the absence of ready availability of the complete development record. Such a situation may arise when securing legacy systems, or where access to the developer may be limited.
An EAL2 evaluation provides analysis of the TOE security functions, using its functional and interface specification as well as the high-level design of the subsystems of the TOE. Independent testing of the security functions is performed, the evaluators review the developer’s evidence of “black box” testing, and a search for obvious vulnerabilities is carried out.

Assurance class            Assurance components
Configuration management   ACM_CAP.2 Configuration items
Delivery and operation     ADO_DEL.1 Delivery procedures
                           ADO_IGS.1 Installation, generation, and start-up procedures
Development                ADV_FSP.1 Informal functional specification
                           ADV_HLD.1 Descriptive high-level design
                           ADV_RCR.1 Informal correspondence demonstration
Guidance documents         AGD_ADM.1 Administrator guidance
                           AGD_USR.1 User guidance
Tests                      ATE_COV.1 Evidence of coverage
                           ATE_FUN.1 Functional testing
                           ATE_IND.2 Independent testing - sample
Vulnerability assessment   AVA_SOF.1 Strength of TOE security function evaluation
                           AVA_VLA.1 Developer vulnerability analysis

5.4.4 EAL3 – Methodically tested and checked


EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices. EAL3 is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the TOE and its development without substantial re-engineering. An EAL3 evaluation provides an analysis supported by “grey box” testing, selective independent confirmation of the developer test results, and evidence of a developer search for obvious vulnerabilities. Development environment controls and TOE configuration management are also required.

5.4.5 EAL4 – Methodically designed, tested and reviewed
EAL4 permits a developer to gain maximum assurance from positive security engineering based on good commercial development practices which, though rigorous, do not require substantial specialist knowledge, skills, and other resources. EAL4 is the highest level at which it is likely to be economically feasible to retrofit an existing product line. EAL4 is therefore applicable in those circumstances where developers or users require a moderate to high level of independently assured security in conventional commodity TOEs and are prepared to incur additional security-specific engineering costs.

5.4.6 EAL5 – Semiformally designed and tested
EAL5 permits a developer to gain maximum assurance from security engineering based upon rigorous commercial development practices supported by moderate application of specialist security engineering techniques. Such a TOE will probably be designed and developed with the intent of achieving EAL5 assurance. It is likely that the additional costs attributable to the EAL5 requirements, relative to rigorous development without the application of specialised techniques, will not be large. EAL5 is therefore applicable in those circumstances where developers or users require a high level of independently assured security in a planned development and require a rigorous development approach without incurring unreasonable costs attributable to specialist security engineering techniques.

5.4.7 EAL6 – Semiformally verified design and tested
EAL6 permits developers to gain high assurance from application of security engineering techniques to a rigorous development environment in order to produce a premium TOE for protecting high value assets against significant risks. EAL6 is therefore applicable to the development of security TOEs for application in high-risk situations where the value of the protected assets justifies the additional costs.

5.4.8 EAL7 – Formally verified design and tested
EAL7 is applicable to the development of security TOEs for application in extremely high-risk situations and/or where the high value of the assets justifies the higher costs. Practical application of EAL7 is currently limited to TOEs with tightly focused security functionality that is amenable to extensive formal analysis.

5.5 Approach to evaluation
The evaluation process may be carried out in parallel with or after the development of the TOE.


The major input to evaluation is a Security Target describing the security functions of the TOE, which may reference any Protection Profiles to which conformance is claimed.

PROTECTION PROFILE
- PP Introduction
  - PP identification
  - PP overview
- TOE Description
- TOE Security environment
  - Assumptions
  - Threats
  - Organisational security policies
- Security objectives
  - Security objectives for the TOE
  - Security objectives for the environment
- IT Security requirements
  - TOE Security requirements
    - TOE security functional requirements
    - TOE security assurance requirements
  - Security requirements for the IT environment
- PP application notes
- Rationale
  - Security objectives rationale
  - Security requirements rationale

5.5.1 CC protection profiles
A Protection Profile (PP) defines an implementation-independent set of IT security requirements for a category of TOEs. Such TOEs are intended to meet common consumer needs for IT security. Consumers can therefore construct or cite a PP to express their IT security needs without reference to any specific TOE. A PP should be presented as a user-oriented document that minimises reference to other material that might not be readily available to the PP user.

5.5.2 Protection Profile Content
The TOE description provides context for the evaluation. The information presented in the TOE description will be used in the course of the evaluation to identify inconsistencies. As a PP does not normally refer to a specific implementation, the described TOE features may be assumptions.
The statement of TOE security requirements shall define the functional and assurance security requirements. The assurance requirements consist of an assurance package, which is normally an Evaluation Assurance Level augmented by additional assurance components when necessary. An optional statement can be included to identify the security requirements for the IT environment.


The statement of TOE security environment shall describe the security aspects of the environment in which the TOE is intended to be used and the manner in which it is expected to be employed. This statement shall include the following:
• a description of assumptions
• a description of threats. A threat shall be described in terms of an identified threat agent, the attack, and the asset that is the subject of the attack.
• a description of organisational security policies
If security objectives are derived only from threats and assumptions, then the description of organisational security policies may be omitted. Where the TOE is physically distributed, it may be necessary to discuss the security environmental aspects (assumptions, threats, and organisational security policies) separately for distinct domains of the TOE environment.
The statement of security objectives shall define the security objectives for the TOE and its environment. The security objectives shall address all of the security environment aspects identified and reflect the stated intent. They shall be suitable to counter all identified threats and cover all identified organisational security policies and assumptions. The objectives rationale demonstrates that the security objectives address all of the environmental aspects identified and that the objectives are effective and provide complete coverage. The security objectives rationale shall demonstrate that the stated security objectives are traceable to all of the aspects identified in the TOE security environment and are suitable to cover them. The security requirements rationale shall demonstrate that the set of security requirements (TOE and environment) is suitable to meet and traceable to the security objectives.

5.5.3 CC security targets
A Security Target (ST) contains the IT security requirements of an identified TOE and specifies the functional and assurance security measures offered by that TOE to meet stated requirements. The Security Target for a TOE is a basis for agreement between the developers, evaluators and, where appropriate, consumers on the security properties of the TOE and the scope of the evaluation. The audience for the ST is not confined to those responsible for the production of the TOE and its evaluation, but may also include those responsible for managing, marketing, purchasing, installing, configuring, operating, and using the TOE. The ST may incorporate the requirements of, or claim conformance to, one or more Protection Profiles.
The TOE description of the ST shall describe the TOE as an aid to the understanding of its security requirements, and shall address the product or system type. The scope and boundaries of the TOE shall be described in general terms both in a physical way (hardware and/or software components/modules) and a logical way (IT and security features offered by the TOE). The TOE description provides context for the evaluation; the information presented in it will be used in the course of the evaluation to identify inconsistencies. The statement of TOE security environment shall describe the security aspects of the environment in which the TOE is intended to be used and the manner in which it is expected to be employed.


The statement of security objectives shall define the security objectives for the TOE and its environment. The security objectives for the TOE shall be clearly stated and traced back to aspects of the identified threats to be countered by the TOE and/or organisational security policies to be met by the TOE. The security objectives for the environment shall be clearly stated and traced back to aspects of identified threats not completely countered by the TOE and/or organisational security policies or assumptions not completely met by the TOE.

SECURITY TARGET
- ST Introduction
  - ST identification
  - ST overview
  - CC conformance
- TOE Description
- TOE Security environment
  - Assumptions
  - Threats
  - Organisational security policies
- Security objectives
  - Security objectives for the TOE
  - Security objectives for the environment
- IT Security requirements
  - TOE Security requirements
    - TOE security functional requirements
    - TOE security assurance requirements
  - Security requirements for the IT environment
- TOE summary specification
  - TOE security functions
  - Assurance measures
- PP claims
  - PP reference
  - PP refinements
  - PP additions
- Rationale
  - Security objectives rationale
  - Security requirements rationale

Security requirements include the TOE functional and assurance requirements. An optional statement of the security requirements of the IT environment can be included. The ST may optionally make a PP claim that the TOE conforms with the requirements of one (or possibly more than one) PP. For any PP conformance claims made, the ST shall include a PP claims statement that contains the explanation, justification, and any other supporting material necessary to substantiate the claims.


The TOE summary specification shall define the instantiation of the security requirements for the TOE. This specification shall provide a description of the security functions and assurance measures of the TOE that meet the TOE security requirements. The rationale demonstrates that the ST contains an effective and suitable set of countermeasures, and that this set is complete and cohesive.
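The rationale sections of both a PP and an ST are essentially coverage arguments: every threat, policy and assumption must be traceable to at least one security objective, and every objective to at least one requirement. The following minimal sketch shows how such a traceability check can be automated; it is an illustration only, and the threat, objective and requirement identifiers are hypothetical rather than taken from any actual PP or ST.

```python
# Minimal sketch of a PP/ST rationale traceability check. All identifiers
# (T.*, P.*, O.*, FDP_*, FCS_*, FAU_*) are hypothetical examples.

threats_and_policies = ["T.UNAUTH_ACCESS", "T.EAVESDROP", "P.ACCOUNTABILITY"]

# Which security objectives counter which threats / cover which policies.
objectives = {
    "O.ACCESS_CONTROL": ["T.UNAUTH_ACCESS"],
    "O.CONFIDENTIALITY": ["T.EAVESDROP"],
    "O.AUDIT": ["P.ACCOUNTABILITY"],
}

# Which functional requirements satisfy which objectives.
requirements = {
    "FDP_ACC.1": ["O.ACCESS_CONTROL"],
    "FCS_COP.1": ["O.CONFIDENTIALITY"],
    "FAU_GEN.1": ["O.AUDIT"],
}

def uncovered(items, mapping):
    """Items that are not addressed by any entry of the mapping."""
    covered = {target for targets in mapping.values() for target in targets}
    return [item for item in items if item not in covered]

if __name__ == "__main__":
    print("Threats/policies without an objective:",
          uncovered(threats_and_policies, objectives))
    print("Objectives without a requirement:",
          uncovered(list(objectives), requirements))
```

Both lists should come back empty for a complete rationale; any entry that remains points at a gap in the coverage argument.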

5.5.4 Evaluation
An evaluation is an assessment of an IT product or system against defined criteria. The evaluation process may be carried out in parallel with or after the development of the Target of Evaluation (TOE). Evaluation against a common standard facilitates comparability of evaluation outcomes. The principal input to evaluation is a Security Target (ST) describing the security functions of the TOE, which may reference any Protection Profiles (PP) to which conformance is claimed. Testing, design review and implementation review contribute significantly to reducing the risk that undesired behaviour is present in the TOE. Distinct stages of evaluation are identified, corresponding to the principal layers of TOE representation:
- Protection Profile evaluation: carried out against the evaluation criteria for PPs.
- Security Target evaluation: carried out against the evaluation criteria for STs.
- TOE evaluation: carried out against the evaluation criteria using an evaluated ST as the basis.
The new release of the CC contains no Protection Profile (PP) registry; instead a system of linked national registries will be implemented. PPs may be defined by developers when formulating security specifications for a TOE, or by user communities. Examples of PPs are found in CC Version 1.0 Part 4 or in ECMA standards 205/271.

5.6 CC protection profiles

5.6.1 Protection Profile overview
The PP is a CC construct which allows users to describe re-usable sets of security requirements of proven utility. All of the source criteria contain some sets of standardised requirements for functions and assurance. The PP brings those two types of requirements together with a statement of the security problem that a compliant product is intended to solve, so that prospective users can determine its applicability to specific uses. Each PP consists of the following key parts:
The security environment is a narrative statement of the security problem to be solved by a TOE compliant to the PP. The environment is described in terms of the anticipated threats in such an environment, the security policies to be enforced, and usage assumptions about the TOE.
The security objectives are a set of statements that summarise the security problem to be solved and are the basis for the definition of the requirements.
Functional requirements are expressed as functional components. Assurance requirements consist of an Evaluation Assurance Level, augmented as necessary to meet specific needs by the addition of assurance components.
An additional part of the PP is the rationale, which is evaluation evidence to demonstrate that the relationship between the requirements and the objectives exists and is valid.


5.6.2 Commercial security 1 (CS1) – Basic Controlled Access Protection
Keywords: Access control, discretionary access control, general-purpose operating system, information protection
The FC CS1 was developed to correspond directly to the TCSEC C2 as it had come to be interpreted at the time of FC publication. CS1 specifies a baseline set of security functions and assurances for workstations, general-purpose multi-user operating systems, database management systems, and other applications. CS1-compliant products support access controls that are capable of enforcing access limitations on individual users and data objects. CS1 provides for a level of protection which is appropriate for an assumed non-hostile and well-managed user community which requires protection against threats of inadvertent or casual attempts to breach the system security. CS1-compliant products are suitable for use in both commercial and government environments. CS1 is generally applicable to distributed IT systems but does not address the security requirements which arise specifically out of the need to distribute the IT resources within a network.

5.6.3 Commercial security 3 (CS3) – Role-Based Access Protection (RBAC)
Keywords: Access control, role-based access, non-discretionary controls, general-purpose operating system, information protection
CS3 specifies a strong set of security functions and assurances for general-purpose multi-user operating systems, database management systems, and other applications in sensitive environments. CS3 is intended for environments in which access to programs, transactions, and information must be restricted according to the assigned organisational role(s) of users. CS3 supports a variety of organisation-specific non-discretionary integrity and confidentiality policies calling for role-based access controls. CS3 provides strong authentication mechanisms and administrative tools. CS3-compliant products are expected to be used in sensitive commercial and governmental environments where system failure is not tolerated and a relatively high degree of confidence is required. For governmental environments, CS3-compliant products are intended to process sensitive unclassified or single-level classified, but not multi-level classified, information.

5.6.4 Traffic Filter Firewall
Keywords: Access control, firewall, packet filter, network layer, transport layer, OSI, TCP, IP, IPX, SPX
The intent of this Protection Profile is to specify functions and assurances applicable to most commercially available packet filter firewalls. The profile focuses on reflecting current market practice rather than mandating security functions not already present in most products; the only exception may be slightly stronger auditing functions than currently available, to reflect increased user demands for accountability and monitoring of their network connections. Although most commercially available products are targeted at the TCP/IP protocol stack, there is nothing inherent in the requirements as stated that mandates a compliant firewall to use that protocol stack. The purpose of a packet filter firewall is to provide a point of defence and controlled and audited access to services, both from within and outside an organisation’s private network, by permitting and/or denying the flow of packets through the firewall.
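To make the permit/deny behaviour just described concrete, the sketch below applies an ordered rule list to packet headers, with an implicit final deny — the usual structure of a traffic-filter policy. It is an illustration only, not a configuration for any particular product; the addresses and rules are invented (documentation address ranges are used).

```python
# Minimal sketch of traffic-filter (packet filter) rule evaluation:
# the first matching rule in an ordered list decides, otherwise deny.
# Addresses and rules are invented for illustration only.
import ipaddress

RULES = [
    # (action, protocol, source network, destination network, destination port)
    ("permit", "tcp", "0.0.0.0/0",    "192.0.2.10/32", 25),  # inbound SMTP to the mail relay
    ("permit", "tcp", "192.0.2.0/24", "0.0.0.0/0",     80),  # outbound HTTP from the internal net
    ("deny",   "tcp", "0.0.0.0/0",    "192.0.2.0/24",  23),  # no Telnet from outside
]

def decide(packet, rules=RULES):
    """Return 'permit' or 'deny' for a packet header; first match wins."""
    for action, proto, src_net, dst_net, dport in rules:
        if (packet["proto"] == proto
                and ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(packet["dst"]) in ipaddress.ip_network(dst_net)
                and packet["dport"] == dport):
            return action
    return "deny"   # implicit default deny

if __name__ == "__main__":
    smtp_in = {"proto": "tcp", "src": "203.0.113.5", "dst": "192.0.2.10", "dport": 25}
    telnet_in = {"proto": "tcp", "src": "203.0.113.5", "dst": "192.0.2.20", "dport": 23}
    print(decide(smtp_in))    # permit
    print(decide(telnet_in))  # deny
```

The auditing requirements of the profile would, in addition, call for each decision to be logged with enough detail to hold users of the network connections accountable.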


5.6.5 Application Level Firewall
Keywords: Access control, firewall, traffic-filtering, proxy, application layer, OSI, FTP, Telnet
This Protection Profile specifies the minimum requirements for application level, or proxy server, firewalls. An application level firewall mediates traffic among clients and servers located on the different networks governed by the firewall. Application level firewalls are often used in conjunction with traffic-filtering controls to impose additional restrictions on application level protocol traffic (e.g., FTP, Telnet). Application level firewalls may employ proxies to screen traffic. Proxy servers take requests, such as FTP and Telnet, and screen them according to the site’s security policy; proxy clients request services from proxy servers. Only valid requests are relayed by the proxy server to the actual server.
The application level firewall profile applies to devices that are capable of screening traffic at the application protocol level, in addition to the network and transport levels, and of authenticating end-users. The application level profile also contains some additional auditing requirements beyond those of the traffic filter. This protection profile is sufficient for operational environments in which the threat of malicious attacks aimed at discovering exploitable vulnerabilities is considered low. The intent of the requirements is to provide the capability to control the flow of packets through the firewall in order to prevent potentially malicious users from gaining access to the internal, protected network(s), or to specific hosts within the internal, protected network(s). The Application Level Firewall Protection Profile requires the same level of assurance as the Traffic Filter Firewall Protection Profile.

5.6.6 ECMA Extended Commercially Oriented Functionality Class (E-COFC)
The Extended Commercially Oriented Functionality Class (E-COFC) is an ECMA standard which specifies security evaluation criteria for interconnected IT systems. The systems are interconnected through a communication network which is considered "a priori" not trusted. The systems may be located at different sites, cities or countries, and are connected through leased lines, public networks or private networks. The E-COFC standard applies to the security of data processing in a commercial business environment, independent of the hardware and software platforms of the participating systems. Its functions are selected to satisfy the minimal set of security requirements for typical business applications of interconnected systems.
The E-COFC is based on an IT Security Policy of a commercial enterprise taking typical environmental and organisational constraints into account. In practice, the IT Security Policy is based on a Confidentiality Policy, an Integrity Policy, an Accountability Policy and an Availability Policy. These dedicated policies are enforced by an appropriate IT security architecture which is decomposed into different domains, such as network security, systems security and application security. This IT security architecture provides a specific set of security services and the associated security management. The security services and the security management are based on a specific set of protocols and mechanisms (security enforcing functions) which may be realised by non-cryptographic (access control) and cryptographic means (symmetric methods, public key methods).


For consistency and ease of operation, a specific key management may be an integral part of the security management, supporting specific security services and security mechanisms. With respect to the various system services applied, the security management system activates the appropriate security enforcing functions. If cryptographic means are applied, the associated keys and parameters are protected, distributed and revoked such that unauthorised persons cannot gain access to them.
With respect to the commercial requirements, the E-COFC is partitioned into three hierarchical classes of commercial security requirements:
- The Enterprise Business class (EB-class) (includes COFC requirements).
- The Contract Business class (CB-class) (includes the EB-class and COFC requirements).
- The Public Business class (PB-class) (includes the CB-class, EB-class and COFC requirements).

5.6.6.1 The Enterprise Business class

All users are employees of a single enterprise (legal entity). The usage of the IT systems which are part of the TOE is regulated by the employee contract. Only one legal party is responsible for all business actions. In case of outsourcing, the responsibility can be partly delegated to other legal parties on the basis of special contracts. The exchange of business information is done on behalf of the involved system users. The security of the exchange is enabled by the security services provided by management. Conflict mediation is provided by management actions on the basis of the employee contracts.


[Figure: E-COFC hierarchical classes – the Public Business class contains the Contract Business class, which contains the Enterprise Business class, all built on the COFC requirements.]


5.6.6.2 The Contract Business class
The CB-class is built on top of the EB-class. It adds to the network security requirements of the EB-class those requirements identified for business information exchange between independent enterprises which constitute a closed user group. The enterprises agree in a contract on a defined mode of operation, the business conditions and the security rules, which are the foundations of their business information exchange. They establish a "Regulatory Board" (RB – Notary) which acts as an impartial judge to mediate conflicts within the user group and also acts as a "Trust Center" to handle security matters such as key management. All business partners who sign the contract constitute a closed user group. Users shall only get access to the systems if they belong to a business partner that has signed the contract.

5.6.6.3 The Public Business class
The PB-class is built on top of the CB-class, which is built on top of the EB-class. It adds those requirements which are typical of public business in an open environment (no closed user group). Public business typically covers areas like the selling of goods, tickets and other merchandise, but also home banking, insurance, network-based information services and others. The terms Customer and Provider are used in this class. A provider system offers a dedicated service containing a set of business actions. For each business action the terms Originator and Destination can be applied as outlined in the CB-class. The Public Business class is characterised by business on the basis of pre-existing contracts that legally connect the Provider and the Customer for a set of pre-defined business actions. The Provider or the Customer is a user, or a process acting on behalf of a user, that generates, processes, transmits, or receives business information requests.
In contrast to the CB-class there is no RB which resolves possible conflicts; conflicts have to be resolved on the basis of business or consumer law. Secure business transactions between the Customer and the Provider are based on "Trust Centres" (TC) which, as independent organisations, provide the required key management and distribution services. In contrast to the EB-class and the CB-class, the business relationships may go beyond the contractual relationships, and different scenarios of contractual relations are possible. In the case of electronic advertising, too, the contractual framework is provided by business and consumer law. The formal contractual relationship between the Customer and the Provider is either direct or through other business organisations. The ECMA-271 standard shows examples of public business with different contractual relations.

5.6.7 Other profiles
Some other protection profiles have been added recently:
• Printed circuit for smart card: level EAL4+
• Firewall with high protection: level EAL5+
• Public Key Infrastructure: level EAL4+
• Tools for message security (message servers and PKI): level EAL5+
• EDI: level EAL3+


6 Annexes

6.1 Bibliography

6.1.1 Overview
Microsoft Internet Security Framework
http://technet.microsoft.com/cdonline/defaultf.asp?target=http://technet.microsoft.com/CDONLINE/CONTENT/COMPLETE/TECHNOL/INTERNET/ISFRAME.HTM
Security and Privacy related links (Standards, Cryptography, Systems, Steganography, Alert sites, …)
http://www.cryptosoft.com/html/privacy.htm
http://www.manos.com/sec_links.html
VPN
http://www.ins.com/knowledge/whitepapers/vpn0399.asp
CERT Coordination Center
http://www.cert.org/

6.1.2 Security Evaluation
Common Criteria
http://csrc.nist.gov/cc/
ITSEC - ITSEM - TCSEC
http://www.securteam.it/bbs/document.htm (in Italian and English)

6.1.3 Cryptography
http://www.rsa.com/rsalabs/faq/html/sections.html
http://www.al.unipmn.it/RSAEuro/RSAEuro/rsaann.html (Cryptographic program downloading)

6.1.4 Legal issues
http://www.clusif.asso.fr/rubriques/crypto/cryptopri.htm
IPR
http://www.cordis.lu/ipr-helpdesk/en/s_001_en.htm
Privacy management
http://130.206.16.236/aesopian/advanced/10010000.htm
Standards
http://www.ietf.org/ (RFC and Internet-drafts; for RFCs, preferably use your nearest anonymous FTP server, /pub/rfc directory)
http://www.semper.org/sirene/outsideworld/standard.html
Protocols: IPSec, SSL, TLS, sHTTP, sMIME, …
http://www3.tsl.uu.se/~micke/ssl_links.html
http://www.consensus.com/security/ssl-talk-faq.html
http://www.infoseceng.com/ipsec.htm
http://www.ietf.org/html.charters/ipsec-charter.html


http://csgrad.cs.vt.edu/~mfali/chap16/shttp.html
http://www.homeport.org/~adam/shttp.html
http://www.rsa.com/smime/html/faq.html
PGP
http://www.uk.pgp.net/pgpnet/pgp-faq
http://www.sover.net/~greywolf/pgpguide/
http://www.heureka.clara.net/sunrise/pgpsec.htm
http://www.heureka.clara.net/sunrise/pgp.htm

6.1.5 Security management organisation
http://130.206.16.236/aesopian/advanced/10030000.htm
Password management
http://www.aitl.uc.edu/im&s/support/password.htm
X509 PKI
http://www.ietf.org/ids.by.wg/X.509.html
Certification Authorities, CA Initiatives and Authentication Services
http://www.qmw.ac.uk/~tl6345/ca.htm
Kerberos
http://sol.usc.edu/~laura/kerb_refs.html

6.1.6 Glossary, Acronym
http://www.aba.net.au/solutions/SecurEcommerce/glossary.html
http://www.cnet.com/Resources/Info/Glossary/num.html
http://www.csrstds.com/acro-a-d.html


6.2 Index


Access Control, 48

EB-class, 279

Active protection, 107

ECMA, 278

AES, 55

EDI, 159

assurance, 265

E-Mail, 76

Assurance classes, 259

E-mail Security, 78

attacks, 17

Encryption, 51

Authentication, 47

Enterprise Business class, 279

Availability, 49

evaluation, 252

browsers, 76

Evaluation assurance classes, 261

Business aspects, 159

Evaluation assurance level, 265

CB-class, 279

Evaluation assurance levels, 50

CC, 253

Firewalls, 68

CC Key concepts, 254

Frames, 146

CC protection profiles, 271, 275

Hash functions, 52

Certificate Management Protocols, 202

Hyperlinks: Deep v Surface, 145

Certification Policy, 229

ICC, 58

CMP, 202

ICE/TEL, 180

Commercial security 1, 276

IDEA, 55

Commercial security 3, 276

Industrial Drawings, 147

Common Criteria, 253

integrity, 48

confidentiality, 48

Intellectual Property Rights, 135

Content Integrity, 165

IPSec, 81

Contract Business class, 280

ITSEC, 253

Copyright, 136

ITSEM, 253

CPS, 229

Kerberos, 64

Data confidentiality, 48

Legal issues, 122

Data integrity, 48

Malicious Modification, 165

DCE, 64, 66

management organisation, 102

DES, 53

MARIA service, 28

Diffie-Hellman, 53

MD5, 56

Digital certificate, 53, 128

Metatags, 145

Digital signature and certificate, 52

methodology, 27

Digital signatures, 124

MIME, 92, 93

Digital timestamping, 53

models, 26


MOSS, 92

SDSI, 189

Network security management, 117

Security assurance, 265

Non-repudiation, 48

Security attacks, 17

Non-repudiation of Origin, 166

Security evaluation, 252

OCSP, 224

Security level, 265

Organisational aspects, 102

Security models, 26

OSF DCE, 66

Security Target, 254

OSF DCE, 64

SHTTP, 94

Password Management, 111

Signatures, 122

Password Selection, 111

Smart Card, 58

Patentability of computer programs, 148

SPKI, 186

PB-class, 279

SSL, 89

PEM, 91, 178

Staff organisation, 104

PGP, 95, 177

Standards, 81

PGP/MIME, 93

Target of Evaluation, 254

Physical Security, 119

TCSEC, 253

PKI, 167

The Domain Name System, 143

Policy, 21

Threat of Prosecution, 148

Privacy, 95, 133, 163

Time Stamp protocol, 226

Protection profile, 254

time-stamping service, 128

Provider Liability, 147

TLS, 89

Public Business class, 280

Trademarks, 143

Public Key Infrastructure, 167

Trusted Third Parties, 129

Public key infrastructures, 57

TSA, 226

RBAC, 276

TTP, 129

RC2, 55

Tunnelling, 75

RC4, 56

value chains, 35

RC5, 56

Virtual Private Network, 74

Risk analysis management, 113

Watermarks, 56

Risks, 76

Web browsers, 76

RSA, 53

Web servers, 78

S/MIME, 93

Web servers Security, 80

SASL, 88

X.509 PKI, 193
