Table 5-1: Three Theories of Privacy - DCU School of Computing

Privacy and Cybertechnology 



Privacy concerns affect many aspects of our lives – from commerce to healthcare to work. Common categories include:

consumer privacy, medical/healthcare privacy, and employee/workplace privacy.

Privacy and Cybertechnology (Continued) 



Privacy issues involving cybertechnology affect each of us, whether or not we have ever owned or even used a networked computer. Consider the information about us that can be acquired from our commercial transactions in a bank or in a store.

Privacy and Cybertechnology (Continued) 





Even if we use the Internet solely for recreational purposes, our privacy is threatened. Personal data, including data about our interests, can now easily be acquired by organizations whose need for this information is not always clear. A user’s personal data acquired via his/her online activities can be sold to third parties.

Privacy and Cyberspace  

Are any privacy issues unique to cybertechnology? Privacy concerns have been exacerbated by cybertechnology in at least four ways, i.e., by the:

1. amount of personal information that can now be collected;
2. speed at which personal information can now be transferred and exchanged;
3. duration of time in which personal information can now be retained;
4. kind of personal information (such as transactional information) that can be acquired.

What is Personal Privacy? 

Privacy is a concept that is difficult to define. We sometimes speak of an individual’s privacy as something that can be: 

    

lost, diminished, intruded upon, invaded, violated, breached.

What is Privacy? (Continued) 





Privacy is sometimes viewed in terms of something that can be diminished (i.e., as a repository of personal information that can be eroded gradually) or lost altogether. Privacy is sometimes also construed in terms of the metaphor of a (spatial) zone that can be intruded upon or invaded. Privacy is also sometimes analyzed in terms of concerns affecting the confidentiality of information, which can be breached or violated.

Classic Theories of Privacy 

Three classic theories have tended to view privacy in terms of one of the following:   

non-intrusion, non-interference, control over/restricting access to one’s personal information.

Non-intrusion Theory of Privacy 

The non-intrusion theory views privacy as either: 





being let alone, being free from government intrusion.

This view is also sometimes referred to as accessibility privacy (DeCew, 1997).

Non-intrusion Theory of Privacy (Continued) 

The rationale for the non-intrusion theory can be found in both: 



the Fourth Amendment to the U.S. Constitution (i.e., search and seizure with respect to one’s papers, effects, and so forth); a seminal article on the Right to Privacy by Warren and Brandeis in the Harvard Law Review (1890).

Non-interference Theory of Privacy 

The non-interference theory views privacy as freedom from interference in making decisions. 



This view emerged in the 1960s, following the Griswold v. Connecticut (U.S. Supreme Court) case in 1965.

The non-interference theory of privacy is also sometimes referred to as decisional privacy.

The Control and Limited Access Theories of Informational Privacy 





Informational privacy is concerned with protecting personal information in computer databases. Most people wish to have some control over their personal information. In some cases, “privacy zones” have been set up to restrict or limit access to one’s personal data.

Table 5-1: Three Views of Privacy Accessibility Privacy

Privacy is defined in terms of one's physically "being let alone," or freedom from intrusion into one's physical space.

Decisional Privacy

Privacy is defined in terms of freedom from interference in one's choices and decisions.

Informational Privacy

Privacy is defined as control over the flow of one's personal information, including the transfer and exchange of that information.

A Comprehensive Account of Privacy 

James Moor (2004) has framed a privacy theory that incorporates key elements of the three classic theories:   

accessibility privacy (non-intrusion), decisional privacy (non-interference), informational privacy (controlling/restricting access to one’s personal information).

Moor’s Comprehensive Theory of Privacy 

According to Moor: “an individual has privacy in a situation if in that particular situation the individual is protected from intrusion, interference, and information access by others.” [italics added]

Moor’s Theory of Privacy (continued) 



A key element in Moor’s definition is his notion of a situation, which can apply to a range of contexts or “zones.” For Moor, a situation can be an “activity,” a “relationship,” or the “storage and access of information” in a computer or on the Internet.

Moor’s Privacy Theory (continued) 

Moor also distinguishes between “naturally private” and “normatively private” situations required for having:  

(a) natural privacy (in a descriptive sense); (b) a right to privacy (in a normative sense).

Moor’s Natural vs. Normative Privacy Distinction 

Using Moor’s natural/normative privacy distinction, we can further differentiate between a:  

loss of privacy, violation of privacy.

Two Scenarios in the Textbook: One Involving Natural Privacy, and One Involving Normative Privacy 

Scenario 1: Tom walks into the computer lab (at 11:30 PM, when no one else is around) and sees Mary typing on a computer. 



In Scenario 1, Mary’s privacy is lost but not violated.

Scenario 2: Tom peeps through the keyhole of Mary’s apartment door and sees Mary typing at her computer. 

In Scenario 2, Mary’s privacy is not only lost but is also violated.

Helen Nissenbaum’s Theory of Privacy as “Contextual Integrity” 

Nissenbaum’s privacy framework requires that the processes used in gathering and disseminating information (a) are “appropriate to a particular context” and (b) comply with norms that govern the flow of personal information in a given context.

Nissenbaum’s Theory (Continued) 

Nissenbaum (2004a) refers to these two types of informational norms as:  

norms of appropriateness and norms of distribution.

Nissenbaum’s Theory (Continued) 







Norms of appropriateness determine whether a given type of personal information is either appropriate or inappropriate to divulge within a particular context. Norms of distribution restrict or limit the flow of information within and across contexts. When either norm is “breached,” a violation of privacy occurs. Conversely, the contextual integrity of the flow of personal information is maintained when both kinds of norms are “respected.”
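A minimal Python sketch may help fix ideas: both kinds of norms are modeled as lookup tables, and a flow of personal information passes the check only when both are respected. The context names, data types, and rules below are invented purely for illustration; they are not part of Nissenbaum's framework.

```python
# Illustrative sketch of Nissenbaum's two informational norms.
# All contexts, data types, and rules here are hypothetical examples.

APPROPRIATENESS = {
    # context -> data types it is appropriate to divulge in that context
    "healthcare": {"medical_history", "age"},
    "banking": {"income", "credit_history"},
}

DISTRIBUTION = {
    # (source context, destination context) pairs whose flows are permitted
    ("healthcare", "healthcare"),
    ("banking", "banking"),
}

def contextual_integrity(data_type, source, destination):
    """Return True only if both informational norms are respected."""
    appropriate = data_type in APPROPRIATENESS.get(source, set())
    permitted_flow = (source, destination) in DISTRIBUTION
    return appropriate and permitted_flow

# A medical record shared within healthcare respects both norms...
assert contextual_integrity("medical_history", "healthcare", "healthcare")
# ...but the same record flowing to a bank breaches the distribution norm.
assert not contextual_integrity("medical_history", "healthcare", "banking")
```

The point the sketch makes is structural: the very same piece of information can be unproblematic in one context and a privacy violation in another.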

Nissenbaum’s Theory (Continued) 

Like Moor’s privacy model, Nissenbaum’s theory demonstrates why we must always attend to the context in which information flows, not merely the nature of the information itself, in determining whether normative protection is needed. Examine the scenario on Professor Robert’s seminar (in the textbook), which is intended to illustrate the role of “contextual integrity.”

Why is Privacy Important, Why Is it Valued, and What Kind of Value is It? 

Three distinct questions:  



What kind of value is privacy? Is privacy universally valued? Is privacy valued mainly in Western industrialized societies, where greater importance is placed on the individual than on the values and objectives of the broader community?

Is Privacy an Intrinsic Value or an Instrumental Value? 

Is privacy something that is valued for its own sake – i.e., is privacy an intrinsic value?

Is privacy valued as a means to some further end – i.e., is privacy an instrumental value?

Privacy as an Intrinsic vs. an Instrumental Value (Continued) 



Privacy does not seem to be valued for its own sake, and thus does not appear to have intrinsic worth. But privacy also seems to be more than merely an instrumental value, because it is necessary (rather than merely contingent) for achieving important human ends (Fried, 1990).

Privacy as an Intrinsic or Instrumental Value (Continued) 



Charles Fried notes that privacy is necessary for important human ends such as trust and friendship. Moor views privacy as an expression of a “core value” – viz., security, which is essential for human flourishing.

Privacy as a Universal Value 

Privacy has at least some importance in all societies, but it is not valued the same in all cultures. 



Privacy is less valued in many non-Western nations, as well as in many rural societies in Western nations. Privacy is also less valued in some democratic societies where national security and safety are considered more important than individual privacy (e.g., as in Israel).

The Value of Privacy as a “Shield” 



Judith DeCew (2006) notes that privacy acts as a “shield” by providing for freedom and independence. Privacy also shields us from pressures that preclude self-expression and the development of relationships.

Privacy as a “Shield” (Continued) 

DeCew believes that the loss of privacy leaves us vulnerable and threatened because we are likely to become:  

more conformist, and less individualistic.

Privacy as a “Shield” (Continued) 

DeCew notes that privacy also protects (i.e., shields) us from:    

scrutiny, interference, coercion, pressure to conform.

Privacy as an Important Social Value 



Priscilla Regan (1995) notes that we tend to underestimate the importance of privacy as a social value (as well as an individual value). Regan believes that if we frame the privacy debate in terms of privacy as a social value (essential for democracy), as opposed to an individual good, the importance of privacy is better understood.

Three Cybertechnology-related Techniques that Threaten Privacy 





(1) data-gathering techniques used to collect and record personal information, often without the knowledge and consent of users;

(2) data-exchanging techniques used to transfer and exchange personal data across and between computer databases, typically without the knowledge and consent of users;

(3) data-mining techniques used to search for patterns implicit in large databases in order to generate consumer profiles based on behavioral patterns discovered in certain groups.

Cybertechnology Techniques Used to Gather Personal Data 



Personal data has been gathered at least since Roman times (census data). Roger Clarke uses the term dataveillance to capture two techniques made possible by cybertechnology: 



(a) surveillance (data-monitoring), (b) data-recording.

Internet Cookies as a Surveillance Technique 





“Cookies” are files that Web sites send to and retrieve from the computers of Web users. Cookies technology enables Web site owners to collect data about those who access their sites. With cookies, information about one’s online browsing preferences can be “captured” whenever a person visits a Web site.

Cookies (Continued) 





The data recorded via cookies is stored on a file placed on the hard drive of the user's computer system. The information can then be retrieved from the user's system and resubmitted to a Web site the next time the user accesses that site. The exchange of data typically occurs without a user's knowledge and consent.
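This set-and-resubmit cycle can be illustrated with Python's standard `http.cookies` module. The sketch below is simplified (a real browser handles storage, expiry, and domain scoping automatically), and the cookie name and value are made up for illustration.

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header recording a browsing preference.
cookie = SimpleCookie()
cookie["preferred_category"] = "sports"
cookie["preferred_category"]["path"] = "/"
set_cookie_header = cookie["preferred_category"].OutputString()

# Browser side: the value is stored on the user's machine and resubmitted
# in the Cookie header on the next visit, typically without the user
# noticing the exchange at all.
returned = SimpleCookie()
returned.load("preferred_category=sports")
print(returned["preferred_category"].value)  # -> sports
```

Nothing in this exchange requires the user's explicit consent, which is precisely the concern raised above.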

Can the Use of Cookies be Defended? 

Many proprietors of Web sites that use cookies maintain that they are performing a service for repeat users of their sites by customizing a user's means of information retrieval. 

E.g., some point out that, because of cookies, they are able to provide a user with a list of preferences for future visits to that Web site.

Arguments Against Using Cookies 



Some privacy advocates argue that the monitoring and recording of an individual's activities while visiting a Web site violate privacy. Some also worry that information gathered about a user via cookies can eventually be acquired by or sold to online advertising agencies.

RFID Technology as a Surveillance Technique 

RFID (Radio Frequency IDentification) consists of a tag (microchip) and a reader. 



The tag has an electronic circuit, which stores data, and an antenna that broadcasts the data by radio waves in response to a signal from a reader. The reader contains an antenna that receives the radio signal and a demodulator that transforms the analog radio signal into data suitable for any computer processing that will be done (Lockton and Rosenberg, 2005).
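The tag/reader exchange described above can be sketched as a toy simulation in Python. There is no actual radio layer here, and the tag ID and stored data are hypothetical; the sketch only shows the passive request/response structure.

```python
# Toy model of the RFID exchange: a passive tag answers a reader's
# interrogation signal with its stored data. IDs and data are invented.

class RFIDTag:
    def __init__(self, tag_id, data):
        self.tag_id = tag_id
        self.data = data           # stored in the tag's circuit

    def respond(self, signal):
        # A passive tag only broadcasts when energized by a reader signal.
        if signal == "INTERROGATE":
            return {"id": self.tag_id, "data": self.data}
        return None

class RFIDReader:
    def scan(self, tag):
        # The reader's antenna picks up the tag's reply; in hardware, a
        # demodulator would turn the radio signal into usable data.
        return tag.respond("INTERROGATE")

tag = RFIDTag("TAG-0001", "pallet 7, warehouse B")
print(RFIDReader().scan(tag))
```

Note that any reader within range can perform this scan without the tag holder's knowledge, which is the root of the privacy worry discussed below.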

RFID Technology (Continued) 





RFID transponders in the form of “smart labels” make it much easier to track inventory and protect goods from theft or imitation. RFID technology also poses a significant threat to individual privacy. Critics worry about the accumulation of RFID transaction data by RFID owners and how that data will be used in the future.

RFID Technology (Continued) 



Simson Garfinkel (2004) notes that roughly 40 million Americans carry some form of RFID device every day. Privacy advocates note that RFID technology has been included in chips embedded in humans, which enables them to be tracked.

RFID Technology (Continued) 



Like Internet cookies (and other online data gathering and surveillance techniques), RFID threatens individual privacy. Unlike cookies, which track a user’s habits while visiting Web sites, RFID technology can track an individual’s location in the off-line world. 

RFID technology also introduces concerns involving “locational privacy” (see Chapter 12).

Cybertechnology and Government Surveillance 

As of 2005, cell phone companies are required by the FCC to install a GPS (Global Positioning System) locator chip in all new cell phones. 



This technology, which assists 911 operators, enables the location of a cell phone user to be tracked within 100 meters.

Privacy advocates worry that this information can also be used by the government to spy on individuals.

Computerized Merging Techniques 



Computer merging is a technique of extracting information from two or more unrelated databases and incorporating it into a composite file. Computer merging occurs whenever two or more disparate pieces of information contained in separate databases are combined.

Computer Merging (Continued) 

Imagine a situation in which you voluntarily give information about yourself to three different organizations, by giving information about your:

1. income and credit history to a lending institution in order to secure a loan;
2. age and medical history to an insurance company to purchase life insurance;
3. views on certain social issues to a political organization you wish to join.

Computer Merging (Continued) 

Each organization has a legitimate need for information to make decisions about you; for example: 



insurance companies have a legitimate need to know about your age and medical history before agreeing to sell you life insurance; lending institutions have a legitimate need to know information about your income and credit history before agreeing to lend you money to purchase a house or a car.

Computer Merging (Continued) 





Suppose that information about you in the insurance company's database is merged with information about you in the lending institution's database or in the political organization's database. When you gave certain information about yourself to three different organizations, you authorized each organization to have specific information about you. However, it does not follow that you thereby authorized any one organization to have some combination of that information.
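The merging of these three databases can be sketched in a few lines of Python. All names and values below are invented for illustration; the point is only that the composite profile spans contexts that no single organization was authorized to combine.

```python
# Sketch of computer merging: records about the same person held in
# separate, unrelated databases are combined into a composite profile.

lender_db = {"alice": {"income": 85000, "credit_history": "good"}}
insurer_db = {"alice": {"age": 35, "medical_history": "asthma"}}
party_db = {"alice": {"social_views": "pro-reform"}}

def merge(name, *databases):
    """Build a composite file spanning all the separate databases."""
    composite = {}
    for db in databases:
        composite.update(db.get(name, {}))
    return composite

profile = merge("alice", lender_db, insurer_db, party_db)
print(sorted(profile))
# -> ['age', 'credit_history', 'income', 'medical_history', 'social_views']
```

Each database on its own reflects a disclosure the person consented to; the merged profile does not correspond to any single act of consent.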

Computer Merging (Continued) 

Review the scenario (in the textbook) involving DoubleClick (an online advertising company that attempted to purchase Abacus, Inc., an off-line database company). 

Had the acquisition succeeded, DoubleClick would have been able to merge on-line and off-line records.

Computer Matching 



Computer matching is a variation of computer merging. Matching is a technique that cross-checks information in two or more databases that are typically unrelated to produce "matching records" or "hits."

Computer Matching (Continued) 

In practices involving federal and state government organizations, computerized matching has been used by various agencies and departments to identify:  

potential law violators; individuals who have actually broken the law or who are suspected of having broken the law (welfare cheats, deadbeat parents, etc.).

Computer Matching (Continued) 



Income tax records could be matched against state motor vehicle registration records (looking for individuals reporting low incomes but owning expensive automobiles). Consider an analogy in physical space in which authorities routinely match (and open) your mail in order to catch criminals suspected of communicating with your neighbors.
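The income/vehicle cross-check can be sketched as a short Python program. The records, income threshold, and "luxury" category below are all hypothetical; the sketch shows only the structure of a matching run that flags "hits."

```python
# Sketch of computer matching: cross-checking two unrelated databases
# for "hits". All names, figures, and categories are invented.

tax_records = {"alice": 18000, "bob": 250000, "carol": 16000}   # reported income
vehicle_records = {"alice": "2023 luxury sedan", "carol": "used hatchback"}

LOW_INCOME = 20000                  # hypothetical audit threshold
LUXURY = {"2023 luxury sedan"}      # hypothetical "expensive vehicle" class

# A "hit" is anyone reporting a low income while registered to a luxury car.
hits = [
    name for name, income in tax_records.items()
    if income < LOW_INCOME and vehicle_records.get(name) in LUXURY
]
print(hits)  # -> ['alice']
```

Notice that everyone in both databases is scanned, not just prior suspects; that breadth is what critics of matching object to.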

Computer Matching (Continued) 

Some who defend matching argue:



If you have nothing to hide, you have nothing to worry about.

Others use the following kind of argument:

 1. 2. 3.

4.

Privacy is a legal right. Legal rights are not absolute. When one violates the law (i.e., commits a crime), one forfeits one's legal rights. Therefore, criminals have forfeited their right to privacy.

Computer Matching (Continued) 





At Super Bowl XXXV (January 2001), a facial-recognition technology was used to scan the faces of individuals entering the stadium. The digitized facial images were instantly matched against images contained in a centralized database of suspected criminals and terrorists. This practice was, at the time, criticized by many civil-liberties proponents.

Data Mining 





Data mining involves the indirect gathering of personal information via an analysis of implicit patterns discoverable in data. Data-mining activities can generate new and sometimes non-obvious classifications or categories. Individuals whose data is mined could become identified with or linked to certain newly created groups that they might never have imagined to exist.

Data Mining (Continued) 





Current privacy laws offer individuals little to no protection regarding how personal information that is acquired through data-mining activities is subsequently used. Yet, important decisions can be made about individuals based on the patterns found in the personal data that has been “mined.” Some uses of data-mining technology raise special concerns for personal privacy.

Data Mining (Continued)  



Why is mining personal data controversial? Unlike personal data that resides in explicit records in databases, information acquired about persons via data mining is often derived from implicit patterns in the data. The patterns can suggest "new" facts, relationships, or associations about that person, such as that person's membership in a newly "discovered" category or group.
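A toy sketch may make this concrete: counting which purchases co-occur across invented transaction records "discovers" a group (frequent co-purchasers) that appears in no explicit record. Real data-mining systems use far more sophisticated algorithms; the transactions and category below are purely illustrative.

```python
# Sketch of data mining: deriving a non-obvious group from implicit
# patterns rather than from any explicit record. Data is invented.
from collections import Counter
from itertools import combinations

transactions = [
    {"gourmet coffee", "business books", "office chair"},
    {"gourmet coffee", "business books"},
    {"diapers", "baby food"},
    {"gourmet coffee", "business books", "laptop bag"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Pairs seen in most baskets suggest a newly "discovered" category,
# e.g. a group of likely small-business starters.
frequent = [pair for pair, n in pair_counts.items() if n >= 3]
print(frequent)  # -> [('business books', 'gourmet coffee')]
```

No customer record ever stated "member of the business-starter group"; the classification exists only as a pattern mined from the data, which is why such groupings can surprise the people placed in them.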

Data Mining (Continued) 



Much personal data collected and used in data-mining applications is generally considered to be information that is neither confidential nor intimate. So, there is a tendency to presume that personal information generated by or acquired via data mining techniques must by default be public data.

Data Mining (Continued) 

Consider a hypothetical scenario (in the text) involving Lee, a 35-year-old executive:   

Lee applies for an automobile loan for a BMW; Lee has an impeccable credit history; a data-mining algorithm “discovers” that Lee (i) belongs to a group of individuals likely to start their own business, and (ii) that people who start businesses in this field are also likely to declare bankruptcy within the first three years.

Lee is denied the loan for the BMW based on the data-mining results, despite his credit score.

Data Mining (Continued) 

Although the scenario involving Lee is hypothetical, an actual case occurred in 2008 in which an individual had two credit cards revoked and the limit on a third credit card reduced because of associations with: (1) where this person shopped and (2) where he lived and did his banking (Stuckey, 2009). 

E.g., a data-mining algorithm “discovered” that this person: 



(a) purchased goods at a store where typical patrons who also purchased items there defaulted on their credit card payments; (b) lived in an area that had a high rate of home foreclosures, even though he made his mortgage payments on time.

Web Mining: Data Mining on the Web 





Traditionally, most data mining was done in large “data warehouses” (i.e., off-line). Data mining is now also used by commercial Web sites to analyze data about Internet users, which can then be sold to third parties. This process is sometimes referred to as “Web mining.” 

Examine the scenario (in the textbook) involving “Facebook Beacon” as an example of Web mining.

Table 5-2: Three Techniques Used to Manipulate Personal Data Data Merging

A data-exchanging process in which personal data from two or more sources is combined to create a "mosaic" of individuals that would not be discernible from the individual pieces of data alone.

Data Matching

A technique in which two or more unrelated pieces of personal information are cross-referenced and compared to generate a match or "hit" that suggests a person's connection with two or more groups.

Data Mining

A technique for "unearthing" implicit patterns in large databases or "data warehouses," revealing statistical data that associates individuals with non-obvious groups; user profiles can be constructed from these patterns.

Public vs. Non-Public Personal Information 

Non-Public Personal Information (or NPI) refers to sensitive information, such as that in one’s financial and medical records. 



NPI currently enjoys some legal protection.

Many privacy analysts are now concerned about a different kind of personal information called Public Personal Information (or PPI). 

PPI is non-confidential and non-intimate in character, and is generally not legally protected.

Privacy Concerns Affecting PPI 





Why does the collection of PPI by organizations generate privacy concerns? Suppose some organization learns that you are a student at Technical University, that you frequently attend university basketball games, and that you are actively involved in your university’s computer science club. In one sense, the information is personal because it is about you (as a person); but it is also about what you do in the public sphere.

PPI (Continued) 



In the past, it was assumed that there was no need to protect the kind of information we now call PPI, because it was viewed as simply public information. Helen Nissenbaum (2004b) believes that our assumptions about not needing to protect PPI are no longer tenable because of what she views as a misleading assumption: There is a realm of public information about persons to which no privacy norms apply.

PPI (Continued) 

Consider two hypothetical scenarios (described in the textbook):  



(a) Scenario 1: Shopping at SuperMart; (b) Scenario 2: Shopping at Nile.com.

Both reveal problems with regard to protecting privacy in public, in an era when data mining is typically used.

Search Engines and Personal Information 

Search engines can be used to: 



Acquire personal information about individuals: review the Amy Boyer scenario in Chapter 1 (in the textbook); examine the discussion of Gawker/Stalker in the textbook.

Reveal to search facilities data about which Web sites you have visited (as in the Google vs. Bush Administration case, where users’ search requests were subpoenaed by the U.S. Government).

Accessing Public Records via the Internet 





What are public records, and why do we have them? In the past, one had to go to municipal buildings to get public records. In the Amy Boyer case, would it have made a difference if Liam Youens had to go to a municipal building to get the information he sought about Boyer?

Accessing Public Records via the Internet (continued) 

Review two scenarios in the textbook (on accessing online public records): 





(1) The State of Oregon’s Motor Vehicle Department (selling information about license plate numbers for state residents to an e-commerce site); (2) The city of Merrimack, NH (making home property records, and layouts of houses, available online).

Should those records have been made available online to the public?

Can Technology Be Used to Protect Personal Privacy? 





Privacy advocates argue for stronger privacy legislation. Groups in the commercial sector oppose strong privacy laws and lobby instead for voluntary industry self-regulation. Do Privacy Enhancing Tools, or PETs, provide a compromise solution?

Privacy Enhancing Technologies (PETs) 

PETs are tools that users can employ to protect: 



(a) their personal identity, while navigating the Web; (b) the privacy of their communications (such as email) sent over the Internet.
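As one illustrative sketch of a PET-style technique (not any particular product, and far simpler than real tools such as anonymizing networks or encrypted e-mail), a personal identifier can be pseudonymized with a salted one-way hash before it ever leaves the user's machine. The identifier and salt handling below are hypothetical.

```python
# Sketch of a PET-style technique: pseudonymizing an identifier so that
# sites see a stable token instead of the real identity. Illustrative only.
import hashlib
import secrets

def pseudonymize(identifier, salt):
    """Replace an identifier with a salted one-way hash (truncated)."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()[:16]

salt = secrets.token_bytes(16)      # secret salt kept on the user's machine
token = pseudonymize("alice@example.com", salt)

# The same salt yields the same token (a site can recognize a repeat
# visitor), but the token cannot be reversed to recover the address.
assert token == pseudonymize("alice@example.com", salt)
print(token != "alice@example.com")  # -> True
```

The design choice here mirrors the PET idea generally: the user's machine, not the Web site, controls the mapping between the real identity and what is disclosed.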

PETs (Continued) 

Two challenges involving PETs include: 



(1) educating users about the existence of these tools; (2) preserving the principle of informed consent when using these tools.

Educating Users About PETs 



How are users supposed to find out about PETs? With PETs, the default has been that users must:

discover that these tools actually exist; learn how to use them.

Is this expectation regarding PETs and users a reasonable one?

PETS and the Problem of Informed Consent 

Users can enter into an agreement with Web sites that have a privacy policy. 





Currently, users must “opt out” of having data about them collected. The default view is that users have opted in, unless they specifically indicate otherwise.

It is not clear that PETs protect against secondary and future uses of personal data. 

E.g., review the Toysmart scenario in the textbook.

Privacy Legislation and Industry Self-Regulation 



Can industry adequately self-regulate privacy through voluntary controls, instead of strong privacy legislation? What kinds of assurances from vendors do online consumers need regarding the protection of their privacy? 

Again, review the scenario involving Toysmart.com (described in the textbook) to see some of the challenges that can emerge.

Privacy Laws and Data Protection 

Privacy laws and data-protection principles in Europe and the U.S. include the:  

European Union (EU) 1995 Privacy Directive; U.S. Privacy Act of 1974, and HIPAA (Health Insurance Portability and Accountability Act).

Towards a Comprehensive Privacy Policy 

We saw that the theory advanced by Moor (2000) requires that rules for setting up normatively private situations be “public” and open to debate. 

Moor’s Publicity Principle states that the rules and conditions governing private situations should be “clear and known to persons affected by them.” 



A critical element in Moor’s privacy model is openness, or transparency, so that all parties in the “situation,” or context, are kept abreast of what the rules are at any given point in time.

So, Moor’s publicity principle provides a key element in any comprehensive privacy policy that incorporates legislation, self-regulation, and privacy-enhancing tools.