Digital Imaging 101 Full Handout

Digital Imaging Technology 101
John Coghill, DALSA
Based on material developed by Albert Theuwissen, Chief Technology Officer, DALSA Corp.

© 2003 Albert Theuwissen

Overview

This handout is supplemental to the presentation first given at CineGear 2004 and the accompanying whitepaper titled "Image Sensor Technologies for Digital Cinematography". It is intended to provide more detail than the main presentation and to illustrate some of the key concepts presented in both documents. This is not an exhaustive technical treatment but rather a balanced general overview of a very complex subject, written for a specific audience: the relatively non-technical individual who is interested in the basics of digital imaging and how they apply to professional motion picture production.

A few short years ago the primary topic of discussion in the industry was "how many K is OK?" The industry has since become much more sophisticated and moved on to other, more important measures of image quality such as dynamic range and MTF. Over the past 12 months another consideration has been added to the mix: the choice of semiconductor technology used in the image sensor itself. The "CCD vs. CMOS" debate that arose in various industrial and consumer imaging markets during the late '90s has become the hot topic in this community, and cinematographers are being inundated with sometimes conflicting and usually confusing "information". The intent of this overview is to peel away the hype and present the basics in an unbiased fashion so that cinematographers can assess the impact of their decision to choose one technology over the other.

As with most things, there are pros and cons to each technology, so beware of blanket statements that claim one is de facto superior to the other. The fact is that the choice should be application specific. While a CMOS sensor may be the perfect choice for a given use, the very attributes that made it the obvious selection may in fact render it less than ideal in another application.

DALSA is one of the very few companies in the world with expertise in the design, development and manufacturing of both CMOS and CCD image sensor technology, for both standard products and custom designs. Our experience tells us that for applications where image quality is the overriding consideration, CCD technology is the superior choice at this point in time. Just look at the market forces at work: if CMOS-based sensors were actually higher performance, then most of the high-end professional applications would not be using CCD technology. The following pages attempt to illustrate the basic physics of why this is the case and why it will remain so for quite some time into the future.

Table of Contents

• Photon Sensing Basics – Photogates and Photodiodes
• CCD Image Sensor – pixel basics and 2D sensor architectures
• CMOS Image Sensors – pixel basics and 2D sensor architectures
• CMOS Scaling – overview of semiconductor technology history
• CMOS versus CCD – pros & cons of each technology
• Dynamic Range – factors impacting dynamic range
• Light Sensitivity – factors impacting sensitivity


AREA CCD IMAGERS


Photon Sensing

[Figure: two photosensitive structures. Left: a photodiode – a metallurgical pn-junction (n-Si in p-Si) with its depletion layer. Right: a photogate – a voltage-induced junction formed by a polysilicon gate over SiO2 with VG > 0 V.]

Photon sensing can be done with either of two basic photosensitive structures, but both rely on the same basic principle, Einstein's photoelectric effect: photons hitting the silicon lattice dislodge electrons from the structure, thereby creating a "charge".

1) Photodiode: a metallurgical junction (pn-junction in the silicon substrate) is used to form a photodiode. A depletion layer, or "potential well", is formed around this junction by diffusing impurities or "implants" into the silicon.

2) Photogate: a voltage-induced junction formed by a MOS capacitor. A voltage is applied to a polysilicon gate to induce a potential well in the silicon.

In either case, photon-generated charge will decrease the width of the depletion layer (fill the well) in proportion to the amount of light hitting the pixel. The pixel becomes "saturated" when the light being absorbed generates more free electrons than are required to fill the well. Above this level, charge will spill over into adjacent pixels, resulting in "blooming" or, in photographic terms, "blown out highlights".
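As a back-of-envelope illustration (not part of the original handout), the photoelectric effect only frees an electron in silicon when the photon carries more energy than the bandgap, about 1.12 eV at room temperature, which is why silicon sensors stop responding near 1100 nm. A minimal sketch:

```python
# Photon energy versus the silicon bandgap: only photons above ~1.12 eV
# can dislodge an electron, setting silicon's ~1100 nm long-wave cutoff.
H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
EV = 1.602e-19         # joules per electron-volt
SI_BANDGAP_EV = 1.12   # silicon bandgap at room temperature, eV

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for wl in (450, 550, 650, 1100, 1300):   # blue, green, red, cutoff, IR
    e = photon_energy_ev(wl)
    print(f"{wl:5d} nm -> {e:4.2f} eV, frees an electron: {e > SI_BANDGAP_EV}")
```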

Charge Transport

[Figure: cross-sections of a four-gate CCD at successive clock steps, with gate voltages toggling between 0 V and 10 V (passing through an intermediate 5 V during transfer) and the potential wells shown beneath each gate. Bottom right: the clock pulse amplitudes applied to the second and third gates as a function of time.]

A CCD cell is composed of several very closely spaced MOS capacitors. A simplified example of such a structure is shown here (top left): four gates next to each other, of which the second is biased at 10 V and all the others at 0 V. The high-biased gate creates a potential well in the silicon, and a charge packet can be nicely stored underneath this particular gate.

To move charge from gate 2 to gate 3, the potential on the third gate is changed from 0 V to 10 V so that a potential well is created underneath the third gate as well. The two individual potential wells then form one large potential well in which the stored electrons distribute (right top and left middle). If the second gate is then pulsed from 10 V back to 0 V (right middle), the charge is forced from underneath the second gate to underneath the third gate. The end result can be seen in the last cross-section with the potential wells (left bottom): all charge originally stored underneath the second gate has now been transferred to the third gate. This continues along the register, driven by the voltage pulses applied to the gates.

The right bottom illustration shows the digital clock pulses applied to the second and third gates as a function of time. The leading edge of the pulse on the third gate attracts the electrons towards that gate; the falling edge of the pulse on the second gate completes the charge transport. The charge transport described here is the working principle of charge-coupled devices: charge packets, representing analog signals, are transported through the bulk silicon from gate to gate by means of digital pulses supplied from outside the chip.
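The gate-to-gate handoff just described can be caricatured in a few lines of code. This is a toy sketch only (it assumes perfect transfer efficiency), not any actual clocking scheme:

```python
# Toy model of CCD charge transport: raising the destination gate merges
# the two potential wells (electrons spread over both), then dropping the
# source gate collapses its well and pushes all charge to the destination.
def transfer(charge, volts, src, dst):
    volts[dst] = 10                            # leading edge: wells merge
    shared = charge[src] + charge[dst]
    charge[src] = charge[dst] = shared / 2     # electrons spread evenly
    volts[src] = 0                             # falling edge: source well collapses
    charge[dst] += charge[src]
    charge[src] = 0

charge = [0, 1500, 0, 0]   # electrons per gate: packet under the 2nd gate
volts  = [0, 10, 0, 0]     # gate biases, V
transfer(charge, volts, src=1, dst=2)
print(charge)              # [0, 0, 1500.0, 0] -- packet moved to the 3rd gate
```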

Read-out Structures

[Figure: top – a photodiode (n-Si in p-Si) read out through a MOS switch onto a sensing line. Bottom – a photodiode read out into a CCD register via CCD gates.]

Reading the signals generated in the photodiodes or photogates can be done through two different architectures:

1) MOS switch and sensing line: the photodiode is connected by means of the MOS-transistor gate to a sensing line. The signal is converted to a voltage by the transistor and then transferred to the output node via the sensing line. This readout mode, by means of a MOS transistor and a sensing line, is typical of CMOS image sensors. A photogate construction can be used interchangeably with the photodiode.

2) CCD register: by pulsing the CCD gates to their appropriate voltages, the CCD structure draws the electrons out of the photodiode into a CCD register, typically called a "Vertical Shift Register" or "VCCD". This charge packet is then transported in the charge domain toward the output node, where it is converted into a voltage. As above, a photodiode can be exchanged for a photogate; in this case, the photo-converting photogate can simply be one of the CCD cells that are also used to transfer the charge packets.

In summary to this point: we can photo-generate charge in either a photodiode or a photogate and then transfer it to the output of the chip as either a voltage or a charge packet. The following sections show how these basic elements are used to build complete image sensors in either CCD or CMOS technology.

Interline-Transfer CCD

[Figure: a photodiode array interleaved with light-shielded vertical CCD registers, all feeding a horizontal readout CCD that ends in the output amplifier.]

• Small chip size
• Low pixel number
• Low sensitivity
• Medium smear
• Advanced technology

An interline-transfer CCD can be considered as being constructed of several CCD line arrays next to each other. Between these line arrays are the vertical shift registers, or "VCCDs". The VCCDs are all connected at one end to a horizontal readout CCD, or "HCCD". At the end of the HCCD the output amplifier is located.

The basic operation of this architecture is as follows: after exposure, all photodiodes are emptied simultaneously, and the charge packets are shifted from the photodiodes into the VCCD registers, which are shielded from the light by a metal overlayer. At this point, two actions take place simultaneously: 1) a new exposure time starts and new photo-charge is collected in the photodiodes; 2) the charge stored in the VCCDs is transferred down to the HCCD and on to the output node.

An interline-transfer CCD is characterized by its relatively small chip size and is very popular in consumer-type applications like camcorders and digital still cameras. The disadvantage of the IL-CCD is the fact that 50% of the focal plane area is devoted to the VCCD part of the chip. Generally speaking, this design approach limits practical resolution: the length of the VCCD and HCCD combination dictates how fast charge can be emptied, and therefore dictates the exposure time, because charge cannot be transferred from the photodiodes into the shift registers until the previous frame has been completely transferred. This architecture also limits sensitivity because of the relatively poor fill factor.

Frame-Transfer CCD

[Figure: a photosensitive area stacked above a light-shielded storage area of equal size, feeding a horizontal readout CCD that ends in the output amplifier.]

• Large chip size
• High pixel number
• High sensitivity
• High smear
• Standard technology

The frame-transfer CCD is essentially two chips in one. This architecture consists of a light-sensitive area and a frame store, or buffer area, of equal size and pixel count. The primary advantage of this layout is the high fill factor in the light-sensitive part of the chip: the entire photosensitive area is filled with photosites, and there are no light-shielded "interline" structures used to move charge. The frame-store part of the chip is completely separated from the imaging section and is covered with a metal light shield. The basic construction of the readout architecture is similar for all 2D CCDs in that a number of VCCDs are connected to an HCCD that transports charge to the output amplifier.

The basic operation of this architecture is as follows: after the exposure, the charge packets collected in the image section are shifted towards the storage area. This frame shift takes place for all charge packets in parallel. Once the charge packets are stored in the light-shielded storage area, two parallel actions take place once again: 1) a new exposure time starts and new photo-charge is collected; 2) the charge stored in the storage area is transferred to the HCCD and on to the output node via what is known as a "parallel-to-serial transfer".

Due to the high pixel density in the imaging area and the separate storage area, the chip size of the frame-transfer CCD is relatively large, but in applications where the large chip size is a problem, a solution can be found in the …

Full-Frame CCD

[Figure: the entire chip is photosensitive area, feeding a horizontal readout CCD that ends in the output amplifier; there is no storage area.]

• Small chip size
• High pixel number
• High sensitivity
• High smear (shutter needed)
• Standard technology

…Full-frame CCD: the structure is exactly the same as the frame-transfer CCD, except for the lack of a storage area. This means that during the readout cycle, exposure of a new image is not possible. To prevent the generation of additional photo-electrons during the readout of the CCD, the device needs to be operated in conjunction with a mechanical shutter. This architecture is very attractive in high-resolution, low-frame-rate applications such as professional digital still photography.

CMOS IMAGE SENSORS


CMOS versus CCD

[Figure: block diagram of a camera signal chain – lens → colour filter → pixel array → analog signal chain → A/D → digital signal chain → digital control → output. A CCD covers only the pixel array and its analog output; a CMOS imager can integrate everything through the digital signal chain on one chip.]

Setting aside the basic architectural differences of the various image sensor technologies, the main difference between CMOS and CCD image sensors is the degree of integration available to the designer when using each technology. Any camera design contains both analog and digital sections; in the diagram above, yellow areas highlight the analog circuits and the digital circuits are colored green.

Simply put, a CCD comprises the pixel array and an analog output stage. A CMOS imager, on the other hand, may consist of not only the pixel array and the analog output stage, but the complete analog signal chain as well as the complete digital control logic section. Similar to a CCD, first-generation CMOS image sensors had only analog outputs, but soon second-generation devices appeared on the market complete with digital outputs. These included an analog-to-digital converter (ADC) on-chip. The dream of every designer is to provide complete camera functionality in a single chip design, and this can be realized by including the complete digital signal chain on the same chip as the imager itself. This total solution emerged in third-generation CMOS devices in the late 1990s.

Interestingly, the architecture that has been the most commercially successful is the second-generation style of device. Three key reasons help explain why:

1) the digital signal chain included on-chip is by necessity generic and therefore not always best suited to the application being considered;

2) some applications do not need a digital signal processor embedded in the chip because they have the processing power available elsewhere in the system (e.g. a laptop or system computer);

3) a large, complex digital signal chain requires an advanced, aggressive CMOS process technology, while an imaging array prefers a more "relaxed" CMOS process technology, as will become apparent in the following sections. So, to optimally meet the requirements of both, the digital signal chain is best left off the image sensor chip itself.

CMOS Imagers (1) : Passive Pixel Sensor (PPS)

[Figure: a photodiode array with MOS switches, addressed by a vertical scan circuit (rows) and a horizontal scan circuit (columns), driving the output.]

In general terms, a two-dimensional CMOS image sensor has an architecture very similar to an SRAM or DRAM chip, regardless of the specifics of the pixel architecture. Through the vertical scan circuit any particular row of pixels can be addressed, and with the horizontal scan circuit a single pixel out of the selected row can be addressed. In this way individual pixels can be randomly selected and have their signal sent to the output amplifier.

In its most basic form, a CMOS pixel is composed of a photodiode and an addressing transistor. This configuration is known as a "Passive Pixel Sensor" or PPS. Its construction is simple and compact, but unfortunately this type of image sensor suffers from noise problems: the photodiode capacitance is very small compared to the large vertical column and horizontal bus capacitance, and this mismatch results in a large noise component being added to the signal during readout of the photodiodes. To solve this problem ….

CMOS Imagers (2) : PPS with Column Amplifiers

[Figure: the same photodiode array and scan circuits, now with an analog amplifier at the foot of every column line.]

…column amplifiers are added to the column lines. In this way the output capacitance of the horizontal bus no longer deteriorates the noise performance of the pixels. Every column gets its own analog amplifier. This configuration is an improvement over the simple PPS architecture, but it still leaves the large column capacitance connected to the small pixel capacitance during readout. As always, technology relentlessly marches forward...

CMOS Imagers (3) : Active Pixel Sensor (APS)

[Figure: an array of photodiodes with in-pixel amplifiers and switches, addressed by vertical and horizontal scan circuits, driving the output.]

In an "Active Pixel Sensor" or APS, every pixel gets its own individual amplifier. This small amplifier boosts the photodiode signal fed to the column line and solves the noise problems of the large column lines. Because of this improvement in noise, the APS architecture is the preferred choice when it comes to 2D CMOS image sensors.

All the amplifiers are analog in nature, which means that no two amplifiers are perfectly matched as far as gain and offset are concerned. Therefore, the introduction of an amplifier within every pixel increases the non-uniformity between the various pixels of the array. This effect shows up as fixed-pattern noise, and it is a typical disadvantage of CMOS image sensors relative to CCDs: CCDs used for high-end photographic work typically have only one output amplifier and consequently do not suffer from non-uniformities generated by mismatches in the output stage. Even the most advanced CCDs with multiple output stages have relatively few of these sources of variance to deal with.

As will be seen later, the APS pixel has 3 transistors within every pixel, and the significance of this will be compared with even more advanced pixel structures containing 4 or even 5 transistors.

CMOS Imagers (4) : APS with On-chip A-to-D

[Figure: the APS array and scan circuits with a single A/D converter added at the chip output.]

To convert the analog signal generated by the photodiodes into a digital signal, a single analog-to-digital converter (ADC) can be added on-chip. The presence of this ADC can sometimes limit the maximum pixel frequency of the image sensor, especially when a high-accuracy (>12-bit) ADC is required. For example, a high-performance 12-bit ADC can run at 40 MHz, which means it can process 40 million pixels per second. At 1 MP resolution the frame rate would be limited to approximately 40 fps, at 2 MP to 20 fps, and so on.
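The arithmetic is easy to check; the sketch below simply re-derives those ceilings from the 40 MHz figure quoted above:

```python
# A single on-chip ADC caps throughput: fps = ADC sample rate / pixel count.
def max_frame_rate(adc_rate_hz: float, megapixels: float) -> float:
    return adc_rate_hz / (megapixels * 1e6)

for mp in (1, 2, 4, 8):
    print(f"{mp} MP -> {max_frame_rate(40e6, mp):5.1f} fps ceiling")
```

In order to overcome this potential bottleneck...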

CMOS Imagers (5) : APS with Column A-to-D

[Figure: the APS array with an A/D converter at the foot of every column, each delivering an 8-bit word to the horizontal scan circuit.]

... this functionality can be split over the various columns of the image sensor by adding multiple ADCs. In this example, every column gets its own A-to-D converter. The required speed of each converter is reduced by a factor equal to the number of columns, so in this simplified example each ADC would only need to run at 1/6th of the speed of the previous example; conversely, the frame rate × resolution product can increase by a factor of 6. However, as with most things in life, nothing comes for free! There are 3 main challenges with this solution:

1) the space available to lay out each ADC is limited to the horizontal pixel pitch. Squeezing a complete ADC into even a 10 µm wide column is a significant challenge involving many trade-offs;

2) the total area consumed by the ADCs starts to become a significant portion of the overall chip size; and

3) the total power consumed by the ADCs starts to present thermal management issues, particularly as higher-bandwidth (faster) devices are used. The significance of this particular point becomes apparent later in the presentation.
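To see how much speed relief column-parallel conversion buys, consider the hypothetical geometry below (the 2000 × 3000 / 24 fps numbers are assumptions for illustration, not from the handout):

```python
# With one ADC per column, each converter serves only its own column,
# so its required conversion rate drops to rows x frame rate.
rows, cols, fps = 2000, 3000, 24          # hypothetical 6 MP cine sensor
single_adc_hz = rows * cols * fps         # one ADC handling the whole array
per_column_hz = rows * fps                # each of the 3000 column ADCs
print(f"single ADC : {single_adc_hz / 1e6:6.1f} MHz")   # 144.0 MHz
print(f"column ADC : {per_column_hz / 1e3:6.1f} kHz")   # 48.0 kHz
```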

CMOS Imagers (6) : Digital Pixel Sensor (DPS)

[Figure: every pixel contains a photodiode, an amplifier and its own A-to-D converter; an 8-bit digital bus runs through every column to the horizontal scan circuit.]

And, believe it or not, we can go even one step further in integration, resulting in the Digital Pixel Sensor (DPS) architecture. In this approach, every pixel has its own ADC. Instead of a single line for every column, the DPS has a complete digital bus for every column; in the example shown here, 8 metal lines enter and leave every pixel. Of course, the wider the bus (e.g. 10-bit vs. 8-bit), the more complex the layout. The first sensor built on this principle was fabricated in 0.18 µm CMOS technology, had pixels of 9.8 µm x 9.8 µm, and every pixel contained 37 transistors (Kleinfelder, ISSCC 2001).

The following section covers the basic construction details of the most common pixel types.

CMOS Pixels : Passive Pixel

[Figure: pixel schematic – a photodiode (n+ in p-Si) connected through a row-select (RS) transistor to the column bus.]

• 1 T cell, 2 lines
• high fill factor
• high noise level
• VVL, Rockwell

The PPS passive pixel is very simple in construction: 1 photodiode, 1 transistor and 2 interconnects. The pixel is characterized by a large fill factor, but unfortunately, as outlined earlier, it also suffers from high noise levels. The operation of this pixel is very simple: after the pixel is addressed by opening the row-select transistor (RS), the pixel is reset along the column bus and through RS, so it is ready for the next exposure period.

CMOS Pixels : Photodiode APS

[Figure: pixel schematic – a photodiode (n+ in p-Si) with a reset transistor (RST) to VDD, a source-follower driver, and a row-select (RS) transistor to the column bus.]

• 3 T cell, 4 lines
• low fill factor
• medium noise level
• noise versus full-well
• JPL, Photobit, Agilent

Recall that this pixel structure, the APS pixel, has an amplifier within every pixel. The amplifier is configured as a "source follower", wherein the driver of the source follower is located in the pixel itself and the load of the source follower is placed on the column bus, physically out of the focal plane. The two other transistors in the pixel are used for addressing (row-select, or RS) and resetting the pixel (RST). So, in summary, every pixel has 1 photodiode, 3 transistors and 4 interconnects.

The basic operation of this pixel is as follows: after a pixel is addressed, its actual video level is sensed by the source follower and fed to the column bus. Next, the pixel is reset and a new exposure period can start. This APS pixel is very popular in CMOS image sensors because it solves a lot of the noise problems associated with the PPS approach. Unfortunately, the configuration shown still suffers from a large reset noise component that is difficult to remove from the video signal.
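That "large reset noise component" is the classic kTC noise of resetting the photodiode capacitance, and its magnitude is easy to estimate. A minimal sketch (the 10 fF node capacitance is an assumed, illustrative value):

```python
# kTC (reset) noise expressed in electrons rms: n = sqrt(k*T*C) / q.
from math import sqrt

K = 1.381e-23   # Boltzmann constant, J/K
Q = 1.602e-19   # elementary charge, C

def reset_noise_electrons(cap_f: float, temp_k: float = 300.0) -> float:
    return sqrt(K * temp_k * cap_f) / Q

print(f"{reset_noise_electrons(10e-15):.0f} e- rms on a 10 fF node")
# -> roughly 40 electrons rms at room temperature
```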

CMOS Pixels : Pinned Photodiode APS

[Figure: pixel schematic – a pinned photodiode (p+ over n in p-Si) connected via a transfer gate (TX) to a floating diffusion (n+), plus RST, source-follower and RS transistors to the column bus.]

• 4 T cell, 5 lines
• low fill factor
• low noise level
• low full well
• Kodak-Motorola, Hyundai, Toshiba

The Pinned Photodiode (PPD) APS pixel looks very similar to the photogate APS, the primary difference being that the photogate is replaced by a "pinned" photodiode. In addition to the photodiode, every pixel has 4 transistors and 5 interconnects. Notice that the photodiode does not have any electrical connection to force the collected charge out of the photodiode. To initiate the charge transport from the photodiode towards the floating diffusion (n+), the transfer gate (TX) is pulsed. If the photodiode were not "pinned" at one end (by the p+ implant), it would be extremely hard, if not impossible, to completely empty the photodiode of its collected charge; this would result in image "lag". A very important feature of this pinned-photodiode APS structure is the ability to reduce the reset noise levels by means of correlated double sampling, or "CDS".
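How CDS removes the reset noise can be sketched schematically (all numbers here are made up; the point is only that the frozen reset offset appears in both samples and cancels on subtraction):

```python
# Correlated double sampling: sample the floating diffusion right after
# reset, then again after the TX pulse dumps the photo-charge onto it.
# Both samples carry the SAME frozen kTC offset, so subtracting cancels it.
import random

reset_offset_mv = random.gauss(0.0, 5.0)   # kTC noise frozen at this reset
signal_mv = 120.0                          # photo-signal (illustrative)

sample_reset = 500.0 + reset_offset_mv               # first CDS sample
sample_video = 500.0 + reset_offset_mv - signal_mv   # second CDS sample
print(f"CDS output: {sample_reset - sample_video:.1f} mV")   # exactly 120.0
```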

CMOS Pixels : Pinned Photodiode 5T APS

[Figure: the 4T pinned-photodiode pixel (photodiode, TX, RST, source follower, RS) with one additional transistor enabling sequential readout onto the column bus.]

• 5 T cell, 5 lines
• low fill factor
• low noise level
• low column FPN
• low full well
• Sony

In the standard pinned-photodiode 4T pixel from the previous page, the readout cycle is done on a line-by-line basis and all pixels from one line are processed in parallel. This typically introduces some column-wise fixed pattern noise related to offset and gain variations in the analog column circuitry. To avoid this source of column FPN, the addressing of pixels can still be done line-by-line, but the readout and processing of the signals is no longer done fully in parallel (all pixels in a line at one time). In this pixel design an additional transistor is added to the architecture, enabling sequential readout and processing of the pixels, one after another. The advantage of this construction is low column FPN (processing is no longer done in column-parallel circuitry); the disadvantage is the requirement for a high-bandwidth analog processing chain, which introduces additional thermal noise and higher power dissipation.

CMOS Pixels : Logarithmic Pixel

[Figure: pixel schematic – a photodiode (n+ in p-Si) with the reset transistor's gate tied to VDD, a source follower and an RS transistor to the column bus.]

• 3 T cell, 3 lines
• high dynamic range
• high FPN level
• poor image contrast
• image lag
• only monochrome
• IMEC, Caltech

This pixel configuration is very similar to the basic photodiode APS pixel. The only difference is that the reset transistor is no longer used in a reset mode; instead its gate is connected to the drain voltage VDD. In this way the transistor operates in "weak inversion" mode. Every electron generated in the photodiode will try to escape to the power supply and will increase the potential at the photodiode. This potential can never be higher than the source voltage of the MOS transistor; the gate-source voltage of the transistor settles at the value needed for the very earliest turn-on of the transistor, otherwise known as operation in weak inversion. The relationship between the current (I) through the transistor and its gate-source voltage (V) is a logarithmic one: the naturally linear photo-generated current is converted into a logarithmic voltage by means of the I-V characteristic of the MOS transistor operating in weak inversion. This pixel has a very high dynamic range due to its logarithmic response, but it suffers from FPN, image lag and low contrast ratios relative to other pixel designs.
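That logarithmic I-V relationship can be sketched numerically; the subthreshold slope factor and leakage scale below are assumed values for illustration, not data for any real device:

```python
# In weak inversion I = I0 * exp(V / (n*VT)), so V = n*VT*ln(I / I0):
# every decade of photocurrent adds the same fixed voltage step.
from math import log

VT = 0.0259   # thermal voltage kT/q at room temperature, V
N = 1.5       # subthreshold slope factor (assumed)
I0 = 1e-15    # leakage current scale, A (assumed)

def pixel_voltage_v(photocurrent_a: float) -> float:
    return N * VT * log(photocurrent_a / I0)

for i_ph in (1e-14, 1e-13, 1e-12, 1e-11):   # four decades of light
    print(f"{i_ph:.0e} A -> {pixel_voltage_v(i_ph) * 1000:6.1f} mV")
# each decade adds ~89 mV: a compressed, very-high-dynamic-range response
```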

CMOS SCALING


Moore's Law : CMOS Integration

[Chart: transistors per die (10³ to 10⁹, log scale) versus year ('70 to '02) for DRAM (1 kb up through 1 Gb) and microprocessors (4004 up through the Pentium Pro), both climbing steadily across three decades.]

In the late sixties Gordon Moore predicted the enormous growth of CMOS circuit density in what has become known as Moore's Law. Moore's Law states that the density of CMOS circuits will improve by a factor of 2 every 18 months. Another way to phrase this is to say that it takes 18 months to develop a new CMOS process and bring it on-line. This chart shows the impact of Moore's Law on DRAM and microprocessor circuit densities over the last three decades - pretty impressive!!
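The arithmetic behind the chart is easy to verify; a sketch of the 18-month doubling rule as stated above:

```python
# Moore's law: density doubles every 18 months -> growth = 2**(months/18).
def density_growth(years: float, doubling_months: float = 18.0) -> float:
    return 2 ** (years * 12.0 / doubling_months)

print(f"x{density_growth(3):.0f} after 3 years")       # x4
print(f"x{density_growth(30):,.0f} after 30 years")    # x1,048,576
```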

Moore's Law : MOSFET Scaling

[Chart: minimum feature length, junction depth and gate oxide thickness (100 µm down to 0.001 µm, log scale) versus year ('60 to '05), all shrinking at roughly 13% per year.]

The progress predicted by Moore's Law has a strong impact on the minimum feature size, the minimum junction depth in the transistors and the minimum gate oxide thickness that can be used in any chip design. This graph shows that all three parameters have improved by about 13% every year over the same multi-decade period.

Moore's Law : Chip Area

[Chart: practical chip area (1 to 1000 mm², log scale) versus year ('70 to '02), growing steadily.]

Another area of drastic improvement over the years is the steady increase in the ability to make larger chips. Since 1970, practical chip area has grown from about 10 mm² to close to 10 cm² today.

Moore's Law in the Car Industry?!

• 1960:
  – 4-cylinder engine
  – power of 100 HP
  – speed of 100 km/h
  – fuel consumption of 12 l/100 km

• 2002:
  – 4·10⁹-cylinder engine (same volume)
  – power of 10⁶ HP
  – speed of 10⁶ km/h
  – fuel consumption of 1200 l/100 km

(analogy supplied by H.V., 2002)

To gain a better feeling for the impact of Moore's Law on the electronics industry, an analogy with the car industry can be drawn. Imagine a car in 1960 with a 4-cylinder, 100 HP engine that could reach a maximum speed of 100 km/h and consumed fuel at the rate of 12 liters for every 100 km driven. If Moore's Law applied to this car, today it would have 4 billion cylinders in one engine, it would deliver 1 million HP and a top speed of 1 million km/h, while its fuel consumption would be "only" 1200 l/100 km. This would certainly save me a lot of travel time between Waterloo and Los Angeles!! So, you are probably asking, "what does all this have to do with taking excellent photographs??"

Fill Factor versus Pixel Size

[Chart: achievable fill factor (0–100%) versus pixel size (10 µm down to 2 µm) for CMOS process generations from 0.5 µm to 0.13 µm, with a "minimum FF" floor and a "lens limit" marked; each finer process generation sustains a usable fill factor at smaller pixel sizes.]

This chart shows the achievable fill factor of a pixel versus the pixel size, based on the generation of CMOS process technology being used. It is clear that moving towards a more aggressive technology with smaller feature sizes will allow the designer to make smaller pixels with a decent fill factor or, conversely, larger pixels with a very high fill factor and a high degree of functionality. Based on these predictions, one could reasonably assume that it is just a matter of time before all image sensors are made on 0.13 µm CMOS process technology. However, once again the recurring theme of this presentation surfaces - everything has a price...

Cross Section of a CMOS IC

[Figure: cross-section of a simple CMOS inverter in a 4-metal-layer process – copper wiring, via plugs, aluminum wiring, tungsten plugs, titanium silicide, isolation layers, poly-Si, p-type and n-type source/drain, p-well, n-well, p-type substrate.]

In order to achieve these dramatic gains in performance, CMOS technology has clearly become much more sophisticated. To achieve the high circuit densities of today, many more circuit layers have been added, the ability to mix multiple types of circuit interconnect in one device has been developed (i.e. copper, aluminum and tungsten), and device features have shrunk. Above is a cross-section of a typical simple CMOS inverter circuit in a 4-metal-layer process technology. This drawing illustrates some of the difficulties CMOS image sensors will encounter when attempting to utilize more advanced CMOS process technologies in the future:

1) to improve speed of operation, all transistor gates, source and drain areas are implanted with silicides. Unfortunately, silicides are not transparent to incoming photons;

2) to enable higher device densities, junctions are becoming shallower, which in turn limits light sensitivity. Lower-energy green and red photons penetrate deep into the silicon and cannot be captured in shallow wells;

3) shallower junctions also give rise to higher electrical fields, which increase dark current and the associated dark current non-uniformities;

4) transistor isolation is done by shallow "trench" isolation. This technique introduces mechanical stress in the silicon wafers, which in turn increases dark current;

5) gate oxides are becoming very thin, but in turn also very leaky, thus introducing more noise sources;

6) the increase in the number of metal layers gives rise to a larger "optical stack" above the active pixel area. Large optical stacks place the color filters and the microlenses far away from the pixels and in turn cause a large amount of optical cross-talk. A strong dependence of sensitivity on the angle of incidence can also be expected.

Clearly there are many things to consider…

CCD VERSUS CMOS


CCD versus CMOS (1)

[Chart: imaging performance versus cost. From low to high: standard digital CMOS; analog CMOS (adding capacitors, resistors and MOST matching); custom CMOS with customized processing steps; and CCD, a fully dedicated process, at the top.]

CMOS image sensors can "easily" be made in standard digital CMOS processes, but unfortunately standard processes are optimized for good digital signal processing, not good imaging performance. Improvements in imaging performance can be obtained by switching to an analog CMOS process, in which more attention is paid to the matching of transistors and means are available to make precision capacitors and resistors. This provides better imaging performance, but cost increases as you move away from the mainstream processes. Of course, the best imaging performance can be obtained from CMOS image sensors made on a completely customized process. These are usually a generation or two removed from the mainstream process and don't suffer from the challenges presented by advanced processes, as previously outlined. The challenge then becomes running enough wafer volume through the process to make it economical. The best example of a fully customized imaging process is a CCD. One obvious question then is... if you have access to a great CCD process, why would you go to the expense of developing a customized CMOS process to mimic the same performance?

CMOS versus CCD (2)

[Figure: the manufacturing flow shared by both technologies – sensor design, wafer fabrication (CCD or CMOS), color filter deposition, on-wafer testing, packaging, final testing. Only the wafer fabrication step differs between CCD and CMOS.]

Very often it is claimed that CMOS image sensors are fundamentally cheaper than CCD devices because they can be made on a standard process and leverage the economies of scale associated with running large volumes of wafers through the foundry. This is a dangerous argument: even assuming the wafers are run on a completely standard process, the cost difference between CMOS and CCD can only be gained back in the diffusion cost of the wafers, and there are many more process steps required before a working device can be yielded. For small devices (< 1/3" optical format), most of the cost goes into the deposition of color filters, deposition of micro-lenses, on-wafer testing, packaging and final testing. After all is said and done, the cost advantage of CMOS technology is only marginal as far as the image sensors themselves are concerned. Additionally, this consideration is only relevant if the intended application market is very cost sensitive and requires large volumes to service it.

CCD versus CMOS (3)

[Graphic: four scales comparing the technologies – CMOS leads CCD on cost, power consumption and functionality; CCD leads CMOS on image quality.]

There are 3 main advantages CMOS has over CCDs in imaging applications, as depicted in the graphic above:

1) the cost of the complete imaging system. For some applications it is possible to design a single-chip camera, and this clearly lowers production costs;

2) power consumption is lower;

3) on-chip functionality. In applications where space is at a premium, a lot of the supporting electronics can be integrated onto the image sensor chip, thereby saving space.

There is really only one advantage CCD has over CMOS in imaging applications, as depicted in the graphic above:

1) imaging quality. Because CCD technology is optimized exclusively for imaging, there are no compromises made.

CMOS versus CCD : Technology

CCD Technology               | CMOS Technology
-----------------------------|---------------------------
gate oxide: 80 nm            | gate oxide: 5 nm
p-well depth > 2.5 µm        | n-well depth: 1 µm
channel depth: 1 µm          | channel depth: 0.1 µm
operating voltage > 10 V     | operating voltage: 3.3 V
several poly-Si layers       | 1 or 2 poly-Si layers
1 or 2 metal layers          | several metal layers

CCD versus CMOS : Overview

CCD Technology pro's (CMOS Technology con's):
• optimized, mature technology
• market penetration
• low noise, low FPN
• low dark current
• global shutter
• charge-domain operations

CCD Technology con's (CMOS Technology pro's):
• functionality
• power consumption
• random access
• number of supplies
• system cost
• cost of ownership

DYNAMIC RANGE


Dynamic Range : Definition

Dynamic Range (DR) quantifies the ability of a sensor to adequately image both highlights and dark shadows in the same scene. It is the ratio of the maximum output swing (saturation level minus dark current level) to the noise in the dark (dark current shot noise and readout noise, added in quadrature):

$$\mathrm{DR} = \frac{N_{sat} - N_d}{\sqrt{N_d + n_r^2}}, \qquad n_d = \sqrt{N_d}$$

where:
• Nsat : saturation level
• Nd : dark current level = f(temp., time)
• nd : dark current shot noise = f(temp., time)
• nr : readout noise = f(temp., speed)
• nd and nr = f(technology, design)

Regardless of the process technology used, the formula above defines the dynamic range of an image sensor. As with any system, garbage in = garbage out, so in order to maximize dynamic range you must start with a clean signal. Beneath the formula, each of the variables is defined and associated with its dependencies; from this it is easy to see which variables must be controlled in order to maximize dynamic range.
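The formula can be evaluated directly. In the sketch below the full-well capacity is an assumed value, while the 25 e- readout noise matches the figure used in the slides that follow; note how growing dark charge eats the dynamic range from both ends:

```python
# DR = (Nsat - Nd) / sqrt(Nd + nr**2), all quantities in electrons.
from math import sqrt

def dynamic_range(n_sat: float, n_dark: float, n_read: float) -> float:
    return (n_sat - n_dark) / sqrt(n_dark + n_read ** 2)

N_SAT, N_READ = 230_000, 25          # full well (assumed), readout noise
for n_dark in (10, 1_000, 50_000):   # dark charge grows with time and temp.
    print(f"Nd = {n_dark:6d} e-  ->  DR = {dynamic_range(N_SAT, n_dark, N_READ):8,.0f}:1")
```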

Dynamic Range (1)

[Chart: dynamic range (0 to 12,500) versus integration time (10 ms to 10,000 ms, log scale) for Id = 20 pA/cm² and nr = 25 e- at room temperature and 9 MHz, with additional curves for Id × 10 and nr × 2.]

This graph illustrates the effect of dark current and readout noise on dynamic range. The solid curve at the top is for a single-output 6 MP device running at a 9 MHz output data rate at room temperature. As can be seen, the dynamic range is relatively flat at approximately 9,000:1 up to an integration time of 1 s and then begins to fall off. This is due to the fact that at long integration times, dark current noise begins to accumulate and becomes predominant. The next curve down shows the effect of increasing the dark current by a factor of 10 under the same operating conditions: a decrease in dynamic range of about 30% is suffered. Alternatively, if readout noise increases by a factor of 2 (bottom curve), dynamic range suffers almost a 50% degradation.

Dynamic Range (2)

[Chart: dynamic range versus temperature (-40 °C to +60 °C) for the same sensor, again with Id × 10 and nr × 2 curves; all curves fall steeply above room temperature.]

The effect of temperature on dynamic range is shown for the same image sensor. Above room temperature the dynamic range decreases steeply with temperature. The main reason is the effect of the dark current and the dark current shot noise, which become dominant above room temperature.
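A common rule of thumb (not stated in the handout) is that silicon dark current roughly doubles for every 6-8 °C of temperature rise; a sketch of what that does to the 20 pA/cm² figure from these slides:

```python
# Dark current versus temperature, doubling every ~7 degC (rule of thumb).
def dark_current_pa_cm2(i_rt: float, temp_c: float, doubling_c: float = 7.0) -> float:
    return i_rt * 2 ** ((temp_c - 25.0) / doubling_c)

for t in (-20, 0, 25, 40, 60):
    print(f"{t:4d} degC -> {dark_current_pa_cm2(20.0, t):8.2f} pA/cm^2")
# 60 degC gives ~32x the room-temperature dark current
```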

Dynamic Range (3)

[Chart: dynamic range versus pixel frequency (0.01 MHz to 100 MHz, log scale), peaking around 6–8 MHz; the Id × 10 and nr × 2 curves are lower and their peaks shift in opposite directions.]

Dynamic range also has a design dependency on pixel frequency, or readout rate. This is a result of two design trade-offs:

1) at lower speeds the thermal noise decreases; however, this is offset by increasing dark current and dark current shot noise;

2) at higher speeds the dark current and its shot noise component decrease, but this is then defeated by an increasing thermal noise component.

The struggle between these two noise sources results in a sharp peak in the maximum dynamic range around 6 to 8 MHz for this particular design. Also, as noted on the previous pages, an increase in either one of these two parameters results in an overall decrease in dynamic range. Interestingly, however, in the case of thermal noise increasing by a factor of two, the peak in dynamic range shifts to lower pixel frequencies. Conversely, in the case of dark current increasing by a factor of ten, the peak is shifted to higher pixel frequencies. In both cases, the overall dynamic range is reduced, as noted previously.

Dynamic Range (4)

[Chart: the same dynamic range versus pixel frequency curve (solid line) on an expanded vertical scale (0 to 50,000), compared against Nsat/√(Nd + nr²), which omits the dark current from the signal swing, and Nsat/nr, which omits dark current effects entirely and rises without bound at low pixel frequencies.]

Exactly the same curve as before is shown (solid line), but with an adapted vertical scale, to illustrate a common mistake. The formula used to calculate the dynamic range is shown for reference; notice the presence of the dark current component in both the numerator and the denominator. If one "forgets" to take the effect of the dark current into account on the voltage swing available for the video signal (the numerator), the error in the calculated dynamic range is minor. But that is no longer the case if the dark current shot noise is also omitted (the denominator). Without any dark current effects, the dynamic range increases steeply at lower pixel frequencies, and virtually any value quoted for the dynamic range can be made to look correct. This is a huge trap (mis)used by many in the specification of the dynamic range of a device. If the dark current effects are not taken into account, the quoted dynamic range is much higher than is ever reachable in reality!

LIGHT SENSITIVITY


Trap : QE

[Figure: the signal chain from light to signal – incident photons → (× fill factor) → interacting photons → collected electrons → transferred electrons → output voltage.]

There is also some general misunderstanding in the definition of Quantum Efficiency, or "QE". Technically speaking, quantum efficiency is the ratio of the total number of electrons generated and collected in a pixel to the total number of photons absorbed by that particular pixel. In other words: how efficient is the active pixel area at converting photons into electrons? Notice that this definition does not include all the photons that fall on the entire pixel surface, which includes varying degrees of non-active regions depending on the pixel design. Those photons do not contribute to the generation of any video signal, but unfortunately they are part of the incoming optical signal...
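The trap becomes obvious once fill factor is folded in: two pixels with identical "technical" QE can deliver very different signal. A sketch with illustrative numbers (transfer losses ignored):

```python
# Effective sensitivity counts ALL photons hitting the pixel:
# electrons out = photons * fill_factor * QE.
def electrons_collected(photons: int, fill_factor: float, qe: float) -> float:
    return photons * fill_factor * qe

photons = 10_000
print(electrons_collected(photons, fill_factor=1.00, qe=0.60))  # FF-CCD-like: 6000.0
print(electrons_collected(photons, fill_factor=0.35, qe=0.60))  # 3T-APS-like: 2100.0
```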

Evolution of Sensitivity

[Chart: saturation voltage (mV), pixel area (µm²) and sensitivity (mV/lux·s·µm²) versus year ('87 to '03, log scale): pixel area shrinks steadily while saturation voltage and per-area sensitivity hold up or improve.]

So far we have focussed on the remarkable advances in CMOS technology over the past few decades, so it is only fair to examine what has been happening in CCDs too. These curves show the progress made in the development of saturation voltage and light sensitivity relative to pixel area over the last 15 years. It is well known that smaller pixel sizes result in smaller capacitors to store the charge generated, and that fill factor is also reduced as pixels get smaller. So how is it that smaller pixels can have a larger output voltage and a higher light sensitivity? The answer lies in the fact that CCD processes have benefited from a singular focus on optimization for imaging. It is still true, however, that at any given point in time, if a large pixel and a small pixel are made on the same process, the larger one will have greater sensitivity and dynamic range.

Summary & Conclusions • CCD optimized for imaging performance • Room to improve in CCDs due to 0.5 µm CCD technology acquisition • CMOS technology adds new features to sensors • CMOS technology opens completely new market segments and is not simply a CCD replacement • DALSA is one of the few vertically integrated imaging companies with both CCD and CMOS capabilities


“There’s More To The Picture Than Meets The Eye” (Neil Young)
