D-CINEMA IMAGE REPRESENTATION SIGNAL FLOW DIAGRAM COMMENTS

Introduction

By John Silva April 9, 2003/Revised July 23, 2004

The time is right to consider developing a video signal flow diagram for digital cinema. But, wait a moment. Is "video" really the best term to use when referring to image-related signals for digital cinema? For the purposes of this paper, let us refer to "image representation" instead of "video", because it is more descriptive of the signals under discussion. We can always change the term to something else later if appropriate.

Digital cinema system image representation signal delineation and progression, and its relationship with the developing colorimetry-related standards involved, can be better understood and kept in mind by having such a signal flow chart, with accompanying description, at hand for study and review. Having this general understanding and reference material in common with other standards committee participants will ensure that everyone involved is singing from the same song sheet, and the following benefits will be achieved:

1. It will assist in developing future recommended colorimetry-associated standards.
2. It will aid in checking the validity and accuracy of those standards when proposed for vote.
3. It will help the associated ad-hoc and general standards committee members be better prepared to make final judgments in deciding whether pending colorimetry-associated proposals are ready for vote.
4. It will give members of higher-level SMPTE standards committees confidence that all members of the originating committees know the subject and have made the right recommendation choices.

In reality, signal flow chart development should be the first step in the process of developing standards that are related to signal flow.
For a complete understanding of the image representation signal flow involved in digital cinema, it is helpful to extend the flow path to include production and post-production, so that the total system concept can be better understood. Further, studying the flow of the total system may well reveal production and post-production elements that need to be added or updated in order to achieve the overall digital cinema picture quality goal of ultimately reaching or exceeding that produced in 35mm film production. As such, a draft (as revised 12/16/03) in the form of two digital cinema system image representation signal flow charts (for Film Centric and Data Centric acquisition, respectively), with accompanying written description, is respectfully submitted.

General

Digital Cinema comes in two basic modes: film centric and data centric, which relate to the content form (film or data in storage, respectively) used to assemble (story-tell) the feature.

A. The film centric mode includes:

1. Motion picture film acquisition followed by traditional cut negative editing to produce the intended feature film.

2. This is followed by a film-to-digital transfer (digitizing) of the edited feature film content using a telecine film chain or motion picture scanner located in the feature post-production facility.

3. Once scanned, the resultant digitized image representation signals are transformed into image representation coded data files and then transferred to the digital film archive to become working digital negatives ("digital reels"). From there the contents are used, as desired and/or needed, by individual workstations in feature post-production.

4. Within the combined array of workstation operations in feature post-production, the following types of signal correction and processing can be applied to the delivered digital negative:

   1. Color correction/grading
   2. Gamma
   3. Cropping
   4. Lift
   5. Painting/special effects
   6. Dust busting
   7. Grain matching
   8. Noise reduction
   9. Compositing
   10. Final editing
In a sense, the film centric mode could be considered a hybrid (film/data centric) mode, in that story telling can be (and is) extended by cinematographers to include color-producing enhancements, special effects, etc., performed with digital data in feature post-production. However, at this point in time, it is not considered as such.

B. The data centric mode includes:

1. Motion picture film acquisition in non-edited original negative or interpositive form. These are film clips shot on designated reels according to the feature script (in bits and pieces). Using this acquisition content, an EDL is developed in an off-line session to determine the actual frames that will be scanned in feature post-production. As in the film centric mode, once scanned, the resultant digitized image representation signals are transformed into image representation coded data files and transferred to the digital archive to become working digital negatives ("digital reels"). From here, the material is immediately ready for feature post-production, and can be acquired almost instantly to be processed and edited as desired by the designated workstations in the group, as listed in paragraph 4 above describing the film centric mode, with the addition of the digital feature editing workstation.

2. Digital camera origination with play-out in the form of 10- to 16-bit image representation signals in coded data file form. Again, as is the case for data centric film acquisition, the digitized play-out signals are transformed into image representation coded data files and transferred to the digital archive to become working digital negatives, which are then immediately available for feature post-production processing and editing, as described for film acquisition in the data centric mode.

C. As a note of interest, due to recent significant advancements in digital technology with regard to high-speed, high-bandwidth, uncompressed digital data flow, as well as the evolution of the Storage Area Network (SAN) with common file system, the Gigabyte System Network (GSN), the High-Speed Data Link (HSDL), and other related technologies, equipment is becoming available that, when implemented in feature post-production, will allow immediate acquisition by workstations of digital archive content, and real-time processing, which heretofore has not been available. Thomas True, of SGI, recently presented an excellent paper in the October/November issue of the SMPTE Motion Imaging Journal (Conference Issue) titled "A Datacentric Approach to Cinema Mastering". His contribution is extremely informative and right on target. He spells out very clearly what has happened and is happening in mastering methodology, which is currently available to digital cinema and represents good news for its implementation, now and in the future. This paper is highly recommended reading for all involved with standards for digital cinema.

Back to the two image representation signal flow charts: collectively, they illustrate all basic image representation signal paths for film centric and data centric acquisition, respectively.
The upper left corner of both charts shows an observer viewing a motionless butterfly close-by on a production set where a camera, in each case, is simultaneously taking a close-up of the same object. This is the start of the image representation signal flow. Both charts finalize in the lower right corner with images of a second observer viewing the resultant picture of the original butterfly on a large screen in a digital cinema theater.

Comments regarding "color"

Digressing briefly, colors are perceptions experienced by observers when viewing visible wavelengths of electromagnetic energy, referred to as visual stimuli. This energy most often reaches the eyes as a result of:

1. Direct stimuli (light beams and sunlight)
2. Stimuli reflected off of illuminated objects
3. Stimuli transmitted through objects
4. Stimuli emitted from self-luminous objects such as TV screens and monitors

Objects that we see do not have inherent "colored" physical characteristics, such as being red, violet, or blue in color. They instead have physical characteristics of inherent reflectivity, transmissivity, emissivity, and absorptivity of wavelengths of incident visual stimuli.

For example, a daylight- or incandescent-illuminated object that reflects more of the red-producing wavelengths and absorbs more of the other visible wavelengths will be perceived as a "red" object when viewed by an observer. In digital cinema, most image representation signals are initially generated by the presence of visual stimuli reflected off of, or transmitted through, objects in a studio or production set, as captured by motion picture or digital cameras. Computer-generated visual stimuli are also possible initial sources of resultant image representation signals.

Color Perception

As a review, color perception is a bodily sense, just as are hearing, smelling, tasting, etc. Color perception comes as the last event in a series of visual occurrences by human observers, starting when wavelengths of electromagnetic energy in the visible spectrum first reach their eyes. As an example, when an observer views a light-emitting object, such as a picture on a CRT-based monitor, a multitude of individual wavelengths of electromagnetic energy emanating from objects on the screen reach the eye. From there, they are focused by the lens onto a relatively small area on the inside surface of the back of the eyeball called the retina. Without going too deeply into the physiology of the eye, suffice it to say that a mosaic of electromagnetic sensors called rods and cones exists on the surface of the retina. The cone receptors detect chromatic visual stimuli at all but low light levels, while the rod receptors detect achromatic (non-chromatic) visual stimuli at very low light levels. When the cones, in particular, detect this multitude of external stimuli, individual chemically formed signals representative of the respective individual streams of external stimuli are generated. These individual chemical signals are then simultaneously processed in the backside structure of the eye through a mixture of individual visual processing cells.
The resultant signals combine in various fashions and then travel to "ganglion" cells, where they further combine and emerge as "opponent channel signals". These signals then travel to the body's optic nerve, which provides a direct connection to the brain. When this occurs, the brain sends its own version of these color-sensing channel signals to the body's nervous system in the form of a bodily sense of color perception.

Film Centric Acquisition as represented by the flow chart

As mentioned above, in film centric acquisition, motion picture cameras provide the origination source material on raw film stock. At the upper left corner of the film centric flow chart, again note the images of an observer, two butterflies, and a motion picture camera with lens.


The butterfly is the object in the scene that the camera lens is focused on. It is represented in achromatic form because it has no outward physical properties of color. The butterfly-object instead reflects specific wavelengths of illuminating visual stimuli (electromagnetic energy in the visual spectrum), which the camera lens then focuses onto successive sprocket-driven frames of raw motion picture film stock having R, G, and B overlaid light-sensitive layers of silver halide crystals, which then form respective R, G, and B primary latent images consisting of tiny clusters of metallic silver. In the flow chart, the first observer, shown viewing the butterfly in the studio set, will perceive its colors by virtue of the reflected visual stimuli coming from the illuminated insect. Therefore, the observer's perceived butterfly-object color is represented in the flow chart as a colored object.

Data Centric Acquisition as represented by the flow chart

At the upper left corner of the data centric flow chart, similar to what was shown for the film centric flow chart, note the images of an observer, two butterflies, and a digital camera with lens. As in the film centric chart, this is the starting point for the chain of system signals. The butterfly, pictured in gray scale, is the object in the scene the digital camera lens is focused on. As before, it is shown in achromatic form because it has no "color", per se. As stated previously, the surface of the butterfly object instead reflects rays of illuminating visual stimuli that the camera detects. On doing so, it then generates perceptual-forming signal representations of the visual stimuli entering the digital camera lens. In the data centric flow chart, as in the film centric flow chart, the first observer, shown viewing the butterfly in the studio set, will perceive its colors by virtue of the reflected visual stimuli coming from the illuminated insect. Therefore, again the observer's perceived butterfly-object color is represented in the flow chart as a colored object.

Regarding Film Scanners

Telecine film chains or film scanners are used to capture and digitize feature film footage. For SDTV and 1920x1080 HDTV content, telecine film chains will get the job done. For 4096 and higher horizontal pixel counts, such as will be employed for D-Cinema, an upgrade to high-quality motion picture film scanners will be required. Briefly stated, motion picture film scanners, which are city cousins of telecine film chains, like digital cameras do not define a color space and associated primary set. This will be defined as the image representation signals progress along the system, at the point where they need to be used to feed a D-Cinema reference display device, such as a feature post-production workstation control or screening room projector or monitor.

How light translates to dye densities on negative film

Figure 1 below shows a cross-section of 35 mm motion picture raw film stock before exposure. It consists of four separate layers, three of which are individual coatings of silver-halide crystal grains providing superimposed mosaics of blue-, green-, and red-light-sensitive surfaces, all sequentially coated onto the top surface of a transparent support structure underneath.


The top-most layer is blue-light-sensitive, the third layer is green-light-sensitive, and the fourth layer is red-light-sensitive.

Figure 1 – Motion Picture Film (Raw Stock), cross-section from top to bottom:

    Blue-light-sensitive / Yellow-dye-forming
    Yellow filter layer
    Green-light-sensitive / Magenta-dye-forming
    Red-light-sensitive / Cyan-dye-forming
    Clear support base

Each light-sensitive layer is chemically treated during the manufacturing process to provide its desired individual spectral sensitivity. The yellow-colored filter layer sequentially coated between the blue- and green-light-sensitive layers acts as a blue filter, protecting the green- and red-light-sensitive layers, which have a discernible sensitivity to blue light. This is due to certain wavelengths of blue spectral sensitivity overlapping with those for green and red. This yellow filter layer becomes colorless once the film is chemically processed (developed).

When film stock is exposed to scene visual stimuli via a film camera and lens, each layer of silver-halide crystals changes in chemical character in accordance with the scene light exposures incrementally reaching each of the overlaid light-sensitive surfaces of each film frame, in the form of latent negative images. When the film is chemically processed:

• blue layer mosaics of exposures are replaced with equivalent respective mosaics of proportional densities of complementary-colored yellow dye;
• green layer exposures are replaced with equivalent respective mosaics of proportional densities of complementary-colored magenta dye; and
• red layer exposures are replaced with equivalent respective mosaics of proportional densities of complementary-colored cyan dye.

The collective summation of these three layers of complementary-colored dye densities, frame by frame, makes up the original negative film, which in this negative form contains valid representations of visual stimuli from the original scene.

Basic film-scanner action

Most transmission scanners are essentially tri-color densitometers. The process begins by transmitting a white-light source through sequential sprocket-driven frames of chemically processed motion picture film consisting of overlaid mosaics of yellow, magenta, and cyan-colored dye densities, which serve as light-modulated filters.
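The densitometer view just described can be sketched numerically: each dye layer attenuates the white source light, density is the negative base-10 logarithm of transmittance, and densities of stacked layers add because transmittances multiply. The per-channel density values below are purely illustrative, not taken from any standard.

```python
import math

def transmittance(density_val: float) -> float:
    """Fraction of the white source light passing a dye layer of the given density."""
    return 10.0 ** (-density_val)

def density(trans: float) -> float:
    """Density as a scanner's densitometer reports it: -log10 of transmittance."""
    return -math.log10(trans)

# Purely illustrative per-channel dye densities for one negative frame
# (yellow dye modulates the blue record, magenta the green, cyan the red).
dye_densities = {"yellow": 0.30, "magenta": 0.65, "cyan": 0.90}

for layer, d in dye_densities.items():
    print(f"{layer}: density {d:.2f} -> transmittance {transmittance(d):.3f}")

# Stacked layers: transmittances multiply, so densities add.
combined = transmittance(0.30) * transmittance(0.65)
assert abs(density(combined) - 0.95) < 1e-9
```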


The result is three combined sources of yellow, magenta, and cyan visual stimuli, which represent the individual residual amounts of the respective dye-filtered white source light. These three combined sources, which have retained their individual identities relative to the respective sources of visual stimuli from the original scene, are then sorted by color separation optics and directed over separate paths into individual associated photo-detector sensors, which in combination produce triplets of complementary color-formed, negative analog image representation signals.

From here, the triplet of signals is quantized and coded in digital form for subsequent processing down the pipeline, but has yet to define an associated color space and primary set. This will not be done until the image representation signals reach the point in the system flow where they are needed to feed a feature post-production color-control or screening projector or monitor. To make this happen, a matrix will be applied that translates from film dye code values (valid image representation signals) to the primaries of the display device. As a top-of-the-line film scanner will be designed to distinguish between the finest color differences of negative (and positive) film material, the output digital data will be related to the full colorimetric content of the film. Also, the film scanner will not change the color representation of the film material. This means that if the color space and primary set of the display includes all possible film colors, the above top-of-the-line film scanner will be compliant with this color space as well.

The Generation and Progression of Image Representation Signals in Digital Cameras

Strictly speaking, digital cameras do not have primaries. Instead, they have "taking characteristics", which for practical reasons in manufacture are altered versions of the calculated ideal color-matching function curves for digital cameras.

The plot-points for these ideal curves are calculated starting with the primaries of the control monitor, or projector, used to adjust the digital camera controls to ensure or produce acceptable pictures. These ideal color-matching functions are spectral responsivity curves related to the perceptual color vision of the average, color-normal human observer (i.e., the CIE 1931 2-degree standard observer). Originally, in 1931, the plot-points for these ideal curves were determined by the use of a colorimeter, which allowed a qualified observer to provide perceptual color matches between two adjacent semi-circular areas, called fields. The first field was formed by a projection of successive pre-determined single, monochromatic wavelengths of visual stimuli (the reference field). (See Figures 1 and 2 below.)


[Figure 1 – Colorimeter fields: a test lamp illuminating the reference field, with counter-balancing RGB tri-stimulus sources (used for determining negative lobe values) forming the adjacent test field. 1931 (2°) standard observer RGB tristimulus stimuli.]

[Figure 2 – Ideal RGB Color-matching Function Curves (1931).]
The second, matching field was formed by the resultant visual stimuli produced by superimposed projections of individual and adjustable intensities of a particular set of three independent red, green, and blue primary light sources. The process in 1931 was carried out by changing the reference stimulus in incremental steps, wavelength by wavelength, throughout the visible spectrum, with the observer providing color matches by adjusting the individual intensities of the three RGB tristimuli. As can be seen, this was a somewhat tedious process. However, in practice it is not necessary to repeat the experiment for different sets of primaries. Instead, the color-matching functions corresponding to any given set of primaries, such as those of a particular display device (e.g., a CRT monitor or digital projector), can be readily computed.

Digital Camera Image Representation Signal Creation and Processing Implementation

In the image acquisition process for digital cameras, preliminary image representation signals are formed from scene visual stimuli through the sequential action of a taking lens, separation optics, and RGB optical trim filters, followed by individual R, G, and B pickup devices. The combined optical action of these elements provides a similar but somewhat altered filtering of the primary sources of visual stimuli, in a manner somewhat, but not directly, related to the calculated ideal color-matching function curves mentioned above. However, in actual practice, using a direct relationship with display color-matching functions will not produce the correct or desired results. The reasons for this will now be explained.

Calculated color-matching functions corresponding to any set of physically realizable display primaries will have some negative lobes, such as is shown in Figure 1. To compound the issue, in actual practice, the negative portions of the color-matching functions are even greater than shown in Figure 1. The original CIE experiments were done with monochromatic matching primaries, which produced curves with less negativity than those in real-world situations, where bandwidths of wavelengths of primary stimuli exist.

Since the calculated color-matching functions that define the theoretically desired spectral sensitivities of the digital camera have significant portions that represent negative values, those sensitivities cannot be physically realized as such. Because of this, real cameras are built with optical components and sensors having sets of all-positive spectral sensitivities that will be somewhat similar, but not identical, to the CIE XYZ curves, as shown in Figure 3 below.

[Figure 3 – All-positive camera spectral sensitivities, similar to the CIE XYZ curves.]

The signal values produced by a sensor having such spectral sensitivities are always positive. However, those sensitivities, and all other sets of all-positive sensitivities, correspond to display primaries that are not physically realizable. Therefore, real cameras designed under these criteria would correspond to imaginary displays, and real displays would correspond to imaginary cameras.

To resolve this dilemma, a matrix is applied to the all-positive signal values from the camera sensor to transform them to the values that would have been formed if the camera had been able to implement the theoretical sensitivities corresponding to the color-matching functions of the actual (real) display primaries. It is at this point in the signal path (the output of the matrix) that some negative signal values are created. How and when these negative values are processed as they proceed along the digital cinema pipeline must be determined by system/equipment designed to meet post-production reference and theater viewing requirements. For example, they might simply be clipped, or they might be remapped to produce a more pleasing output. Nothing can be done to increase the color gamut defined by the chromaticity boundaries of the actual display devices used. For this reason, the display matrix should be applied as late as possible in the pipeline, or some means will need to be employed to retain the negative values for use with other types of displays having larger color gamuts.

In addition, the actual spectral sensitivities of a film scanner or digital camera will not correspond directly to any set of color-matching functions. There are many practical reasons, including minimizing noise buildup, why deliberate departures from color-matching functions are made.
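A minimal numeric sketch of how matrixing all-positive values can create negative primaries: the matrix below is the well-known standard CIE XYZ to linear Rec.709 RGB matrix, used here purely as a stand-in for a display matrix, and the input XYZ triplet is an arbitrary illustrative value representing a saturated color.

```python
# Transforming all-positive sensor values through a display matrix can yield
# negative primaries, which must then be clipped (or remapped).
# Standard CIE XYZ -> linear Rec.709 RGB matrix, used only as an illustration.
XYZ_TO_RGB709 = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def apply_matrix(m, v):
    """Multiply a 3x3 matrix by a 3-element column vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# An all-positive XYZ triplet for a saturated color (illustrative value).
xyz = [0.20, 0.30, 0.50]
rgb = apply_matrix(XYZ_TO_RGB709, xyz)
print("matrixed RGB:", rgb)   # the red channel comes out negative

# Simplest handling: clip negatives to the display gamut boundary.
clipped = [max(0.0, ch) for ch in rgb]
print("clipped RGB: ", clipped)
```

Clipping is the crudest option; as the text notes, a remapping that trades accuracy for pleasing renditions is equally valid, which is why the choice belongs as late in the pipeline as possible.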


In spite of the fact that resultant pictures are obtained using manufacturer-skewed taking characteristics, as mentioned above, the resultant signals are satisfactorily representative of the color-producing performance of the color reference monitor/projector or the theater projector used. This is not to say that control room monitor or projector development in the future will not produce control display devices having primaries and displayable picture quality equal or comparable to those used in digital cinema projection systems of that time. When this does happen, the new representative display primaries will be used to determine the associated calculated ideal color-matching function curves as a starting point in the process, as mentioned above.

If some or all of the explanations expressed in this section seem like black magic, suffice it to say that they are more like velvet magic. For those interested in reaching a further understanding of this subject, the book "Digital Color Management: Encoding Solutions" by Edward Giorgianni and Thomas Madden (published by Addison Wesley) is highly recommended reading.

Digital Source Master

Formation of the Digital Source Master (DSM) in the data centric mode is the same as for the film centric mode, with the exception of certain individual processing function differences. For example, as mentioned before, dust busting and grain matching processes are used in the film centric mode but are not needed for digital camera acquisition. Basically stated, the Digital Source Master (DSM) is a master recording that contains layered files. All necessary distribution formats, such as NTSC, PAL, DTV, SDTV, HDTV, and Digital Cinema, are derived totally, or in part, from DSM program play-out content, as is archival storage.

The Image Digital Cinema Distribution Master (Image DCDM) is derived from DSM image representation play-out signals, which, after being transformed to the CIE color space and primary set and then further gamma processed and transcoded, become one of a multi-layered, DPX-formatted set of production-related data files. To illustrate the formation of the DSM, both film centric and data centric flow charts show these processes as representation boxes contained within the gray-colored post-production boundary marker. They are, from beginning to end:

(1) Digital Archive, Signal Correction, Processing & Editing (mentioned previously, and represented in the upper-left blue-colored box within the post-production boundary in the flow chart).
(2) Signal Correction & Processing log memory (represented in the gold-colored box directly to the right of the previous box).
(3) Correction Metadata (represented in the blue-colored box to the lower right of the previous box).


The correction metadata, as illustrated in the flow chart, represents the frame-by-frame action of all signal correction & processing applied in feature post-production, and is synchronously embedded along with the respective image representation signals.

RGB/CIE XYZ Transform Matrix

Once the associated image representation signals have been subjected to all the operational elements mentioned above, the Digital Source Master is considered complete after being recorded on production media. On DSM play-out, the signals, which are in output-referenced linear-coded RGB form, are converted by a linear transform matrix to coded linear CIE XYZ primary image representation signals in the DCDM CIE XYZ color space and primary set. (This is shown as the yellow-colored function box in the flow chart.) The start of the transform matrixing process represents the beginning of the system process that ultimately produces the Image Digital Cinema Distribution Master (Image DCDM) file layer. The 3x3 matrix shown below is used in transforming ITU-R Rec. BT.709 RGB primaries to the proposed Image DCDM CIE XYZ primaries:

    Destination                                 Source

    [X_DCDM]   [0.4122  0.3577  0.1803]   [R_709]
    [Y_DCDM] = [0.2125  0.7154  0.0721] x [G_709]        (1.1)
    [Z_DCDM]   [0.0193  0.1192  0.9497]   [B_709]

As noted above, the transformation is performed with output-referenced linear-coded RGB image representation signals. This is done in order to prevent chroma and luminance signal errors that would occur if nonlinear signals were processed through the matrix, which could affect the overall image quality.
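As a sketch, matrix (1.1) can be applied to linear-coded RGB triplets as follows; the coefficients are those given above. Note that reference white (R = G = B = 1) maps to Y = 1.0, because the middle row of the matrix sums to one.

```python
# Matrix (1.1): linear Rec.709 RGB -> proposed Image DCDM CIE XYZ.
RGB709_TO_XYZ = [
    [0.4122, 0.3577, 0.1803],
    [0.2125, 0.7154, 0.0721],
    [0.0193, 0.1192, 0.9497],
]

def rgb709_to_xyz(r: float, g: float, b: float):
    """Transform one linear-coded RGB triplet to CIE XYZ."""
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in RGB709_TO_XYZ)

x, y, z = rgb709_to_xyz(1.0, 1.0, 1.0)   # reference white
print(f"white -> X={x:.4f} Y={y:.4f} Z={z:.4f}")
```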

Image DCDM Color Space Primaries and White Point As reported in the SMPTE Technology Committee Digital Cinema-DC28 Status Report, dated February 2001, “The color space will have a new defined set of primaries that will include all the human-visible color space. This color space will also apply to the DSM (Digital Source Master)”. Subsequently, this second requirement has now been rescinded in order to allow owners and suppliers of the program content greater distribution flexibility. The recommended color space is the CIE XYZ color space and primary set having chromaticity coordinates for the encoding primaries as follows:

    Primaries    x        y        u'       v'
    Red          1.0000   0.0000   4.0000   0.0000
    Green        0.0000   1.0000   0.0000   0.6000
    Blue         0.0000   0.0000   0.0000   0.0000

Also, the recommended white point chromaticities for this primary set are:

    White point (Illuminant)    x        y        u'       v'
    D61                         0.3198   0.3360   0.2001   0.4730
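The x, y and u', v' columns above are consistent with the standard CIE 1976 conversion u' = 4x / (-2x + 12y + 3), v' = 9y / (-2x + 12y + 3), which can be checked directly:

```python
def xy_to_uv(x: float, y: float):
    """Standard CIE 1931 (x, y) chromaticity -> CIE 1976 (u', v')."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

# Red primary (x=1, y=0) and the white point from the tables above.
print(xy_to_uv(1.0, 0.0))        # -> (4.0, 0.0)
print(xy_to_uv(0.3198, 0.3360))  # -> approximately (0.2001, 0.4730)
```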

Inverse 2.6 Gamma Coding Characteristic Application and 12-Bit Transcoding

The image representation signal triplets are now ready for "inverse gamma transfer characteristic application". In the flow chart, this operation is represented by the blue-bordered process box labeled with this text. There are basically three reasons for making sure that this non-linear application is applied individually to the trio of linear XYZ image representation signals:

1. To provide power-law non-linearity to the XYZ trios of video signals such that a picture contrast ratio of 10,000 to 1 is obtained, and additionally such that the maximum delta E* (perceived visual stimuli difference) for a one code-value change will never exceed the visual threshold of the eye in perceiving a difference between two discrete image representation signal levels of scene visual excitation, thus eliminating unwanted perceived contouring artifacts in the displayed pictures.

2. To reduce the bit depth (12 bits recommended at this time) required to accomplish the items mentioned in paragraph 1 above, thereby saving bandwidth and storage, the big burdens of linear coding.

3. To minimize noise occurring during quantization, compression, and other transmission elements.

To accomplish this, a specific encoding equation is used to program a look-up table that performs the 10/16-bit to 12-bit transcoding operation. It is as follows:

    CV = L^(1/γ) / K

where CV is the code value; L is the luminous output of the display (12.0 foot-lamberts when all XYZ code values are equal to 1.0); K is a scaling constant (suggested to be 0.0026); and γ (gamma) is the power coefficient (recommended to be 2.6).

The transcoding process involves sampling each successive input code value, as played out from the DSM (at whatever bit depth is involved), and entering it as an input to the look-up table. The table then returns a matching 12-bit code value (or values) in exchange, depending on the comparative bit depth of the input and output signal data. With the transformation to 12-bit coding and the inverse 2.6 gamma coding characteristic applied, the Image DCDM, in data file form, is combined with the remaining layered set of DCDM files on recorded media. Once in this completed form, the digital cinema distribution phase can begin.
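The encoding equation can be sketched directly. Assuming the constants suggested in the text (γ = 2.6, K = 0.0026), the full-white output level of L = 12.0 foot-lamberts maps to a code value of about 1000:

```python
GAMMA = 2.6      # power coefficient recommended in the text
K = 0.0026       # scaling constant suggested in the text

def encode(L: float) -> float:
    """Inverse-gamma encoding: CV = L^(1/gamma) / K."""
    return (L ** (1.0 / GAMMA)) / K

# Full-white luminous output of 12.0 ft-L, per the definition above.
print(f"CV for 12.0 ft-L: {encode(12.0):.1f}")   # about 1000
```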

Compression, Encryption, and Transportation

The combined DCDM DPX-formatted files are now ready for compression (where required), encryption (optional, at the content owner's discretion), and media packaging (e.g., DVD), and then for transport to the intended digital cinema facility via one of the following methods:

1. High-speed terrestrial network
2. Satellite
3. Low-speed data service
4. Courier

Theater Data Storage, Server Play-out, Decryption, Decompression, and Inverse 2.6 Gamma Coding Characteristic Removal

Once the packaged DCDM content is received at the theater facility and recorded on storage disks, the digital cinema distribution phase is complete. The exhibition phase begins when this stored DCDM content is played out for audience viewing by either an associated data-push server or a data-pull play-out device, depending on the theater equipment installed. Because the image signals are in a compressed state at this stage, play-out and all subsequent signal processing, up through display of the image content on the theater screen, will occur in real time, as will all other supporting signal sources.

On play-out, the layered data files enter what has been defined as the media block, where the combined file layers are first separated into individual data files. From here, the Image DCDM file data is decrypted and then decompressed, returning the non-linear, 12-bit binary-coded X'Y'Z' image representation signal data to the same DCDM form it had at the beginning of the distribution phase. Next, while still within the media block, the coded image signal data is processed through the Inverse 2.6 Gamma Coding Characteristic removal function module, where it is transformed into linear CIE XYZ image representation (linearized Image DCDM) signals in Image DCDM CIE color space. The formula used for this removal operation is:

L = (K × CV)^γ

where γ = 2.6 and K is a scaling constant.

The application of this formula is used to program a look-up table that produces the resulting 12-bit linear XYZ image representation signals. From here, the coded signals are passed through a final 3x3 matrix, where they are transformed to linear RGB signals in the color space and primary set used by the theater digital projection system that follows (assumed here to be that for Xenon). This linearization is necessary for the same reason given earlier for the primary-signal transformation matrixing done prior to the DCDM formulation flow: linearized signals must be used in the matrix transformation process to ensure that no subsequent matrix output signal data distortion occurs. The linear transform matrix referred to above, mapping the source primaries (linearized DCDM XYZ) to the destination primaries (Xenon RGB), is shown below:

| R_XENON |   |  2.6323  -0.9857  -0.3867 |   | X_LIN DCDM |
| G_XENON | = | -0.8030   1.7064   0.0125 | x | Y_LIN DCDM |    (1.2)
| B_XENON |   |  0.0412  -0.0876   1.1009 |   | Z_LIN DCDM |
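The two media-block steps just described, gamma removal followed by the equation (1.2) matrix, can be sketched together in Python. K and γ are the paper's values; normalizing nothing and working directly in ft-lamberts, as below, is an assumption of this illustration, not a specified implementation.

```python
# Sketch of the media-block processing: the removal formula
# L = (K * CV)^gamma linearizes each 12-bit X'Y'Z' code value, and the
# 3x3 matrix of equation (1.2) transforms the linear XYZ triplet into
# linear RGB for the Xenon projector primaries.

GAMMA = 2.6
K = 0.0026

XYZ_TO_XENON_RGB = [
    [ 2.6323, -0.9857, -0.3867],
    [-0.8030,  1.7064,  0.0125],
    [ 0.0412, -0.0876,  1.1009],
]

def linearize(cv):
    """Inverse 2.6 gamma removal: L = (K * CV)^gamma."""
    return (K * cv) ** GAMMA

def dcdm_to_xenon_rgb(xp, yp, zp):
    """Decode one gamma-coded X'Y'Z' triplet and matrix it to Xenon RGB."""
    xyz = [linearize(cv) for cv in (xp, yp, zp)]
    return [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_XENON_RGB]

# Code value 1000 linearizes to roughly 12 ft-lamberts, consistent with
# the encoding constants given earlier in the text:
r, g, b = dcdm_to_xenon_rgb(1000, 1000, 1000)
```

Because the two operations are applied in sequence on linear data, the matrix can safely be folded into downstream projector processing without distorting the color transformation.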

Projection

After passing through the complete digital cinema system, these image representation signals, which have been progressively changing in form while retaining their intended representation of the original scene visual stimuli (plus the color enhancements and corrections made in feature post-production), are now in linear coded form and become the input signals for the digital cinema theater projector. On receiving these input signals, the projector, through a very complex and somewhat proprietary process, transforms them into three individual pixel-reflected sources of "nominally-linearized" (to be explained later) RGB primary visual stimuli, which it simultaneously projects onto the digital cinema theater screen. Because the integrity of the progressive image representation signals is maintained throughout the entire digital cinema system, the resultant visual stimuli present on the screen will display (as perceived) reasonably accurate replicas (pleasing reproductions) of the visual stimuli that existed during acquisition.

For this system end-point in the theater, the flow chart shows an image of the original camera-captured butterfly, as projected on the screen. Notice that the butterfly appears graphically as an achromatic (non-chromatic) object. This is because the projected image is not a physically colored object. It is instead an array of pixelated visual stimuli, produced by the projector in accordance with its component RGB input signals, which are progressive image representation signals originally developed from the visual stimuli reflected off the butterfly at the starting point in the studio. An observer viewing the theater screen is also shown. His view of the screen is made up of electromagnetic energy in the visible spectrum (visual stimuli). However, as the flow chart shows, this array of pixelated electromagnetic energy allows this second observer to perceive (in color) a pleasing replica of the image of the original butterfly in the originating studio. Assuming that the complete digital cinema system works as specified, both observers (in the studio and in the theater, as shown in the flow charts) will have acceptably equivalent color perceptions of the butterfly, as well as of all other picture objects passing through the system.

Concerning "Nominally-Linearized" RGB Pixel Triplets of Visual Stimuli

Note that a label adjacent to the digital cinema theater screen in the flow chart reads, "NOMINALLY-LINEARIZED RGB PIXEL TRIPLETS OF VISUAL STIMULI". To "nominally linearize" means to produce a basically linear (but not completely linear) overall system transfer characteristic. This is explained below.

Stretching the Contrast Ratio

For many years, television production, following the lead of film production, has used the technique of under-compensating the power function inherent in CRT display devices. This technique is called "stretching the contrast ratio". It has the effect of restoring (or enhancing) the apparent luminance contrast and color saturation of images viewed in a darkened surround in the average home, by stretching the darker portions of the image content. The resulting improvement has been described as giving the images a "bigger than life" appearance. This under-compensation has generally been performed at the camera: the camera gamma correction is intentionally reduced by a factor of approximately 1.2, which results in an overall system gamma transfer characteristic of about 1.2 when the inherent gamma characteristic of a CRT-based display is taken into account.
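The arithmetic behind this stretching can be shown numerically. The 1.2 under-compensation factor comes from the text above; the CRT display gamma of 2.5 is an assumed illustrative value, not a figure from this paper.

```python
# Numeric sketch of "stretching the contrast ratio": with an assumed CRT
# display gamma of 2.5 and the camera correction under-compensated by a
# factor of about 1.2, the end-to-end transfer characteristic comes out
# near 1.2 rather than 1.0, pushing down (stretching) the dark tones.

CRT_GAMMA = 2.5
STRETCH = 1.2                                   # factor cited in the text

camera_exponent = 1.0 / (CRT_GAMMA / STRETCH)   # ~1/2.08 instead of 1/2.5
system_gamma = camera_exponent * CRT_GAMMA      # overall characteristic ~1.2

def end_to_end(scene_luminance):
    """Relative screen luminance for a relative scene luminance in [0, 1]."""
    return scene_luminance ** system_gamma

# Dark tones land below their scene values, raising apparent contrast:
dark = end_to_end(0.1)   # comes out below 0.1 because system_gamma > 1
```

With full compensation (camera exponent 1/2.5) the system gamma would be exactly 1.0 and `end_to_end` would be the identity; the stretch factor is what tilts the overall characteristic.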

Digital Cinema Viewing

Audiences in digital cinema theaters of the future will be sitting in an even more darkened surround. The contrast-stretching feature, then, will remain desirable, but the output transfer characteristic used to accomplish it will need to be increased to somewhere between 1.2 and 1.7, as output-referenced. (See the document titled "D-CINEMA REFERENCE AND THEATER PROJECTOR SYSTEM FLOW.doc", Black Enhancement Requirement section, page 2.)

Recommendations:

1. The value of the output system transfer characteristic needed for black enhancement should be verified by test, the best point of application determined, and the results specified in an appropriate digital cinema standard.
2. Tests of picture quality at the theater level resulting from the use of all presently recommended colorimetry-related standards should continue until all D-Cinema standards are completed.


Acknowledgment

Thanks to the following people for their comments, recommendations, text-error notifications, and discussion: Dave Bancroft, Al Barton, Craig Connally, Matt Cowan, Chuck Harrison, Tom Maier, Howard Lukk, Jim Mendrala, Jerry Pierce, David Richards, Brad Walker, and Edward J. Giorgianni.

