Abstract

The color image quality of presentation programs is evaluated and measured using S-CIELAB and CIEDE2000 color difference formulae. A color digital image in its original format is compared with the same image already imported by the program and introduced as a part of a slide. Two widely used presentation programs—Microsoft PowerPoint 2004 for Mac and Apple's Keynote 3.0.2—are evaluated in this work.

1. Introduction

In the last two decades, we have seen the development of new communication tools capable of transmitting diverse types of information (numeric, textual, graphics, pictures, movies) and of increasing their impact. Existing and emerging technologies have been refined and integrated into computer software packages used to display information in a variety of settings, including education, professional work, and general communication, normally in the form of a slide show. Presentation programs have largely replaced older visual aid technology, such as chalkboards, slides, and overhead transparencies, although the "slide" analogy still refers to the now rather obsolete slide projector. Slides can be printed but are more usually displayed onscreen and navigated at the command of the presenter.

VCN ExecuVision, created by Visual Communications Network, Inc. in 1982, was the first presentation program for personal computers. The program included a complementary image library that originated the concept of clip art (predesigned images). The benefits and potential uses of this sort of software were soon reported by Toong and Gupta [1]. Nowadays, Microsoft PowerPoint and Apple's Keynote are two commonly known and widely used presentation programs. Most current presentation programs can import graphic images and color pictures from digital photo archives or image banks. Concerning the quality of displayed color images, much more research has been reported on the influence of the projector or other display devices [2, 3] than on the influence of the presentation program (software). However, both influences are important. For users who are not very experienced in color science, it may be useful to know that not only the display device but also the presentation software plays a role. Presentations of color images are widely used in many fields of science and education, and in some of them (e.g., the presentation of clinical photography in medical lecturing), good quality of color image display is essential.

In this work, we focus on the color image quality of the digital image imported and processed by a presentation program to become part of a slide. To this end, we compare a digital image in its original format with the same image already imported by the program and introduced as part of a slide. The effects of the display device, usually a video projector, are not considered here; we consider only the effects of the presentation program on the color content of the spatially variant signals. These effects can be evaluated by measuring the color reproduction error between the original digital color image and the digital color image contained in the slide created by the presentation program under study.

The CIELAB system (CIE 1978) is an important international standard for measuring color reproduction errors between large uniform patches. However, when it is used to determine the color difference between images on a pixel-by-pixel basis, it tends to produce larger errors at most image points than those actually perceived. For this reason, Zhang and Wandell proposed a spatial extension to the CIELAB color metric, known as the S-CIELAB metric [4], that can be applied to complex stimuli such as digital images viewed at different distances. They use a series of spatial filters in an opponent color space containing one luminance channel ($O_1$) and two chrominance channels ($O_2$, $O_3$). The filters are smoothing filters consisting of a linear combination of exponential function masks that approximate the contrast sensitivity functions of the human visual system for a given viewing distance. The filtered image is then transformed back to the CIELAB representation. S-CIELAB allows one to measure perceived color differences by applying the standard CIELAB color difference formula $\Delta E^*_{ab}$ to the filtered images pixel by pixel. S-CIELAB has been used to measure color reproduction errors in images [4], to predict the texture visibility of printed halftone patterns [5], to evaluate the effects of image compression [4], to segment color images [6], and to sharpen color images in combination with a second derivative operator [7]. S-CIELAB can be implemented in both the spatial and the frequency domains [8]. The CIEDE2000 color difference formula [9] combined with S-CIELAB has been compared with other existing CIE color difference formulae under three different viewing conditions in [8]. The authors showed that CIEDE2000 ($\Delta E_{00}$) tends to produce color difference images (called error images) with the smallest mean and standard deviation when evaluating the color difference of halftone image pairs [8]. In this work, we also use the S-CIELAB and CIEDE2000 color difference formulae to measure the color reproduction error between the original digital color image and the digital color image imported by a presentation program to build one of its slides. Two presentation programs are analyzed in this work: Microsoft PowerPoint 2004 for Mac and Apple's Keynote 3.0.2.

2. Measuring Color Reproduction Errors of Digital Images

With S-CIELAB [4], Zhang and Wandell provided a metric to determine the perceived color differences between image pairs. To implement S-CIELAB, a sequence of steps has to be followed. First, it is necessary to transform the input images into a device-independent color space, such as CIE 1931 XYZ. In the following, we assume that the input images of a given pair are expressed in the standard sRGB color space [10] and are then transformed into CIE XYZ. The second step involves a spatial filtering of the images, performed in an opponent color space consisting of one luminance channel ($O_1$) and two chrominance channels ($O_2$, $O_3$). The transformation and the spatial filters used in S-CIELAB have been estimated from human psychophysical measurements of color appearance [11].

The opponent channels are obtained from the CIE 1931 XYZ tristimulus values by the linear transform

$$\begin{pmatrix} O_1 \\ O_2 \\ O_3 \end{pmatrix} = \begin{pmatrix} 0.279 & 0.720 & -0.107 \\ -0.449 & 0.290 & 0.077 \\ 0.086 & -0.590 & 0.501 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}. \qquad (1)$$
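For illustration, the following is a minimal Python sketch (assuming NumPy and an image already expressed in CIE XYZ) of how the transform of (1) and its inverse could be applied; the matrix entries are the rounded values commonly used in S-CIELAB implementations [4].

```python
import numpy as np

# XYZ -> opponent (O1, O2, O3) matrix of (1); rows act on [X, Y, Z].
M_XYZ_TO_OPP = np.array([
    [ 0.279,  0.720, -0.107],   # O1: luminance
    [-0.449,  0.290,  0.077],   # O2: red-green chrominance
    [ 0.086, -0.590,  0.501],   # O3: blue-yellow chrominance
])

def xyz_to_opponent(xyz):
    """Apply the linear transform of (1) to an H x W x 3 CIE XYZ image."""
    return np.einsum('ij,hwj->hwi', M_XYZ_TO_OPP, xyz)

def opponent_to_xyz(opp):
    """Inverse transform, used to return to XYZ after spatial filtering."""
    return np.einsum('ij,hwj->hwi', np.linalg.inv(M_XYZ_TO_OPP), opp)
```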

Since the three channels are not completely orthogonal, some color fringes may appear after spatially filtering the image components with different-size filters in each channel. These artifacts could be generated when rendering images; however, they were analyzed in applications such as color image sharpening and found to be of very low significance [7]. Moreover, the effects caused by the lack of orthogonality are not relevant when calculating color differences for the applications described in [4-6].

Once two images are transformed into the opponent color space, they are spatially filtered using filters that approximate the contrast sensitivity functions of the human visual system. In this work, we carry out this filtering via convolution in the spatial domain. In each opponent channel, the filter is a linear combination of weighted exponential functions and its kernel sums to one. Thus, the three filters preserve the mean color value over large uniform areas, and S-CIELAB and CIELAB give similar predictions for them. The kernel of each spatial filter is given by

$$f_i(x, y) = \sum_{j} w_{ij}\, E_{ij}(x, y), \qquad (2)$$

where $i$ indicates the opponent channel ($i = 1, 2, 3$), $w_{ij}$ is the weight, and $E_{ij}$ is the normalized kernel of an exponential function described by the expression

$$E_{ij}(x, y) = S_{ij} \exp\!\left[-\,\frac{x^2 + y^2}{s_{ij}^2}\right]. \qquad (3)$$

In (3), $s_{ij}$ is the spread and $S_{ij}$ is a constant that normalizes the kernel of the function so that it sums to one. The spread of the exponential functions in (2) and (3) is $s_{ij} = \sigma_{ij}\, d$, and it represents the decrease in sensitivity that occurs in the human visual system when the viewing distance increases. This blurring effect is represented by the product of the spread expressed in degrees of visual angle ($\sigma_{ij}$) times the number of pixels per degree of visual angle ($d$) when the observer is placed at a given distance from the monitor. Table 1 shows the values of the weights $w_{ij}$, which are already adjusted to sum to one [8], and the values of the spreads $\sigma_{ij}$ used in S-CIELAB [4].
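As a minimal Python sketch of how the kernels of (2) and (3) could be assembled, the weights and spreads below are function parameters to be filled in with the values of Table 1 (see [4, 8]); the conversion from degrees of visual angle to pixels uses the pixels-per-degree value $d$ of the viewing condition.

```python
import numpy as np

def exp_kernel_2d(spread_pixels, size):
    """Normalized 2-D kernel of (3): S * exp(-(x^2 + y^2) / s^2), summing to one."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / spread_pixels**2)
    return k / k.sum()                         # the normalizing constant S of (3)

def channel_kernel(weights, spreads_deg, pixels_per_degree, size):
    """Kernel of (2) for one opponent channel: a weighted sum of exponential kernels.

    `weights` and `spreads_deg` must be taken from Table 1 [4, 8]; the spreads,
    given in degrees of visual angle, are converted to pixels by multiplying by
    the pixels-per-degree value d of the viewing condition.
    """
    kernel = np.zeros((size, size))
    for w, sigma in zip(weights, spreads_deg):
        kernel += w * exp_kernel_2d(sigma * pixels_per_degree, size)
    return kernel
```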

Let us denote by $\hat{O}_i(x, y)$ the components of the spatially filtered images in the opponent color space, produced from the convolution of the spatial filters $f_i$ with the input image components $O_i$:

$$\hat{O}_i(x, y) = f_i(x, y) * O_i(x, y), \quad i = 1, 2, 3, \qquad (4)$$

where the symbol $*$ denotes the convolution operation. The 2-D convolution in the spatial domain of (4) can be computed more efficiently as two 1-D convolutions, taking into account that the kernels are separable.
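Below is a Python sketch of this separable implementation of (4), assuming SciPy is available; each exponential term of (2) is convolved as a vertical and a horizontal 1-D pass, and the weighted terms are then summed.

```python
import numpy as np
from scipy.ndimage import convolve1d

def exp_kernel_1d(spread_pixels, size):
    """1-D factor of the separable exponential kernel of (3), summing to one."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / spread_pixels**2)
    return k / k.sum()

def filter_channel(channel, weights, spreads_deg, pixels_per_degree):
    """Spatial filtering of (4) for one opponent channel.

    Each exponential term of (2) is separable, so its 2-D convolution is
    computed as a vertical and a horizontal 1-D convolution; the filtered
    channel is the weighted sum over the terms.
    """
    out = np.zeros_like(channel, dtype=float)
    for w, sigma in zip(weights, spreads_deg):
        s = sigma * pixels_per_degree          # spread in pixels, as in (2)-(3)
        size = max(3, 2 * int(3 * s) + 1)      # odd kernel support, about six spreads wide
        k = exp_kernel_1d(s, size)
        tmp = convolve1d(channel, k, axis=0, mode='nearest')
        out += w * convolve1d(tmp, k, axis=1, mode='nearest')
    return out
```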

The filtered components in the opponent channels (4) are then transformed back into the CIE XYZ space using the linear transformation that is the inverse of (1). The filtered images are next transformed into the CIELAB space using the standard equations [10], for which the tristimulus values of the white point of the display device have to be known through device characterization (sRGB monitors have a D65 white point). Once the CIELAB coordinates are calculated for all the pixels, color differences between filtered images can be computed on a pixel-by-pixel basis. The result is a color difference image in which each pixel value represents the perceived color difference at that point. The standard CIE color difference equation $\Delta E^*_{ab}$ [12] has traditionally been used with S-CIELAB. Since CIEDE2000 ($\Delta E_{00}$) tends to produce color difference images (or error images) with the smallest mean and standard deviation among a set of three other color difference formulae [8], we also use the CIEDE2000 color difference formula to compare the image pairs of this work.
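As a sketch of this last step, and assuming scikit-image is available, its xyz2lab and deltaE_ciede2000 helpers implement the standard CIELAB conversion and the CIEDE2000 formula used here.

```python
import numpy as np
from skimage.color import xyz2lab, deltaE_ciede2000

def ciede2000_error_image(xyz_a, xyz_b):
    """Per-pixel CIEDE2000 differences between two spatially filtered images.

    Both inputs are H x W x 3 CIE XYZ arrays already transformed back from the
    opponent space; xyz2lab uses the D65 white point assumed for sRGB displays.
    """
    lab_a = xyz2lab(xyz_a, illuminant='D65')
    lab_b = xyz2lab(xyz_b, illuminant='D65')
    de00 = deltaE_ciede2000(lab_a, lab_b)      # the color difference (error) image
    return de00, float(de00.mean()), float(de00.std())
```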

3. Experiment and Results

Figure 1 shows the test image (a printer evaluation target for sRGB, in TIFF format). This image is imported by the presentation programs Microsoft PowerPoint 2004 for Mac and Apple's Keynote 3.0.2 to provide the two images under evaluation. The presentation programs are instructed to read the same single image file and to give it the same configuration of size, scale, and other common characteristics. In both cases, the image under evaluation is obtained after activating the "present" (display) option of the software program, and it is then saved as a screenshot (using Grab version 1.3 for Mac OS X Panther) under the same conditions. Working in this way, we ensure that the differences between the images are due only to the characteristics of the presentation programs under comparison.

Let us consider that these images are to be displayed on a CRT monitor that conforms to sRGB and is controlled by a computer. The sRGB color space has been characterized by the International Electrotechnical Commission (IEC) [10]. Thus, we consider that the original image is an sRGB color image, that is, it was created using devices that conform to sRGB, that the monitor is sRGB compliant, and that it is associated with an appropriate color profile. In consequence, the resulting processed images will be consistent across devices and we will be able to compare their appearance with that of the original. The formulae to convert from sRGB to XYZ tristimulus values for the D65 white point are the following (also available on the Internet [10]). Each nonlinear sRGB component $C' \in \{R', G', B'\}$, scaled to the range $[0, 1]$, is first linearized:

$$C = \begin{cases} C'/12.92, & C' \le 0.04045, \\[4pt] \left(\dfrac{C' + 0.055}{1.055}\right)^{2.4}, & C' > 0.04045, \end{cases} \qquad (5)$$

and the linear components are then converted to tristimulus values:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}. \qquad (6)$$
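The following is a minimal Python sketch of the conversion of (5) and (6), assuming an sRGB image with components already scaled to the range [0, 1].

```python
import numpy as np

def srgb_to_xyz(rgb):
    """Convert an H x W x 3 sRGB image (values in [0, 1]) to CIE XYZ (D65)."""
    # Linearization of (5): undo the sRGB transfer function.
    linear = np.where(rgb <= 0.04045,
                      rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    # Matrix of (6): linear RGB to XYZ for the D65 white point (IEC 61966-2-1).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    return np.einsum('ij,hwj->hwi', M, linear)
```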

Let us consider that the monitor is capable of displaying $p$ pixels per cm (ppc) and that it is viewed at a distance of $L$ cm. The number of pixels per degree of visual angle is then

$$d = p\, L\, \tan(1^\circ) \approx 0.0175\, p\, L. \qquad (7)$$

In our case, we consider that the image is displayed on the monitor at a resolution of 72 dpi (about 28.3 ppc) and that it is observed at two different viewing distances so that, according to (7), two different values of pixels per degree of visual angle are obtained.
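For reference, a small Python sketch of (7) follows; the 60 cm viewing distance below is only an illustrative value, not one of the distances used in the experiment.

```python
import math

def pixels_per_degree(pixels_per_cm, distance_cm):
    """Pixels per degree of visual angle, as in (7)."""
    return pixels_per_cm * distance_cm * math.tan(math.radians(1.0))

# A 72 dpi display provides 72 / 2.54 (about 28.3) pixels per cm.
d = pixels_per_degree(72 / 2.54, 60)   # hypothetical 60 cm viewing distance
```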

The CIEDE2000 color differences in the S-CIELAB metric between the original test image and the image imported by Microsoft PowerPoint are shown in grayscale in Figure 2(a) for the shorter viewing distance. The result corresponding to Apple's Keynote for the same viewing condition is shown in Figure 2(b). The color difference is represented with the same scale in Figures 2(a) and 2(b): black areas indicate the minimum color difference and white areas indicate the maximum color difference, in CIEDE2000 units. For a better visualization of the magnitude of the color differences in both figures, pseudocolored versions of Figures 2(a) and 2(b) are also provided in Figures 2(c) and 2(d), respectively. Table 2 contains the statistics (mean, standard deviation) of both error images. From these results, it can be clearly seen that Microsoft PowerPoint gives better results than Apple's Keynote.

The results of Figure 3 have been obtained for the longer viewing distance. The statistics of the color difference distributions of Figure 3 are also given in Table 2. Again, for this viewing condition, the better quality of Microsoft PowerPoint is confirmed. The results of Figure 3 are very close to those of Figure 2, except for some small regions in which the blurring associated with a longer viewing distance also smoothes the local color differences.

The regions of the image with the largest color errors appear enhanced and segmented in Figure 4 for both presentation programs and both viewing conditions. The pixels with color differences smaller than 1% appear darkened. Microsoft PowerPoint shows few and very small areas with color differences greater than 1%, as segmented in Figures 4(a) and 4(c). This is not the case for Apple's Keynote, for which orange, blue, and magenta are the biggest problems, as can be seen in Figures 4(b) and 4(d).

4. Conclusions

The color image quality of the digital image imported and processed by presentation programs can be evaluated and measured using the S-CIELAB and CIEDE2000 color difference formulae. We have compared a digital image in its original format with the same image already imported by the program and introduced as part of a slide. Two widely used presentation programs have been evaluated in this work: Microsoft PowerPoint 2004 for Mac and Apple's Keynote 3.0.2. From the results obtained in our numerical experiment, Microsoft PowerPoint shows a lower color reproduction error and, therefore, a better color image fidelity than Apple's Keynote.

Acknowledgments

The authors acknowledge the Spanish Ministerio de Ciencia y Tecnología and FEDER funds for financial support under Project DPI2006-05479.