Estimating the True Colors of Mars
By Jim Bell and Dmitry Savransky (Cornell University)
These pages display the Pancam team's best current estimate of the "true color" of rocks, soils, and other surface features photographed by the Pancam instruments on the Mars Exploration Rovers Spirit and Opportunity (Bell et al., 2004a,b). Images from Spirit are shown in the left column, and images from Opportunity are shown in the right column. Next to each image is some information about the sol on which the image was acquired (a sol is a Mars day, which is 24 hours and 39 minutes long), the Pancam sequence number, the Local True Solar Time (LTST) when the image was acquired (assuming 24 "hours" per sol), and the Earth date corresponding to that sol and LTST. You can click on the JPG or TIFF links to download full-resolution versions of each image.
Everyone perceives color differently, and different computer monitors and printers display color differently. However, if viewed on a calibrated monitor or printed on a calibrated printer, these images will provide a good estimate of what we would see if we were there looking over the rovers' shoulders...
Things have different colors because they are made of materials that absorb some kinds of light and reflect others. All the different kinds of light (called wavelengths) combined together make white light. When light from the Sun or from a light bulb hits a material, some wavelengths will be absorbed by the material, while others will be reflected by it. We see only those wavelengths of light which are reflected, which allows us to perceive a certain object as a certain color.
Although you can separate white light into as many different colors or wavelengths as you want, it turns out that any color that you can see can be made by mixing various amounts of the three "primary colors" - red, green, and blue. The human eye has millions of light-sensitive cells called "rods" and "cones." The cones, which are responsible for color vision, come in three different types, each most sensitive to wavelengths corresponding roughly to one of the three primary colors. When light is reflected off of an object, some of the light particles (called photons) hit the cells in your eyes. The different types of cones measure just how much of each color was present in the reflected light and transmit this information to your brain, which blends the amounts of primary colors detected to produce the colors that we see. Pretty incredible.
The two Panoramic Cameras (called Pancams) on each of the Mars Exploration Rovers work somewhat like a pair of human eyes. Each camera's light-sensitive "cells" are called pixels, and they are part of a light-detecting "eye" called a Charge Coupled Device, or CCD. However, unlike the human eye, the Pancams only measure one single wavelength or color at a time. In front of each camera is a filter wheel with eight different filters (seven colors plus one filter for looking at the Sun), each of which allows only certain wavelengths to hit the CCD. The filter wheel in front of the camera in the left Pancam eye of each rover has six filters which span the colors that we can see, from blue to green to red. The remaining filters detect colors of light that we cannot see, called "infrared." When we want to take a "true color" picture of Mars, we actually take six pictures of the exact same spot - one with each of the six filters on the Pancam's left eye. Afterwards, we use computer software to combine the separate pictures and to calculate the proportions of primary colors - red, green, and blue - that the rover was seeing. We then combine these three "RGB" images into a single picture which your computer can then display as an estimate of the actual colors that you would see if you were there on Mars. Digital cameras that you can buy in stores work in a similar way, except that the filters are bonded directly onto the CCD, and the RGB images through the different filters are combined automatically for you by the camera's electronics.
The detailed math and computer processing that go into creating these images are described in the next section. It's important to point out that this is only an estimate of the true color of each of these scenes from Mars. As mentioned above, everyone perceives color differently, and different computer monitors and printers display color differently. The colors also vary with time of day, and even from day to day, because of different amounts of dust and clouds in the Mars atmosphere. And there are also sometimes small calibration problems with the images that can cause errors in the true color calculations. We've done the best job that we can to estimate the colors. Ultimately, the true test of color success will probably have to await the judgment of the real experts: the first astronauts who go there and see the place for themselves sometime in the next few decades...
The process of creating "true color" images involves converting calibrated Spectral Power Distribution (SPD) information to the XYZ color space and then to the sRGB color space. The XYZ color space is modeled closely after human color perception, and it can accurately describe the vast majority of the colors that can be registered by the human eye. Color representations within this space are linear with respect to light mixture: the tristimulus values of a mixture of two lights are simply the sums of the tristimulus values of the individual lights. The XYZ tristimulus values represent the relative amounts of three standard primaries defined by the CIE that are needed to produce the color being measured. Chromaticity values (x, y, and z), which are the X, Y, and Z tristimulus values normalized by their common sum, are often reported to aid in the comparison of values from different locations and recorded by different instruments. The Pancam images are calibrated in radiance units (W/m^2/nm/sr) and are taken to be an accurate description of the spectral power distribution of the contents of the image.
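The chromaticity normalization is simple enough to show directly. The sketch below is our own illustration (the function name is not from any Pancam software); the D65 white-point tristimulus values used in the example are the standard published ones:

```python
def chromaticity(X, Y, Z):
    """Normalize XYZ tristimulus values by their common sum to get
    chromaticity coordinates (x, y, z), which always sum to 1."""
    s = X + Y + Z
    return X / s, Y / s, Z / s

# Example: the CIE Standard Illuminant D65 white point
x, y, z = chromaticity(0.9505, 1.0, 1.0891)
# x is about 0.3127 and y about 0.3290, the familiar D65 chromaticity
```

Because the normalization divides out overall intensity, two scenes that differ only in brightness will report identical chromaticities, which is exactly why they are convenient for comparing measurements from different instruments.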
The XYZ tristimulus values are equal to the integrals of the products of an emissive source's spectral power distribution and the standard color matching functions over the range of the human visible spectrum. Since the available data recorded by the Pancams include information about only six discrete wavelengths in the range of the human visible spectrum, information about the rest of the spectrum is estimated with the aid of piecewise third-order (cubic) polynomial interpolation. The conversion to the XYZ color space is achieved by applying a two-point Newton-Cotes (trapezoidal) formula to the three integrals, as shown in the formulas to the left, where Pi are the interpolated radiance values; x, y, and z are the standard color matching functions defined by the International Commission on Illumination (CIE) in increments of one nanometer, as shown below (Carter et al., 2004); and N is a common scaling factor.
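The interpolate-then-integrate step can be sketched as follows. This is only an illustration, not the team's actual pipeline: the per-pixel radiance values are made up, and the Gaussian curves are crude stand-ins for the real CIE color matching function tables (which are published in 1 nm increments):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative radiance values (W/m^2/nm/sr) for one pixel at the six
# left-eye Pancam filter wavelengths; the radiance numbers are made up.
filter_nm = np.array([432.0, 482.0, 535.0, 601.0, 673.0, 753.0])
radiance = np.array([0.010, 0.015, 0.025, 0.060, 0.080, 0.085])

# Piecewise cubic interpolation onto a 1 nm grid over the visible range,
# clipped to guard against small negative excursions from extrapolation.
grid = np.arange(400.0, 701.0)
P = np.clip(CubicSpline(filter_nm, radiance)(grid), 0.0, None)

# Crude Gaussian stand-ins for the CIE x-bar, y-bar, z-bar functions.
def gauss(lam, mu, sig):
    return np.exp(-0.5 * ((lam - mu) / sig) ** 2)

xbar = 1.056 * gauss(grid, 599.8, 37.9) + 0.362 * gauss(grid, 442.0, 16.0)
ybar = 1.013 * gauss(grid, 556.1, 46.1)
zbar = 1.669 * gauss(grid, 449.8, 19.4)

def trapz(f, x):
    """Two-point Newton-Cotes (trapezoidal) integration."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# The three tristimulus integrals (before applying the scaling factor N)
X = trapz(P * xbar, grid)
Y = trapz(P * ybar, grid)
Z = trapz(P * zbar, grid)
```

For this red-sloped example spectrum, X comes out much larger than Z, as you would expect for a reddish Martian scene.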
The scaling factor N used in the calculation of tristimulus values is determined by the luminance (brightness) of the scene being photographed. It is calculated such that the Y tristimulus value is equal to 1.0 for pure white (something that reflects 100% of all wavelengths). Because it is a common factor for all three tristimulus values, and color is determined by the ratios between these values, it is not actually necessary to calculate N to formally describe the color (or, more accurately, the chromaticity) of a scene. However, if you omit N, you end up with values which describe the chromaticity, but not the absolute brightness of the scene being looked at. Because an additional calibration and some assumptions are required, the calculation of N (and thus, an accurate estimate of not only the color but also the brightness of a scene) is significantly more difficult than the calculation of the colors in digital photography. We are continually working on improving our methods for deriving these values and creating the most accurate true color images possible (Bell et al., 2006). The images on these pages are created without an accurate luminance scaling factor calculation, and will therefore sometimes fail to accurately represent the true luminance of their subjects. Because of this, certain images will appear to be too bright or too "orange," while others may appear to be too dark or too "brown." Future work on our algorithms may allow us to add luminance calculations like those initially described for Pancam work by Bell et al. (2006), hopefully improving the accuracy of such images.
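To make the definition of N concrete, here is a minimal sketch of how such a factor would be chosen, assuming you know (or assume) the spectral power distribution of the scene's illuminant; the function name and the flat toy illuminant are ours, not part of the actual calibration pipeline:

```python
import numpy as np

def luminance_scale(illuminant_spd, ybar, dlam=1.0):
    """Choose N so that a perfect (100%) reflector viewed under the
    scene's illuminant yields Y = 1.0.  Both arrays must be sampled
    on the same wavelength grid with spacing dlam (in nm)."""
    Y_white = np.sum(illuminant_spd * ybar) * dlam
    return 1.0 / Y_white

# Toy example: a flat (equal-energy) illuminant and a crude Gaussian
# stand-in for the y-bar color matching function.
lam = np.arange(400.0, 701.0)
ybar = 1.013 * np.exp(-0.5 * ((lam - 556.1) / 46.1) ** 2)
N = luminance_scale(np.ones_like(lam), ybar)
# By construction, N times the white-reflector integral equals 1.0
```

The difficulty described in the text lives entirely inside `illuminant_spd`: on Mars it depends on the time of day and on dust and clouds in the atmosphere, which is why omitting N still gives correct chromaticities but an uncertain overall brightness.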
Lastly, the sRGB space can be defined in terms of standard CIE colorimetric values, which themselves can be calculated from a defined reference viewing environment and knowledge of the spectral sensitivities of the capture device used. Chromaticities for the red, green, and blue ITU-R BT.709 colorimetry standard reference primaries, together with the Standard Illuminant D65 white point, are used to calculate a transformation matrix between XYZ and sRGB tristimulus values (shown to the left). The sRGB tristimulus values are fit to a 2.2 gamma curve, which corresponds to the standard used in most CRT monitors, and these nonlinear sRGB values are then scaled to a common maximum and minimum in the range of 0 to 255 (the 24-bit encoding accepted by most graphics packages and displays) (Stokes et al., 1996).
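The final step can be sketched as below. The matrix is the standard XYZ-to-linear-sRGB matrix derived from the BT.709 primaries and the D65 white point (Stokes et al., 1996); for simplicity the sketch applies the plain 2.2 gamma curve described in the text rather than the piecewise sRGB transfer function:

```python
import numpy as np

# Standard linear XYZ -> linear sRGB matrix (BT.709 primaries, D65 white)
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb8(xyz):
    """Convert XYZ tristimulus values (scaled so white has Y = 1.0)
    to 8-bit sRGB using a simple 2.2 gamma encoding."""
    rgb = M @ np.asarray(xyz, dtype=float)
    rgb = np.clip(rgb, 0.0, 1.0)   # discard out-of-gamut values
    rgb = rgb ** (1.0 / 2.2)       # gamma encoding
    return np.round(rgb * 255).astype(int)

# The D65 white point should map to (nearly) pure white:
print(xyz_to_srgb8([0.9505, 1.0, 1.0891]))  # -> [255 255 255]
```

This round trip is a useful sanity check: if a pixel known to be a 100% white reflector does not come out close to (255, 255, 255), something is wrong with either the luminance scaling or the calibration.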
Citation and Photo Credits