This is a revised version of a previous article that appeared on the Enterprise Mission web site.


Image Enhancement 101

Or how do you get ...

hoagland2.gif (11715 bytes)
This

realthing1.gif (10585 bytes)
From this?

by
Mike Bara

With the release of new Cydonia images for the first time in 22 years, old (and fallacious) arguments have resurfaced regarding the work done to enhance the new data. It has been asserted in various Internet forums, and even on the TEM conference page, that certain image enhancement techniques used by the Principal Investigator have resulted in flawed or erroneous data being presented in the "Now Hear This" section. It has further been implied that these supposedly flawed techniques have been used to deliberately mislead the general public as to the true nature of the landforms at Cydonia.

In short, they say that Hoagland has used exotic and extreme measures to distort the images so that they appear more artificial.

The reality is that the techniques used are part of standardized protocols learned and implemented over the 15-plus-year course of this ongoing investigation. Further, all of the filters used and steps followed have been conservatively applied in order to ensure just the opposite of what has been claimed - that all results would be easily replicable and well within the capabilities of even the least experienced users and most basic software packages.

It seems that most of this noise has come about as a consequence of a general ignorance of the aforementioned processes and protocols. In this section, we will review the basic techniques of image enhancement and then list the step-by-step process used to obtain the displayed results.

Step 1

"Raw" Data vs. "Processed"

The first step in producing an image enhancement is to determine from what database we start. In the case of the latest Cydonia images, much has been made of the "fact" that Hoagland's enhancements show substantially more detail than is visible in the "raw" images. The implication is that somehow we are "making it up." The reality is that most of these comparisons don't use the raw data either; they use the processed "MIPL" version (also known as the infamous "Catbox") or the "TJP" enhancements posted on the JPL web site.

The actual "raw" data is so badly underexposed and uniform that it is almost useless as a basis for enhancement. The subsequent processed versions from NASA are considerably better, but remember that they have been subjected to a contrast "stretch" and had the vertical banding removed using a high-end computer algorithm. In addition, a "high-pass" filter was applied, effectively removing low-frequency detail and shading from the image.

Face-Raw.gif (18007 bytes)
The Face on Mars (Raw Data)

 

In spite of this, the "TJP" processed images are actually a solid basis for beginning an enhancement process. Using a variety of established techniques, Tim Parker of JPL created the vastly better image of the Face seen at right.

A few cautions here: one of the processing steps applied to the "MIPL" version was "downsizing," which reduces the amount of data present in the image. The ostensible reason for this is to "reduce some of the noise," but it can also have the effect of removing subtle details from an image. Effectively, we have no way of determining just how much data was removed, because there is no paper trail establishing just how "raw" the raw image is, and we do not know the settings used in creating the enhancements. We also have no way of determining just what effect the "Vicar" software had on the images produced by Parker.

 

The "MIPL" image was enhanced using the following steps:

(1) The image is sized down by interpolation by a factor of two to
reduce some of the noise.

(2) A long, narrow high-pass filter is applied in a vertical orientation
to help reduce some of the instrument signature. This signature is seen
as the streaking that is noticeable in the original data.

(3) A long, narrow low-pass filter is applied in a horizontal orientation
to create an intensity average for the image.

(4) The results of these filtering operations are then stretched to
approximate a Gaussian distribution.

(5) The results of the high-pass and low-pass processing steps are
averaged together to form the final product.

(6) The image is flipped about the vertical axis to correct for the
camera orientation.
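As a concrete illustration, the six steps above can be sketched in Python. This is only a rough stand-in: the actual processing was done with JPL's VICAR software, and the filter sizes and stretch targets below are assumptions, since the real settings were never published.

```python
# A hedged sketch of the six MIPL-style steps, using NumPy/SciPy
# stand-ins for the VICAR operations. Filter sizes are assumed.
import numpy as np
from scipy import ndimage

def mipl_style_enhance(raw):
    # (1) Size down by interpolation by a factor of two.
    img = ndimage.zoom(raw.astype(float), 0.5)

    # (2) Long, narrow vertical high-pass to suppress the streaking
    # (instrument signature). The 51x3 window is an assumption.
    highpass = img - ndimage.uniform_filter(img, size=(51, 3))

    # (3) Long, narrow horizontal low-pass for an intensity average.
    lowpass = ndimage.uniform_filter(img, size=(3, 51))

    # (4) Stretch each result toward a Gaussian-like distribution
    # (here: a fixed target mean and standard deviation).
    def stretch(a, mean=128.0, std=40.0):
        return (a - a.mean()) / (a.std() + 1e-9) * std + mean

    # (5) Average the two filtered products to form the final image.
    combined = 0.5 * (stretch(highpass) + stretch(lowpass))

    # (6) Flip about the vertical axis for camera orientation.
    return np.clip(np.fliplr(combined), 0, 255).astype(np.uint8)
```

The point of the sketch is not to reproduce JPL's output, but to show that every step is an ordinary, well-understood operation.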

Face2-98-jpl.gif (78635 bytes)
"TJP" enhanced image of the Face.

 

 

Processing Performed by Tim J. Parker, Geologist
Mars Pathfinder Science Support, JPL.

Image Processing Steps:

(1) Vertical banding in raw image removed using Vicar software with long, narrow, highpass box filter, oriented parallel to banding in image.

(2) Performed moderate histogram stretch in Adobe Photoshop on Macintosh desktop computer.

(3) "Flattened" broad shading variations in scene by
copying image and creating a "mask" in Photoshop with
the shading inverted with respect to the original image. This mask was then merged with original scene and a second histogram stretch performed.

(4) Physically stretched image in Photoshop in direction perpendicular to the narrow dimension of foreshortened craters by 151.25% in order to approximate an orthographic view of the scene.
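Parker's four steps can likewise be approximated in Python. Again, this is a hedged sketch, not Parker's actual Photoshop/VICAR procedure: the box-filter size, percentile clip points, and blur radius are illustrative assumptions.

```python
# A rough Python analogue of the four "TJP" processing steps.
import numpy as np
from scipy import ndimage

def tjp_style_enhance(raw):
    img = raw.astype(float)

    # (1) Remove vertical banding: subtract a long, narrow box
    # average oriented parallel to the banding (a high-pass filter).
    banding = ndimage.uniform_filter(img, size=(101, 1))
    img = img - banding + img.mean()

    # (2) Moderate histogram stretch: clip the tails, rescale to 0-255.
    lo, hi = np.percentile(img, (2, 98))
    img = np.clip((img - lo) / (hi - lo + 1e-9) * 255.0, 0, 255)

    # (3) "Flatten" broad shading: model it with a heavy blur
    # (sigma is assumed), subtract it back in, then stretch again.
    shading = ndimage.gaussian_filter(img, sigma=15)
    img = img + (img.mean() - shading)
    lo, hi = np.percentile(img, (2, 98))
    img = np.clip((img - lo) / (hi - lo + 1e-9) * 255.0, 0, 255)

    # (4) Physically stretch 151.25% perpendicular to the
    # foreshortening to approximate an orthographic view.
    img = ndimage.zoom(img, (1.5125, 1.0))
    return np.clip(img, 0, 255).astype(np.uint8)
```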


Ultimately, we are left with using the "TJP" versions as the only viable basis from which to work.

Another issue is that many critics are using the medium- or high-res .GIFs in their evaluations as opposed to the full-resolution .TIFFs. TIFFs are superior to GIFs in many ways, mainly because they support color depths greater than 8-bit. Both can use LZW compression, a "lossless" compression format. All TEM enhancements are made from the full-resolution .TIFF versions, so comparisons with any other format or resolution setting are inappropriate.

Step 2

Interpolation

Perhaps no single aspect of the image enhancement process is more misunderstood than "interpolation." Variously known as "resampling," "resizing," or "anti-aliasing," this process should not be confused with pixel replication, the act of enlarging a digital image without modifying its resolution.

In fact, this is what many of the critics have mistakenly done in their attempt to replicate our results.

pixrep.gif (1670 bytes)

What "pixel replication" does is simply copy the grayscale value of a pixel into the newly created neighboring pixels, so what you get is a bigger version of the same image.
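Pixel replication is trivial to demonstrate; nothing here is specific to any particular software package:

```python
# Pixel replication: each pixel's grayscale value is simply copied
# into the new neighboring pixels. The image gets bigger, but no
# new detail appears.
import numpy as np

def pixel_replicate(img, factor=2):
    # Repeat every row, then every column, 'factor' times.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

tile = np.array([[88, 44],
                 [44, 88]], dtype=np.uint8)
big = pixel_replicate(tile)
# Each original pixel now occupies a 2x2 block of identical values.
```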

Interpolation is an entirely different process. Each pixel, or "picture element," has a single grayscale value representing all the details in the area it covers. The area of a single pixel in the MGS Face image is about 18.67 square feet (it is somewhat better in the subsequent images). This is a rather large area that can contain all kinds of things (a couch, a chair, a TV set, a desk), and they could all be different colors or shades. Does the MGS camera catch all this? Well, yes and no.

In a camera like the one on MGS, all the details in a given area - in this case the square pixel - are reduced to the average shade or "value" of the 18.67-square-foot area. This average is assigned an eight-bit binary number representing one of the 256 shades of gray in the scale. In other words, there is more data underlying a single pixel than is displayed by that pixel in a raw image.
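The eight-bit encoding mentioned above is simple to show directly:

```python
# Each pixel's average shade is stored as an eight-bit binary
# number, giving 256 possible gray levels (0 = black, 255 = white).
value = 66                     # an example averaged shade
binary = format(value, '08b')  # the eight binary digits
print(binary)                  # -> 01000010
print(int(binary, 2))          # -> 66, round-tripped back
```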

The figure at right shows what happens as the camera scan passes over the edge of a landform. If one area has a value of 88, and the area to the right has a grayscale value of 44, then the optics will record an integrated, or averaged, value of 66 for the shared pixels. The result is that edges and fine details may be lost and unrecognizable in a "raw" transmitted image.
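This averaging effect is easy to reproduce numerically in miniature:

```python
# A sharp edge between an area of value 88 and one of value 44
# falls inside a single camera pixel, which records the average.
import numpy as np

# Fine-grained ground truth: left half 88, right half 44, with the
# boundary running through the middle of one camera pixel.
scene = np.array([88.0, 88.0, 88.0, 44.0, 44.0, 44.0])

# Each camera pixel integrates (averages) two scene samples.
camera = scene.reshape(-1, 2).mean(axis=1)
print(camera)  # [88. 66. 44.] -- the shared pixel reads (88+44)/2 = 66
```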

interp3.gif (5892 bytes)


What interpolation does is take the data in the individual pixels and use their relationships to each other to determine the best solution. These relational values contain more information than is visible to the eye (which can distinguish only about 32 shades of gray) and, depending on the algorithm used, can recover much of the "lost" (averaged) data concerning the original shape of the object.

interp.gif (3043 bytes)
Example of Bilinear Interpolation

The various algorithms then divide the original large pixel into a number of smaller pixels whose new values are derived (interpolated) from the shared values of the original adjacent pixels. Bilinear interpolation (illustrated) multiplies the original pixel into four, DiPietro and Molenaar's S.P.I.T. multiplies it into eight, and the "cubic spline" method used by Carlotto uses twelve surrounding pixels.
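For the curious, bilinear interpolation can be written out in a few lines of NumPy. This is a textbook sketch of the general method, not the specific code used by any of the enhancement packages mentioned:

```python
# From-scratch bilinear interpolation: each new pixel is a weighted
# average of the four original pixels surrounding its position.
import numpy as np

def bilinear_resize(img, factor=2):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Positions of the new pixels, expressed in the original grid.
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    # Blend horizontally along the upper and lower neighbor rows,
    # then blend those two results vertically.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Note that every output value lies between the original neighboring values; the method estimates intermediate shades rather than inventing new ones.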

This is, in effect, a statistical analysis of the data provided by the pixel relationships.

Of the options available in Adobe Photoshop, Bicubic Interpolation is by far the best method and generates the most accurate results.

Whichever interpolation algorithm is used, the technique is the same. The program examines an individual pixel of "raw" data and then queries the prescribed number of surrounding pixels, which in effect cast a "vote" as to the appropriate values for the new pixels. The process then moves on to the next raw pixel, and this is repeated until the image is complete. If the process has been tested against actual objects for accuracy - which all interpolative methods used by TEM have been - then the result can be considered highly reliable.

This is not to imply that artifacts cannot be introduced into an image through improper use of interpolation or other filters. However, these algorithms are inherently predictive of the type of artifacts they are likely to produce, and it is therefore a fairly straightforward process to include appropriate controls to account for them.

 

What you finally end up with is a far more accurate picture of the shape and shading of an individual object than you can get from "raw" data.

This is how you get "room-sized" boxes from blotchy, pixelated images.

interp2.gif (2771 bytes)


Backbone section.GIF (176595 bytes) Compare the enhanced, interpolated version at left of the "Corkscrew Crater" with the "raw" TJP image at right. The degree of detail brought out by the enhancement process is staggering in comparison, yet each of the major features is evident in the "original." This is effectively a "control," a way of separating image artifacts from "real" features.

Another key to understanding these images is noting that the parallel and rectilinear features are not aligned with the scan lines of the image, indicating an independent origin.

Backbone-Raw.gif (23834 bytes)

Other issues

The "Limits of Resolution"

Another argument making the rounds of the forums is an old one originally raised by Dr. Carl Sagan in his notorious 1985 Parade Magazine article, namely, that researchers are peering at indistinct features at the "Limits of Resolution" and imagining they see true "Artifacts". The implication is that images are being "stressed" and shapes are coming out that do not exist.

The "Limits of Resolution" in this case is the previously noted 18.67 square feet per pixel, and all of the anomalies posted are well beyond that threshold even in the "raw" data. The "Room" in the Main Pyramid is in fact made up of almost 300 pixels in the full resolution TIFF. This argument is as invalid as it was when the late Dr. Sagan made it 13 years ago.

 

Filters and Processes Used

The steps used to create the enhancements seen on TEM web site are listed below. Since different graphics cards and monitors will display images differently, settings may vary.

box.gif (45165 bytes)
Image enhanced by Mike Bara

The image enhancement above was created by Michael Bara using the same techniques as Richard Hoagland on the section of the Main Pyramid from MGS image 25803 (Arcology Model Confirmed - TEM web site). While not an exact match, it is close enough to show that the techniques employed by Hoagland are well within the bounds of "normal" procedures. All filters are from Photoshop 4.0.

  1. Area around the top of  Main Pyramid cropped from cydonia3_tjp_bot.tif.
  2. Image 2x resampled to 144 dpi.
  3. Contrast adjusted to +22.
  4. Brightness increased +5.
  5. Dust & Scratch filter applied. Set to 1 pixel Radius and 30 threshold.
  6. Sharpen filter applied.
  7. Image resampled to 288 dpi.
  8. New layer created.
  9. New layer contrast stretched.
  10. Sharpen filter applied
  11. High-Pass filter applied set to 88 pixels.
  12. Gaussian Blur filter applied to 0.3 pixel radius.
  13. Brightness increased.
  14. Both layers combined using Multiply option.
  15. Histogram Equalized.

That's it! No other filters were used in the production of this image.
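For readers without Photoshop, the fifteen steps can be approximated in Python. This is a hedged sketch: the SciPy operations below only roughly mimic Photoshop 4.0's filters (Dust & Scratches is approximated by a median filter, the High Pass radius is converted to a Gaussian sigma, and the blend and stretch math is assumed), so the output will not match pixel-for-pixel.

```python
# An approximate Python rendering of the fifteen enhancement steps.
import numpy as np
from scipy import ndimage

def enhance(crop):
    img = crop.astype(float)

    # Steps 2-4: 2x resample (cubic), contrast +22, brightness +5.
    img = ndimage.zoom(img, 2, order=3)
    img = (img - 128.0) * 1.22 + 128.0 + 5.0

    # Step 5: Dust & Scratches, approximated by a small median filter.
    img = ndimage.median_filter(img, size=3)

    # Steps 6-7: sharpen (unsharp mask), then resample 2x again.
    img = img + 0.5 * (img - ndimage.gaussian_filter(img, 1.0))
    img = ndimage.zoom(img, 2, order=3)

    # Steps 8-13: a duplicate "layer" that is contrast-stretched,
    # sharpened, high-passed, lightly blurred, and brightened.
    layer = (img - img.min()) / (img.max() - img.min() + 1e-9) * 255.0
    layer = layer + 0.5 * (layer - ndimage.gaussian_filter(layer, 1.0))
    # High Pass at radius 88 px, approximated as sigma ~ 88/3.
    layer = layer - ndimage.gaussian_filter(layer, 88.0 / 3.0) + 128.0
    layer = ndimage.gaussian_filter(layer, 0.3) + 10.0

    # Step 14: combine the two layers with a "Multiply" blend.
    img = np.clip(img, 0, 255) * np.clip(layer, 0, 255) / 255.0

    # Step 15: histogram equalization via the cumulative distribution.
    flat = np.clip(img, 0, 255).astype(np.uint8).ravel()
    cdf = np.bincount(flat, minlength=256).cumsum()
    lut = (cdf / cdf[-1] * 255).astype(np.uint8)
    return lut[flat].reshape(img.shape)
```

Every operation here is generic and conservative; nothing in the sequence manufactures structure that is not already latent in the input.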

 

Summary

As has been shown, all techniques used to enhance the images on the TEM web site are standard and require no exotic processing. The results are consistent and replicable by anyone with Photoshop or equivalent software. The features shown are well above the resolution threshold of the raw data and are most probably indicative of the actual shapes of the objects imaged by the spacecraft.


Copyright 1998 Michael Bara - Some portions of this article were taken from "Image Enhancement: What it is and How it works" by Stanley V. McDaniel. Used with permission. Use of these portions does not imply an endorsement by Dr. McDaniel of the contents of this article.