Image Processing

Kirk Martínez

What is Image Processing?

Image processing is the manipulation of "computerised" images. The subject spans many fields, ranging from picture coding for television and object recognition for robots to image restoration and document, satellite and medical image processing. Throughout this diverse subject there are common tools and fundamentals based on the mathematics of signal processing. There is usually a clear distinction between image processing and computer graphics, which is the generation of synthetic images, although the two are sometimes referred to together as image computing.

All television signals go through a process to compress them into less data and make effective use of telecommunication channels. The higher definition TV systems being released now depend heavily on digital processing of the images. Moving picture coding has the extra dimension of motion, which can help or hinder. For example compression can make use of the fact that little changes from frame to frame, but there is so much data in one second that processing must be extremely fast: too fast for most computers!

Robot vision systems allow machines with cameras attached to automatically inspect objects on conveyor belts, for example spotting cracked bottles, odd-shaped mouldings etc. Systems to allow freely moving robots to find their way around have been developed, but their abilities are still fairly primitive (the technology is not!). We have high expectations of robot vision because we have such a good visual processor in our heads, and years of training in using it. Computer vision still has a hard time trying to isolate the objects in a scene, before even trying to categorise them. Typical systems may use two cameras to give stereo vision and may split the scene into boundaries and edges. This is something we know human vision does, so it seems a good idea to copy it. These areas use pattern recognition techniques and segmentation to split images into distinct objects.

Image restoration and enhancement has been extensively developed by NASA, which uses many techniques to improve the images it gets from space probes and satellites. Simple image processing can alter the contrast and tonality of an image to make features easier to see. Speckle or graininess in an image can be suppressed by filtering. One of the most spectacular techniques is the de-blurring of images, making use of known conditions such as motion. In remote sensing, satellites which take many images through the spectrum produce data which can be processed to characterise crops or soil types, for example. Pairs of satellite images can be processed to give a height estimate of the scene, so that maps and computer graphics can be made to represent it. Geometric correction can also be applied to images to correct for distortions due to lenses, viewing position etc.

Document image processing has developed to aid in the management of paperless filing systems. Document scanners use many techniques to process the image of a page before applying character recognition to read the actual letters and convert them into computer text rather than just dots (this is known as optical character recognition: OCR). Fax machines use a picture compression scheme to take advantage of the fact that most of the page is blank. Specialised data compression techniques also exist for documents.

Many of these techniques can be applied to images of art, although whether some will work has yet to be discovered. Image processing has been carried out for many years in the form of photographic techniques, for example, which can enhance images. Digital image processing, however, has the advantage of precise control and reproducibility. It can also draw on a wealth of mathematical and signal processing techniques which would be extremely difficult to apply in any other way. For example photographic enlargers can magnify an image and adjust its contrast, even in particular regions, but digitally the contrast can be adjusted in a varying way across the image, and particular regions can be reduced or enlarged. However there are areas of image processing which still require years of development, particularly the recognition of images: something people usually expect a computer can do! Recognising objects in paintings and extracting 3D information may prove difficult, as paintings may contain shading and colouring which are acceptable for human recognition but different from a real image. Techniques based on "training" may be better for recognising objects in paintings.

How are digital images made?

A digital image of a scene usually starts its life as an analogue signal of continuous levels and must be sampled by an analogue to digital (A/D) converter. This provides a grid of samples (pels: picture elements) represented as numbers, which may correspond to the lightness of the image. For a black and white image one byte is typically used to represent each pel, so levels from 0 to 255 are stored. By convention 0 represents black and 255 is white. If the image is quantised to fewer than around 64 levels (ie 6 bits) then false lines or contours may be seen. Sources which have enough dynamic range can be quantised to more levels, typically 10 bits (1024 levels) or 12 bits (4096 levels). This captures more subtle variations in values but is not worth it for normal TV camera signals because the random noise normally present makes more than about 200 levels meaningless.
 

207 205 179 173 173 165 121  89 141 144 136 199 209 177 154 133
202 201 172 179 190 155  84  63  59 103 119 174 208 164 151 135
193 172 184 167 170 125 110  71  40  75 102 173 208 154  85  64
203 192 176 167 161 151  87  66  44  68  74 114 133  97  25  98
182 162 159 142 146 154 134  74 144 120 135 130  54  38 110 173
188 184 175 166 161 179 190 137 124 107 142 143 127  89 135 189
195 166 130 122 135 158 205 177 211 193 179 140 123 107  89 103
203 131 107 102 115 142 172 143 200 192 165  57 106  92  19  15
177 129 137  71  96 152 180  88 161 174 123  66 180  70   7  62
148 160 117  42  51 152 185 140 170 133  49 148 155  18  19  14
139 154  91  77  68 155 186 157 168 143 112 159  72  66  54  23
130 129  93 134 146 155 201 148 172 147 115 102  37 110  56 121
140 133  83 125 147 186 173 143 169 140 126  67  48  79 151 138
111 132 162 172 194 208 206 174 172 148 116  71  92 127 142  74
190 162 187 197 202 209 201 168 173 146 128 106 107 122  82  12
166 161 185 196 198 198 199 167 164 146 121 117 104  91  63  33
156 160 172 191 184 191 186 162 158 137 128 103  69  58  46   8
Table of values for the small image above
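As a rough illustration, sampling and quantisation can be sketched in Python with the NumPy library (the random array here is just a stand-in for the output of a real A/D converter; names and values are purely illustrative):

    import numpy as np

    # Stand-in for analogue lightness samples in the range 0.0-1.0
    analogue = np.random.rand(16, 16)

    # Quantise to 8 bits: 256 levels, 0 = black, 255 = white
    pels = np.round(analogue * 255).astype(np.uint8)

    # Quantising to only 6 bits (64 levels) risks visible false contours
    pels_6bit = (pels >> 2) << 2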
 
 

Television signals are extremely fast (575 lines in one frame in 1/25th of a second, or roughly one pel every 0.0000001s), so a framestore is used which can quickly store the incoming data in memory (RAM). This process is popularly known as grabbing, and images are scanned top down, left to right. The computer can then read the data at its slower speed to store or process it. Conversely, when an image is displayed using a graphics card, for example, the image is stored in memory which is repeatedly read out and sent to a digital to analogue converter (D/A) and then out as an analogue signal to a cathode ray tube (CRT), for example. Each dot is shown so small that the image appears continuous.

Desktop scanners usually read lines of the image much more slowly, and the computer can read them directly. To make a colour image three images must be taken through red/green/blue filters to mimic the eye. These RGB values can then be sent to the corresponding beams in a colour monitor to produce a colour image. In practice some calibration is needed to make the colour look right, and some colours are impossible to reproduce. Once images are digitised they can be stored on disks, tapes etc permanently and will not change unless the storage medium is damaged or ages. This can be prevented by copying as often as the medium necessitates. As the images are represented as computer numbers they can easily be processed.

Resolution

An image with too few samples looks blocky because the pels are too big. A television picture or camera image is usually stored as 512x512 pels, which is often more than enough for such sources. High resolution CCD cameras can produce 6000x4000 pels by slowly scanning a single line sensor. A 35mm slide scanner usually provides around 6000x4000 pels, which is more than enough for this film (50-100 line pairs per mm). If insufficient resolution is used, then fine detail is lost and spurious patterns or aliasing may even appear. A classical example is the bands produced by overlapping net curtains, due to one pattern sampling another by occlusion. Wagon wheels which turn the wrong way on films are aliasing in time, due to the film frames being taken too slowly and catching the wheel at various stages of rotation. Television uses 25 frames per second, which is just unnoticeable by the eye (but try looking at a TV with your peripheral vision, which is more sensitive to motion). Modern displays refresh the screen at more than 70 frames per second.
 
 

Contrast enhancements

One of the simplest image operations is thresholding, where values below a certain level are set to black and the rest are set to white. This is often used to simplify the image to cut out areas or for further recognition. Level slicing is similar but levels between two values become black, with others turning white. This is useful for images where ranges of values are meaningful, for example an image representing temperature or the output of another operation. False colour can show many ranges as different colours, typically low values are shown as blue, with colours of the rainbow assigned to larger value ranges. These give more information and are common in satellite imaging.
 

Fig. 2 - Original

Fig. 3 - Thresholded
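A minimal sketch of thresholding and level slicing in Python with NumPy (the function names are illustrative):

    import numpy as np

    def threshold(img, level):
        # Pels below `level` become black (0), the rest white (255)
        return np.where(img < level, 0, 255).astype(np.uint8)

    def level_slice(img, low, high):
        # Pels between `low` and `high` become black, all others white
        inside = (img >= low) & (img <= high)
        return np.where(inside, 0, 255).astype(np.uint8)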

Monitors, cameras and the eye have different responses to their inputs. For example a monitor will produce a brighter step between inputs of 0.8-0.9V than between 0.2-0.3V. Similarly, if part of a bright scene emits twice as much light as its surroundings, the eye sees less of a brightness difference than it does in a dark scene with an area emitting twice the light of its surroundings.
 

Fig. 4 Gamma=2

Fig. 5 Gamma=0.5

The relationship for each of these cases can be expressed as a power law:

response = input ^ gamma

and can easily be applied to images. A gamma greater than one tends to expand the dark regions and compress the lighter regions. Typical displays have a gamma of 2-3, whereas the eye has a gamma of 0.3-0.5 so they can just cancel each other out (ie: monitor has a square law, eye has square root). In practice some gamma correction is needed to make sure that equal steps of voltage to the monitor give us equal visual steps of brightness, so that the tonal balance is correct.
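The power law is easy to apply digitally; a rough sketch in Python with NumPy (the function name is illustrative, and note that conventions for defining gamma vary between systems):

    import numpy as np

    def apply_gamma(img, gamma):
        # Work on values normalised to the range 0-1
        normalised = img.astype(np.float64) / 255.0
        response = normalised ** gamma        # response = input ^ gamma
        return np.round(response * 255).astype(np.uint8)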
 
 
 

Figure 6 Histogram and CDF illustrating histogram equalisation.


The histogram of an image, which counts the number of pels of each value, can be used in many ways. Good threshold values can be found by looking for natural groups of values, for example. A simple contrast stretch to spread the values over the whole 0-255 range is useful, but using the histogram to spread the values out even more produces dramatic effects. This is known as histogram equalisation, which attempts to make the histogram of the new image uniform. Figure 6 shows the histogram of an image (the central single peak) and its cumulative histogram (CDF), seen as a stretched S shape, which is made by summing the histogram values from left to right. This curve is used to map the image values, shown coming from the bottom axis to their new value on the vertical axis. Because the CDF rises steeply where the histogram has peaks, these values are spread the most. Figure 7 shows the histogram of a portion of the X-ray image, which shows underdrawing. Fig. 8 shows the effect of histogram equalisation and the new histogram.
 

original X-ray image

Fig. 7 histogram of original image

after histogram equalisation

Fig. 8 equalised
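Global equalisation, as described above, amounts to mapping each pel through the normalised cumulative histogram; a minimal sketch in Python with NumPy:

    import numpy as np

    def equalise(img):
        # Count pels of each value, then sum left to right to get the CDF
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum() / img.size
        # The CDF becomes the mapping curve from old values to new
        lut = np.round(cdf * 255).astype(np.uint8)
        return lut[img]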

 

Normal histogram equalisation works best on images with strongly grouped values. When applied to a whole image with a wide range of values little difference is sometimes seen. However histogram equalisation is certainly not to be applied to every scanned image as it distorts the tonality of the image. It should be reserved for inspecting low-contrast regions for detail hidden to the eye. It is also possible to force the image to have a histogram close to one given, by redistributing the values. A more drastic approach is to measure the histogram in a region around each pel instead of the whole image. This is much slower as it requires a histogram operation for each pel, but enhances localised regions heavily, and hence can be used on whole images as shown in Fig. 11. The size of the local region determines the strength of the effect.

Fig 11 Local histogram equalisation

 

A common enhancement of images which have low contrast is to assign a sequence of new colours to each value in the image. This can show more detail in monochrome images, and a "hotness" scale is often used where dark values start as blue, become red, then white. The change in colour between values is then much more apparent, although artificial boundaries can appear (like contouring in an image with too few values). Non-visual information is often treated this way, such as density or infra-red images. Data which has a high dynamic range, for example 10 to 12 bits per pel, would normally have to be compressed to 8 bits, hence losing detail. By using false colour, more detail is apparent because of the colour contrast between the 1024 values (10 bits).
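A false-colour scale can be built as a lookup table; a sketch in Python with NumPy (the breakpoints below are illustrative, not a standard palette):

    import numpy as np

    def hotness_lut():
        # Map grey levels 0-255 to RGB: blue -> red -> white
        x = np.arange(256)
        r = np.interp(x, [0, 85, 255], [0, 255, 255])
        g = np.interp(x, [0, 170, 255], [0, 0, 255])
        b = np.interp(x, [0, 85, 170, 255], [255, 0, 0, 255])
        return np.stack([r, g, b], axis=-1).astype(np.uint8)

    # colour = hotness_lut()[grey_img]  maps an HxW image to HxWx3 RGB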
 

Filtering

Taking a weighted sum of the local area around each pel is a convenient way of carrying out many operations. Consider a local region of 9 pels around the first pel of an image. These pels can be averaged and the value placed in a new image. This operation can be represented by the blur "mask" below (ie add each neighbour and divide by 9). If this local average is made for each pel in the image and all the averages are stored in a new image, then a blurred image is made. This is illustrated below for one pel, where the slightly brighter value 15 is reduced to the local average of 10. It is assumed that the result is divided by 9 (sum of mask).
 
1 1 1        10 10 10
1 1 1   on    5 15 10   =   10
1 1 1        10 10 10

Blur mask    Some pels       New pel

For this filter small variations in the region are averaged out: ie some high frequencies are cut out but low frequencies left, which is why it blurs (also called a low-pass filter). This process of applying a mask is known as convolution.
 
-1 -1 -1         1  1  1        -1 -1 -1
-1  8 -1        -2 -2 -2        -1 10 -1
-1 -1 -1         1  1  1        -1 -1 -1

a) High pass    b) Line detector    c) Sharpen

The high pass mask shown in a) can be summarised as summing the surrounding pels, then subtracting from 8 times the centre, so for the example pels above:

-1*10 + -1*10 + -1*10 + -1*5 + 8*15 + -1*10 + -1*10 + -1*10 + -1*10 = 45

In a smooth picture area this would give zero; where the centre is darker or lighter it gives a large value. Thus it senses where small sized changes are occurring (ie high frequencies), but not smooth areas: hence "high pass". In fact the filter has optimum response to a single pel spot on a different background, such as spot noise. For example if a 255 value pel were surrounded by zeros the filter value would be 255*8. Fig. 12 shows the effect of the blur mask and Fig. 13 shows the high pass (Laplacian) mask a).
 

Figure 12 - blurred

Fig. 13 Laplacian high pass


The second filter b) would give zero for smooth areas, but highest values for dark horizontal lines of width one pel. Note how the horizontal lines are picked out in Fig. 14. Such a filter can help detect cracks which appear as dark lines. In practice three other masks are made by rotating it to cover the vertical and diagonal lines, and the maximum response of the four filters is kept.

Fig. 14 horizontal line detector

Mask c) is really the high pass filter a) with extra weighting to the centre. This can be seen as the high pass image added to two parts of the original, which produces a sharpening effect: see Fig. 15. By combining filters like b) with different orientations, specific features can be detected. Larger masks detect or affect larger features, so a 10x10 blur mask would severely blur the image. Masks which produce negative values, such as the three above, are often scaled down and shifted up by 128 to make all the values visible (ie 0-255).


Fig. 15 mask c)

Convolution filtering is a fundamental tool in image processing and is mathematically understood to the extent that filters can be designed to cut/boost specific image frequencies. By adding a fraction of the result of filter a) to the original image, a sharper appearance can be made. Filters can be used as edge detectors which give simplified information for subsequent analysis or recognition.
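A direct, deliberately naive convolution sketch in Python with NumPy, using the three masks above (these masks are symmetric, so the mask-flipping of strict convolution is omitted; function names are illustrative):

    import numpy as np

    def convolve3x3(img, mask, divisor=1):
        # Weighted sum of the 3x3 region around each pel; borders left as-is
        out = img.astype(np.int32).copy()
        h, w = img.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                region = img[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
                out[y, x] = (region * mask).sum() // divisor
        return np.clip(out, 0, 255).astype(np.uint8)

    blur = np.ones((3, 3), dtype=np.int32)       # use divisor=9 (sum of mask)
    high_pass = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])
    sharpen = np.array([[-1, -1, -1], [-1, 10, -1], [-1, -1, -1]])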

The frequency world
 
 

Just as sound can be split into its component frequencies, so can images. Image frequencies are related to how fast the values are changing across the image, so a picture of graph paper contains specific high frequencies related to the spacing of the lines and their thickness. An image of a pool of custard, or a subtly shaded wall, would contain mainly low frequencies. In images the frequencies have directions, so graph paper has two main directions. If you think of waves in the sea, large waves have small waves superimposed on them, which may go in different directions. Many shapes of waves can be made by superimposing many simple waves. The imaginary waves in a still image do not move, but an image can be made from a set of waves of brightness with certain frequencies and amplitudes.

Fig. 16 power spectrum

An image can be split into its frequencies by a Fourier transform, which gives data which is hard to visualise but easy to process. The power spectrum shows the relative amounts of each frequency, so it is a convenient way to look at the data, with low frequencies in the centre and higher ones towards the outside. Fig. 16 shows this for the Campin image, with brightness indicating strength. Note that images contain mostly low frequencies. The Fourier transform is reversible, so the frequencies can be recombined back into an image. This means it is possible to remove or boost some frequencies and hence filter the image. Blurring the image simply involves reducing the high frequency components, for example. Often a frequency filter will be made in the form of a frequency image, which is multiplied by the image's frequency image. If an image contains a repetitive pattern it is often possible to isolate it in the frequency domain and remove it. Advanced versions of this technique have been used to remove the canvas texture from x-ray images in order to compare them better with visual images.
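A minimal frequency-domain filter in Python with NumPy, sketching the blur-by-reducing-high-frequencies idea (the smooth roll-off rather than an abrupt cut is deliberate, as discussed in the sampling section below):

    import numpy as np

    def low_pass_fft(img, cutoff):
        # Transform to the frequency domain, low frequencies at the centre
        freq = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.ogrid[:h, :w]
        dist = np.hypot(yy - h / 2, xx - w / 2)
        # Smooth roll-off rather than a "brick wall"
        mask = 1.0 / (1.0 + (dist / cutoff) ** 4)
        out = np.fft.ifft2(np.fft.ifftshift(freq * mask)).real
        return np.clip(out, 0, 255).astype(np.uint8)

    # A power spectrum like Fig. 16 can be viewed as np.log1p(np.abs(freq))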

The convolution filter masks mentioned earlier have a precise interrelationship with their frequency versions. Some filters can be designed in their frequency form then converted into convolution masks, and vice versa. The frequency domain can also be used to classify image information, as different textures have particular frequency patterns for example.

Moving images can also be considered in the frequency domain. A uniform white image which fades up and down in brightness would have a particular temporal frequency but hardly any spatial frequencies. The three dimensional frequency space of television pictures is of interest to broadcasters as it can be used to compress the information and study defects.

Sampling and aliasing

Very often an image has a greater resolution than necessary and needs "shrinking". Some scanners use a high resolution sensor but skip some CCD values to take lower resolution images. If this is not carried out carefully, the images will contain spurious patterns, especially in fine detail and sharp borders, which will acquire ripples. This is called aliasing and occurs in time as well as space: backward spinning wheels in films are due to the frame rate of the camera being too slow to capture the fast wheel.
 
 
 


Illustration of aliasing

Samples should be taken at twice the maximum frequency the data contains, to give at least two samples per cycle (this is called Nyquist sampling). So to record an image of an object which has 0.1mm cracks, the samples should represent at most 0.05mm of the object. In practice more will be needed to clearly see the crack and compensate for blurring etc. When a high resolution image is made, some way of viewing the whole image is needed due to limitations in displays (around 2000x2000). This involves simple sampling of the image, but taking one in six pels, for example, is not good enough, as it can cause the distortions known as aliasing, where the high frequencies cause errors. Ideally all frequencies above half the new sampling frequency should be removed first, so that the sampling becomes "Nyquist". This can be simplified to filtering with a suitable low-pass filter, such as a block average of the appropriate size. To shrink a picture by 2:1, the whole image can be filtered with a 2x2 block average and the result sampled every two pels. Alternatively 2x2 blocks can be averaged only at every second pel, which is quicker. This is not the ideal filter but is generally used for simplicity. Low pass filtering has to be carried out carefully: if the filter is too good and cuts out frequencies completely at one point (a "brick wall" filter!) some other frequency components can suddenly appear as ripples or ringing. In practice a smoother cut-off is used to prevent this. Another danger is that the image is blurred too much and the smaller image appears blurred as well.
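The 2:1 block-average shrink just described, as a sketch in Python with NumPy:

    import numpy as np

    def shrink_2to1(img):
        # Average 2x2 blocks: a simple low-pass filter plus subsampling
        h, w = img.shape
        h, w = h - h % 2, w - w % 2              # crop to an even size
        blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
        return blocks.mean(axis=(1, 3)).astype(np.uint8)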

To carry out slight image size increases, interpolation is used to estimate the new pels and "move" the sampling points. The result is an image which may only contain a few of the original pel values, with the rest estimates, so the interpolation method is critical. Simply taking the nearest neighbour value to the point required is very quick but gives poor quality. A technique called bilinear interpolation, using a 2x2 area, is good for fairly smooth images and its errors are difficult to see. Bicubic interpolation, using 4x4 blocks, is more complex and hence slower but yields good results. Interpolation is also required to rotate or geometrically correct an image. Programs such as Adobe Photoshop have all three techniques available, so the simplest can be used first for speed and checking, then bicubic can be used for the final image.
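Bilinear interpolation estimates a new pel from its four nearest neighbours, weighted by distance; a single-point sketch in Python with NumPy for clarity:

    import numpy as np

    def bilinear_sample(img, y, x):
        # Integer corners of the 2x2 neighbourhood, clamped at the edges
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1 = min(y0 + 1, img.shape[0] - 1)
        x1 = min(x0 + 1, img.shape[1] - 1)
        fy, fx = y - y0, x - x0
        top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
        bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
        return top * (1 - fy) + bottom * fy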

Resampling an image can introduce artifacts and blurring so it must be done carefully. If an image is repeatedly rotated or resized it will become progressively distorted. Once an image is shrunk it can not be expanded back to its original state without blurring. For this reason it is important to retain original scans of images, preferably of higher resolution than needed.

Image data compression

A digital image starts out with much more data than an analogue one because each sample contains many bits, so to transmit or store it requires more bandwidth or space. Digital images also contain redundant data such as large smooth areas, especially in documents. Picture compression uses many techniques based on removing the redundancies and exploiting the properties of the eye. A lossless technique, where no degradation at all occurs, can only achieve around 3:1 compression with non-document images, but is ideal for archiving where no loss can be tolerated. Documents containing text with large uniform areas can be compressed with techniques such as run-length coding, where the number of consecutive identical values is stored instead. A technique which is very useful for text, or for data with a concentration of some values (a peaked histogram), is Huffman coding. For example some data may have mostly values of 50, with decreasing amounts of values either side. Huffman coding assigns a short code to the value 50, a slightly longer one to 49 and 51, and so on. In practice all the values are arranged in order of probability and assigned progressively longer codes. Clearly the codes must be unambiguously divisible on decoding, so codes such as 1 and 11 could not both be used because 11 could be two 1's! The result is that most of the data is compressed considerably (8:1 maximum) and the least frequent values are actually made bigger as they use very long codes (more than 8 bits are needed due to the decoding problem). Typical Huffman codes may look like this: 1, 00, 011, 0100, 01010, 01011, ... etc
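A compact Huffman coder sketch in Python (illustrative, not an optimised implementation): frequent values receive short bit-strings, rare values long ones, and no code is a prefix of another:

    import heapq
    from collections import Counter

    def huffman_codes(data):
        counts = Counter(data)
        # Heap entries: (frequency, tie-breaker, {value: code-so-far})
        heap = [(n, i, {v: ""}) for i, (v, n) in enumerate(counts.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            # Merge the two least frequent groups, extending their codes
            n1, _, c1 = heapq.heappop(heap)
            n2, _, c2 = heapq.heappop(heap)
            merged = {v: "0" + code for v, code in c1.items()}
            merged.update({v: "1" + code for v, code in c2.items()})
            heapq.heappush(heap, (n1 + n2, tie, merged))
            tie += 1
        return heap[0][2]

    codes = huffman_codes(b"aaaaaabbbccd")   # 'a' gets the shortest code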

One fundamental property of images is that each pel is usually similar to its neighbours, so it is usually possible to predict what it will be from previous pels. The difference between the real value and the prediction is usually very small, and its statistics are so peaked that compression is made easier: a typical histogram of the difference between the previous and current pel is strongly peaked around zero. Statistical coding such as Huffman works well with such data. However a difference image actually starts with more possible values than the original image (-255 to 255 instead of 0 to 255), which could make the compressed data bigger than the original! By storing the large differences less accurately it is possible to use even fewer bits. This corrupts the image, but only where there are big steps, where the eye can not see errors anyway (visual masking occurs). This technique (DPCM: Differential Pulse Code Modulation) can achieve a compression of around 8:1 with little visual effect and is commonly used in broadcasting. Usually a specialised quantiser is used which progressively allocates fewer bits to larger differences.
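The prediction idea can be sketched in Python with NumPy by predicting each pel from its left neighbour and keeping only the differences (a lossless sketch; real DPCM adds the coarse quantiser described above):

    import numpy as np

    def dpcm_differences(row):
        # Differences are mostly small values clustered around zero
        diffs = np.diff(row.astype(np.int16))
        return np.insert(diffs, 0, row[0])      # keep the first pel as-is

    def dpcm_reconstruct(diffs):
        # Summing the differences back up restores the original row exactly
        return np.cumsum(diffs).astype(np.uint8)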

Transform coding first transforms the image into another space, usually similar to the frequency domain, before removing redundancies and coarsely quantising some elements. So for example an image may be split into 8x8 pel blocks; each is coded separately by transforming into an 8x8 frequency space, then more bits are assigned to some frequencies than others. This is possible since the eye can not easily see errors in the high frequencies for example. For more compression, the higher frequencies can simply be omitted. A statistical technique is then used to compress these codes even more. On decoding the image from the frequency strengths some errors can occur, usually blurring or block effects. Fig. 19 shows the basic frequencies (basis functions) for a 4x4 Hadamard Transform, which uses square waves instead of smooth sine waves. This makes it computationally easier and hence faster, but the DCT used in JPEG is now preferred.

Fig. 19 Basis functions of a 4x4 Hadamard transform
JPEG (Joint Photographic Experts Group) have produced an image compression standard (ISO/IEC 10918). It is based on a transform coding technique known as the DCT (Discrete Cosine Transform), which uses smooth waves in its basis functions and produces better results than the Hadamard. The image is divided into 8x8 pel blocks and each is converted into an 8x8 block of coefficients, which are quantised then statistically compressed. Actually the difference between successive coefficients is coded, as was described before for DPCM, then passed to a Huffman coder. To decompress the images the 8x8 coefficient blocks are decoded, then inverse transformed back into pel values. Various compression ratios can be used depending on the severity of the quantisation.
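The transform step can be sketched in Python with NumPy by building the orthonormal 8x8 DCT basis directly (this shows only the transform, not JPEG's quantisation and entropy coding stages):

    import numpy as np

    def dct_matrix(n=8):
        # Rows are the cosine basis functions used on JPEG's 8x8 blocks
        k = np.arange(n)
        basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        basis[0] *= 1 / np.sqrt(2)
        return basis * np.sqrt(2 / n)

    D = dct_matrix()
    # coefficients = D @ block @ D.T   and   block = D.T @ coefficients @ D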

JPEG also compresses colour images by turning them into YUV (lightness and two colour components U & V) first. It then subsamples the U & V colour components by 2:1 (ie the colour component is half the size) so that there is less spatial detail for colour, but this is not seen anyway. This is another example of how the properties of vision are used effectively in picture coding. JPEG can compress colour images by around 16:1 with little visible loss, but this depends on the type of image. Errors tend to be seen as blurring and 8x8 blocks which do not join smoothly. Very smooth images do not show these block boundaries so much.

Because JPEG uses transform coding, progressive decoding is possible as an option. This is where the lower frequencies are decoded first, giving an overall blocky/blurred image, and the higher frequencies are added step by step. In the end the whole image is decoded normally, but this gives a quick overview of the image so that the user can switch more quickly between images. This may be important when retrieving images on a slow machine or via a slow network. The first images produced can also be shown as smaller images (eg: 8:1 smaller because each 8x8 block is first decoded as an average value). Any lossy compression such as JPEG will always degrade the image, which may render it useless for scientific examination as image processing will detect the errors the eye can not see.

A lossless version of JPEG also exists, based on the statistical difference techniques mentioned above (DPCM). A 2:1 compression is expected, with higher ratios on simpler images. Although JPEG is computationally demanding for normal personal computers, it has been implemented in specialised chips which can code/decode at 10 million pels a second: this would normally require a supercomputer! Several JPEG boards are available for PC/Mac systems and workstations. Apple's QuickTime also has a JPEG option.
 
 
 
 
 
 

Image Storage

Images are usually stored as separate files with a header giving information such as the resolution, type etc. The differing formats of these headers lead to incompatible file formats such as TIFF, BMP, PCX and PICT. A typical video resolution image of around 512x512 pels, with one byte for each of RGB, occupies 768 kbytes (1k=1024). This will fit onto most floppy disks easily and hard disks will store many of these; for example a 1024 MByte disk (1024 million bytes) could hold around 1300 of these images. Images containing all the information on a 35mm slide are typically around 4000x3000 pels or 36MB, so very few can be stored on one hard disk. The problems with data from 10x8" transparencies are even worse. Currently one viable way to have many of these images on line is to use optical disks. By placing many inside a "juke-box" which can automatically select disks, huge amounts can be stored. By using data compression many more images can be stored, and because there is less data on the disk it is faster to read, as long as the decompression is fast enough. The new high density CDs, DVDs, will increase capacity to around 4.7 GB and will become the standard media for images.

What can be done on a personal computer?

Programs like Adobe's Photoshop are available for PCs and Apple Macintosh computers. These have some image processing functions such as filtering and simple contrast enhancements, but can also do warping, basic arithmetic and colour adjustments. They mainly contain retouching, resizing and rotating facilities, but plug-ins are available for Photoshop which add things like special effects. Some "real" image processing packages exist for these machines, and accelerator boards are also available. To capture images a flat-bed or slide scanner can be used, as well as a TV camera (with an A/D board). Modern graphics cards with 2-4 MB of memory can display 8, 16 and 24 bit colour at up to 1280x1024 pel resolution on a good monitor. Colour Macintosh computers can also handle 8-24 bit images. To save struggling against 8 bit palettes, 24 bit cards are preferable.

Conclusions

Image processing used to be very expensive but can now be carried out on personal computers with commercially available software. There are many techniques, described in papers and books on the subject: this paper can only cover some fundamentals. There are a growing number of researchers using and developing image processing in museums, where images are a fundamental part of life and image processing can be a valuable tool. One area which still requires years of research is pattern recognition and machine vision which could allow more automatic classification and searching of images.

Bibliography

Ballard, D.H. and Brown, C.M., Computer Vision, Prentice Hall, Englewood Cliffs, NJ, 1982.

Cupitt, J. and Martinez, K., "Image Processing for Museums", in Interacting with Images, eds L. MacDonald and J. Vince, pp. 133-147, John Wiley, 1994. ISBN 0-471-93941-2.

Foley, J.D., van Dam, A., Feiner, S.K. and Hughes, J.F., Computer Graphics: Principles and Practice, 2nd edition, Addison-Wesley, Reading, MA, 1990.

Gonzalez, R.C. and Wintz, P., Digital Image Processing, 2nd edition, Addison-Wesley, Reading, MA, 1987.

Niblack, W., An Introduction to Digital Image Processing, Prentice Hall, 1986. ISBN 0-13-480674-3.

Pearson, D.E. (ed.), Image Processing, McGraw-Hill, 1991. ISBN 0-07-707323-1.

Pratt, W.K., Digital Image Processing, Wiley, 1978.

Young, T.Y. and Fu, K.S., Handbook of Pattern Recognition and Image Processing, Academic Press, Orlando, FL, 1986.

Author's address: http://www.ecs.soton.ac.uk/~km

Copyright: K.Martinez 1995-99